Image Mining in fMRI Reports: a Meta-research Study

 

                Articles   Figures   Images   Blobs
Healthy              132       217     1200    5303
Alzheimer             29        44      184     573
Schizophrenia         18        23      103     307
Total                183       284     1487    6183





2.2 fMRI Activity


Consider typical images of fMRI activity, as shown in Fig. 1. At a glance, it is easy to identify several features of relevance, such as the kind of section shown (axial in this case, as opposed to sagittal or coronal), various anatomical features of the section, and the regions of functional activity, or ‘blobs’, within it. We do this by relating the image to an internal representation of our anatomical and physiological knowledge of the brain. This relation takes into account physical and geometrical properties of the underlying structural image, as well as of the superimposed blobs. In addition to the activity location, other features, such as intensity, area, perimeter or shape, can be used to fully characterise the activity, cf. [1,18]. Non-pictorial features, such as the text in the caption, could also be used to characterise these images.





Fig. 1
Examples of images presented in fMRI reports. In the leftmost image (adapted from [13]), activity is present in the occipital, left temporal and frontal areas of the brain, and is reported using the hot colour scale. The activity in the second image (adapted from [9]) is shown in three different uniform colours, while the third image (adapted from [22]) shows a combination of hot and cold colour scales, for increases and decreases of activity relative to the reference

In Fig. 1, one can also see the various reporting styles illustrated, including different underlying grey-scale structural images, colours and formats. The leftmost image shows a typical example in which a slight increase of activity relative to the reference corresponds to dark red, while a large increase is depicted in bright yellow; this is commonly called the hot colour scale. In the rightmost image, a decrease of activity relative to the reference is shown in a gradation of blue, from dark to bright, corresponding to a small to large decrease. The middle image shows an example where the authors chose to report only the areas of difference in activation, without giving intensity information.


2.3 Image Extraction Procedure


In Fig. 2, we show a flowchart of our framework. We start by extracting figures from the PDF files of the publications, using the open-source command-line utility pdfimages on Linux.
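As an illustration, the pdfimages call can be scripted; the sketch below assumes poppler-utils is installed and that the -png output flag is available (older versions emit PPM/PBM files). The paths and output prefix are purely illustrative, not details from the text.

```python
import subprocess
from pathlib import Path

def extract_figures(pdf_path: str, out_dir: str) -> list[Path]:
    """Extract all embedded images from a PDF using poppler's pdfimages."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # -png writes each embedded image as a PNG file named <prefix>-NNN.png
    subprocess.run(["pdfimages", "-png", pdf_path, str(out / "fig")], check=True)
    return sorted(out.glob("fig-*.png"))

# Usage (paths are illustrative):
# figures = extract_figures("article.pdf", "extracted/article")
```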





Fig. 2
Flowchart describing the blob mining procedure. First, figures are retrieved from articles (images adapted from Johnson et al. (2007)). This is followed by the detection of candidate objects containing fMRI activity reports. After processing and retrieval of these images, they are cleaned of artifacts, such as lines and text, allowing for a final stage of blob identification

Each journal has a pre-defined, common reporting style, but, as shown above, different authors produce their figures in different styles. Figures have non-homogeneous content, such as multiple image frames per figure, other plots, annotations or captions. Since we need a clean image in order to accurately isolate the fMRI activity of interest, these figures must be morphologically processed.

The first stage is object identification. Many figures have a simple background colour, such as black or white, but others use different colours, e.g. grey. Hence, the background colour needs to be detected, which is done through histogram and border analysis: candidate background colours are collected from the borders of the image, and the one with the highest number of pixels is selected.
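A minimal sketch of this border-based background detection, assuming the figure is loaded as an H x W x 3 uint8 array; the border width is an illustrative parameter, not a value given in the text.

```python
import numpy as np

def detect_background_colour(img: np.ndarray, border: int = 2) -> np.ndarray:
    """Estimate the figure's background colour from its border pixels.

    img is an H x W x 3 uint8 array; the most frequent colour among the
    border rows/columns is taken as the background.
    """
    top = img[:border].reshape(-1, 3)
    bottom = img[-border:].reshape(-1, 3)
    left = img[:, :border].reshape(-1, 3)
    right = img[:, -border:].reshape(-1, 3)
    border_px = np.concatenate([top, bottom, left, right])
    colours, counts = np.unique(border_px, axis=0, return_counts=True)
    return colours[np.argmax(counts)]  # colour with the highest pixel count
```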

To detect the different objects in a figure, and after background detection, each figure is converted to a binary image with black background and white objects. In these binary images, the white areas correspond to the smallest rectangles enclosing an object. Objects on the border of the figure, as well as those composed of only a few pixels, are discarded. The next step is to analyse the images left inside the remaining objects. After extracting these images, we need to identify the ones that correspond to fMRI reports. This is done using various properties (a sketch of the corresponding filters follows the list below), such as:



  • a minimum perimeter of the image, which we have set to 80 pixels, to allow sufficient processing resolution;


  • an image/background pixel ratio between 0.1 and 97.5, to exclude non-brain images;


  • a percentage of coloured pixels between 0 and 40 %, filtering out non-fMRI images and images with activity all over the brain;


  • image aspect ratio between 0.66 and 1.6, typical of a brain image;


  • one image should occupy more than 50 % of the frame, to eliminate multiple images in the same object frame.
Regarding the last property, when an object included several images, we repeated the object identification procedure until no further images could be found.
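The sketch below illustrates how these filters could be applied to a single extracted object. The thresholds are those listed above, while the colour-pixel test and the mask construction are assumptions of this example rather than details given in the text.

```python
import numpy as np

def is_fmri_candidate(img: np.ndarray, mask: np.ndarray) -> bool:
    """Apply the heuristic filters listed above to one extracted object.

    img  : H x W x 3 uint8 crop of the object
    mask : boolean H x W array, True where pixels belong to the image content
           (i.e. not background)
    """
    h, w = mask.shape

    # minimum perimeter of 80 pixels, for sufficient processing resolution
    if 2 * (h + w) < 80:
        return False

    # image/background pixel ratio between 0.1 and 97.5
    n_img, n_bg = mask.sum(), (~mask).sum()
    if n_bg == 0 or not (0.1 <= n_img / n_bg <= 97.5):
        return False

    # 0-40 % coloured pixels; a pixel is "coloured" if its channels differ
    # (tolerance is illustrative). Images with no colour at all are rejected,
    # as in the Fig. 2 example.
    r, g, b = (img[..., c].astype(int) for c in range(3))
    coloured = (np.abs(r - g) + np.abs(g - b)) > 20
    frac_colour = coloured[mask].mean() if n_img else 0.0
    if not (0.0 < frac_colour <= 0.40):
        return False

    # aspect ratio typical of a brain section
    if not (0.66 <= w / h <= 1.6):
        return False

    # The remaining property (one image occupying > 50 % of its frame) is
    # applied at the frame level, not per object, and is omitted here.
    return True
```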

In the example shown in Fig. 2, the object frame containing the figure’s colour map is discarded due to its aspect ratio. Two of the brain images are also discarded since they contain no colour and are therefore not considered to originate from an fMRI study.

The following step removes undesired annotations. In Fig. 2, these correspond to the coordinate axes as well as the letters ‘L’ and ‘R’. This stage is performed by removing all images inside the frame except the biggest one. In addition, any lines at 0 or 90 degrees are removed using the Hough transform [8,20] on each frame. Pixels belonging to vertical/horizontal lines spanning more than two thirds of the height/width of the object are replaced with the average intensity of the surrounding pixels.
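A hedged sketch of this line-removal step using OpenCV’s probabilistic Hough transform; the Canny and Hough parameters are illustrative, and the fill value is a simplification of the surrounding-pixel average described above.

```python
import cv2
import numpy as np

def remove_axis_lines(gray: np.ndarray) -> np.ndarray:
    """Remove long horizontal/vertical lines (e.g. coordinate axes) from a frame.

    Lines spanning more than two thirds of the object's width/height are
    painted over with a global median intensity (a stand-in for the local
    average of surrounding pixels).
    """
    h, w = gray.shape
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=min(h, w) // 3, maxLineGap=5)
    cleaned = gray.copy()
    fill = int(np.median(gray))
    if lines is None:
        return cleaned
    for x1, y1, x2, y2 in lines[:, 0]:
        horizontal = y1 == y2 and abs(x2 - x1) > 2 * w / 3
        vertical = x1 == x2 and abs(y2 - y1) > 2 * h / 3
        if horizontal or vertical:
            cv2.line(cleaned, (x1, y1), (x2, y2), fill, thickness=2)
    return cleaned
```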


2.4 Volume and Section Identification


Once the activity images have been retrieved and cleaned, the type of template used in the images, i.e. the volume type, and the sections are identified, in order to estimate the three-dimensional coordinates of the activated regions. To represent three-dimensional changes in brain activity in two dimensions, views from three different planes are used: axial sections, along the transverse plane, travelling from the top of the brain to the bottom; sagittal sections, along the median plane, from left to right; and coronal sections, along the frontal plane, from front to back. To properly characterise the images, instead of focusing on the internal features of each section, the symmetry characteristics of the section shapes are used, as shown in Fig. 3.





Fig. 3
Section identification—Top row contains example fMRI activity images (after conversion to grey scale) and below them their corresponding binary masks. From left to right, we have axial, coronal and sagittal sections

The images are again converted to binary images, thereby outlining the shape of the respective section. Simple symmetry allows for a suitable distinction between sections: an axial section is mostly symmetric about both the horizontal and vertical axes (Fig. 3a, d); a coronal section displays symmetry only with respect to its vertical axis (Fig. 3b, e); and a sagittal section is asymmetric (Fig. 3c, f).
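This symmetry test can be sketched as follows, scoring the overlap between the binary mask and its mirror images; the threshold is an illustrative assumption, not a value reported in the text.

```python
import numpy as np

def classify_section(mask: np.ndarray, tol: float = 0.85) -> str:
    """Classify a binary section mask as axial, coronal or sagittal by symmetry."""
    def symmetry(a: np.ndarray, b: np.ndarray) -> float:
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0

    vert = symmetry(mask, np.fliplr(mask))   # symmetry about the vertical axis
    horiz = symmetry(mask, np.flipud(mask))  # symmetry about the horizontal axis

    if vert >= tol and horiz >= tol:
        return "axial"      # symmetric about both axes
    if vert >= tol:
        return "coronal"    # symmetric about the vertical axis only
    return "sagittal"       # asymmetric
```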

Most researchers map the activity changes found onto either the SPM [10] or the Colin [3] volume template. Colin volumes contain higher-resolution sections than SPM volumes. Regarding the spatial separation between adjacent sections, SPM volume templates use 2 mm, whereas Colin templates use 1 mm.

To detect the volume type, one can use a complexity measure of the images. We used a Canny filter [4] to detect the voxels corresponding to contrast edges. This is done for both template volumes, i.e. Colin and SPM, and for all the image slices of the section type identified before. The selected volume template is the one with the minimal difference between the analysed image and the template’s slice images. This difference is calculated over the whole image and over a centred square of half the image size; the two values are then averaged and used as the difference measure.
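A possible implementation of this edge-based difference measure is sketched below; it assumes both images are single-channel uint8 arrays resized to the same shape, and the Canny thresholds are illustrative.

```python
import cv2
import numpy as np

def edge_difference(img: np.ndarray, template_slice: np.ndarray) -> float:
    """Difference between an extracted image and one template slice.

    Both images are compared on their Canny edge maps, over the full image
    and over a centred square of half the image size; the two values are
    averaged to give the final measure.
    """
    e1 = cv2.Canny(img, 50, 150).astype(float)
    e2 = cv2.Canny(template_slice, 50, 150).astype(float)
    full = np.abs(e1 - e2).mean()

    h, w = e1.shape
    cy, cx, hh, hw = h // 2, w // 2, h // 4, w // 4
    centre = np.abs(e1[cy - hh:cy + hh, cx - hw:cx + hw]
                    - e2[cy - hh:cy + hh, cx - hw:cx + hw]).mean()
    return (full + centre) / 2.0

# The template (Colin or SPM) whose slices minimise this measure is selected.
```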

To identify which slice of the template volume corresponds to the extracted image, we compare that image with all of the template’s slices. This comparison is performed using a combination of correlation and the scale-invariant feature transform (SIFT) [16,20]. If there is a slice with a correlation of more than 0.9 with the extracted image, that slice is selected. Otherwise, we select the slice with the smallest distance between SIFT features as the correct mapping. Once this information is found, the complete coordinate set is identified for the reported image.
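The slice-matching step could look like the sketch below, which first tries correlation and then falls back to SIFT descriptor distances. It assumes all images are single-channel uint8 arrays resized to the template slice shape, and that OpenCV is built with SIFT support; the brute-force matcher and the mean-distance criterion are assumptions of this example.

```python
import cv2
import numpy as np

def match_slice(img: np.ndarray, template_slices: list[np.ndarray]) -> int:
    """Return the index of the template slice that best matches the image."""
    # 1) correlation pass: accept a slice correlating above 0.9
    corrs = [np.corrcoef(img.ravel(), s.ravel())[0, 1] for s in template_slices]
    best = int(np.argmax(corrs))
    if corrs[best] > 0.9:
        return best

    # 2) SIFT fallback: the slice with the smallest mean descriptor distance wins
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    _, desc_img = sift.detectAndCompute(img, None)
    best_idx, best_dist = best, np.inf
    for i, s in enumerate(template_slices):
        _, desc_s = sift.detectAndCompute(s, None)
        if desc_img is None or desc_s is None:
            continue
        matches = matcher.match(desc_img, desc_s)
        if matches:
            dist = np.mean([m.distance for m in matches])
            if dist < best_dist:
                best_idx, best_dist = i, dist
    return best_idx
```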
