Pittsburgh Brain Activity Interpretation Competition 2006 Methods Description

Subjective ratings prediction by flatmapped covariance

Project Abstract

In fMRI the temporal and spatial dynamics of the BOLD signal can tell us a great deal about underlying regional brain activity. How much information can we extract from this signal? Can we use the voxel-level patterns of activity from across the brain to reliably predict the subjective experience of an individual? If so, will these methods prove to be robust when stimuli approach the complexity of real life? As part of the Pittsburgh Brain Activity Interpretation Competition, our approach relied on identifying groups of voxels that were highly covariant with a behavioral response vector. Since behavioral ratings were given for two of the three movies, we focused on using those two movies to train a novel method of voxel selection. Using the paired functional imaging and behavioral data, we created volumes wherein each voxel represented the covariance score between that voxel's timecourse and the hemodynamically convolved behavioral rating. By flatmapping the results of this operation we were able to use a custom method of similarity detection to constrain the voxels used in the generation of a final prediction timecourse. Our method was effective at predicting some, but not all, of the behavioral ratings. It performed best on the most objective of the ratings, including body parts, language, and faces. Its performance degraded with the increased subjectivity of other ratings, such as arousal, attention, and sadness. Across all ratings, correlation scores for predictions made between movies one and two varied between 0.00 and 0.75.

Introduction

The structure of the competition and the nature of the required predictions presented a variety of difficult challenges.
Our initial approach to the competition was skewed toward the domain of discriminant functions, classification and regression trees, and support vector machines. While these are all effective methods that have been used previously for prediction from neuroimaging data, we felt that they would be insufficient for the prediction of data that was not binary in nature. Thus it was our intention from very early in the competition to develop a method that could accommodate the larger range of values. We also wanted to engineer a method that was novel and distinct from other potential entries, as we hypothesized that methods involving support vector machines and neural networks would dominate the competition submissions from other groups. Our end goal in the competition was to produce the best overall prediction for as many vectors as possible. It was also our desire to stay 'true' to the goals of the competition and not draw data from outside the brain in the calculation of our predictions. We felt that many groups would use feedback data from their first two submissions of movie three data to re-weight or otherwise adjust their predictions. We desired that our predictions be considered as pure as possible, with no external feedback provided through investigator intervention.

Method

The text below describes the processing strategy for one subject.

1. Reorientation of the data

The Automated Image Registration package (AIR; Woods et al., 1992; Woods et al., 1998) was used to place the volumes in an SPM-compatible neurological orientation. This was a necessary step because the competition data was provided with the X dimension increasing from right to left and the Y dimension increasing from anterior to posterior, whereas SPM requires that the X dimension increase from left to right and the Y dimension increase from posterior to anterior. Using the 'reorient' command-line program, all images were rotated 180 degrees around the z-axis (see Figure 1).
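A 180-degree rotation about the z-axis is equivalent to reversing both in-plane axes of the volume. A minimal NumPy sketch of this operation (our pipeline used AIR's 'reorient' tool on the actual image files; the array layout below is an illustrative assumption):

```python
import numpy as np

def rotate_180_about_z(vol):
    """Rotate a 3D volume stored as (x, y, z) by 180 degrees around z.

    Reversing both in-plane axes converts a right-to-left /
    anterior-to-posterior layout into the left-to-right /
    posterior-to-anterior layout that SPM expects.
    """
    return vol[::-1, ::-1, :]

# Toy 2 x 2 x 1 volume to demonstrate the flip
vol = np.arange(4).reshape(2, 2, 1)
rotated = rotate_180_about_z(vol)
```

Applying the rotation twice returns the original volume, which is a quick sanity check that the operation really is a 180-degree rotation rather than a single-axis flip.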
Using this method the final orientation of the images was neurological.

2. Spatial Normalization

All images were normalized in SPM2 using parameters determined from the segmented gray matter of the anatomical MPRAGE image. Images were normalized into the standard 3D stereotaxic space defined by the International Consortium for Brain Mapping (ICBM)-305 (Ashburner and Friston, 1999; Ashburner et al., 1997b; Mazziotta et al., 1995).

3. Creation of brain masks

Images were masked to include only voxels of interest. The anatomical MPRAGE image was first segmented into probabilistic images of cerebral white and gray matter using SPM2 (Ashburner and Friston, 1997a). These images were then smoothed with a 1 mm FWHM kernel and recombined to produce a single whole-brain mask. This mask was then used for the removal of cerebrospinal fluid, skull, and other voxels not under investigation in the EPI images (see Figure 2). The mask also sped up the computation of later steps, as the number of voxels to be computed dropped from around 140,000 to 40,000.

4. Generation of covariance volumes

A covariance score was calculated between the timecourse of each voxel in the EPI images and the hemodynamically convolved behavioral rating. For both the movie1 and movie2 EPI runs, each of the 13 base features was covaried against all voxels located within the brain mask. The result of this step was a volume of covariance values for each of the movie1 and movie2 behavioral ratings (see Figure 3).

5. Flatmapping

All normalized EPI and covariance volumes were made into cortical flatmaps using the Computerized Anatomical Reconstruction and Editing Toolkit (CARET) version 5.3 (Van Essen et al., 2001). This process was fully automated using the caret_map_to_fmri and caret_file_convert command-line tools. The target space used for the flatmapping operation was the human 'Colin' atlas left and right hemispheres (Van Essen, 2002).
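The covariance computation of step 4 can be sketched as follows. This is an illustrative NumPy version rather than our actual MATLAB code, and the double-gamma hemodynamic response function is an assumed canonical shape, not necessarily the kernel we used:

```python
import numpy as np
from math import gamma

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF sampled at the TR (a standard canonical shape;
    using it here is an assumption for illustration)."""
    t = np.arange(0.0, duration, tr)
    peak = t ** 5 * np.exp(-t) / gamma(6)
    undershoot = t ** 15 * np.exp(-t) / gamma(16)
    h = peak - undershoot / 6.0
    return h / h.sum()

def covariance_volume(epi, rating, tr):
    """epi: 4D array (x, y, z, time); rating: behavioral vector (time,).

    Returns a 3D volume in which each voxel holds the covariance
    between that voxel's timecourse and the HRF-convolved rating.
    """
    n_t = epi.shape[-1]
    # Convolve the rating with the HRF and trim to the run length
    reg = np.convolve(rating, canonical_hrf(tr))[:n_t]
    reg = reg - reg.mean()
    # Demean every voxel timecourse, then take the sample covariance
    vox = epi.reshape(-1, n_t)
    vox = vox - vox.mean(axis=1, keepdims=True)
    cov = vox @ reg / (n_t - 1)
    return cov.reshape(epi.shape[:-1])
```

In practice the computation was restricted to the roughly 40,000 voxels inside the brain mask, which is what made this per-voxel, per-feature loop tractable.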
Grid dimensions for conversion of Analyze volumes to Caret metric files were set to the SPM2 default values. A custom MATLAB script loaded the metric file and flatmap coordinate file into memory and assembled a table of node intensity, latitude, and longitude using the node number as a key value. This table of discrete points was then used to create a continuous map of values on a grid using a triangle-based cubic interpolation (see Figure 4). Two flatmaps were generated for each volume: one each for the left and right hemispheres.

6. Finding Similarity

A custom method was used to identify regions with similarly high covariance between movie1 and movie2. First, a new weighted copy of each covariance map was created. The value of each voxel in the new map was calculated as the average of the original voxel and its 24 nearest neighbors, weighted by a Gaussian filter (sigma = 1) in which the center voxel retained its original value and the surrounding voxels were weighted as a function of their distance from the center of the filter. In this way, robust regions of covariance were amplified, while sparse covariance thought to reflect "noise" in the data was dampened. Next, the two weighted covariance maps for each subject (movie1 and movie2) were multiplied together to enhance the values of voxels that were active in both maps and reduce the values of voxels that were active in only one of the two. The result was a single map for each hemisphere reflecting the areas of peak similarity between the two covariance flatmaps (see Figure 5).

7. Generation of Prediction

The EPI flatmaps were assembled in temporal order to create a flatmap timeseries. Next, an automated peak-search algorithm identified the top 120 pixels from the similarity flatmap. The timeseries of each pixel was z-scored and weighted by the average of the covariance scores for that pixel from movie1 and movie2.
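The similarity detection of step 6 can be sketched as below, again as an illustrative NumPy version rather than our production code; zero padding at the flatmap edges is a simplifying assumption:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized Gaussian weights over a pixel and its 24 nearest
    neighbors (a 5 x 5 window)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def weighted_map(cov_map, sigma=1.0):
    """Gaussian-weighted neighborhood average of a covariance flatmap.
    Robust patches of covariance survive; isolated 'noise' pixels are
    pulled toward their (near-zero) surroundings."""
    k = gaussian_kernel(5, sigma)
    h, w = cov_map.shape
    padded = np.pad(cov_map, 2)          # zero padding at the edges
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 5, j:j + 5] * k)
    return out

def similarity_map(cov_movie1, cov_movie2):
    """Multiply the two weighted maps so that regions covariant in BOTH
    movies are amplified and one-movie-only regions are suppressed."""
    return weighted_map(cov_movie1) * weighted_map(cov_movie2)
```

The multiplication is the key design choice: a pixel must carry weight in both maps to survive, so agreement across the two movies, not strength in either one alone, drives selection.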
The resulting prediction vectors for the left and right hemispheres were then averaged together to yield one final prediction vector. This vector was smoothed using a moving-average method with a span of five data points. For the final (third) competition submission the vectors were also averaged across subjects.

Results and Discussion

The method described above is good at predicting some ratings and rather poor at predicting others. The more fundamental a feature is, the better the resulting prediction. As such, the method does quite well for body parts, faces, language, and motion. In creating predictions for movie1 and movie2 these categories would routinely have a correlation score between 0.45 and 0.70. The method did not fare as well when predicting more subjective vectors, like amusement, attention, and sadness. The correlations for these categories were below 0.20 and often close to zero. The most notable aspect of this method is that it relies so heavily on the raw BOLD timecourse for the final prediction. Thus the final predictions are a measure of how well the voxel-level signals in the brain are related to the subjective rating measures. This is most likely why faces are highly predictable while more esoteric ratings like food are not. From our data it would seem that there is no group of voxels whose activity is strongly related to the presence of food, and it may be true that there are stimuli which can be observed but will not be predictable from voxel-level signals. Our hope for the future is to develop this method further by bringing in more a priori information about the relationships between vectors. For example, you will not have a face rating without having a body-part rating. Many of the rating vectors are very highly correlated. One possibility is the use of a Bayes network to integrate these correlations into the final prediction.
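The prediction generation of step 7, together with the moving-average smoothing, can be sketched as follows; this is an illustrative NumPy version, and the exact top-pixel selection and the edge handling of the moving average are our assumptions:

```python
import numpy as np

def predict(flat_ts, similarity, weights, n_top=120, span=5):
    """flat_ts: (pixels, time) flatmap timeseries for one hemisphere;
    similarity: (pixels,) similarity-map scores;
    weights: (pixels,) average of movie1/movie2 covariance per pixel.

    Selects the top-scoring pixels, z-scores each timecourse, averages
    the covariance-weighted timecourses into one prediction vector,
    then smooths it with a span-5 moving average.
    """
    top = np.argsort(similarity)[-n_top:]          # top-N similarity pixels
    ts = flat_ts[top]
    z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
    pred = (z * weights[top, None]).mean(axis=0)   # weighted average
    # Moving average with span 5; edge bins average over fewer points
    kernel = np.ones(span)
    smooth = (np.convolve(pred, kernel, mode='same')
              / np.convolve(np.ones_like(pred), kernel, mode='same'))
    return smooth
```

In the full pipeline this is run once per hemisphere and the two outputs are averaged; for the third submission the per-subject predictions are averaged as well.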
Additionally, we hope to use the similarity algorithm on the timeseries itself to determine similarity across periods when the subjective rating is high. This information could then be used as a different method of pixel selection for generation of the final timecourse.

References

Ashburner, J. and K. J. Friston (1999). "Nonlinear spatial normalization using basis functions." Hum Brain Mapp 7(4): 254-266.
Ashburner, J. and K. J. Friston (1997a). "Multimodal image coregistration and partitioning - a unified framework." NeuroImage 6: 209-217.
Ashburner, J., P. Neelin, et al. (1997b). "Incorporating prior knowledge into image registration." NeuroImage 6(4): 344-352.
Mazziotta, J. C., A. W. Toga, et al. (1995). "A probabilistic atlas of the human brain: theory and rationale for its development. The International Consortium for Brain Mapping (ICBM)." NeuroImage 2(2): 89-101.
Van Essen, D. C., J. Dickson, et al. (2001). "An integrated software system for surface-based analyses of cerebral cortex." Journal of the American Medical Informatics Association 41: 1359-1378.
Van Essen, D. C. (2002). "Windows on the brain: the emerging role of atlases and databases in neuroscience." Current Opinion in Neurobiology 12: 574-579.
Woods, R. P., S. R. Cherry, et al. (1992). "Rapid automated algorithm for aligning and reslicing PET images." Journal of Computer Assisted Tomography 16: 620-633.
Woods, R. P., S. T. Grafton, et al. (1998). "Automated image registration: II. Intersubject validation of linear and nonlinear models." Journal of Computer Assisted Tomography 22: 153-165.

Figures

See attached pages.

Optional Comment on Competition

All in all, everyone involved in organizing the competition did a fantastic job. We loved the interactive webcasts and the sheer volume of data made available. It couldn't have been easy to assemble so much data in so many formats – thank you for taking the time to make our job as easy as possible.
That being said, we do wish that all the data could have been released from the beginning. We know there was a crazy time crunch to get the competition underway, but we spent two weeks figuring out how to flatmap all our images with Caret right before the BrainVoyager surface data was released. D'oh! We also lost a bit of time when the ratings vectors had the wrong TR delay. We won't fault you for that, though – many of us here at Dartmouth make the same mistake with disturbing regularity. At any rate, thanks again. Seriously – as a small group of graduate students and postdocs we had a blast!

Figure 1: Illustration of a sample brain before (left) and after (right) reorientation using AIR. Volumes were rotated 180 degrees around the z-axis of the image.

Figure 2: Illustration showing the final brain mask for subject one. The mask is created from the union of the probabilistic gray and white matter volumes produced during the SPM2 segmentation of the anatomical MPRAGE image.

Figure 3: Sample covariance map showing areas of high positive (red) and negative (blue) covariance in subject one for the face rating during movie one.

Figure 4: Flatmapped covariance images from movie one (left) and movie two (right) for the left hemisphere. The images differ in appearance from a traditional Caret flatmap due to the interpolation method used within MATLAB.

Figure 5: Completed map of similarity between the movie one and movie two covariance flatmaps. Note that most areas of the image have been dampened while larger areas of shared similarity have been amplified.