Coregistration Manual

How to coregister MRI data into MNI template-space


Table of Contents

  1. Summary
  2. Data
  3. Suggested program calling sequences
  4. Discussion
  5. Comparison of Coregistration Software
  6. References
  7. Questions/comments


I. Summary of Coregistration Steps for Neuroimaging:

A) Register functional data to anatomic data using a 6-parameter (rigid-body) fit.
B) Create a study-specific average anatomical image.
C) Register all subjects to the study-specific average image.
D) Register the average image to MNI template space.
E) Cumulate the transforms to bring functional data/results into MNI space.


II. Data

The following discussion assumes you have acquired data for a standard fMRI study. The types of image data mentioned below include:

  1. T1 high-resolution: a 3D image with a voxel size of approximately 1mm x 1mm x 1mm, covering the entire head.
  2. T2 coplanar: a T2 image with the same plane alignment and thickness as the EPI data, but with better within-plane resolution. The location of the planes can be assumed to be similar to that of the nearest EPI sequence. Typical voxel size is 1mm x 1mm x 5mm.
  3. EPI: a 4D series (x,y,z,t). There are typically several EPI 4D files, or "runs", per acquisition session.
  4. MNI template: This is not something you acquire. It is an average of many subjects, created at the Montreal Neurological Institute (MNI) that defines a coordinate space for the brain. Your data will usually be coregistered to this template.

III. Suggested program calling sequences

The following example is just that: an example. It is tailored to the type and quality of data typically acquired in my lab, the Waisman Center Brain Imaging Lab. All of the software mentioned here is installed in our lab. There is a nearly infinite variety of approaches, especially as embellishments are added. The following provides a starting point using mostly FSL programs. It is a fairly standard approach that should not raise any flags for reviewers. FSL's "flirt" was selected to demonstrate a specific analysis pathway because FSL has a few well-defined, easy-to-use programs with a small number of input parameters.

The most important step is not explicitly included: visually inspect the results. You cannot do this often enough! Or more to the point, your graduate student cannot do this often enough!

III.A) Register functional data to anatomic data using a 6-parameter (rigid-body) fit.

III.A.1) Register coplanar T2 to hi-res T1:

flirt -cost normmi -dof 6 -omat /fullpath/T2_coplanar_2_T1.mat -in /fullpath/T2_coplanar.img -ref /fullpath/T1_hires.img -out /fullpath/T2_coplanar_2_T1.img
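If you need to run this step for many sessions, the call can be generated from a small helper. This is a minimal sketch; the function name and the /fullpath placeholders are for illustration only, not part of FSL:

```python
import subprocess

def build_flirt_rigid(in_img, ref_img, out_img, omat):
    """Build the 6-parameter (rigid-body) flirt call shown above.

    normmi (normalized mutual information) is a good cost function for
    within-subject, cross-contrast (T2 -> T1) registration.
    """
    return ["flirt",
            "-cost", "normmi",   # cost function for cross-contrast images
            "-dof", "6",         # rigid body: 3 rotations + 3 translations
            "-omat", omat,       # save the transform for later concatenation
            "-in", in_img,
            "-ref", ref_img,
            "-out", out_img]

cmd = build_flirt_rigid("/fullpath/T2_coplanar.img",
                        "/fullpath/T1_hires.img",
                        "/fullpath/T2_coplanar_2_T1.img",
                        "/fullpath/T2_coplanar_2_T1.mat")
# subprocess.run(cmd, check=True)  # uncomment on a machine with FSL installed
```

Keeping the -omat transform is the important part: it is reused when the transforms are cumulated in step III.E.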


III.B) Create a study-specific average anatomical image.

James Gee presented a very elegant approach to this when he visited us in early October. AIR has a nice program that implements this, but it may be tricky to use. For the software we have at hand, template creation is a 2-step process.

III.B.1) Register each of the hi-res T1 images to a representative image.
The choice of "representative image" is fraught with pitfalls. A common choice is a single subject from the study, AC/PC-aligned.

Let's assume we chose "a single subject from the study, AC/PC-aligned", and the subject is sub_001. For each subject NNN, do the following:

flirt -cost normmi -dof 12 -interp sinc -omat /fullpath/subNNN_2_sub001ACPCaligned.mat -in /fullpath/T1_subNNN.img -ref /fullpath/T1_sub001_ACPCaligned.img -out /fullpath/T1_subNNN_2_sub001ACPCaligned.img
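Looping this over subjects is straightforward to script. The sketch below only builds the command lines; the zero-padded subject IDs (002-020) and the /fullpath prefix are placeholders for whatever naming scheme your study uses:

```python
def flirt_affine_cmd(nnn, ref="/fullpath/T1_sub001_ACPCaligned.img"):
    """12-dof flirt call registering subject NNN to the representative image."""
    return ["flirt", "-cost", "normmi", "-dof", "12", "-interp", "sinc",
            "-omat", f"/fullpath/sub{nnn}_2_sub001ACPCaligned.mat",
            "-in", f"/fullpath/T1_sub{nnn}.img",
            "-ref", ref,
            "-out", f"/fullpath/T1_sub{nnn}_2_sub001ACPCaligned.img"]

# one command per subject; sub001 is the target, so start at 002
cmds = [flirt_affine_cmd(f"{i:03d}") for i in range(2, 21)]
```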

III.B.2) Create a sum-image of the registered images.

Following is a list of solutions, from least to most desirable. For me, "most desirable" means the approach that provokes the fewest questions.

III.B.2.a) Spamalize can create a sum-image either via the GUI:

Commands->File Manip->ANALYZE->Sum (avg.) Files

or using the Spamalize program in a larger IDL program:

spam_sum_img_file (sums 3D volumes in a 4D image file)

spam_sum3d_img_file (Sum 2 or more 3D files to create a 3D sum-image file.)

III.B.2.b) In Python, you can use a module that transparently calls an IDL routine, spam_sum3d_img_file.pro, to sum two or more 3D files to create a 3D sum-image file:

/apps/dev/python_apps/image_manip.sum_3d_files

III.B.2.c) In FSL, sum the images iteratively with a short Python-style script, where "file_list" holds the filenames of the coregistered images (from step III.B.1):

avwmaths file_list[0] -add file_list[1] sum.img
for filename in file_list[2:]:
    avwmaths sum.img -add filename sum.img
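The same loop can be written as a small, self-contained Python script. This is a sketch: sum_images and its dry_run flag are ad-hoc names for this example, and the avwmaths calls only execute when dry_run is turned off:

```python
import subprocess

def sum_images(file_list, out_img="sum.img", dry_run=True):
    """Iteratively add 3D images with FSL's avwmaths to build a sum-image.

    With dry_run=True the commands are only collected, which lets you
    inspect the sequence before running it on real data.
    """
    cmds = [["avwmaths", file_list[0], "-add", file_list[1], out_img]]
    for filename in file_list[2:]:
        cmds.append(["avwmaths", out_img, "-add", filename, out_img])
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```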

III.B.2.d) AFNI provides an easy tool for this:

3dmerge -gmean -prefix T1_sum T1_sub001.img T1_sub002.img T1_sub003.img ... T1_sub999.img

If anyone has other suggestions please let me know.
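Whichever tool you pick, the underlying operation is just a voxelwise mean of the coregistered volumes. A minimal numpy sketch (image loading, e.g. with a NIfTI/ANALYZE reader, is omitted; the toy arrays below stand in for real volumes):

```python
import numpy as np

def average_volumes(volumes):
    """Voxelwise mean of coregistered 3D volumes (all must share one grid)."""
    stack = np.stack(volumes, axis=0)  # shape: (n_subjects, x, y, z)
    return stack.mean(axis=0)

# two toy 2x2x2 "brains": the average of 0 and 2 is 1 at every voxel
v1 = np.zeros((2, 2, 2))
v2 = np.full((2, 2, 2), 2.0)
avg = average_volumes([v1, v2])
```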

Let's assume you chose one of these, and now have a sum-image composed of brains which are AC/PC aligned and otherwise match sub001ACPCaligned.img. This will be designated:

T1_sum.img

III.C) Register all subjects to the study-specific average image.
For each subject (including sub001):

flirt -cost normmi -dof 12 -interp sinc -omat /fullpath/subNNN_2_sum.mat -in /fullpath/T1_subNNN.img -ref /fullpath/T1_sum.img -out /fullpath/T1_subNNN_2_sum.img

III.D) Register the average image to MNI template space.

flirt -cost normmi -dof 12 -interp sinc -omat /fullpath/T1sum_2_MNI.mat -in /fullpath/T1_sum.img -ref /apps/linux/fsl/etc/standard/avg152T1.img -out /fullpath/T1sum_2_MNI.img

III.E) Cumulate the transforms to bring functional data/results into MNI space.
For each subject, do the following:

III.E.1) Cumulate the transforms for coplanar->T1->sum->MNI
convert_xfm -omat /fullpath/subNNN_coplanar_2_sum.mat -concat /fullpath/subNNN_2_sum.mat /fullpath/T2_coplanar_2_T1.mat

convert_xfm -omat /fullpath/subNNN_coplanar_2_MNI.mat -concat /fullpath/T1sum_2_MNI.mat /fullpath/subNNN_coplanar_2_sum.mat

The order of the transform matrices is important: convert_xfm lists the later transform first, i.e., "-concat B_to_C.mat A_to_B.mat" produces the matrix for A->C. If a cumulated registration looks wrong, check this ordering first.
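FSL's .mat files are plain-text 4x4 affine matrices, so concatenation is matrix multiplication, with the transform applied first sitting on the right. The ordering rule can be checked with a toy example (the matrices below are illustrative, not real registration output):

```python
import numpy as np

# Toy affines: A_to_B translates x by +5mm, B_to_C scales x by 2.
A_to_B = np.eye(4)
A_to_B[0, 3] = 5.0
B_to_C = np.eye(4)
B_to_C[0, 0] = 2.0

# Concatenation: the transform applied FIRST goes on the RIGHT.
A_to_C = B_to_C @ A_to_B

p = np.array([1.0, 0.0, 0.0, 1.0])    # a point in A space (homogeneous coords)
x_correct = (A_to_C @ p)[0]           # translate (x=6), then scale: x=12
x_wrong = ((A_to_B @ B_to_C) @ p)[0]  # scale (x=2), then translate: x=7
```

Getting the order backwards still produces a valid-looking matrix, which is why a visual check of the resliced images is essential.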

III.E.2) Apply the cumulated transform to the EPI data:

applyxfm4D /fullpath/EPI.img /apps/linux/fsl/etc/standard/avg152T1.img /fullpath/EPI_2_MNI.img /fullpath/subNNN_coplanar_2_MNI.mat -singlematrix

III.E.3) Apply the cumulated transform to the T1 data:

Here /fullpath/T1_subNNN_2_MNI.mat is the cumulated subNNN -> sum -> MNI transform (built with convert_xfm from subNNN_2_sum.mat and T1sum_2_MNI.mat, as in step III.E.1):

flirt -applyxfm -init /fullpath/T1_subNNN_2_MNI.mat -interp sinc -out /fullpath/T1_subNNN_2_MNI.img -in /fullpath/T1_subNNN.img -ref /apps/linux/fsl/etc/standard/avg152T1.img


IV. Discussion

  1. Register functional data to anatomic data using a 6-parameter (rigid-body) fit.
    There are some choices here:
    1. Perform analysis in template or native space:
      1. Template space, i.e., register and reslice EPI data into template space:
        • Pro: direct comparison of data across subjects is easier.
        • Con: Pixels in the template are usually smaller than EPI or PET pixels, so your data size (but not its information content) increases substantially.

      2. Native space, i.e. analyze functional data in native (unregistered) space, and only bring the results into template space:
        • Pro: Larger and therefore fewer pixels, so analysis is faster. A reslice is avoided, saving time and a lot of disk space.
        • Con: Reslicing summary statistics can diminish peaks via interpolation, but not much.

    2. Direct or indirect registration of EPI to anatomical:
      1. Do NOT register EPI data directly to a hi-res anatomic image. The susceptibility (dropout) artifact in EPI data will cause it to incorrectly tilt forward to match the T1 image. (You can see an example of this in the PowerPoint file of my Coregistration Lecture.)
      2. Instead, use a suitable image (e.g., coplanar T1, etc.) acquired immediately before the EPI data, and assume no movement relative to the first EPI image. The registration transform for the proxy image is then applied to the motion corrected EPI images.
      3. Variations include:
        • Field-map images. John Ollinger has written a script that applies the field-map correction, registers the field-map data to the T1 hi-res image, and then applies the registration to the EPI data using the transforms worked out during the correction. This is a popular option for those lost souls who forgot to collect a coplanar image. It also works well in most other cases. If this method yields a poor registration, you should resort to:
        • using the first few (1-2) EPI images acquired, which have better GM/WM contrast, and registering these to the hi-res T2 image.
        • as above, but use the later EPI images to create a mask of the susceptibility region, and mask off the corresponding part of the first frame of EPI data.
        • using an image acquired between EPI runs, or at the end of the EPI run, and using the closest (in time) EPI image as the basis.
        • The coplanar image can be T1- or T2-weighted. Other types may also work but check with your friendly neighborhood MRI physicist.
        • There are several alternatives here- please chime in with other suggestions.

    3. EPI transform based on first, middle, or last 3D volume, or on a sum-image.
      1. I'll warn you from the start that this is a red herring.
      2. Proponents of using the first (or last) volume usually have a matching anatomic scan acquired immediately before the first (or after the last) volume. This is sound thinking.
      3. Proponents of using the middle volume point out, correctly, that this will involve the least amount of movement, on average, to reslice an entire EPI sequence.
      4. These arguments are usually tied into a discussion of motion correction. We assume the data have already been motion corrected, so the best approach is arguably to create a sum-image for each motion-corrected EPI sequence.
    4. Talairach or AC/PC alignment
      1. Generally, you should avoid analyzing data in the Talairach coordinate system. It is inaccurate and considered by many to be obsolete.
      2. However, there are a few reasons why you might want to define a Talairach reference frame for your data.
        • AFNI works better if it knows the Talairach reference points, even if the data have not had the Talairach transform applied to them.
        • If you plan to report results in Talairach coordinates, which should only be considered as an adjunct approach to compare with previous work, the transform between Talairach/MNI can be estimated for each subject.
        • For ROI drawing and other morphology tasks, it is useful to preserve individual differences in shape and size, but to have a similar alignment. This is where "AC/PC alignment" comes in. The Anterior Commissure (AC) and the Posterior Commissure (PC) are marked, and the brain is rotated so the AC and PC lie in the same sagittal and axial planes. Frequently the image is also shifted so the AC occupies the same pixel coordinate as in other images. This procedure makes the cardinal planes much more comparable across subjects. It is different from the more complex Talairach transform, which yields the same size and a similar shape for all brains; AC/PC alignment preserves individual differences.
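As a rough illustration of the geometry (not of any particular tool's convention), the pitch rotation needed for AC/PC alignment can be computed from the marked AC and PC coordinates. The function name, the coordinate convention (x = left-right, y = posterior-anterior, z = inferior-superior, in mm), and the example coordinates are all assumptions for this sketch:

```python
import numpy as np

def acpc_pitch_deg(ac, pc):
    """Pitch angle (rotation about the left-right axis) that would bring
    the AC-PC line into a single axial plane. Illustrative geometry only;
    real AC/PC alignment also handles roll and the midsagittal plane."""
    v = np.asarray(pc, float) - np.asarray(ac, float)  # AC -> PC vector
    # PC lies posterior to AC (smaller y); angle of the line vs. horizontal
    return np.degrees(np.arctan2(v[2], -v[1]))

# AC at the origin, PC 25mm posterior and 3mm inferior:
angle = acpc_pitch_deg((0, 0, 0), (0, -25, -3))  # about -6.8 degrees
```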

  2. Create a study-specific average anatomical image.
    1. Skull-Stripping: Improved accuracy can be obtained by using a skull-stripped brain. None of the software tools we use do an acceptable job of automated skull-stripping. Most of them (e.g. BET) provide an excellent starting point, but careful visual inspection is required. Almost always, subsequent manual editing of the brain mask is required. Automated programs provide about 90-95% accuracy, but a level of >99% is needed, especially if you want to perform segmentation, VBM analysis, etc. If you provide an accurate brain mask, all of the subsequent processing steps will be much more accurate.

  3. Register all subjects to the study-specific average image.
    1. For images which don't seem to register well, either in this step or in the initial step of creating the sum-image, try AC/PC aligning the T1 image to provide a better starting point.
    2. Even though sub001 was used as the target to create the sum-image, you still need to re-register the original sub001 to the sum-image.

  4. Register the average image to MNI template space.
    1. If you skull-stripped the data to remove all non-brain tissue, you should register to a skull-stripped MNI brain. Avoid using the one provided by FSL, as it retains a lot of non-brain regions. I have an edited version of this brain with more precise edges, but I cannot find it. If anyone knows where it is (and I have given it to a few of you), please tell me.


V. Comparison of Coregistration Software



VI. References for Coregistration (chronological order)

Pelizzari CA, Chen GTY, Spelbring DR, Weichselbaum RR, Chen CT, "Accurate three-dimensional registration of CT,
PET, and/or MR images of the brain", J. Comput. Assist. Tomogr., 13(1):20-26, 1989.


Pietrzyk U, Herholz K and Heiss WD, "Three-dimensional alignment of functional and morphological tomograms", J.
Comput. Assist. Tomog., 14(1):51-59, 1990.


Woods RP, Cherry SR, Mazziotta JC, "Rapid automated algorithm for aligning and reslicing PET images", J. Comput.
Assist. Tomogr., 16(4):620-633, 1992.


Woods RP, Mazziotta JC, Cherry SR, "MRI-PET registration with automated algorithm", J. Comput. Assist. Tomogr.,
17(4):536-546, 1993.


Pietrzyk U, Herholz K, Fink G, Jacobs A, Mielke R, Slansky I, Wuerker M, Heiss WD, "An interactive technique for
three-dimensional image registration: Validation for PET, SPECT, MRI and CT brain studies", J. Nucl. Med., 35:2011-
2018, 1994.


Strother SC, Anderson JR, Xu XL, Liow JS, Bonar DC, Rottenberg DA, "Quantitative comparisons of image
registration techniques based on high-resolution MRI of the brain", J. Comput. Assist. Tomogr., 18(6):954-962, 1994.


Friston KJ, Ashburner J, Poline JB, Frith CD, Heather JD, Frackowiak RSJ, "Spatial registration and normalization of
images", Human Brain Mapping, 2:165-189, 1995.


Jiang A, Kennedy DN, et al., "Motion detection and correction in functional MR imaging", Human Brain Mapping,
3:224-235, 1995.


Black KJ, Videen TO, and Perlmutter JS, "A metric for testing the accuracy of cross-modality image registration:
validation and application", J. Comput. Assist. Tomogr., 20(5):855-861, 1996.


Klein GJ, Huesman RH, et al., "A methodology for specifying PET volumes-of-interest using multi-modality
techniques", Proc. BrainPET'97 Conference, Washington, DC, June 1997.


Julin P, Lindqvist J, et al., "MRI-guided SPECT measurements of medial temporal lobe blood flow in Alzheimer's
disease", J. Nucl. Med., 38:914-919, 1997.


West J, Fitzpatrick JM, et al., "Comparison and evaluation of retrospective intermodality brain image
registration techniques", J. Comput. Assist. Tomogr., 21(4):554-566, 1997.


Woods RP, Grafton ST, Holmes CJ, Cherry SR, Mazziotta JC, "Automated image registration I: General methods and
intrasubject, intramodality validation", J. Comput. Assist. Tomogr., 22(1):139-152, 1998.


Woods RP, Grafton ST, Watson JDG, Sicotte NL, Mazziotta JC, "Automated image registration II: Intersubject
validation of linear and nonlinear models", J. Comput. Assist. Tomogr., 22(1):153-165, 1998.


VII. Questions or Comments? Contact Terry Oakes: troakes - at - wisc.edu

last updated 2008-09-09