Neuroimaging Data Processing/Print version

1. Introduction


2. Data


2.1 Storage

  1. Filetypes
  2. Organization

2.1.1 Filetypes


This section introduces the different file formats used for neuroimaging datasets and how to convert between them. Typically, image data are stored in a data file as either 8- or 16-bit integers. Besides the raw image data, there is usually metadata providing descriptive information about the subject, the type of image, the imaging parameters and the image dimensions. Over the history of neuroimaging, several different image formats have played important roles. The following sections discuss the major formats in detail.

2.1.1.1 DICOM


Digital Imaging and Communications in Medicine (DICOM)


Digital Imaging and Communications in Medicine (DICOM) is a standard for medical images and is normally the output file format of MRI scanners. DICOM generally stores each slice as a separate file; these files are conventionally named using numbers reflecting the slice number. Instead of separating the raw image and the metadata into two files, DICOM keeps them together in one file, and the header information can be extracted with specialized software. DICOM image data can be compressed to reduce file size, using lossy or lossless variants of the JPEG format as well as a lossless run-length encoding format; the compression information is stored in the header field "Transfer Syntax Unique Identification". Although DICOM is the standard output format of MRI scanners, it cannot be used directly by most analysis software, partly because its one-file-per-slice storage is unwieldy and consumes considerable space. DICOM data are therefore usually converted to an analysis format such as the popular NIfTI file format. DCM2NII is one of the programs capable of converting images from DICOM to the NIfTI format used by FSL, SPM, MRIcron and many other brain imaging analysis tools. DCM2NII is a stand-alone program distributed with MRIcron and is available for Windows, Linux x86, Mac OSX PPC and Mac OSX x86.
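
A typical conversion call could look like the following sketch (the directory paths are placeholders; see dcm2nii -help for the options available in your version):

dcm2nii -o /path/to/output/ /path/to/dicom/

This converts the DICOM files found in the input folder and writes the resulting NIfTI files to the output directory.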

Reference


1. Wiki-DICOM

2. http://www.mccauslandcenter.sc.edu/mricro/dicom/

3. http://www.mccauslandcenter.sc.edu/mricro/mricron/dcm2nii.html

2.1.1.2 NIfTI


Introduction


NIfTI (Neuroimaging Informatics Technology Initiative) is a data format that extends the ANALYZE 7.5 file format.

Compatibility with different software


AFNI

AFNI reads and writes NIfTI-1 (.nii) files directly, in addition to its native BRIK/HEAD format.

FSL

In FSL (version 4.1.9) the default data format is NIfTI.

Reference


1. http://nifti.nimh.nih.gov/nifti-1

2. http://fsl.fmrib.ox.ac.uk/fsl/fsl-4.1.9/fsl/formats.html

2.1.1.3 AFNI


AFNI datasets consist of a .HEAD[1] file (the header) and a .BRIK[2] file (the raw image data).

The to3d [3] tool is used to import DICOM data into AFNI's native format.
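
A minimal invocation might look like this sketch (the prefix and the DICOM file list are placeholders; see the to3d help page [3] for the many options controlling geometry and timing):

to3d -prefix OUTPUTNAME /path/to/dicom/*.dcm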


References

  1. http://afni.nimh.nih.gov/afni/doc/faq/40
  2. http://afni.nimh.nih.gov/afni/doc/faq/39
  3. http://afni.nimh.nih.gov/pub/dist/doc/program_help/to3d.html

2.1.1.4 GIfTI


GIfTI is the geometry format under the Neuroimaging Informatics Technology Initiative (NIfTI). Basically, it is the surface-file complement to the NIfTI volume-file format .nii. Programs that support the GIfTI format, and are thus able to exchange surface files with each other, include: FreeSurfer, Caret, BrainVISA, BrainVoyager, CRkit, VisTrails and AFNI.

In FreeSurfer, the GUI utilities freeview and tksurfer can open and display .gii files directly. The tool mris_convert can be used to convert between native FreeSurfer surface formats (like lh.pial) and GIfTI (lh.pial.gii) for exchange with other packages. See the output of mris_convert -help for examples.
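
For example, converting a FreeSurfer surface to GIfTI and back might look like this (file names are placeholders; the output format is inferred from the file name):

mris_convert lh.pial lh.pial.gii
mris_convert lh.pial.gii lh.converted.pial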

Reference

1. http://ftp.nmr.mgh.harvard.edu/fswiki/GIfTI

2.1.2 Organization


There also exists a standard for organizing and describing the datasets and outputs of neuroimaging experiments: the Brain Imaging Data Structure (BIDS)[1]. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.

Certain preprocessing workflows, such as fMRIprep, will require the data to be in BIDS format.
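
As a rough sketch (names are schematic; the BIDS specification is authoritative), a minimal BIDS dataset for one subject with an anatomical and a functional run could be organized as follows:

mystudy/
  dataset_description.json
  sub-01/
    anat/
      sub-01_T1w.nii.gz
    func/
      sub-01_task-rest_bold.nii.gz
      sub-01_task-rest_bold.json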


3. Processing


3.1 Steps


This is a collection of possible preprocessing steps. Which steps you include, and in which order, depends on your research question and, to an extent, on the software you use.

3.1.1 Discarding First Volumes



The signal in the first volumes of an EPI sequence can be off-scale because longitudinal magnetization has not yet reached steady state. These volumes are therefore often discarded and not used for analysis. A visual inspection of voxel time series can give a good idea of which volumes are affected.

Keep in mind that discarding the first volumes changes the number of volumes in your scan and shifts the timing information in stimulus onset files and physiological parameter files. For example, if you discard the first two volumes of a run with a TR of 2 s, all stimulus onsets must be shifted 4 s earlier.

AFNI


3dTcat is AFNI's function for discarding the first volumes. It basically copies the original data, leaving out the specified volumes. A simple command could look like this:

3dTcat -prefix OUTPUTFILE INPUTFILE'[2..$]'

which in this case writes all but the first two volumes (0 and 1) of the original data (INPUTFILE) into OUTPUTFILE. (The quotes around the sub-brick selector protect it from the shell.) This step is typically used to copy the interesting part of the raw data into the results directory for further preprocessing, so that the original data remain untouched. Note that volume numbers and timing information might have to be adjusted accordingly.

3dTcat can also be used for linear detrending via the -rlt option. Check the manual page [2] for more details and options.

In afni_proc.py this is a default step, though with the number of TRs to remove set to 0. The option to remove the first n TRs is

-tcat_remove_first_trs n

References

  1. Gorgolewski, K., Auer, T., Calhoun, V. et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci Data 3, 160044 (2016). doi:10.1038/sdata.2016.44
  2. AFNI program: 3dTcat: Output of -help

3.1.2 Despiking


Signal outliers, e.g. due to excessive head movement, can seriously affect the analysis, and motion correction algorithms are not built to deal with them. It therefore makes sense to inspect your data before starting the actual preprocessing and to remove signal outliers right away. You can plot voxel time series in different parts of the brain and check for huge peaks, but there are also automated methods that cover all voxels (see below). Another good start is to watch your volumes in a fast movie-like sequence, because big movements between two images will become readily apparent.

When outliers are detected, they should be dealt with. Removing whole volumes that contain outliers would break the time series, complicating the analysis of signal time courses and impairing the Fourier transformation used in later steps. Therefore, the affected data points are instead interpolated from neighbouring data.

AFNI


3dToutcount[1] calculates the number of outliers in each volume and writes the results to a file. Outliers are defined in terms of the allowed distance from the trend in multiples of the MAD (median absolute deviation), taking the number of TRs in the dataset into account; a typical limit is about 3.5*MAD. It makes sense to detrend the time series before looking for outliers. This can be done using the -polort nn option to detrend with a polynomial of order nn (the order is based on the duration of the first run: 1 + floor(duration/150 sec)) and the -legendre option to use Legendre polynomials (allowing polort > 3). For example:

3dToutcount -automask -fraction -polort 3 -legendre INPUTFILE > OUTCOUNTFILE.1D

OUTCOUNTFILE will thus contain the fraction of voxels per volume (within the automask) that exceed the outlier limits after detrending with 3rd-degree Legendre polynomials. To check for excessive outliers you can use:

1deval -a OUTCOUNTFILE.1D -expr 't*step(a-0.03)' | grep -v '0' 

returning all timepoints with more than 3% outliers, or:

1dplot OUTCOUNTFILE.1D

for visual inspection. In the example plot you can see a considerable fraction of outliers in the 222nd volume, which is also the one found by the 1deval command. After despiking (see below), this outlier will be much reduced.

 
[Figure: Plot of the outlier file, with an outlier in volume 222]

3dDespike[2] actually removes spikes from the 3D+time input dataset and writes a new dataset in which the spike values are replaced to fit a smooth curve. The spike cut values can be set via the option -cut c1 c2, where c1 is the threshold value of s for a 'spike' [default c1=2.5] and c2 is the upper range of the allowed deviation from the curve (s=[c1..infinity) is mapped to s'=[c1..c2), default c2=4). The order of the fitted curve can be adjusted via -corder. Though 3dDespike can be run without visually checking for outliers, it is advisable to do so before and after despiking, to keep track of your data and to detect possible oddities at the stage where they first occur.

3dDespike [options] INPUTFILE
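
A concrete call might look like this sketch (the cut values and output name are illustrative, not recommendations):

3dDespike -cut 2.5 4.0 -prefix DESPIKED_FILE INPUTFILE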

When running the outlier detection again on the despiked data you can see whether the outliers have been removed. For example, the same plot as above but after despiking shows that the outlier has been greatly reduced (notice the different y-range).

 
[Figure: Outliers after despiking; notice the lower y-range]

In afni_proc.py a despiking block can be included (it is not part of the default blocks):

-do_block despike

It is also possible to remove outliers in the regression. Note, however, that this apparently removes the affected volumes entirely and can thus lead to the broken time series problem mentioned above. Censoring TRs with more than n% outliers can be achieved by

-regress_censor_outliers 0.0n

References

  1. AFNI program: 3dToutcount: Output of -help
  2. AFNI program: 3dDespike: Output of -help

3.1.3 Slice Timing


Slice (Acquisition) Time Correction


In order to collect data from the entire brain, a typical pulse sequence might acquire 30 (or more) slices throughout the TR (e.g. 3 s). One approach is ascending/descending slice acquisition, in which slices are collected consecutively. However, most fMRI studies now use interleaved slice acquisition, in which the scanner first collects all odd-numbered slices and then all even-numbered slices (which avoids cross-slice excitation).

With consecutive or interleaved slice acquisition, adjacent parts of the brain are acquired at different time points within a given TR. In other words, the BOLD signal at different slices of the brain is sampled at different time points. Consequently, the signal (haemodynamic response) acquired in the last slice (late in the TR) appears to peak earlier than that in slices acquired early in the TR, even though the underlying activity is identical. Without correcting for the acquisition time of each slice, the time courses would seem to differ across slices.

But we would really like to have the signal for the whole volume from the same time point!

Should I do slice timing correction?

Slice timing correction is sometimes considered an optional preprocessing step, and whether to do it before or after realignment (which aims to overcome problems introduced by head motion) is also controversial. Typically, the argument is that with a short TR slice timing correction is not mandatory, because the haemodynamic response is sluggish. However, there are arguments against this, and Sladky et al. (2011) showed that in the worst case it does not change the results, but in most cases it improves them - especially for event-related designs.

Suggestions from SPM!

SPM provides several suggestions regarding these questions.

  • With a sequential slice order
    • trivial head movement: realignment first, then slice timing correction
    • significant head movement: slice timing correction first, then realignment
  • With an interleaved slice order
    • it is controversial whether slice timing correction should be used at all, especially when there is severe head motion
  • Performing dynamic causal modelling (DCM)
    • if DCM is going to be used, slice timing correction is a necessary step
Suggestions from FSL

Generally, they do not recommend using slice timing correction for GLM analysis. Instead, they recommend using temporal derivatives in the model. If you still opt for slice timing correction, they suggest doing it after motion correction (that is also how it is implemented in the FEAT workflow).

Solution: Temporal Interpolation


Correction of the slice-timing discrepancies is possible via temporal interpolation. Temporal interpolation refers to the estimation of the value of a signal at a time point that was not originally sampled, using data from nearby time points. In other words, interpolation uses information from nearby time points to estimate the amplitude of the MR signal, for every slice, at a single time point within the TR. Typically either the first time point of the TR, i.e. the time the first slice of the volume was acquired, or the middle time point, i.e. the time of the slice acquired at TA/2, is chosen as the reference. Choosing the first time point has the advantage of being straightforward, as e.g. GLM analyses can then be performed using the volume onset as timing information. Using the middle time point might be more accurate, since less interpolation is needed (the maximum time difference being TA/2 versus a full TA), but it has to be accounted for when modelling the signal/event onsets. The standard algorithm uses sinc interpolation between time points.
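
As a schematic illustration (not the exact formula of any particular package), sinc interpolation estimates the signal of a slice acquired at offset $\Delta t$ within each TR at the reference times $t_0 + nTR$ as

$$\hat{s}(t_0 + nTR) = \sum_{k} s(kTR + \Delta t)\,\operatorname{sinc}\!\left(\frac{(t_0 + nTR) - (kTR + \Delta t)}{TR}\right), \qquad \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$$

i.e. each corrected sample is a weighted sum of the measured samples of that slice, with weights falling off with temporal distance. In practice the sinc kernel is windowed (e.g. Hanning-windowed in FSL, see below) to limit its extent.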

Limitation


Any attempt to recover the missing information will be limited by the variability in the experimental data, especially variability that is not associated with the task.

Alternative approach


Create separate analysis models for each slice; slice timing is then handled as part of the data analysis instead of preprocessing.

Resting state fMRI


Slice time correction is the same for rsfMRI as for task-based fMRI when a standard EPI sequence is used. However, some rsfMRI studies use multiband sequences.[1] This makes slice time correction (a) potentially obsolete, because of the very short TRs, and (b) more tricky, because not all slices are acquired separately.

Implementation


AFNI


3dTshift is the command for slice time correction in AFNI. The -tzero nn option can be used to define the time offset to which all slices should be resampled. Alternatively, the option -slice n can be used to interpolate to the time of the nth slice of the volume. The default interpolation is Fourier, which is said (on the AFNI page) to be the most accurate but slowest; this can also be adjusted. See the manual page for more options and details [2]. A command for slice time correction to time point zero, for slices acquired interleaved in the z direction (alt+z) with a TR of 2.3 seconds, would look like this:

3dTshift -TR 2.3s -tzero 0 -tpattern alt+z -prefix OUTPUTFILE INPUTFILE

To verify the slice times (e.g. whether slices were acquired interleaved) you can execute the following command in the .results folder, which gives you the offsets for slices 0, 1, 2, ... (where FILENAME is the name of the current dataset, i.e. before slice timing but after e.g. discarding the first volumes; typically something like pb00.SUBJECTNAME.RUNNAME.tcat+orig):

3dinfo -VERB FILENAME | grep offsets

You can also use the graph display in the AFNI viewer to check slice timing. If the timing values (indx, value, at ...) jump back and forth as you slide through the slices in the image window showing the acquisition plane, the slices were acquired interleaved. After slice time correction there should be no more change in timing across slices, and when walking through the time series in the graph itself the time should increase by one TR from volume to volume. See the tutorial mentioned in the AFNI Software section of this book for a more detailed description.

 
[Figure: Checking slice timing using the time series graph]

In afni_proc.py this is a default component, aligning to the first TR. Somewhat confusingly, the default interpolation here is quintic. All parameters can of course be adjusted using

-tshift_interp
-tshift_align_to
SPM

Slice timing order

Slice order is a very important parameter in the slice timing procedure, and an accurate description of the order ensures the quality of the correction. If you are not sure about the order, the best way is to ask the person who ran the experiment and get first-hand information. On most scanners, the following slice orders for EPI will be available:

  1. Ascending: slices are acquired from the negative direction to the positive, expressed as [1:1:nslices] in SPM
  2. Descending: slices are acquired from the positive direction to the negative, expressed as [nslices:-1:1] in SPM
  3. Interleaved: slices are acquired alternately from odd and even positions; odd slices can be expressed as [1:2:nslices], even slices as [2:2:nslices]. Normally in this mode the acquisition is in ascending order

Furthermore, some machines may have specific slice order modes (such as center, reverse center, maximal interleaved, etc.); see SPM/Slice_Timing for more information.
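
As a concrete instance of the interleaved case (illustrative numbers): for 30 slices acquired interleaved ascending with odd slices first, the SPM slice order vector would be [1:2:30 2:2:30], i.e. 1 3 5 ... 29 followed by 2 4 6 ... 30.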

Slice timing parameters
Parameter | Value | Comments
Session | Batch of images | /
Num. of slices | Number of slices | Integer
TR | Repetition time | Float
TA | TA = TR - (TR/nslices) | Float
Slice order | See the section above | /
Filename prefix | Prefix for the corrected images, by default 'a' (e.g. a*.nii) | /
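
As a worked example (illustrative numbers): with TR = 2 s and 30 slices, TA = 2 - (2/30) ≈ 1.933 s.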

FSL

Slicetime correction in FSL is part of the FEAT[3] pre-stats methods. It works by using (Hanning-windowed) sinc interpolation to shift each time series by an appropriate fraction of a TR relative to the middle of the TR period.

In the FSL GUI select FEAT and then the Pre-stats tab. There you can change Slice time correction: from None to the appropriate option, depending on how your slices were acquired. If slices were acquired from the bottom of the brain to the top, select Regular up. If slices were acquired from the top of the brain to the bottom, select Regular down. If the slices were acquired in interleaved order (0, 2, 4 ... 1, 3, 5 ...), choose the Interleaved option. If slices were not acquired in regular order you will need to use a slice order file or a slice timings file. If a slice order file is to be used, create a text file with a single number on each line, where the first line states which slice was acquired first, the second line states which slice was acquired second, etc. The first slice is numbered 1, not 0. If a slice timings file is to be used, put one value per slice on each line of a text file. The units are in TRs, with 0.5 corresponding to no shift, so a sensible range of values lies between 0 and 1. [4]

 
[Figure: FEAT GUI Pre-stats tab, with interleaved slicetime correction selected]

The command line tool in FSL is slicetimer[5]:

slicetimer -i INPUTFILE [options]

Options you should consider specifying are

       -r,--repeat     Specify TR of data (default is 3s)
       -d,--direction  direction of slice acquisition (x=1,y=2,z=3) (default is z)
       --odd   use interleaved acquisition (default is ascending)
       --down  reverse slice indexing (default is ascending)
       --tcustom       filename of single-column custom interleave timing file
       --ocustom       filename of single-column custom interleave order file (first slice is referred to as 1 not 0)
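
A concrete call might look like this sketch (the TR value and file names are placeholders):

slicetimer -i INPUTFILE -o OUTPUTFILE -r 2.3 --odd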

References


Huettel, S. A., Song, A.W., & McCarthy, G. (2008). Functional Magnetic Resonance Imaging (2nd edition). Sinauer Associates, Inc: Sunderland, Massachusetts, USA.

Sladky, R., Friston, K., Trostl, J, Cunnington, R., Moser, E. & Windischberger, C. (2011). Slice-timing effects and their correction in functional MRI. NeuroImage, 58, 588-594

  1. http://practicalfmri.blogspot.de/2012/03/grappa-and-multi-band-imaging-and.html
  2. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dTshift.html
  3. http://fsl.fmrib.ox.ac.uk/fsl/fsl4.0/feat5/detail.html
  4. http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FEAT/UserGuide#Pre-Stats
  5. http://poc.vl-e.nl/distribution/manual/fsl-3.2/slicetimer/index.html

3.1.4 Realignment



Realignment / Motion Correction


When the head moves during an experimental run or between runs (within-run vs. between-run movement), some of the images will be acquired with the brain in the wrong location. Consequently, motion can cause a given voxel to contain signal from two different types of tissue, or lead to a loss of data (e.g. at the edges of the imaging volume). Furthermore, movement of the head will alter the uniformity of the magnetic field that has been shimmed for one particular head position. Finally, head motion can affect the timing and pattern of excitation, given that each excitation pulse targets one slice at a time while the head is moving through different slices during acquisition. The goal of motion correction is thus to adjust the series of images so that the brain is always in the same position. This is achieved by a process called co-registration.[1]

Correction by Co-registration


The general process of spatially aligning two image volumes is known as co-registration. For motion correction, successive image volumes in the time series are co-registered to a reference volume. For this purpose a rigid-body transformation is used, which assumes that the size and shape of the two volumes to be co-registered are identical (it is the same brain). One volume can then be superimposed on the other by a combination of three translations and three rotations; the rigid-body transformation therefore has three translational and three rotational parameters.
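
In matrix form (a standard formulation, not specific to any package), the rigid-body transformation maps each voxel coordinate $\mathbf{v}$ to

$$T(\mathbf{v}) = R\,\mathbf{v} + \mathbf{t}$$

where $R$ is a 3x3 rotation matrix built from the three rotation angles and $\mathbf{t}$ is the vector of the three translations; together these are the six parameters being estimated.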

Computer algorithms identify the set of parameters that provides the best match to the reference volume. Via a cost function (e.g. voxel-by-voxel subtraction of the absolute intensity values) one can quantify how well one image matches another. A perfect co-registration between the reference and the corrected volume would yield a difference of zero. After determining a set of realignment parameters, the original data have to be resampled in order to estimate the values that would have been obtained had there been no head motion. This process is called spatial interpolation.

What is the right reference?


The reference volume can be a particular volume of the session or run, e.g. the first volume, or it can be an average volume. The reason to use the first (usable) volume is that it is usually acquired directly after the anatomical scan, which should mean minimal movement relative to the anatomical scan. An average volume might however be preferable, because it contains more information and all volumes are roughly equally different from it. On the other hand, the average volume will be built from unaligned volumes, which is somewhat problematic. This can be overcome by two-pass averaging (e.g. in AFNI, see below), where all volumes are first realigned to an unaligned average volume, then another "aligned" average is built, and all volumes are aligned to this again.

Regressing out motion parameters


Alternatively, effects of motion can be corrected by removing motion-related components from the data, i.e. by including the calculated motion parameters as regressors in the GLM (e.g. six nuisance regressors corresponding to three directions of translation and three axes of rotation). In fact, many people do both: realignment in preprocessing, plus motion regressors to account for anything that realignment has not resolved. (See [2] for a comparison of different approaches, and [3] for the effect of including motion parameters in the regression.)
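
As a sketch of how the saved parameters could enter the regression in AFNI (file names, polynomial order and the stimulus model are placeholders; -ortvec is only one of several ways to supply nuisance regressors to 3dDeconvolve):

3dDeconvolve -input func_realigned+orig \
    -polort 2 \
    -ortvec PARAMETER.1D motion \
    -num_stimts 1 \
    -stim_times 1 onsets.1D 'BLOCK(20,1)' -stim_label 1 task \
    -bucket stats_output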

Resting state fMRI


In resting state fMRI, motion correction is taken especially seriously due to the global effect of head movements,[4] especially when comparing healthy controls to children or patients, who often show considerably more head movement. It is therefore standard to apply both realignment and motion regression. Recently, however, the concern has emerged that subtle movement artefacts (< 0.5 mm) that survive these corrections can bias functional connectivity comparisons between individuals and groups. It has therefore been suggested to exclude volumes that show micromovements ("scrubbing") and/or to include a mean movement parameter for each subject as a covariate in the group-level analysis.[5]

Controversies


For a debate about why simple motion correction might not be enough, see [6] [7]

Implementation


SPM

Realignment works as an intra-subject registration of time-series images using a least-squares approach and a rigid-body transformation. Within SPM8, four modes are provided for this purpose.

* Realign (estimate)

Estimation here refers to obtaining the optimal transformation (normally a rigid-body transformation with six parameters) from the individual images to the reference. There are several parameters in SPM to adjust the algorithm. After execution of this step, the header of each input file is changed to reflect the relative orientation, a file named like rp_*.txt is written, and the six motion parameters are plotted across the image series in the Graphics window. Setting up these parameters is not trivial, and SPM suggests keeping the default settings unless you are sure about their meaning. The following table lists the seven parameters and suggested settings.
Parameter | Meaning | Default
Quality | A value between 0 and 1. The larger the value, the higher the quality of the estimation, but the slower the calculation. This value reflects the percentage of voxels taken into account during the estimation; voxels that contribute little can be ignored. | 0.9
Separation | The distance (non-negative) between the points sampled in the reference space. The smaller the value, the finer the sampling and the more accurate the estimation, but the slower the computation. | 4
Smoothing | Smoothing applied to the images before the transformation parameters are estimated, to filter out noise. SPM suggests a 5 mm kernel for MRI data. | 5
Num passes | Determines the reference image: either the first volume or the mean of all volumes. As in AFNI, a two-pass procedure can be selected; SPM notes that this may not improve the realignment but takes twice as much computation time. | Register to mean
Interpolation | The higher the degree of interpolation (i.e. the more voxels considered), the more accurate the estimation, at the cost of speed. | 2nd Degree B-spline
Wrapping | The directions in which the volumes' values should wrap around. In MRI scans, the images wrap around in the phase-encode direction. SPM suggests that if the images have been spatially transformed, or if you are unsure, it is better to use no wrapping. | No wrap
Weighting | Voxels in a reference image can be weighted differently. Suggested whenever there is significant extra-brain motion or a region with severe noise. | No weight file

* Realign (reslice)

Once the transformation parameters are estimated, no new images have yet been generated. During reslicing, the series of registered images is matched voxel-to-voxel to the first selected image, producing a new series of images, named after their originals but with the prefix 'r'. Basically, reslicing is still an interpolation problem: points in the original space are used to infer values in the new space, and SPM provides several parameters to control the algorithm.

Parameter | Meaning | Default
Resliced images | Defines which images to reslice; the "mean image" is an additional image created by averaging all resliced ones | All Images + Mean Image
Interpolation | SPM suggests that simpler approaches (e.g. nearest neighbour) normally perform worse than higher-degree methods, which take more voxels into account | 4th Degree B-Spline
Wrap | The directions in which the volumes' values should wrap around (see above) | No wrap
Masking | Voxels that would have to be sampled from outside the original images are set to zero in all images | Mask images
Filename prefix | How to name the image files after reslicing | r

* Realign (Est & Res)

This is a combination of steps one and two; more information can be found above.

* Realign & Unwarp

FSL

In FSL, MCFLIRT is used for motion correction (MC). This function is integrated into the FEAT fMRI analysis protocol. The easiest way to perform MC is to click on 'FEAT FMRI analysis' and then check the "MCFLIRT" option in the Pre-stats tab.

 
[Figure: Click on FEAT FMRI analysis, then check MCFLIRT]

Behind this simple operation, several adjustable parameters are set to defaults automatically in the GUI; they can be set flexibly on the command line. These parameters include:

  • -cost: the cost function MCFLIRT uses to quantify the dissimilarity between the reference image and the target image. Five cost functions are available: mutual information (mutualinfo), correlation ratio (corratio), normalised correlation (normcorr), normalised mutual information (normmi) and least squares (leastsquares); the default is normcorr.
  • -bin: number of histogram bins, default 256
  • -dof: number of transform degrees of freedom, default 6
  • -refvol: number of the reference volume, default is the middle volume
  • -scaling: default 6.0
  • -smooth: default 1.0; controls the smoothing of the cost function
  • -rotation: specify a scaling factor for the rotation optimization tolerances
  • -verbose: a value no smaller than 0, default 0
  • -stages: the registration of the target image to the reference normally takes several stages. The default is 3-stage, which can be increased to 4-stage; different stages use different interpolation algorithms (trilinear, sinc or nearest neighbour).
  • -init: an initial transform matrix to apply to all volumes, default none
  • -gdt: run the search on gradient images
  • -edge: run the search on contour images
  • -meanvol: register the time series to the mean volume instead of the reference volume
  • -stats: produce variance and standard deviation images
  • -mats: save transformation matrices in the subdirectory outfilename.mat
  • -plots: save transformation parameters in the file outputfilename.par
  • -report: report progress to screen
  • -help: print this information

The command line equivalent of the GUI operation for motion correction is:

mcflirt -in input_func_data -out out_func_data_mcf -mats -plots -refvol 100

The result includes three plots showing the course of the six parameters over the time series, plus the mean displacement.

 
[Figure: Result plots of MCFLIRT with default settings]

AFNI


3dvolreg applies a rigid-body transformation using 6 parameters. The reference is defined using the -base option. Another helpful feature is -1Dfile, which creates a file containing the motion parameters; this can later be used for regression. The interpolation method can be adjusted as well; the default is the Fourier method, which is said to be the slowest but most accurate. For more details and options see the manual page [8].

To align to an average of the run, one can first use 3dTstat to calculate the mean and then use it as the base in the 3dvolreg command. The following example also saves the realignment parameters and the transformation matrix, which is useful if you want to combine all spatial transformations and apply them together at the end. As mentioned above, AFNI's 3dvolreg also offers the -twopass option; check the manual for more details.

3dTstat -mean -prefix MEAN_FILE INPUTFILE
3dvolreg -twopass -1Dfile PARAMETER.1D -1Dmatrix_save MATRIX.1D -base MEAN_FILE  -prefix OUTPUTFILE  INPUTFILE

One should always check the motion parameters visually. This can be done with the following command:

1dplot -volreg PARAMETERFILE.1D

There is another program called 1d_tool.py [9] which can summarize all motion parameters into a single time series. This helps to detect overall motion, and the result can also be plotted using 1dplot. The resulting "enorm" file can be used to censor big outliers (which cannot be handled by realignment) by including them in the regression. This can be done automatically using the option -regress_censor_motion 0.3 in afni_proc.py, where 0.3 is the threshold, which can of course be adjusted. The same can be achieved by setting the motion censor limit in uber_subject.py.
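
A sketch of how such a summary time series might be computed (options as documented in the 1d_tool.py help; file names are placeholders): take the Euclidean norm of the parameter derivatives, which is the quantity afni_proc.py censors on:

1d_tool.py -infile PARAMETER.1D -set_nruns 1 \
           -derivative -collapse_cols euclidean_norm \
           -write motion_enorm.1D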

It is also a good practice to look at the volumes in a video before and after the realignment to check if motion has been reduced. In the afni viewer, click on the image you want to see as a video and press v on your keyboard. Press Esc to stop the video mode.

In afni_proc.py, realignment is a standard step; by default it aligns to the 3rd volume of the first run, uses cubic interpolation, and prepares motion parameters for regression across all runs at once. To change these, use:

-volreg_align_to 
-volreg_interp

References

  1. Huettel, S. A., Song, A. W., & McCarthy, G. (2008). Functional Magnetic Resonance Imaging (2nd ed.). Sinauer Associates, Inc: Sunderland, Massachusetts, USA.
  2. Churchill, Nathan W., Oder, Anita, Abdi, Hervé, Tam, Fred, Lee, Wayne, Thomas, Christopher, Ween, Jon E., Graham, Simon J., Strother, Stephen C., Optimizing preprocessing and analysis pipelines for single-subject fMRI. I. Standard temporal motion and physiological noise correction methods, Human Brain Mapping 2012, Volume 33 Issue 3, doi:10.1002/hbm.21238
  3. Torben E. Lund, Minna D. Nørgaard, Egill Rostrup, James B. Rowe, Olaf B. Paulson, Motion or activity: their role in intra- and inter-subject variation in fMRI, NeuroImage, Volume 26, Issue 3, 1 July 2005, Pages 960-964, ISSN 1053-8119, doi:10.1016/j.neuroimage.2005.02.021
  4. Kevin Murphy, Rasmus M. Birn, Peter A. Bandettini, Resting-state fMRI confounds and cleanup, NeuroImage, Volume 80, 15 October 2013, Pages 349-359, ISSN 1053-8119, doi:10.1016/j.neuroimage.2013.04.001
  5. Chao-Gan Yan, Brian Cheung, Clare Kelly, Stan Colcombe, R. Cameron Craddock, Adriana Di Martino, Qingyang Li, Xi-Nian Zuo, F. Xavier Castellanos, Michael P. Milham, A comprehensive assessment of regional variation in the impact of head micromovements on functional connectomics, NeuroImage, Volume 76, 1 August 2013, Pages 183-201, ISSN 1053-8119, doi:10.1016/j.neuroimage.2013.03.004
  6. "Motion problems in fMRI: Receive field contrast effects." (2012 Oct 27). practiCal fMRI: the nuts & bolts http://practicalfmri.blogspot.de/2012/10/motion-problems-in-fmri-receive-field.html
  7. Sheltraw, D. & Inglis, B. (2012 Oct 12). "A Simulation of the Effects of Receive Field Contrast on Motion-Corrected EPI Time Series." arXiv 1210.3633 http://arxiv.org/abs/1210.3633
  8. AFNI program: 3dvolreg: Output of -help
  9. AFNI program: 1d_tool.py: Output of -help

3.1.5 Skull Stripping


Concept


Removal of the skull and other non-brain tissue (like dura and eyes) from anatomical images, which could otherwise complicate e.g. the coregistration and normalization steps. Skull stripping can be part of tissue segmentation (e.g. in SPM), but is mostly done by specialized algorithms that delineate the brain boundary. See [1] for a comparison of several brain extraction algorithms (BSE, BET, SPM, and McStrip), which suggests that all of them perform well in general but that results depend highly on the particular dataset.

Implementation


SPM

Skull stripping in SPM can be achieved by segmenting the anatomical scan and using a thresholded version of the sum of the grey and white matter probability maps to mask out the bias-corrected structural scan.

This masking can be done via SPM --> Util --> Image Calculator (ImCalc) directly in the batch editor, or by pressing the ImCalc button in the SPM GUI. The image calculator allows you to perform algebraic manipulations on a set of structural images.

 
[Figure: Skull stripping with SPM ImCalc]

Input Images = the segmentation-generated probability maps of grey matter (i1) and white matter (i2), plus the original anatomical scan (i3)
Expression = the threshold for masking out the skull based on the selected tissue probability maps, e.g. i3.*((i1+i2)>0.2); the higher you set the threshold, the more stringent the skull strip. For older versions of SPM (or OldSeg), you can use the expression i3.*(i1+i2)

FSL

BET and BET2
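
A minimal call of FSL's brain extraction tool might look like this sketch (the fractional intensity threshold -f is illustrative; smaller values yield larger brain estimates):

bet INPUTFILE OUTPUTFILE -f 0.5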

AFNI


3dSkullStrip[2] is an AFNI program to extract the brain from the surrounding tissue in T1-weighted MR images. The simplest command would be:

3dSkullStrip -input INPUTFILE

The process by default includes spatial normalization, some intensity normalization and repositioning of the brain, but to a certain extent those can be switched off. The actual skull stripping is a modified version of the BET[3] algorithm, expanding a spherical surface iteratively until it envelopes the brain. The output can be a skull stripped (masked) brain, the mask itself or different surface formats.
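
For example, to output the brain mask itself rather than the stripped brain, a call might look like this sketch (names are placeholders):

3dSkullStrip -input INPUTFILE -mask_vol -prefix MASKFILE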

3dSkullStrip is called by a set of other AFNI functions like align_epi_anat.py, @auto_tlrc and 3dSeg. Therefore, in afni_proc.py there is no separate skull-stripping block, but options can be adjusted as part of the respective blocks.

References

  1. Kristi Boesen, Kelly Rehm, Kirt Schaper, Sarah Stoltzner, Roger Woods, Eileen Lüders, David Rottenberg, Quantitative comparison of four brain extraction algorithms, NeuroImage, Volume 22, Issue 3, July 2004, Pages 1255-1261, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2004.03.010.
  2. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dSkullStrip.html
  3. Smith, Stephen M., Fast robust automated brain extraction, Human Brain Mapping 2002, Volume 17 Issue 3, page 143-155, http://dx.doi.org/10.1002/hbm.10062

3.1.6 Tissue Segmentation


Concept


Classification of the brain tissue into different tissue classes using high-resolution T1-weighted anatomical images. Usually gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) are separated, but more classes are possible. In theory, the different tissue types can be separated in a T1-weighted image based on their distinctive intensities. In practice, however, segmentation turns out to be much thornier, because noise leads to overlapping intensity spectra of the different tissue types, because of partial volume effects (single voxels containing several tissue types), and because of remaining field inhomogeneities. Tissue segmentation algorithms can therefore also take into account tissue probability maps (e.g. in SPM) and the geometric structure of the gray-white interface (e.g. FreeSurfer). For a comprehensive (though no longer current) review of tissue segmentation methods see [1].

Implementation


SPM

SPM offers two modules for this purpose: Segmentation and New Segmentation.

Unified segmentation in SPM is a one-click method that combines spatial normalization, bias field correction and tissue segmentation. The prior probability that any voxel contains gray or white matter is determined using a probabilistic atlas of tissue types; this prior probability is then combined with the data from the image to determine the tissue class. With this approach, two voxels with identical intensities can be assigned to different tissue types.

FSL

FSL’s Brain Extraction Tool BET (Smith et al., 2004)

AFNI


3dSeg [2] is an AFNI program for tissue segmentation. It allows adding a variety of global and voxelwise priors, but is not recommended for quantitative segmentation, e.g. for VBM. The simplest command (segmenting into CSF, GM and WM by default) is

3dSeg -anat ANATOMICAL_DATASET

In afni_proc.py, segmentation can be invoked through the (default) mask block via the following option (which apparently also implies -mask_segment_erode, i.e. the calculation of eroded tissue classes):

-mask_segment_anat yes

If WM and CSF are to be used for tissue-based regressors, there is an option which automates this process, called @ANATICOR[3]. It is strongly recommended to use this within afni_proc.py, where (in addition to -mask_segment_anat as above) you would add

-regress_anaticor

References

  1. L.P. Clarke, R.P. Velthuizen, M.A. Camacho, J.J. Heine, M. Vaidyanathan, L.O. Hall, R.W. Thatcher, M.L. Silbiger, MRI segmentation: Methods and applications, Magnetic Resonance Imaging, Volume 13, Issue 3, 1995, Pages 343-368, ISSN 0730-725X, http://dx.doi.org/10.1016/0730-725X(94)00124-L.
  2. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dSeg.html
  3. http://afni.nimh.nih.gov/pub/dist/doc/program_help/@ANATICOR.html

3.1.7 Coregistration and Normalization


Concept of Coregistration


Generally, coregistration refers to the spatial alignment of a series of images, either intra-subject or inter-subject, and is used in several steps of preprocessing. Most often, coregistration refers to the alignment of functional and structural images from the same subject, mapping functional information into anatomical space. Normalization refers to the coregistration of a subject's (mostly anatomical) image to a standard template, to overcome the variability in brain shape between subjects. (Realignment, in turn, coregisters intra-subject time-series volumes to counteract the subject's head motion.)

A typical workflow is to coregister an average EPI image to the subject's structural image through an affine transformation (i.e. linear, preserving proportions) and to warp the structural image to a template using a nonlinear transformation. The transformation from the second step can then be applied to the coregistered EPI from the first step to obtain the functional information in standard space.

Preprocessing of anatomical images


The anatomical images used in these procedures have to be preprocessed as well. Depending on the application and the mode of coregistration, this can include removing slow frequency drifts (see Data Quality and Temporal Filtering), field map correction, surface extraction and tissue segmentation (see also Surface extraction). Depending on the software, these steps can be implemented as part of the coregistration procedure.

Structural-functional coregistration


If the aim is to study how a subject's functional activations overlay the individual's own anatomy, the functional and structural images of the same brain have to be aligned. However, the difference between functional and structural images of the same brain is not trivial: in contrast to high-resolution structural images with clear region boundaries, functional images are normally blurry and suffer from geometric and intensity distortions. The basic idea of coregistration is similar to realignment, i.e. defining a cost function and minimizing the differences between the images. However, because of the distortions in the functional images, a rigid-body transformation with six parameters may not be sufficient. Depending on the complexity of the distortions, either a nine-parameter transformation, with three additional parameters accounting for scaling differences along the x-, y- and z-axes, or even more sophisticated algorithms may be needed. Moreover, because of the different contrasts of functional and structural images, mutual information is more suitable as a cost function than the sum of squared differences.

Spatial normalization


Human brains vary in size and shape. This structural variability poses an obstacle to studies of brain function across subjects: how can regional correspondence be determined from brain to brain despite this divergence? Attempts mainly focus on setting up a reference frame in a three-dimensional Cartesian coordinate space as a common space for different brains to be aligned to. The ultimate goal of spatial normalization is the spatial transformation of brains into such a common space, making them comparable to each other.

Templates


A template is a representative image with anatomical features in a coordinate space, which provides a target for individual images to be aligned to. The very first attempt to model a reference brain atlas was proposed by Jean Talairach in 1967. A set of anatomical landmarks was defined: the anterior commissure (AC), the posterior commissure (PC), the midline sagittal plane and the exterior boundaries of the brain at each edge. On the basis of these anchors, a three-dimensional coordinate space was built up: the anterior commissure (AC) is the origin; the direction from AC to PC is the Y axis; the Z axis lies in the longitudinal (interhemispheric or midsagittal) fissure, perpendicular to the Y axis; and the X axis is perpendicular to the YZ plane. Talairach coordinates make it possible to normalize any brain to this template by a well-defined procedure. However, the Talairach space template is not perfect; its limitations, notably the lack of an MRI scan basis and its unrepresentativeness of the population [1], called for further developments. Currently the most commonly used templates were proposed by the Montreal Neurological Institute (MNI) and are known as the MNI templates. Here, 241 normal MRI scans were taken as a basis for manually defining various landmarks, in order to identify a line very similar to the AC-PC line, as well as the edges of the brain. Each brain was scaled to match the landmarks to the equivalent positions on the Talairach atlas. Then 305 normal MRI scans (all right-handed, 239 M, 66 F, age 23.4 +/- 4.1) were matched to the average of the 241 Talairach-matched brains using an automated 9-parameter linear algorithm, and an average of the 305 brains was created, the MNI305.[2] The MNI305 was the first MNI template. The current standard MNI template is the ICBM152, the average of 152 normal MRI scans that have been matched to the MNI305 using a 9-parameter affine transform.

Spatial normalization methods


Landmark-based


For a landmark-based normalization, a cuboid in AC-PC space is defined, which requires additional landmarks specifying the borders of the cerebrum. The bounding box is then sub-divided by several sub-planes into 12 sub-cuboids. In a final Talairach transformation step, each of the 12 sub-cuboids is warped to match the corresponding part of the standard Talairach template by mathematical stretching and squeezing. Through this piecewise linear transformation, the difference between each brain and the Talairach template is minimized.

Volume-based


Volume-based normalization aims to maximize the overlap between the template and the individual target image, on the basis of the normalized correlation coefficient (NCC). Let the template image be denoted $X$, the target image $X'$, and let $T$ be a rigid-body transformation. The overlapped region between the transformed target image and the template can be denoted as:

$$\Omega = X \cap T(X') \qquad (1)$$

The overlapped voxels between template and target image give two intensity sets:

$$A = \{\,X(v) : v \in \Omega\,\}$$ (intensity set of overlapped voxels in the template)

$$B = \{\,X'(T^{-1}(v)) : v \in \Omega\,\}$$ (intensity set of overlapped voxels in the target image)

The normalized correlation coefficient (NCC) between the overlapped sets is:

$$NCC(A,B) = \frac{\sum_i (a_i - \bar{a})(b_i - \bar{b})}{\sigma_A\,\sigma_B} \qquad (2)$$

where $\bar{a}$ is the mean intensity of the voxels in $A$, $\bar{b}$ is the mean intensity of the voxels in $B$, and

$$\sigma_A = \sqrt{\textstyle\sum_i (a_i - \bar{a})^2} \qquad (3)$$

$$\sigma_B = \sqrt{\textstyle\sum_i (b_i - \bar{b})^2} \qquad (4)$$

Finally, spatial normalization is achieved by finding the transformation $T$ that maximizes the normalized correlation coefficient in equation (2).

Surface-based


Instead of using the full brain volume to perform the normalization, surface-based methods consider the cortical surface alone. These methods normally involve two steps: (1) extraction of the cortical surface from the anatomical image (see Surface extraction); (2) registration to a surface atlas. Surface features like sulci and gyri are taken into account, possibly enhancing coregistration accuracy. However, this approach is limited to research questions involving the cortical surface.

Quality control


It is necessary to check the performance of the normalization, with the aim of detecting outliers in the series of normalized images. The methods fall into three general categories.

  1. Check the overlap between the template and the normalized image using a coregistration strategy
  2. Inspect the average image of all normalized brains. A good normalization result is expected to look like a blurry version of a brain; if one brain image stands out, this implies a problem during its normalization
  3. View the series of normalized images as a movie (e.g. in FSLView); images that jump out are identified as normalization outliers

Implementations


SPM

Coregistration

In SPM, coregistration of functional and structural images can be done via the Coregister module in the GUI (fMRI section). Three options are provided: Coregister (Estimate), Coregister (Reslice) and Coregister (Est & Res). Your choice depends on the steps to follow. If only "Estimate" is selected, the transformation matrix for the coregistration is estimated and the transformation parameters are stored in the header of the source image, without actually being applied to the image. This is recommended if other spatial transformations, e.g. normalization, are planned after the coregistration. In that case the parameters of all the spatial transformations are combined and only applied to the data once, at the end, reducing the negative side effects of repeated resampling. "Estimate & Reslice", on the other hand, estimates the transformation and also applies it to the data, generating a new dataset with the prefix 'r' (the default, which can be adjusted). Choosing only "Reslice" assumes that you already have the transformation files to apply to your images, which you will be asked to provide.

Here, we select the "Coregister (Estimate)" option to do the transformation alone, as a normalization step will follow.

 
[Figure 1: Main GUI with the Coregister (Estimate) module selected]

When you click this button, a Batch Editor opens with several parameters to set. First, define the reference image by highlighting the Reference Image row and clicking the Select Files button at the bottom. Select the file you want to coregister to (depending on the direction you wish, e.g. the mean realigned functional image; you can filter the files in the Select Files menu on "mean") and click Done. Do the same for the Source Image parameter, this time choosing the file you want to coregister to your reference image (e.g. the anatomical image). Note that the reference image is assumed to stay constant, and the source image will be matched to it. Under Other Images, you can choose all other functional images if they shall also be coregistered. Under Estimation Options there are four adaptable parameters, which you can adapt or leave at their default settings as we did here. To learn more about any parameter, click on its row and read the description in the box at the bottom. Click the green triangle in the menu bar of the Batch Editor to run the coregistration.

 
[Figure 2: Batch Editor]

After the calculation has finished, both the source and the reference image are displayed in the Graphics window.

 
[Figure 3: Graphics window displaying the results]

Normalisation

For normalisation choose the module Normalise in the main fMRI GUI. Again there are three options: Normalise (Estimate), Normalise (Write) and Normalise (Est & Write). Though the naming differs slightly, the meaning is much the same as for the Coregister module (see above; Reslice is replaced by Write). If normalisation is the final spatial transformation step, as in our example, Est & Write should be chosen, since we want to estimate the transformation to template space and apply both transformations, the coregistration and the normalisation, to the functional volumes.

 
[Figure 1: Main GUI with Normalise (Est & Write) selected]

Once the Batch Editor has opened, highlight the row Data and click on New Subject in the box below to create a new subject (or several if you like). You have to select three (sets of) files by highlighting the respective row, clicking Select Files, choosing your files and clicking Done. Remember that there is a filter option in the Select Files menu, making your life much easier. First select the Source Image, the image for which the transformation into standard space is calculated, usually the high-resolution anatomical image. Next define the Images to Write, i.e. the images the transformations shall be applied to. Here this will be all our realigned (prefix r) functional images, because we want them warped into standard space. Then choose the Template Image. The Select File button will automatically take you to SPM's template folder, where you find the MNI templates. Choose the one that matches your source image, in this case T1. Finally, you can adjust all other parameters as needed. It makes sense to adjust Voxel sizes to the voxel size of the images you write (for functional images often [3 3 3]; highlight the respective row and click Edit Value below). You cannot achieve a higher resolution than that of your original images; lower voxel size values will only increase the size of your data.

 
[Figure 2: Batch Editor]

Now you can run the batch by pressing the green triangle. It is important that in the previous step we coregistered the anatomical image, which we now use to calculate the transformation to standard space, to the mean functional image. So now we apply both transformations (functional-anatomical and anatomical-template) to all functional images and thus warp them into standard space. After running, SPM will have created a new set of data, prefixed with w (for warped), which you can view via the Display or Check Reg button in the main GUI. The latter enables you to display more than one dataset at a time; unfortunately no overlay is supported for visually inspecting the alignment of functional, structural and template images. You might want to use other software like FSLView instead.

AFNI


Oblique data

For coregistration problems with oblique data, see AFNI

Coregistration

The align_epi_anat.py script computes the alignment between two datasets, typically an EPI and an anatomical structural dataset, and applies the resulting transformation to one or the other to bring them into alignment.[3] The transformation is calculated to align the anatomical to the EPI data, but the result can be applied in either direction, as specified. Basic inputs are the anatomical and EPI datasets, which EPI volume should serve as the base of the alignment (0/mean/median/max/volume#), and which direction of alignment is required (anat2epi/epi2anat), e.g.:

align_epi_anat.py -anat2epi -anat ANATOMICALDATA -epi EPIDATA -epi_base mean

In afni_proc.py the align block is not set by default but can be included by

-do_block align 

By default this means anat2epi registration, which can be changed with the following option:

-volreg_align_e2a

To check whether your coregistration has worked, choose the aligned (and skull-stripped) anatomical as Underlay and the EPI as Overlay (or vice versa, if you aligned the other way around) and decrease the opacity of the overlay (default is 9).

 
[Figure: Visualization of the coregistration in AFNI]

Normalisation

@auto_tlrc [4] is a script to transform an anatomical dataset to match a template in Talairach space.

@auto_tlrc -base TEMPLATE -input ANATOMICAL

Note that this script also performs skull stripping unless instructed otherwise (-no_ss). It can also be applied to EPI data, not only to anatomical data.

In afni_proc.py, normalization can be achieved by including -do_block tlrc, which by default uses TT_N27+tlrc as the base and affine registration (this can be changed with -tlrc_NL_warp).

-volreg_tlrc_warp 

additionally tells afni_proc.py to apply that transformation to the EPI data at the volreg step (implemented as part of 3dAllineate[5]). The output of the volreg block will then be in Talairach space.

Combine spatial transformations

Spatial transformations in AFNI are applied one after the other. There is, however, a way to apply all your spatial transformations in one step (see the SPM Implementation part for why to do this). The way to do this in AFNI is to actually run the different spatial transformations but to use only the transformation matrices that are calculated along the way, not the datasets that are created. Such a matrix is a default output of align_epi_anat.py, whereas to obtain the realignment matrix you have to tell 3dvolreg explicitly to store it (as mentioned in the Realignment chapter). @auto_tlrc cannot store the matrix, but it can be extracted from the created +tlrc file. An example workflow would be this:

Calculate the realignment to the mean image while storing the transformation matrix:

3dTstat -mean -prefix mean_func_deob func_tshift+orig
3dvolreg -twopass -1Dfile realign_par.1D -1Dmatrix_save realign_mat.aff12.1D -base mean_func_deob+orig  -prefix func_realigned  func_tshift+orig

Calculate the coregistration of the anatomical to the slice-time corrected EPI series (NOT the realigned one), saving the skullstripped anatomical image for normalisation (the last three options are required here since all these steps have already been done):

align_epi_anat.py -anat2epi -anat anat+orig -epi func_tshift+orig -epi_base mean -save_skullstrip -volreg off -tshift off -deoblique off 

Normalise the skullstripped anatomical image produced above (NOT the coregistered one) to Talairach space, skipping skullstripping (-no_ss) because that has already been done:

@auto_tlrc -base TT_N27+tlrc -input anat_ns+orig -no_ss

Catenate the transformation matrices. The first one is the header data from the normalised anatomical, the second the matrix from the coregistration. These two have to be inverted (-I) because we now want to coregister the EPI images to the anatomical and then to Talairach space, which is the opposite direction from that in which the matrices were calculated. The last matrix is the realignment transformation. total_transform... is the name of the new combined transformation matrix:

cat_matvec -ONELINE anat_ns+tlrc::WARP_DATA -I anat_al_mat.aff12.1D -I realign_mat.aff12.1D > total_transform_mat.aff12.1D

Apply the combined transformation to the EPIs, warping them into standard space in a single step. -mast_dxyz is set to 3 to prevent upsampling of the EPI data, which here has a resolution of only 3x3x3 mm:

3dAllineate -base anat_ns+tlrc -1Dmatrix_apply total_transform_mat.aff12.1D -mast_dxyz 3 -float -prefix func_trans func_tshift+orig

References


Comparison of coregistration software http://brainimaging.waisman.wisc.edu/~oakes/teaching/coreg_software_comparison.html

Huettel, S. A., Song, A.W., & McCarthy, G. (2008). Functional Magnetic Resonance Imaging (2nd edition). Sinauer Associates, Inc: Sunderland, Massachusetts, USA.

  1. https://en.wikipedia.org/wiki/Jean_Talairach#Limitations
  2. http://www.nil.wustl.edu/labs/kevin/man/answers/mnispace.html
  3. http://afni.nimh.nih.gov/pub/dist/doc/program_help/align_epi_anat.py.html
  4. http://afni.nimh.nih.gov/pub/dist/doc/program_help/@auto_tlrc.html
  5. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dAllineate.html

3.1.8 Field Map Correction


Magnetic field inhomogeneities


Tissue differences in the brain result in static field inhomogeneities which can lead to signal distortion in the fast and sensitive EPI sequences used for fMRI. Such distortions become severe in regions where air-filled sinuses border on bone or tissue, such as the frontal and temporal lobes. The result can be geometric distortion of the image or even signal drop-outs, making it difficult to achieve an accurate registration between an activation map calculated from an fMRI time series and an undistorted anatomical image (which is acquired with a less sensitive sequence). Field inhomogeneities can be minimized during the scan through shimming: by adjusting the first-, second-, and higher-order magnetic field gradients generated by the shimming coils, field distortions can be corrected. However, it might be necessary or advisable to account during preprocessing for inhomogeneities that could not be overcome by shimming.[1]

Field mapping


A field map is an image of the intensity of the magnetic field across space. It can be obtained by acquiring two images of the signal phase with slightly different echo times. The difference between the phase images is proportional to the strength of the field at any given location. If the field is completely uniform, the phase difference induced by the different echo times will be the same in all voxels, and the resulting image will be a uniform gray. A field map can be acquired as part of the scanning procedure or on a phantom and then used to correct for any geometric distortions.[2]
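
In FSL, for example, such a field map can be computed from a phase-difference image and a magnitude image with fsl_prepare_fieldmap; a sketch, where the file names are placeholders and 2.46 ms is a typical echo time difference on Siemens scanners:

bet mag mag_brain
fsl_prepare_fieldmap SIEMENS phase mag_brain fmap_rads 2.46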

Bias field correction


If there is no prior knowledge of the magnetic field distribution, intensity variations can be estimated from the collected images themselves. The image is assumed to be a combination of the true, unbiased data and the distorting effects of the bias field. Correction algorithms, commonly relying on Markov random field models, are used to determine the most likely pattern of distortions and to reconstruct the assumed true image by removing the calculated distortions from the biased image. Bias field correction can also take tissue segmentation into account, in that the different tissue types are modelled and the distributions of intensities within the different tissues are equalized.[3]
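
As an illustration, FSL's FAST estimates the bias field as part of its segmentation model; a minimal call that writes a bias-corrected ("restored") image might be (file names are placeholders):

fast -B -o anat_corrected anat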

Implementation


SPM provides two ways of correcting geometric distortion. When the distortions occur along the anterior-posterior axis and the intrinsic symmetry of the brain is not affected, Realign & Unwarp is designed for such benign distortions. However, if the distortions occur along other axes, such as the right-left orientation, the FieldMap toolbox and the VDM utility should be applied to overcome these more severe distortions.
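
In FSL, the corresponding correction of EPI data is done with FUGUE (see the FSL tools overview). A sketch of an unwarping call, assuming a field map in rad/s; the dwell time and unwarp direction are placeholders that depend on the acquisition:

fugue -i epi --dwell=0.00058 --loadfmap=fmap_rads --unwarpdir=y- -u epi_unwarped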

References

  1. Hutton, Chloe; Bork, Andreas; Josephs, Oliver; Deichmann, Ralf; Ashburner, John; Turner, Robert (2002). "Image Distortion Correction in fMRI: A Quantitative Evaluation". NeuroImage. 16 (1): 217–40. doi:10.1006/nimg.2001.1054. PMID 11969330.
  2. Huettel, S. A., Song, A.W., & McCarthy, G. (2008). Functional Magnetic Resonance Imaging (2nd edition). Sinauer Associates, Inc: Sunderland, Massachusetts, USA.
  3. Guillemaud, R.; Brady, M. (1997). "Estimating the bias field of MR images". IEEE Transactions on Medical Imaging. 16 (3): 238–51. doi:10.1109/42.585758. PMID 9184886.


3.1.9 Physiological Noise Regression


Physiological Noise


Physiological noise can seriously confound the signal measured by fMRI. While very low frequency fluctuations due to vascular/metabolic oscillations (< 0.01 Hz) are usually removed by Temporal Filtering, high-frequency confounds from breathing (~0.3 Hz) or heartbeat (~1.0 Hz) are harder to deal with. In standard fMRI sequences these frequencies are often undersampled (according to the Nyquist theorem) and thus aliased into lower frequencies [1]. Furthermore, breathing and cardiac rate fluctuate at low frequencies, affecting cerebral blood flow (through CO2 vasodilation and blood pressure, respectively) and thus eventually the BOLD response.[2][3] Physiological fluctuations can therefore appear in the frequency range of interest in resting-state fMRI (< 0.1 Hz), possibly introducing spurious connectivity between simultaneously measured time series. A problem to bear in mind is that the physiological fluctuations and the neural activity of interest might be coupled temporally. In that case, removing the former would remove (at least part of) the latter as well.

Physiological Noise Regression


Time series representing physiological noise can be included as nuisance regressors in a GLM. The part of the signal that is explained by the nuisance regressors is removed from the residuals (which are the data of interest for rsfMRI). By removing a structured, non-random part of the residuals, nuisance regressors also render them more normally distributed or "white". This helps fulfill a basic assumption of the GLM, namely independently, identically and normally distributed error terms. Unfortunately, the more regressors in the GLM, the fewer its degrees of freedom (= observations (timepoints) - regressors), leading to more conservative significance testing of the model and of single parameter weights.

Obtaining Regressors


There are different ways to arrive at reasonable regressors for physiological noise. The most straightforward is to acquire physiological measurements during the scan, using a chest strap for the respiratory rate and a pulse oximeter for the cardiac rate. These measures are then fed into the analysis software to model appropriate nuisance regressors. However, getting the timing of the MRI and the physiological data straight can be very tricky.

If these measures are not available, the physiological noise time series can be modelled in different ways. Since the signal fluctuations of interest should primarily be located in the grey matter, one approach is to extract physiological noise time series from voxels located in white matter or the ventricles.[4] Another method, called CompCor[5], focuses on the voxels showing the highest variability, subsequently reducing the resulting time series to the dominating components using PCA. It is also possible to apply ICA to separate noise components from components of interest. However, this always relies on the assumption that one is able to correctly tell them apart. Automated ICA-based physiological noise removal methods like CORSICA [6] or PESTICA [7] should therefore be handled with care.

Resting State fMRI


As mentioned above, resting state analysis is especially vulnerable to physiological artifacts, since they often manifest in the frequency range of interest (0.01 - 0.1 Hz). [8] Therefore, physiological noise regression is far more common in resting state than in task-based fMRI, which often relies on simple temporal filtering.

Implementation


AFNI


3dretroicor[9] uses an image-based method called RETROICOR [10], which estimates the phase of the cardiac and respiratory cycles at which each imaging slice is acquired and models a low-order Fourier series of this phase data for regression. The default order is 2 but can be adjusted via the -order option, as can the threshold for the detection of R-wave peaks in the input via -threshold. The input consists of 1D -resp / -card files. A command which also outputs the calculated respiration and cardiac phase waves for control (-respphase/-cardphase) would look like this:

3dretroicor -resp resp_file -card card_file -cardphase cardphase.1D -respphase respphase.1D INPUTFILE

The phase outputs can be inspected with 1dplot
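
For instance, using the file generated by the command above:

1dplot cardphase.1D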

NOTE that the algorithm uses slice timing information for its calculations. Therefore any steps destroying slice timing information, e.g. 3dvolreg motion correction, should be avoided before this step. Also, when initial volumes have been discarded, the corresponding time periods have to be discarded from the physiological data as well.

In afni_proc.py, this is not a default step but can be included using the -do_block ricor option. The default solver is OLS, with polynomial order 2*runlength. The respective options to remove n timepoints from the physiological data [default = 0] and to apply the PHYSFILES in the regression are:

-ricor_regs_nfirst n 
-ricor_regs PHYSFILE

FSL

PNM [11] also provides cardiac and respiratory RETROICOR regressors of the requested order and additionally allows cardiac-respiratory interaction regressors to be specified. Furthermore, it is possible to obtain alternative physiological regressors like RVT (respiration volume per time) [12], heart rate [13] and CSF regressors. The manual strongly recommends providing scanner triggers (1/volume) to ensure timing accuracy between the scanning and the physiological data. Input is required as a single text file with different columns representing the cardiac, respiratory and trigger information, along with a file describing these columns. It is possible, and recommended, to manually check the accuracy of the peak detection.

SPM

SPM8 does not have a built-in method for dealing with physiological noise. However, there are a couple of extensions that can be used for this purpose; visit http://en.wikibooks.org/wiki/SPM/Physio. The DRIFTER toolbox for SPM can also be applied without external physiological data if the temporal resolution allows it. [14]

References

  1. Pallab K. Bhattacharyya, Mark J. Lowe, Cardiac-induced physiologic noise in tissue is a direct observation of cardiac-induced fluctuations, Magnetic Resonance Imaging, Volume 22, Issue 1, January 2004, Pages 9-13, ISSN 0730-725X, http://dx.doi.org/10.1016/j.mri.2003.08.003.
  2. Rasmus M. Birn, Jason B. Diamond, Monica A. Smith, Peter A. Bandettini, Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI, NeuroImage, Volume 31, Issue 4, 15 July 2006, Pages 1536-1548, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2006.02.048.
  3. Karin Shmueli, Peter van Gelderen, Jacco A. de Zwart, Silvina G. Horovitz, Masaki Fukunaga, J. Martijn Jansma, Jeff H. Duyn, Low-frequency fluctuations in the cardiac rate as a source of variance in the resting-state fMRI BOLD signal, NeuroImage, Volume 38, Issue 2, 1 November 2007, Pages 306-320, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2007.07.037.
  4. Andreas Weissenbacher, Christian Kasess, Florian Gerstl, Rupert Lanzenberger, Ewald Moser, Christian Windischberger, Correlations and anticorrelations in resting-state functional connectivity MRI: A quantitative comparison of preprocessing strategies, NeuroImage, Volume 47, Issue 4, 1 October 2009, Pages 1408-1416, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2009.05.005.
  5. Yashar Behzadi, Khaled Restom, Joy Liau, Thomas T. Liu, A component based noise correction method (CompCor) for BOLD and perfusion based fMRI, NeuroImage, Volume 37, Issue 1, 1 August 2007, Pages 90-101, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2007.04.042.
  6. Vincent Perlbarg, Pierre Bellec, Jean-Luc Anton, Mélanie Pélégrini-Issac, Julien Doyon, Habib Benali, CORSICA: correction of structured noise in fMRI by automatic identification of ICA components, Magnetic Resonance Imaging, Volume 25, Issue 1, January 2007, Pages 35-46, ISSN 0730-725X, http://dx.doi.org/10.1016/j.mri.2006.09.042.
  7. Erik B. Beall, Mark J. Lowe, Isolating physiologic noise sources with independently determined spatial measures, NeuroImage, Volume 37, Issue 4, 1 October 2007, Pages 1286-1300, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2007.07.004.
  8. Kevin Murphy, Rasmus M. Birn, Peter A. Bandettini, Resting-state fMRI confounds and cleanup, NeuroImage, Volume 80, 15 October 2013, Pages 349-359, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2013.04.001.
  9. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dretroicor.html
  10. Gary H. Glover, Tie-Qiang Li, David Ress, Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR, Magnetic Resonance in Medicine, Volume 44, Issue 1, 2000, Pages 162-167, doi: 10.1002/1522-2594(200007)44:1<162::AID-MRM23>3.0.CO;2-E
  11. http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/PNM
  12. Rasmus M. Birn, Jason B. Diamond, Monica A. Smith, Peter A. Bandettini, Separating respiratory-variation-related fluctuations from neuronal-activity-related fluctuations in fMRI, NeuroImage, Volume 31, Issue 4, 15 July 2006, Pages 1536-1548, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2006.02.048.
  13. Catie Chang, John P. Cunningham, Gary H. Glover, Influence of heart rate on the BOLD signal: The cardiac response function, NeuroImage, Volume 44, Issue 3, 1 February 2009, Pages 857-869, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2008.09.029.
  14. http://becs.aalto.fi/en/research/bayes/drifter/

3.1.10 Temporal Filtering


Concept


Temporal filtering aims to remove or attenuate frequencies within the raw signal that are not of interest. This can substantially improve the SNR. The tricky part is to decide which frequencies are of interest and which are noise.

Frequencies of interest: In task-based fMRI, mostly frequencies around the task frequency are of interest (e.g. when presenting a stimulus every 5 s, one would expect a response signal with a frequency around 0.2 Hz).

Noise frequencies: Frequencies that are usually considered noise are very low frequency trends (below ~0.01 Hz, linear or non-linear) resulting from scanner drift, coil interference or slow vascular/metabolic oscillations, as well as high-frequency physiological fluctuations like breathing (~0.3 Hz) or heartbeat (~1.0 Hz).

Fourier Transform


Temporal filtering relies on the Fourier transform: any series of data can be expressed as a linear sum of sine waves of different frequencies, amplitudes and phases. Thus a signal of intensity by time (or space) can be converted into a frequency spectrum, reflecting the contribution of each frequency to the signal (power by frequency). The frequency range covered by the spectrum depends on the sampling rate (TR); the highest identifiable frequency is 1/(2 TR), the Nyquist frequency. For example, with a TR of 2 s the Nyquist frequency is 0.25 Hz. The contribution of higher frequencies will be aliased, i.e. artificially expressed, into lower frequencies.

Detrending


Prior to frequency-based filtering, the data are usually also detrended using linear, quadratic or higher-order polynomial algorithms, or sometimes wavelets (see [1] for a comparison). Detrending is usually implemented as part of the temporal filtering method.
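
As a sketch, an explicit quadratic detrend in AFNI could be run with 3dDetrend (file names are placeholders; see also the Implementation section below):

3dDetrend -polort 2 -prefix func_detrended func+orig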

Different Filters


High-pass filters


High-pass filters cut off frequencies below a certain threshold, which of course should lie below the lowest frequency of interest. Since fMRI noise is disproportionately expressed in low frequencies, high-pass filtering can also help whiten the noise (i.e. flatten the noise spectrum), which helps fulfill GLM assumptions. A very broad rule of thumb for task-based fMRI is to set the cutoff period to 2-3 times the task cycle length. The default in most software ranges between 100-128 s, which is appropriate for trial lengths between 8-45 s.

Low-pass filters


Low-pass filters attenuate high-frequency noise and are sometimes also referred to as temporal smoothing. They are often constructed not as a mere cut-off but in the shape of a canonical HRF, in order to enhance signals of that shape (matched filter theorem). They also introduce a high degree of autocorrelation into the signal, which was once put forward as a way to deal with serial autocorrelation in the signal (called precoloring).[2] But see Controversy.

Band-pass / band-stop filters


If only a limited frequency range is of interest, the signal can be filtered to precisely this band (band-pass). On the other hand, if a certain frequency is known to reflect only noise, it can be removed specifically from the signal (band-stop).

Resting State


Resting-state fMRI is mostly interested in low-frequency fluctuations (< 0.1 Hz), but see [3]. Therefore, most rsfMRI studies use a band-pass filter on frequencies between 0.01 Hz and 0.1 (or 0.08) Hz. But see Controversy.

Controversy


While high-pass filtering is used in most studies, low-pass filtering is controversial. It has been shown to decrease detection sensitivity without really increasing specificity.[4][5] By imposing a high level of autocorrelation on the signal, it violates the temporal independence assumptions that are used for statistical testing. In rsfMRI it has been argued that band-pass filtering introduces spurious correlations, which should be accounted for by correcting for temporal filtering [6].

Physiological noise should in any case be addressed using Physiological noise regression, especially since it is often undersampled and aliased into lower frequencies. However, most resting-state studies additionally rely on low-pass filtering (band-pass filtering, to be precise) to be even more on the safe side in terms of removing high-frequency noise.

Autocorrelation is often dealt with by means of pre-whitening in task-based fMRI. This has also been suggested for resting-state fMRI, where low-pass filtering might actually exacerbate the problem of autocorrelation[7]. However, for oscillating signals like BOLD time series a degree of autocorrelation is actually very natural, and eliminating autocorrelation eliminates important information from the time series. Prewhitened time series would only show correlation if the two signals were correlated at lag zero. This is rather unrealistic, given that we are looking at BOLD responses, which arguably differ across brain regions even if the underlying neural signals were perfectly correlated.

Implementation


AFNI


3dBandpass[8] is used for FFT-based temporal filtering in AFNI.

3dBandpass [options] fbot ftop dataset

Where fbot is the lowest and ftop the highest frequency of the band-pass, in Hz. To only low- or high-pass, one can use fbot=0 or ftop=99999 (or any ftop above the Nyquist frequency), respectively. However, the mean and the Nyquist frequency are always removed. By default the procedure includes a check for initial transients and constant, linear and quadratic detrending, which can be switched off (-notrans, -nodetrend). Other options are despiking (-despike), including other time series to also band-pass filter and orthogonalize the original time series against (-ort, -dsort), blurring inside a mask (-blur, -mask / -automask), and the calculation of local principal vectors.

NOTE: Detrending is also part of 3dToutcount (see Data Quality) and can be called explicitly with 3dDetrend[9].

3dFourier[10] is a less complex alternative for FFT filtering, where low- and/or high-pass frequencies are included as options -lowpass f / -highpass f. It has fewer options than 3dBandpass; however, it can be used to create a notch filter, which does not seem to be possible with 3dBandpass:

3dFourier [options] dataset

In afni_proc.py, use the regression block for temporal filtering:

-regress_bandpass fbot ftop
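
For the resting-state band mentioned above, for example, this would typically be:

-regress_bandpass 0.01 0.1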

FSL

Highpass temporal filtering uses a local fit of a straight line (Gaussian-weighted within the line to give a smooth response) to remove low-frequency artefacts. This is preferable to sharp-rolloff FIR-based filtering as it does not introduce autocorrelations into the data.

Using the GUI: select FEAT FMRI analysis --> Pre-stats --> Temporal filtering --> High Pass --> check or uncheck.

 
High Pass Filter

Lowpass temporal filtering reduces high-frequency noise by Gaussian smoothing (sigma = 2.8 s), but also reduces the strength of the signal of interest, particularly for single-event experiments. It is not generally considered to be helpful, so it is turned off by default. By default, the temporal filtering that is applied to the data will also be applied to the model.

On the command line, filtering can be accomplished using fslmaths with the option -bptf <hp_sigma> <lp_sigma>, where both sigmas are specified in volumes, not seconds. Either sigma may be set to a negative value to disable that filter. Example:

   fslmaths data.nii.gz -bptf -1 2.5 filtered_data.nii.gz
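
A common rule of thumb for the conversion is sigma in volumes = cutoff in seconds / (2 * TR). For instance, applying only a 100 s highpass cutoff at a TR of 2 s (a sketch with placeholder file names):

fslmaths data.nii.gz -bptf 25 -1 highpass_data.nii.gz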


References


http://mindhive.mit.edu/node/116

  1. Jody Tanabe, David Miller, Jason Tregellas, Robert Freedman, Francois G. Meyer, Comparison of Detrending Methods for Optimal fMRI Preprocessing, NeuroImage, Volume 15, Issue 4, April 2002, Pages 902-907, ISSN 1053-8119, http://dx.doi.org/10.1006/nimg.2002.1053
  2. K.J. Friston, O. Josephs, E. Zarahn, A.P. Holmes, S. Rouquette, J.-B. Poline, To Smooth or Not to Smooth?: Bias and Efficiency in fMRI Time-Series Analysis, NeuroImage, Volume 12, Issue 2, August 2000, Pages 196-208, ISSN 1053-8119, http://dx.doi.org/10.1006/nimg.2000.0609.
  3. Boubela Roland Norbert, Kalcher Klaudius, Huf Wolfgang, Kronnerwetter Claudia, Filzmoser Peter, Moser Ewald , Beyond noise: using temporal ICA to extract meaningful information from high-frequency fMRI signal fluctuations during rest, Frontiers in Human Neuroscience, Volume 7, 2013, http://www.frontiersin.org/Journal/Abstract.aspx?s=537&name=human_neuroscience&ART_DOI=10.3389/fnhum.2013.00168
  4. Skudlarski, P., Constable, R.T., Gore, J.C. (1999), "ROC analysis of statistical methods used in functional MRI: individual subjects", NeuroImage, doi: 10.1006/nimg.1999.0402
  5. Della-Maggiore, V. et al. (2002), "An empirical comparison of SPM preprocessing parameters to the analysis of fMRI data", NeuroImage, doi: 10.1006/nimg.2002.1113
  6. Catherine E. Davey, David B. Grayden, Gary F. Egan, Leigh A. Johnston, Filtering induces correlation in fMRI resting state data, NeuroImage, Volume 64, 1 January 2013, Pages 728-740, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2012.08.022.
  7. P Christova and S M Lewis and T A Jerde and J K Lynch and A P Georgopoulos, True associations between resting fMRI time series based on innovations, Journal of Neural Engineering, Volume 8, Issue 4, 2011, http://stacks.iop.org/1741-2552/8/i=4/a=046025
  8. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dBandpass.html
  9. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dDetrend.html
  10. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dFourier.html

3.1.11 Smoothing


Spatial Smoothing


Basically, a signal can be decomposed in the time domain, the space domain or the frequency domain, and smoothing operates on the (spatial) frequency domain of the signal. Imagine that a batch of signal is represented as a vector of frequencies versus the power at each frequency; the curve may rise jumpily to its maximum, with the peak followed by a long undershoot. Smoothing filters out the high-frequency regions, with the aim of increasing the signal-to-noise ratio of the images. Low-pass filters remove high-frequency components while retaining the low ones, whereas high-pass filters do the opposite; so for smoothing, a low-pass filter is needed. The most popular way of smoothing is to convolve the three-dimensional image with a three-dimensional Gaussian filter. The degree of smoothing is given by the full width at half maximum (FWHM) of the Gaussian distribution, which is related to the standard deviation (σ) by FWHM = 2σ√(2 ln 2) ≈ 2.355 σ. The larger the standard deviation (σ), the flatter the curve and the stronger the smoothing.
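
On the command line, kernels are often specified via sigma rather than FWHM; e.g. FSL's fslmaths expects sigma in mm, so a 6 mm FWHM kernel corresponds to sigma = 6 / 2.355 ≈ 2.55 mm (a sketch, file names are placeholders):

fslmaths func.nii.gz -s 2.55 func_smoothed.nii.gz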

What is a good smoothing?

Intuitively, the larger the smoothing, the more blurred the image becomes, and the more information is lost concomitantly. So what is a suitable smoothing that strikes a balance between retaining high resolution and removing artifacts? Depending on the purpose of smoothing, there are different answers. First of all, if the aim is to reduce noise in the image, the sensible choice of smoothing kernel is no larger than the activation signals one intends to find. This is straightforward to understand, because no tadpole would be caught using a net with dolphin-scale holes. The purpose of smoothing may instead lie in reducing the effects of anatomical variability that spatial normalization failed to correct. In this circumstance, the optimal smoothing will depend on the degree of variability in the series of images you are comparing. Last but not least, smoothing may be a prerequisite for the statistical analysis (for instance, Gaussian random field theory requires a certain degree of spatial smoothness); then it is appropriate to select an FWHM of twice the voxel dimensions.[1]


Resting State fMRI


In graph-based analyses, the application of spatial smoothing is controversial. It has been argued that the increased spatial dependency introduced by smoothing might confound local connectivity strength, especially for the small ROIs used in voxel-based parcellation.[2]

Implementation


FSL

via GUI:

 
Click on FEAT FMRI analysis, select Pre-stats and set your smoothing kernel

Start the FSL GUI, click on FEAT FMRI analysis and select Pre-stats. In the field Spatial smoothing FWHM (mm) you can set your smoothing kernel. By default it is set to 5 mm. If you don't want to smooth at all, set the value to 0. Bear in mind that the extent of smoothing (that makes sense) depends on the size of the underlying activation area: if you are looking for a large area of activation, you can increase the kernel (e.g. to values of 10-15 mm); if you are looking for a small activation area, you should reduce the kernel below the default 5 mm.

AFNI


The 3dmerge function contains an option for spatial smoothing, -1blur. There are different parameters you can adjust (e.g. to use rms or sigma thresholds); check the manual page [3]. For a Gaussian smoothing kernel of 4 mm FWHM (the default) applied to the whole dataset, the command would be:

3dmerge -1blur_fwhm 4.0 -doall -prefix OUTPUTFILE INPUTFILE

In afni_proc.py, blur is a default block with the following settings, which can however be changed:

-blur_filter -1blur_fwhm
-blur_size 4
-blur_in_mask no

References

  1. Poldrack, Russell A. Mumford, Jeanette A. Nichols, Thomas E. Handbook of functional MRI data analysis. 2011
  2. Satoru Hayasaka, Paul J. Laurienti, Comparison of characteristics between region-and voxel-based network analyses in resting-state fMRI data, NeuroImage, Volume 50, Issue 2, 1 April 2010, Pages 499-508, ISSN 1053-8119, http://dx.doi.org/10.1016/j.neuroimage.2009.12.051.
  3. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dmerge.html

3.1.12 Surface Extraction


Concept


The topographic, columnar and laminar organization of the cerebral cortex cannot readily be deduced from 3D volumes or slice representations of the brain. Cortical surface maps provide an alternative way to represent the cortex and its intrinsically two-dimensional structure. Usually a T1-weighted anatomical 3D MRI volume is used to produce an individual surface map. The anatomical surface can be co-registered to a surface atlas (see Coregistration and Normalization) and functional images can be transformed and displayed on the anatomical surface.

Different surface representations are available, such as the pial surface, the gray matter - white matter boundary, inflated surfaces or spheres. The advantage of inflated surfaces is that information buried in the sulci becomes visible and it becomes clearer which regions are actually adjacent within the tissue and which only get close to each other because of cortical folding. Spheres are designed to make a surface-based coordinate system available.

Surface extraction routines comprise a set of different steps, which are described in the following. A complete routine was first described by Fischl and Dale [1] [2] and is implemented in FreeSurfer[3], which is probably the most prominent software for surface extraction.
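
In FreeSurfer, the complete routine can be run with a single recon-all call; a minimal sketch (subject name and input file are placeholders):

recon-all -s subj1 -i T1.nii.gz -all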

NOTE: Some of these steps can also be used in volume-based analyses. For instance, tissue segmentation might be used in order to perform normalization using tissue probability maps (as done in SPM, for example) or before a voxel-based morphometry analysis. Skullstripping, too, is sometimes included in the preprocessing stream without subsequent surface extraction. And again, the different steps should undergo visual quality control, since mistakes in the surface extraction will carry over into the whole analysis.

Isosurfaces


The surface extraction described here is different from computing isosurfaces from a volume. The latter much more simply extracts one intensity level of the volume, but does not take tissue boundaries into account, cannot be inflated, etc. Isosurfaces can be helpful for visualizations but do not help to delineate surface topography.

Surface extraction routine


Correcting for tissue inhomogeneities


As with volume-based approaches, it is important to correct the anatomical images for magnetic field inhomogeneities. This is described in more detail in Field map / bias field correction.

Skullstripping


Removal of non-brain tissue, see Skull Stripping

Tissue segmentation


Classification of the brain tissue into different tissue classes, usually gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). See Tissue Segmentation

Cutting Planes


In order to gain a representation of each hemisphere and to remove subcortical structures, two cutting planes are established: a sagittal cut along the corpus callosum and a horizontal cut through the pons.

Constructing the surface


Filling interior holes using connected-component analysis, then tessellating a triangular mesh over the gray-white boundary of each hemispheric volume and deforming the mesh to produce a smooth representation of the gray-white interface and the pial surface. It must be ensured that there are no topological defects such as handles (two separate parts of the surface connected by a tunnel) or patches (holes in the surface), so that the mesh achieves a spherical topology.

Smoothing


For cortical surfaces, spatial smoothing is only carried out after the surface extraction. This helps to minimize the blurring of fMRI signal across sulci or gyri.

Implementation


AFNI


SUMA (SUrface Mapping with AFNI) is a program for surface-based functional imaging analysis within the AFNI framework. It allows viewing three-dimensional cortical surface models and mapping volumetric data onto them. With SUMA, AFNI can simultaneously and in real time render functional imaging data in four modes: slice, graph (time series), volume and surface, with direct links between them. Find out more about SUMA here: [4][5], or simply try it (given you have installed AFNI, which contains the SUMA source code as well) by entering the AFNI environment and typing

suma

In afni_proc.py the block surf can be included. Important options to specify are the anatomical volume which should be aligned with the surface, and the surface specification file(s) (one or both hemispheres):

-do_block surf 
-surf_anat ANAT_DSET
-surf_spec spec1 [spec2]

There are also a couple of AFNI functions that can be used to carry out the different single steps of the above routine; see the respective chapters.

IsoSurface [6] is a program for isosurface extraction from a volume (see above).

Other helpful functions for handling volumes and surfaces are 3dVol2Surf, ConvertSurface, CompareSurfaces and Surf2VolCoord.

References


Huettel, S. A., Song, A.W., & McCarthy, G. (2008). Functional Magnetic Resonance Imaging (2nd edition). Sinauer Associates, Inc: Sunderland, Massachusetts, USA.

  1. Anders M. Dale, Bruce Fischl, Martin I. Sereno, Cortical Surface-Based Analysis: I. Segmentation and Surface Reconstruction, NeuroImage, Volume 9, Issue 2, February 1999, Pages 179-194, ISSN 1053-8119, http://dx.doi.org/10.1006/nimg.1998.0395.
  2. Bruce Fischl, Martin I. Sereno, Anders M. Dale, Cortical Surface-Based Analysis: II: Inflation, Flattening, and a Surface-Based Coordinate System, NeuroImage, Volume 9, Issue 2, February 1999, Pages 195-207, ISSN 1053-8119, http://dx.doi.org/10.1006/nimg.1998.0396.
  3. http://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial
  4. http://afni.nimh.nih.gov/afni/suma
  5. http://afni.nimh.nih.gov/pub/dist/doc/program_help/suma.html
  6. http://afni.nimh.nih.gov/pub/dist/doc/program_help/IsoSurface.html

3.2 Tools


This section covers the basics you need to know about the different software packages: where and how to get them, what their general structure is, and what is special about each. See the separate sections on the actual processing steps for details of their implementation within the different packages.

3.2.1 SPM


Structure


Statistical Parametric Mapping refers to the construction and assessment of spatially extended statistical processes used to test hypotheses about functional imaging data.
SPM is Matlab based and has been designed for the analysis of brain imaging data sequences. The current release is designed for the analysis of fMRI, PET, SPECT, EEG and MEG.
The SPM software package can be extended by different toolboxes.

How to start SPM


After you have installed SPM on your computer:

  1. Set the SPM environment by typing SPM in capital letters in your terminal (if you have more than one version on your computer, select the current one)
  2. Type matlab in small letters in your terminal to open MATLAB
  3. To start the SPM GUI, type spm in small letters in MATLAB (adding fmri will open the fMRI mode directly)


 
SPM GUI

After you have opened the SPM interface for fMRI by pressing the fmri button, it should look like this:

 
SPM GUI fMRI

and you can start your data processing right away.

What is a Batch?


A batch is a perfect tool to run all your processing steps at once without MATLAB programming. For a basic understanding of SPM it is important to know that each data processing step is called a "module". There are several different modules, e.g. for temporal (slice timing), spatial (realignment, normalisation, smoothing) or statistical (fMRI or factorial design specification, model estimation, contrast specification) processing of your MRI data. A batch describes which modules should be run on what kind of data and how these modules depend on each other. Compared to running each processing step interactively, batches have a number of advantages:

  1. Documentation: Each batch can be saved as a MATLAB script. All parameters (including default settings) are included in this script. Thus, a saved batch contains a full description of the sequence of processing steps and the parameter settings used.
  2. Reproducibility: Batches can be saved even if not all parameters have been set. For a multi-subject study, this allows creating template batches. These templates contain all settings which do not vary across subjects. For each subject, they can be loaded and only the subject-specific parts need to be completed.
  3. Unattended execution: Instead of waiting for a processing step to complete before entering the results in the next one, all processing steps can be run in the specified order without any user interaction.
  4. Multiple batches: Multiple batches can be loaded and executed together.

To generate a Batch you can click on the Batch button on the SPM fmri GUI:

 
Batch Editor

By clicking SPM in the upper taskbar you can select the processing steps you need for your data. The parameters which need to be defined individually besides the default settings are indicated by a cross (<--X).

 
How to create a batch!

When you have specified the order of your processing steps, it is really important to set the dependencies between the single steps. Otherwise the data which was processed in the previous step cannot be used in the next one.

 
How to set the processing step dependencies!
Further reading

SPM Manual http://www.fil.ion.ucl.ac.uk/spm/doc/manual.pdf

SPM Wikibook https://en.wikibooks.org/wiki/SPM

This website offers a series of lecture slides on how to use SPM for fMRI design and analysis (scroll down to find the slides for download): http://neurometrika.org/BasicSPM12

3.2.2 FSL


Overview of FSL


FSL is a comprehensive library of analysis tools for fMRI, MRI and DTI brain imaging data. It can be used either via the main FSL GUI or via the command line. The advantage of the GUI lies in its simplicity; the drawback is less flexibility compared to the command line. FSLView, the display tool, can be used completely separately from processing and analysis. Currently FSL only accepts input files in NIfTI format, so DICOM files need to be converted to NIfTI after acquisition and before processing. The conversion from DICOM to NIfTI can be accomplished by several tools, such as dcm2nii from MRIcron or dcmstack. In the GUI, nine popular modules are defined; depending on the input file types, they fall into functional-oriented, structural-oriented or diffusion-oriented modules. The structural-oriented functions include BET brain extraction, FAST segmentation and FLIRT linear registration; the functional-oriented functions involve SUSAN noise reduction, FEAT FMRI analysis and MELODIC ICA; the DTI-oriented functions incorporate FDT diffusion. Besides these functional modules, there is a simulator called POSSUM MRI simulator, a software tool to produce realistic simulated MRI and fMRI images or time series. POSSUM (Physics-Oriented Simulated Scanner for Understanding MRI) includes tools for pulse sequence generation, signal generation, noise addition and image reconstruction.
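
For instance, a DICOM-to-NIfTI conversion with dcm2nii might look like this (paths are placeholders):

dcm2nii -o /path/to/output /path/to/dicom_folder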

 
The GUI of FSL with nine modules

How to launch FSL


Assuming all the environment variables are set up correctly for FSL, there are two ways to launch it: from the GUI or from the command line.

User GUI

Just type fsl in the console to get access to the main GUI, from which you can reach the modules by clicking on the options.

 fsl

Depending on your operating system, you might need to perform some additional steps before typing this start command into your console/terminal:

Operating system | Preparing steps
Linux/Unix | To tell your terminal that the following commands refer to the FSL toolbox, type FSL in capital letters. Afterwards you will get a message that your terminal will accept fsl commands.
Windows | FSL does not run under Windows, so you will need additional software to emulate another operating system. FSL runs perfectly under a virtual machine like VMware Player[1] or VirtualBox[2].
Mac OS X | No prior steps needed.

If you want to jump to a specific module directly, just type in its name starting with a capital letter, e.g. Melodic for MELODIC ICA:

 Melodic
Command-line

If you want to launch a function from the command line, just type the function name in lowercase, along with the necessary arguments to the function.

 melodic -i inputfile

FSL function overviews

Structural image processing
Function | GUI/command | Aim | Online document
BET | Both | Brain extraction | http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/BET
FAST | Both | Tissue segmentation | http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FAST
FIRST | Command | Subcortical segmentation | http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FIRST
FLIRT | Both | Linear registration | http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FLIRT
FNIRT | Command | Nonlinear registration | http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FNIRT
FUGUE | Command | EPI distortion correction | http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FUGUE
SIENA | Command | Atrophy analysis | http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/SIENA
FSL-VBM | Command | Grey matter density | http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSLVBM
Functional MRI data analysis

In FSL, functional images can be analyzed in two integrated workflows, rather than step-by-step as in SPM and AFNI. The analysis for an experiment can be set up in less than a minute and finished in 5-20 minutes for a first-level session, producing a web-page analysis report including colour activation images and time-course plots of data vs. model.

FEAT is based on general linear modelling (GLM). It allows you to describe the experimental design; a model is then created that should fit the data, telling you where the brain has activated in response to the stimuli. In FEAT, the GLM method used on first-level (time-series) data is known as FILM. FILM uses a robust and accurate nonparametric estimation of time-series autocorrelation to prewhiten each voxel's time series; this gives improved estimation efficiency compared with methods that do not pre-whiten.

 
GUI for FEAT with six tabs

MELODIC is a model-free analysis which uses Independent Component Analysis (ICA) to decompose a single 4D data set, or multiple ones, into different spatial and temporal components. MELODIC can pick out different activation and artefactual components without any explicit time-series model being specified.

 
GUI for MELODIC with six tabs
FSLUTILS

FSLUTILS is a set of useful command-line utilities which allow the conversion, processing, etc. of Analyze and NIfTI format data sets. Many of them work on both 3D and 4D data. For each of these programs, type just the program name to get usage help. The commands can be broadly divided into four categories depending on their aims.

1. Mathematical manipulation of images

Function | Aim | Example command
fslcc | Run cross-correlations between every volume in one 4D data set and every volume in another | fslcc input1.nii input2.nii (measure similarities in ICA outputs)
fslfft | Output the Fast Fourier Transform (or its inverse) of a complex input volume | fslfft input.vol output.vol
fslmaths | Simple but powerful program for mathematical manipulation of images | fslmaths func_data.nii -bptf 25.0 -1 func_data_tempfilt.nii (bandpass temporal filtering)
fslmeants | Output the average time series of a set of voxels | fslmeants -i input.nii -o mean.txt -m mask (preliminary step for seed-based correlation)
fslroi | Extract a region of interest (ROI) from an image | fslroi input.nii output_roi.nii 0 1 (extract the first volume)

2. File combination and splitting

Function | Aim | Example command
fslcomplex | Allow 3D or 4D complex image files to be split into or constructed from their real components | fslcomplex -complexsplit source dest 1 3
fslinterleave | Interleave two inputs to form a combined image; the combination is along the Z axis only | fslinterleave input1 input2 output
fslmerge | Concatenate image files into a single output; the concatenation can be in time or along the X, Y or Z axis | fslmerge -t outfile infile1 infile2 (concatenate files in time)
fslslice | Split a 3D file into lots of 2D files along the Z axis | fslslice volfile slicefile
fslsplit | Split a 4D file into lots of 3D files in time or along the X, Y or Z axis | fslsplit -t input outfileprefix (split volumes along the time axis)

3. Header-related utilities

Function | Aim | Example command
fslcpgeom | Copy certain parts of the header information from one image to another with identical file type | fslcpgeom source dest
fslcreatehd | Create a new image header along with a zero-intensity data image | fslcreatehd
fslhd | Report every field of an Analyze or NIfTI header | fslhd input.nii
fslinfo | Report a basic subset of an Analyze or NIfTI header | fslinfo input.nii
fslval | Report a particular parameter from an image header | fslval input.nii dim4 (number of volumes)

4. Orientation-related Utilities

Function | Aim | Example command
fslreorient2std | Reorient an image to match the orientation of the standard template images (MNI152); note that this only rotates by 90, 180 or 270 degrees and cannot substitute for coregistration | fslreorient2std input output
fslorient | Advanced tool that reports or sets the orientation information in a file | fslorient -forceradiological input
fslswapdim | Advanced tool that re-orders the data storage to permit changes between axial, sagittal and coronal slicing | fslswapdim
FSLView

FSLView can be launched from the terminal by typing fslview, optionally followed by images to load; the available options are listed with:

 fslview --help

1. Cursor view

 
FSL cursorview

Box 1: shows and controls the cursor position in voxels

Box 2: shows and controls the cursor position in mm

Box 3: shows and controls the current volume in a 4D dataset

Box 4: shows the value of the voxel at the current cursor position

2. FSL image overlay

 
FSL image overlay

Box 1: The layer marked with chevrons to the right of it is the "main layer". This layer is the first one loaded during any given session and cannot be removed from the list, as much of the viewer's display capabilities are determined from this layer's attributes.

Box 2: The "Visibility" checkbox controls whether or not a given layer is visible; the "Lock" checkbox determines whether a given layer can be edited; the "Transparency" slider determines how the selected layer blends with the layers below it.

Resources and references

  1. http://www.vmware.com/products/player/
  2. https://www.virtualbox.org/

3.2.3 AFNI


Structure


AFNI is a command-line based tool, i.e. all the processing steps are defined in a script, using commands and methods built into AFNI, which is then run on the data. You can, but do not have to, put together your scripts from scratch; afni_proc.py and uber_subject.py (see below) are helpful tools for assembling your processing script. AFNI does have a graphical viewer for images and graphs. Once you have installed AFNI (check the tutorial below or [1]) and entered the AFNI environment by typing AFNI, you can call this GUI by typing

afni

in your terminal. It will try to find and open AFNI- or NIfTI-format datasets in your current folder, and will otherwise open a default anatomical image. The tutorial mentioned below gives a good introduction to using the viewer to inspect your data before, during and after processing.

What is afni_proc.py?


afni_proc.py is a command-line program used to generate processing scripts (proc scripts) for single-subject analysis. Generated scripts are in tcsh syntax and written to be easily read and modified. Learn about the syntax you need here [2], where you will find a full list of all the possible building blocks and options along with very helpful example scripts. You get the same output when typing (in the AFNI environment):

afni_proc.py 

In the AFNI part of the Implementation subsections of this wikibook there is some very basic information on the respective proc.py commands you need for each step. The tutorial mentioned below is also a good introduction to proc.py. One helpful thing to know is how to include non-default processing blocks in proc.py (if you read the help on proc.py it will become clearer what that means): you can include single blocks in their default position by

-do_block

followed by the name of the block. If you want to include the processing blocks in a specific order, use

-blocks

followed by the blocks in their respective order.
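
For example, a sketch that inserts the optional despike block at the start of the pipeline by listing all blocks explicitly (subject ID and dataset name are placeholders):

afni_proc.py -subj_id subj1 -dsets func+orig \
    -blocks despike tshift volreg blur mask regress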

What is uber_subject.py?


uber_subject.py is a graphical user interface to afni_proc.py for creating processing scripts and running the analysis. When starting with AFNI it might be easiest to create scripts using uber_subject.py and then explore the generated scripts and modify them as you wish. However, this tool is still rather new and might not be fully developed.

The uber_subject.py GUI is quite self-explanatory. You can find some help here [3], or just open and explore it using this command (given that AFNI is installed and the AFNI environment is active):

uber_subject.py
 
AFNI's uber_subject.py GUI

Data formats


AFNI can handle NIfTI files (nii and nii.gz). However, the native data format consists of .HEAD and .BRIK files (see the AFNI filetype section), and when using NIfTI files there can be problems with the header information. To import DICOM data into HEAD/BRIK format, use the to3d command [4].

For example, for functional data with a TR of 2300 ms, 257 volumes and 37 slices acquired first in the z direction and then in time (zt), in interleaved order (alt+z):

to3d -prefix OUTPUTFILE -time:zt 37 257 2300 alt+z INPUTFILES

and for anatomical data:

to3d -anat -prefix OUTPUTFILE INPUTFILES

Make sure in both cases that INPUTFILES refers to all DICOMs, e.g. by using a wildcard like data/func/run001/*.dcm or path_to_data/*.dcm.

Oblique data in AFNI


If your data was acquired obliquely, this information will be stored in the DICOM header. AFNI recognizes this and will complain. You should apply 3dWarp[5] to the oblique data (to both the anatomical AND the functional data, in case both are oblique). See also [6]:

3dWarp -deoblique -prefix OUTPUTFILE INPUTFILE

Otherwise, probably due to different mosaic centers, your coregistration may end up looking something like this:

 
Coregistration can fail with oblique data
Further reading

Documentation on the single commands http://afni.nimh.nih.gov/pub/dist/doc/program_help//index.html

A tutorial with sample data is provided here: http://afni.nimh.nih.gov/pub/dist/edu/data/CD.expanded/AFNI_data6/FT_analysis/tutorial/

The same tutorial, each step documented in a video: http://andysbrainblog.blogspot.de/2013/08/afni-start-to-finish-playlist.html

This is a really helpful "what you need to know when you are absolutely new to AFNI": http://afni.nimh.nih.gov/pub/dist/doc/program_help/README.afnigui.html

And a ppt presentation which you might also find helpful as a start: http://ngp.usc.edu/files/2013/06/Gimbel_AFNI.pdf

References

  1. http://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht00_inst/html/
  2. http://afni.nimh.nih.gov/pub/dist/doc/program_help/afni_proc.py.html
  3. http://afni.nimh.nih.gov/pub/dist/doc/program_help/uber_subject.py.html
  4. http://afni.nimh.nih.gov/pub/dist/doc/program_help/to3d.html
  5. http://afni.nimh.nih.gov/pub/dist/doc/program_help/3dWarp.html
  6. http://afni.nimh.nih.gov/sscc/dglen/Obliquity

4. References


5. Glossary


BOLD


blood-oxygen-level-dependent

EPI

(gradient-echo) echoplanar imaging

fMRI


functional magnetic resonance imaging

MRI

magnetic resonance imaging

rotation


turning the entire image around an axis in space: x-y, x-z, y-z (in the absence of translation)

TE

echo time

TR

repetition time

translation


moving the entire image volume along an axis in space: x-,y-, and z-axes (in the absence of rotation)

volume


one 3-dimensional image, i.e., one timepoint in a 4-dimensional fMRI image

voxel


one 3-dimensional unit (length × width × height) in a 3-dimensional image (analogous to a pixel in a 2-dimensional image)
