= Longitudinal segmentation of hippocampal subfields =
== UPDATE - An enhanced version of this module, which also segments the nuclei of the amygdala, can be found in FreeSurfer 7. ==
'''See [[HippocampalSubfieldsAndNucleiOfAmygdala]] for the functionality found in [[https://surfer.nmr.mgh.harvard.edu/fswiki/ReleaseNotes|FreeSurfer 7]].'''
'''''This functionality corresponds to !FreeSurfer 6.0'''''
''Author: Juan Eugenio Iglesias''
''E-mail: e.iglesias [at] ucl.ac.uk''
''Rather than directly contacting the author, please post your questions on this module to the FreeSurfer mailing list at freesurfer [at] nmr.mgh.harvard.edu''
If you use these tools in your analysis, please cite:
* [[http://www.nmr.mgh.harvard.edu/~iglesias/pdf/Neuroimage_2016_longitudinal.pdf|Bayesian longitudinal segmentation of hippocampal substructures in brain MRI using subject-specific atlases]]. Iglesias JE, Van Leemput K, Augustinack J, Insausti R, Fischl B, and Reuter M. NeuroImage, 141:542-555, 2016.
See also: [[HippocampalSubfields]], [[BrainstemSubstructures]]
=== Contents ===
1. Motivation and General Description
2. Installation
3. Usage
4. Gathering the volumes from all analyzed subjects
5. Frequently asked questions
6. Test data
=== 1. Motivation and General Description ===
Longitudinal analysis greatly reduces the confounding effect of inter-individual variability by using each subject as his or her own control. Our original [[https://surfer.nmr.mgh.harvard.edu/fswiki/HippocampalSubfields|hippocampal subfield segmentation tool]] was designed to analyze individual datasets. You could still use it to analyze longitudinal data, by assuming that the different time points of each subject were independent, but such cross-sectional analysis of longitudinal data disregards important information, i.e., the fact that the scans are of the same subject.
This tool jointly segments the hippocampal subfields in a set of MRI scans from the same subject acquired at different time points. The method relies on a subject-specific atlas, and treats all time points the same way in order to avoid processing bias. We have shown in the paper cited above that this strategy increases the robustness of the method and yields more sensitive subfield volumes. It is important to remark that, like the [[https://surfer.nmr.mgh.harvard.edu/fswiki/LongitudinalProcessing|main FreeSurfer longitudinal pipeline]], this method does not assume any specific trajectory for the segmentations or corresponding volumes. It is up to the user to incorporate such information in subsequent analyses, e.g., with a [[https://surfer.nmr.mgh.harvard.edu/fswiki/LinearMixedEffectsModels|linear mixed effects model]].
=== 2. Installation ===
The hippocampal subfield module requires the Matlab R2012 runtime; the runtime is free, and therefore '''NO MATLAB LICENSES ARE REQUIRED TO USE THIS SOFTWARE.'''
Please note that, if you have already installed the runtime for the [[https://surfer.nmr.mgh.harvard.edu/fswiki/HippocampalSubfields|hippocampal subfield module]] or the [[https://surfer.nmr.mgh.harvard.edu/fswiki/BrainstemSubstructures|brainstem module]], you do not need to install it again.
Instructions for the installation of the runtime can be found here:
https://surfer.nmr.mgh.harvard.edu/fswiki/MatlabRuntime
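As a quick sanity check (a sketch only; the exact directory name, e.g. MCRv80, depends on the runtime version described on that page), you can verify that the runtime is visible under $FREESURFER_HOME:
{{{
# Should list the installed MATLAB runtime directory (name depends on version)
ls -d $FREESURFER_HOME/MCRv*
}}}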
=== 3. Usage ===
At this point, this software can only be used with T1 data that has been processed through the [[https://surfer.nmr.mgh.harvard.edu/fswiki/LongitudinalProcessing|main FreeSurfer longitudinal processing pipeline]].
Let's say that [baseID] is the ID of the base subject (template) from the main longitudinal stream. Then, we can produce the longitudinal hippocampal subfield segmentation with the following command:
{{{
longHippoSubfieldsT1.sh [baseID] [SubjectsDirectory]
}}}
The second argument is the subjects directory, and is only necessary when the environment variable SUBJECTS_DIR has not been set (or if we want to use a subjects directory different from the one pointed to by SUBJECTS_DIR). Note that we do not need to specify the time points; they are read from the list stored in the directory of the base subject.
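For example, with a hypothetical base subject named ''bert_base'' and study directory /path/to/study, either invocation works:
{{{
# Option 1: rely on the SUBJECTS_DIR environment variable (bash syntax)
export SUBJECTS_DIR=/path/to/study
longHippoSubfieldsT1.sh bert_base

# Option 2: pass the subjects directory explicitly as the second argument
longHippoSubfieldsT1.sh bert_base /path/to/study
}}}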
The output will consist of six files (three for each hemisphere) for each time point, which can be found under the corresponding "mri" directories of the longitudinally processed subjects (i.e., $SUBJECTS_DIR/[tpID].long.[baseID]/mri/):
* ''[lr]h.hippoSfLabels-T1.long.v10.mgz'': the discrete segmentation volumes (lh for the left hemisphere, rh for the right) at 0.333 mm resolution.
* ''[lr]h.hippoSfLabels-T1.long.v10.FSvoxelSpace.mgz'': the discrete segmentation volumes in the FreeSurfer voxel space.
* ''[lr]h.hippoSfVolumes-T1.long.v10.txt'': text files with the estimated volumes of the subfields and of the whole hippocampi.
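For instance, for a hypothetical time point ''tp1'' of base subject ''bert_base'', the outputs can be listed with:
{{{
# List the longitudinal subfield outputs of one time point (hypothetical IDs)
ls $SUBJECTS_DIR/tp1.long.bert_base/mri/*hippoSf*T1.long.v10*
}}}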
Note that [lr]h.hippoSfLabels-T1.long.v10.mgz covers only a patch around the hippocampus, at a higher resolution than the input image. The segmentation and the image are defined in the same physical coordinates, so you can visualize them simultaneously with the following command (run from the subject's mri directory):
{{{
freeview -v nu.mgz -v lh.hippoSfLabels-T1.long.v10.mgz:colormap=lut -v rh.hippoSfLabels-T1.long.v10.mgz:colormap=lut
}}}
On the other hand, [lr]h.hippoSfLabels-T1.long.v10.FSvoxelSpace.mgz lives in the same voxel space as the other FreeSurfer volumes (e.g., orig.mgz, nu.mgz, aseg.mgz), so you can use it directly to produce masks for further analyses; however, its resolution is lower than that of [lr]h.hippoSfLabels-T1.long.v10.mgz.
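For example, a whole-hippocampus mask in the FreeSurfer voxel space could be generated with mri_binarize (a minimal sketch; the output file name is just an example):
{{{
# Binarize all subfield labels (>0) into a single left-hippocampus mask
# in the standard FreeSurfer voxel space
mri_binarize --i lh.hippoSfLabels-T1.long.v10.FSvoxelSpace.mgz \
             --min 1 --o lh.wholeHippoMask.mgz
}}}
A single subfield can be extracted instead by replacing --min 1 with --match and the corresponding label index from FreeSurferColorLUT.txt.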
=== 4. Gathering the volumes from all analyzed subjects ===
Once this module has been run on a number of subjects, it is possible to collect the volumes of the hippocampal substructures from all the subjects and write them to a single file - which can be easily read with a spreadsheet application. This can be done with ConcatenateSubregionsResults (please note that our older tool quantifyHippocampalSubfields.sh is deprecated).
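If you prefer to gather the values yourself, here is a minimal shell sketch (not the official tool); it assumes the standard [tpID].long.[baseID] directory naming and that each hippoSfVolumes file lists one subfield name and its volume per line:
{{{
#!/bin/bash
# Hypothetical helper: collect all longitudinal subfield volumes into one table.
out=all_hippoSfVolumes.txt
rm -f "$out"
for d in $SUBJECTS_DIR/*.long.*; do
  for f in "$d"/mri/[lr]h.hippoSfVolumes-T1.long.v10.txt; do
    [ -e "$f" ] || continue
    # Prefix each "SubfieldName volume" row with the time point and hemisphere
    awk -v tp="$(basename "$d")" -v hemi="$(basename "$f" | cut -c1-2)" \
        '{print tp, hemi, $1, $2}' "$f" >> "$out"
  done
done
}}}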
=== 5. Frequently asked questions (FAQ) ===
* '''Does this module work with high-resolution T1s processed with the flag -cm?'''
Yes, but do not mix images of different resolutions!
* '''Does this module work with high-resolution T2 scans?'''
Unfortunately, the answer is no at this point. However, you can still cross-sectionally analyze the longitudinally processed data, i.e., by running the standard [[HippocampalSubfields]] module on [tpID].long.[baseID], rather than on [tpID]. We have shown in the [[http://www.nmr.mgh.harvard.edu/~iglesias/pdf/Neuroimage_2016_longitudinal.pdf|paper]] that such a strategy improves the robustness and sensitivity of the analyses, compared with the purely cross-sectional version.
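For example, assuming a hypothetical time point tp1 and base subject bert_base, and the -hippocampal-subfields-T1 flag described on the [[HippocampalSubfields]] page, one such invocation could look like:
{{{
# Run the cross-sectional subfield module on the longitudinally processed
# time point directory (hypothetical IDs)
recon-all -s tp1.long.bert_base -hippocampal-subfields-T1
}}}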
* '''Are you sure that this software does not require Matlab licenses? Why does it require the Matlab runtime, then?'''
The software uses compiled Matlab code that requires the runtime (which is free), but no licenses. So, if you have a computer cluster, you can run hundreds of segmentations in parallel without having to worry about Matlab licenses. And yes, this is all perfectly legal ;-)
* '''The number of voxels of a given structure multiplied by the voxel volume is not equal to the volume reported in [lr]h.hippoSfVolumes*.txt.'''
This is because the volumes are computed from a soft segmentation, rather than from the discrete labels in [lr]h.hippoSfLabels*.mgz. The same happens in the main recon-all stream: if you compute volumes by counting voxels in aseg.mgz, you do not get the values reported in aseg.stats.
* '''The size of the image volume of [lr]h.hippoSfLabels*.mgz (in voxels) is not the same as that of norm.mgz or the additional input scan.'''
The segmentation [lr]h.hippoSfLabels*.mgz covers only a patch around the hippocampus, at a higher resolution than the input image. The segmentation and the image are defined in the same physical coordinates, which is why you can still visualize them simultaneously with FreeView using the commands listed above. The software also produces [lr]h.hippoSfLabels*.FSvoxelSpace.mgz, which is in the same voxel space as the other FreeSurfer volumes, in case you need it to produce masks for other processing.
* '''I am interested in the soft segmentations (i.e., posterior probabilities), can I have access to them?'''
Yes. All you need to do is to define an environment variable WRITE_POSTERIORS and set it to 1. For example, in tcsh or csh:
{{{
setenv WRITE_POSTERIORS 1
}}}
Or, in bash:
{{{
export WRITE_POSTERIORS=1
}}}
Then, the software will write a bunch of files under the subject's mri directory, with the format:
''posterior_[side]_[subfieldName]_T1.long_v10.mgz'' (one file per side and subfield)
* '''This module is CPU hungry'''
Indeed! The deformation of the atlas towards the input scan(s) is parallelized, but recon-all by default limits operation to one thread (which is the polite mode of operation on a cluster). If you want to increase the number of threads that the software is allowed to use, you simply need to add this flag to the end of your recon-all command:
{{{
-itkthreads 4
}}}
where four threads are requested in this example. You can set this to whatever number is optimal for your machine (two or four per core is typical). This flag sets the environment variable ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS.
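Since the flag simply sets this variable, setting it directly in the shell before launching the module should have the same effect; for example, in bash:
{{{
# Allow up to 4 threads for the atlas deformation (bash syntax)
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=4
}}}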
* '''What are the computational requirements to run this module?'''
It depends heavily on the number of time points. If you have many time points (e.g., more than 10), it can require tens of GB of RAM.
* '''The volume of the whole hippocampus obtained with this module is not equal to the value reported by the main FreeSurfer pipeline in $SUBJECTS_DIR/[subjectID]/stats/aseg.stats.'''
That is expected! The two values come from two different analyses, so they are not directly comparable.
=== 6. Test data ===
Coming soon...