This wiki page contains a historical archive of student questions and answers from various Freesurfer courses.
Answers to Boston October 2018 Course questions:
Topic : Permutation
Question: I am working with the data on my work computer and was informed there is a patch for running permutation tests with continuous variables in the most recent version of FreeSurfer. Is there a link to this patch?
Answer (from Doug): You can get it from here ftp://surfer.nmr.mgh.harvard.edu/pub/dist/freesurfer/6.0.0-patch/mri_glmfit-sim. Just copy it into $FREESURFER_HOME/bin.
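For example, a minimal install sketch (assuming curl is available and you have write permission in $FREESURFER_HOME/bin):
cd $FREESURFER_HOME/bin
mv mri_glmfit-sim mri_glmfit-sim.orig   # keep a backup of the shipped version
curl -O ftp://surfer.nmr.mgh.harvard.edu/pub/dist/freesurfer/6.0.0-patch/mri_glmfit-sim
chmod +x mri_glmfit-sim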
Topic : Skull stripping
Question: In the event that skull-stripping does not work well, would it ever make sense to take the output brain mask file and use it as the input to a new recon-all? I.e. would it make sense for the skull-stripping to be iterative?
Answer (from Doug): I doubt that it would help that much.
From Bruce: I suppose that running skull-stripping iteratively could work in some cases. There is a "-multistrip" option, which applies the watershed at multiple thresholds and then determines which is optimal; you could try that as well.
Topic : Speeding up recon-all
Question: I have a question regarding running recon-all on your own machine, given that the whole process is really painful in terms of time (especially if you have more than 100 subjects). I was wondering whether you could please show us a way to speed it up. I've found this link interesting: https://support.opensciencegrid.org/support/solutions/articles/12000008490-anlysis-of-a-brain-mri-scan .
Answer (from Doug): We are working on ways to speed up recon-all. If you only have one computer, you can use various cloud computing services (e.g., AWS). The Open Science Grid is basically a free cloud computing environment that they have set up to run FreeSurfer. It does not make anything run faster per se; it just allows you to run lots of jobs simultaneously.
From Bruce: If you have multiple cores on your machine, you can speed up an individual recon run with -openmp <# of cores>. However, if you have many subjects to get through, you are probably better off running multiple instances of recon-all with one core per subject (assuming you have enough memory).
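For example, a sketch of Bruce's first suggestion (the subject ID and thread count are placeholders):
recon-all -s subj01 -all -openmp 4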
From Paul: If they have access to a high-performance compute cluster at their institution, I'd suggest they talk to the administrators to get FreeSurfer installed and get instructions on how to submit jobs. Otherwise, they can pay for cloud compute time. There are instructions on how to run FreeSurfer on AWS here: https://github.com/corticometrics/fs6-cloud
Briefly, the steps are as follows (a command-line sketch follows the list):
1) Upload input data to AWS, into what's called an 's3 bucket'
2) Submit a job to AWS batch, with details specifying:
- What command to run (recon-all) and in what environment (Docker container)
- Where the input data is located (s3 bucket location)
- Where the output data should be placed (s3 bucket location)
- FreeSurfer license key
3) Download processed data from the AWS s3 bucket
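A rough sketch of steps 1-3 with the AWS CLI; the bucket, queue, and job-definition names here are hypothetical, and the real job definition comes from the fs6-cloud setup linked above:
aws s3 cp ./sub-01 s3://my-fs-bucket/input/sub-01 --recursive
aws batch submit-job --job-name sub-01-recon --job-queue my-fs-queue --job-definition my-fs-jobdef
aws s3 cp s3://my-fs-bucket/output/sub-01 ./sub-01-recon --recursive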
Topic : Hippocampus subfields
Question: What are the steps for running recon for hippocampal subfields? The tables extracted from recon files in class do not give subfield information.
Answer (from Doug): See this web page: http://surfer.nmr.mgh.harvard.edu/fswiki/HippocampalSubfieldsAndNucleiOfAmygdala. Basically, you need to run recon-all, then run segmentHA_T1.sh <subject name>. You will need to download a development version of FS to run this, as it is not in version 6.0. See the wiki page above.
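Once the recon has finished, the call is a one-liner; a sketch (with "bert" as a placeholder subject ID):
segmentHA_T1.sh bert
The subfield results are written under the subject's directory (see the wiki page above for the exact output file names).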
Topic : Cortical Diffusion
Question: I understand that 2mm isotropic resolution diffusion data is not optimal for performing cortical diffusivity analysis due to partial volume effects. If one were to create a surface at the halfway point between the pial and WM surfaces and sample the diffusivity from that surface (assuming the T1 and DTI volumes are registered, and the DTI is upsampled to 1mm isotropic to match the T1), is that sufficient to mitigate the PV effects, since the sampling is now away from the GM/CSF boundary and the GM/WM boundary? What if cortical thickness is added as a regressor in the statistical analysis?
Answer (from Doug): The cortex is about 3mm thick, so 2mm voxels may be small enough to do what you are suggesting. You can use mri_vol2surf with --projfrac 0.5 to sample the DTI data onto a surface halfway between the white and pial surfaces.
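A sketch of such a call (the DTI volume and registration file names are hypothetical):
mri_vol2surf --mov fa.nii.gz --reg dti2anat.register.dat --hemi lh --projfrac 0.5 --o lh.fa.mid.mgh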
Topic : Curvature
Question: How is the ideal circle determined to calculate curvature? It seems like there are any number of circles tangent to the surface at a single vertex, so it's not clear to me how the radius is determined. Is the circle tangential at two points? What are those points? Thanks!
Answer (from Bruce):
A 2D surface such as the cortex has two curvatures, usually called k1 and k2. These are the curvature in the direction of maximum curvature and in the direction of minimum curvature. The circles are then tangent to the surface in each of these directions, and the curvature is 1/r, where r is the radius of the inscribed circle in that direction. FWIW, the Gaussian curvature (K) is k1*k2 and the mean curvature (H) is the average of k1 and k2 ((k1+k2)/2). When we talk about "curvature", we typically mean "mean curvature".
Topic : Cortical Diffusion (second answer)
Question: Same as the cortical diffusivity question above (sampling 2mm DTI data at a mid-cortical surface to mitigate partial volume effects, with cortical thickness as a possible regressor).
Answer (from Anastasia): A large, diffusion-space voxel will have a single diffusivity value that may potentially be a mix of diffusion in multiple tissue classes (GM/WM/CSF). When you upsample it to T1 space, you will simply spread that same value into multiple smaller voxels. Even if you then sample only some of those smaller voxels, you will still be sampling that value that includes diffusion in all these tissue types. There are other types of diffusion analyses that you can do to try to tease apart the CSF diffusion (models other than the tensor that give you metrics other than diffusivity), but for those you need more than 1 b-value.
Answers to Boston 2017 Course questions:
Topic: Previous Reports of Functional Areas in Talairach Coordinates
Question: Since cortical folding patterns are so different across populations, does this mean that reporting functional areas in Talairach coordinates in previous papers is actually not measuring activity at the same location on the cortex? Or do most people report Talairach coordinates on an "average" brain, like you could do with the fsaverage brain?
Answer (from Bruce): Yes, this is probably true. Talairach coordinates are not even guaranteed to be within the gray matter across subjects, let alone to represent the "same" point.
Topic: Voxel to Vertex Conversion and Distribution of fMRI response values
Question: How is the number of vertices for each specific voxel computed? Is it the case that each vertex corresponds to only one voxel, but each voxel can correspond to multiple vertices? Or can vertices correspond partially to multiple voxels? Then, as for assigning fMRI response values: if each vertex corresponds to only a single voxel, are all of those vertices assigned that same fMRI value? And if vertices can be associated with multiple voxels, how would the assigned fMRI value be weighted between the multiple voxels it received data from?
Answer (from Bruce): For every voxel face in the filled.mgz that is "on" and neighbors a voxel that is "off", 2 triangles and 4 vertices are added to the tessellation. Thus the average edge length is around 1mm (if the input data is 1mm).
Topic: Volume vs Surface-based Smoothing
Question: Under what circumstances would volume-based smoothing be more beneficial compared to surface-based smoothing and vice versa?
Answer (from Bruce): For volumetric structures such as the caudate or amygdala.
Topic: loading defect_labels on lh.orig.nofix
Question: During the troubleshooting tutorial, using "Correcting topological defects" (white surface error): when we load lh.defect_labels on lh.orig.nofix to check the modifications, the 3D image I obtain has various colors. (I have saved a screenshot that I can show you in class; it seems I cannot attach it to this question form.) In any case, I am wondering what they represent. There seem to be 4 different colors: yellow, yellow-brown, brown, and red.
Answer (from Bruce): the defect_labels file contains labels for every connected set of vertices that are topologically defective. Typically we set the min to 1 and the max to whatever the number of defects is. There is also a button on the interface you can click to color the edges of the surface in e.g. the ?h.orig.nofix by the defect color (or gray if 0). This really highlights the defects and makes them easy to see in the volume.
Topic: virtual machine
Question: What version of VirtualBox for the Windows OS is recommended for FreeSurfer?
Answer (from Iman): please see http://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/QuestionAnswers#AnswerstoApril2017Coursequestions.3A
Answers to Barcelona 2017 Course questions:
Topic: MRI with contrast Question: Hi, I was wondering if you could give me some tips for using T1 sequences with contrast (angio-MRI) instead of the regular T1 sequences. That would be great! Thanks!
Answer (from Bruce): the only real problem with contrast is that it lights up the dura, which sometimes messes up the skull stripping. Mostly it has worked ok in my experience, although I haven't done very many.
Topic : QDEC Question: Could you please give us an estimated date for fixing the QDEC bugs?
Answer (from Doug and Emma): No estimate at this time.
Topic: Mean Cth values Question: We would like to extract mean cortical thickness (Cth) values for each subject within a custom-created ROI (surface-defined), using freeview (for instance). Is it possible to create an ROI (i.e., spherical surface) surrounding a particular vertex of a sig.mgh file (and then extract its mean Cth values for each subject)?
Answer (from Doug): Yes, but it is a little involved. If you know what the vertex number is, then you would run:
mri_volsynth --template $SUBJECTS_DIR/fsaverage/surf/lh.thickness --pdf delta --delta-crsf vertexno 0 0 0 --o delta.sm00.mgh
mris_fwhm --smooth-only --subject fsaverage --hemi lh --i delta.sm00.mgh --niters 10 --o delta.sm10.mgh
mri_binarize --i delta.sm10.mgh --min 10e-5 --o delta.sm10.bin.mgh
mri_segstats --seg delta.sm10.bin.mgh --id 1 --i y.mgh --avgwf y.delta.sm10.bin.dat
where y.mgh is the input to mri_glmfit --y.
What this does is create a surface overlay with a value of 1 at the given vertex number and 0 everywhere else (delta.sm00.mgh), then smooth it to expand it out to the 10 nearest neighbors (it becomes something like a circle), then binarize it to make it a "segmentation", and finally get the stats with mri_segstats. The output .dat file will have a value for each subject.
Topic: transformation matrices Question: Are the transformation matrices from fsaverage to fsaverage3 (for instance) available? If not, how is it possible to compute them?
Answer (from Bruce): The fsaverages are generated recursively, so the first one has 12 vertices, the second 42, etc. This means that the first 12 vertices (indices 0-11) of fsaverage2 are identical to the 12 vertices of fsaverage1, and so on.
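In practice you rarely need an explicit matrix; mri_surf2surf can resample an overlay between any two of these spaces. A sketch (file names hypothetical, assuming fsaverage and fsaverage3 are in your $SUBJECTS_DIR):
mri_surf2surf --srcsubject fsaverage --trgsubject fsaverage3 --hemi lh --sval lh.mydata.fsaverage.mgh --tval lh.mydata.fsaverage3.mgh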
Topic: p-value gradation within clusters in Freeview Question: When doing cluster correction (e.g., Monte Carlo), clusters appear with a uniform color scheme. How could we still visualize only the cluster-corrected clusters, but with a p-value gradation within them (so as to know which are the most significant vertices within the cluster)?
Answer: (from Doug) View csdbase.sig.masked.mgh, which is the original sig volume masked to show only the clusters.
Answers to April 2017 Course questions:
Topic : FreeSurfer on Windows 10 Question: Hi, I am auditing the FreeSurfer course this week, and I would therefore like to download FreeSurfer on my laptop. It's running Windows 10 Pro, and it was just mentioned that it is possible to install FreeSurfer on this OS directly. I was wondering whether you could tell me how to do this. Thank you in advance. Kind regards, Rose
Answer: (from Iman)
Note: FreeSurfer has not yet been fully tested on this platform, and we are not officially supporting it until we do.
This is how you can run FreeSurfer on Windows 10:
Step 1: Enable bash on Windows:
http://www.windowscentral.com/how-install-bash-shell-command-line-windows-10
Along the way, you may need to install some packages using apt-get (like tcsh, libglu1, ...).
Step 2: Install Xming X Server and its necessary fonts, so you can use the GUI tools like FreeView:
Make sure you add "export DISPLAY=:0" in ~/.bashrc so bash can use Xming.
Step 3: Install and use FreeSurfer in bash, just as you’d do in Linux:
https://surfer.nmr.mgh.harvard.edu/fswiki/DownloadAndInstall
Topic : spatial registration Question: Does the linear (or other) spatial transformation assume that all points in the cortex of subject 1 WILL be in subject 2? If injury in one patient has destroyed that geometric surface, how does registration handle that violation? I.e., the given cortical fold cannot be registered in the other subject's brain.
Answers:
- Lilla: That depends on the model complexity of the registration algorithm. Rigid/affine transformations are used for computing global correspondence between subjects, or for intra-subject cases where we know that no biological changes took place. Non-linear registrations account for more detailed differences, but due to other optimization constraints they might not be able to account for all differences either. Knowing your data set and the type of deformation that can account for the differences between subjects will help you choose the appropriate registration tool for your purposes.
- Bruce: Many people do registration by minimizing the mean squared difference between target and subject:
R = argmin((T-S)^2)
Instead, we also measure the variance of the target, so our "atlas" is more than just a single number at each point, and we find:
R = argmin( (mean(T)-S)^2 / var(T) )
The variance is critical, as it discounts folds that are not found in most subjects. In those regions where the folds are not a stable predictor, we instead minimize the metric distortion of the warp (this is not really a binary decision - there is a single energy functional with both of these terms in it). So what happens is that the stable primary folds (central sulcus, calcarine, etc.) all line up, and then the more variable regions between them are placed where they need to go so that they are about the right distance from all the stable features.
Topic : recon-all intro Question: What happens when you provide two T1s as input to recon-all? Does it average them and perform the analysis on the average?
Answer: The analysis is performed in the native space of the subject. (Doug)
Topic : recon-all intro Question: What method do you use for interpolation when making orig.mgz from the rawavg.mgz
Answer: Cubic. (Doug)
Topic : recon-all intro Question: If you use as input a volume that has a higher spatial resolution than 1mm iso (let's say 0.7 iso), does FreeSurfer make an orig.mgz with 1mm iso?
Answer: By default, yes. If you run recon-all with the -cm option, it will conform to the minimum voxel dimension. E.g., if your volume is .8x.9x1, orig.mgz will be .8x.8x.8. (Doug)
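For example, a sketch (the input file name is a placeholder):
recon-all -i T1_0.7mm.nii.gz -s subj01 -cm -all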
Topic : ROI vs. whole-brain analysis debate Question: Have there been any papers published using FS (or other platforms) comparing the pros/cons of using ROI vs whole brain voxel-wise approaches, especially in terms of power that you would recommend to gain a better understanding of this debate?
Answer: Not that I know of. For sure it will depend on the effect you are studying. If it happens to obey gyral boundaries then using the parcellations is a huge win from a power perspective. If not, then less so. (Bruce)
Topic : exporting tables Question: It is quite difficult to view the output tables produced using the terminal/gedit. Pasting into Excel with the "merge delimiters" option checked helps a bit. Any other advice on this would be appreciated! Thanks!
Answer: When you run the commands asegstats2table or aparcstats2table, they will put the stats files in a text-friendly format that can then be opened in Excel without trouble or much formatting. (Allison S.)
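For example, a sketch (subject IDs are placeholders):
aparcstats2table --subjects subj01 subj02 subj03 --hemi lh --meas thickness --tablefile lh.aparc.thickness.txt
The resulting tab-separated table opens directly in Excel.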
Answers to September 2016 Course questions:
Topic : cortical thickness measures Question: Does Freesurfer take into account cortical folding when measuring cortical thickness? For instance, an area may appear to be thicker simply because of the cortical folding but the actual cortex itself might not be thicker in that area. I am asking because the most recent Glasser and Van Essen paper describes having to remove the FreeSurfer "curv" data from their values prior to comparing cortical thickness across areas of the cortex. thanks!
Answer: (from Bruce) Cortical folding patterns can make it difficult to accurately measure thickness in 2D due to apparent changes in thickness. This is not an issue if the thickness is measured in 3D.
However, there are also real correlations between thickness and curvature. For example, the crowns of gyri are thicker in general than the fundi of sulci. This real variation in thickness can obscure changes in thickness that reflect boundaries of architectonic regions, which is why Glasser and Van Essen regressed it out.
Answers to CPH August 2016 Course questions:
Question: Are there parallel computing mechanisms implemented in any of the time-consuming FS commands (like recon)?
Answer:
Lilla: Yes, since v5.2 we have had an option for running the recon pipeline in a parallelized fashion. You can access this option by using the -openmp <numthreads> flag with recon-all, where <numthreads> is the number of threads you would like to run (where 8 is a typical value, assuming you have a four-core system). A more detailed description of the system requirements for using this option is here: https://surfer.nmr.mgh.harvard.edu/fswiki/SystemRequirements
Question: When is the new Freesurfer version coming out?
Answer:
Allison: We are currently beta-testing v6.0 within our lab. In addition to daily use of this beta version, we have run 6.0 on several different datasets which are being inspected for errors/problems. If the beta testing goes well, we should be able to announce the release of 6.0 in the Fall. If we find any issues, we will have to track down the problem, fix it, and rerun the tests. The nature of the issue would determine how long this would take which is why we cannot yet give an exact date for a release.
Martin: we are currently testing a release candidate. If all tests succeed, the release could be soon (on the order of a few weeks). If there are problems with this version, we have to go back, fix them, and restart all testing, which will take on the order of months.
Topic : 3 group analysis
Question: How would you set up an analysis to look at 3 groups, such as controls, MCI, and Alzheimer's?
Answer:
Melanie: You can find an answer to this here: https://surfer.nmr.mgh.harvard.edu/fswiki/Fsgdf3G0V
Emily: If you wanted to look at three groups, you would have an FSGD file with three classes: AD, MCI, and OC. The design matrix created by FS would end up looking like this:
X=[1 0 0;
1 0 0;
0 1 0;
0 1 0;
0 0 1;
0 0 1]
(In this example you have two individuals in each group).
- If you wanted to look at simple group differences without any continuous variable regressors, your contrast matrix would actually be a 3x3 matrix rather than a vector and would look as follows:
C=[1 -1 0;
1 0 -1;
0 -1 1]
This basically is an ANOVA, and tests whether there is a difference between any two groups, but it takes care of multiple comparisons under the hood. Any time you want to compare more than 2 groups, then you need to do something similar.
Your sig.mgh file will only show where *any* group differences are, so you will not know which groups are significantly different from each other. You can follow the directions with individual contrast vectors posted on the fswiki (Melanie's link), but there you are being less strict with your p-values.
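A minimal FSGD file for this design might look like the following (subject IDs are hypothetical); the 3x3 contrast above would go in a separate plain-text .mtx file passed to mri_glmfit with --C:
GroupDescriptorFile 1
Title ThreeGroups
Class AD
Class MCI
Class OC
Input subj001 AD
Input subj002 AD
Input subj003 MCI
Input subj004 MCI
Input subj005 OC
Input subj006 OC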
Topic : anterior temporal lobe
Question: We have encountered a lot of problems with segmentation of the anterior temporal lobes in our data; the lobe is being 'cut off' before it actually ends. Is this related to our T1 data quality? Or might there be a way to fix or improve this problem? Unfortunately, I did not bring my data to the class. Thank you!
Answer: It's hard to say if it is related to data quality without seeing the data. However, the most likely fix would be to make the watershed parameter less strict. The default parameter is 25. I would suggest increasing the parameter by 5 until you find that the temporal lobe is no longer missing (but hopefully, the skull is still sufficiently stripped):
recon-all -skullstrip -wsthresh 30 -clean-bm -subjid <insert subject name>
If this does not work, give gcut a try:
recon-all -skullstrip -clean-bm -gcut -subjid <insert subject name>
Answers to April 2016 Course questions:
Topic : Constructing the pial surface Question: When constructing the pial surface, are vertices nudged *orthogonally* from the white matter surface?
Answer: Answered live.
Question: On Monday it was mentioned that .6x.6x.6 isotropic resolution was the best parameter set for a T1 scan in order to analyze the volume of hippocampal subfields. Is it possible to do a hippocampal subfield analysis using a T1 with 1x1x1 resolution?
Answer: It seems that 0.4x0.4x2.0 coronal is becoming "standard". Placing the box perpendicular to the major axis of the hippocampus is important. This may not be the best resolution, and isotropic voxels have clear advantages. It is hard to say what is best.
1mm isotropic is not optimal. It is possible to use it, but there is not much information in these images to fit boundaries of subfields. So your results are basically fully driven by the priors in the atlas. It could still help to improve the full hippocampus segmentation.
Question: What is the difference between voxel-based morphometry and the voxel-based approach used to analyze the volume of sub-cortical regions?
Answer:
Question: Is it correct to conclude that each subcortical segmentation is based on a volumetric segmentation atlas created by Freesurfer, but that this data is not transformed into the space of this atlas, but just used a reference for where regions exist based on probability?
Answer:
Answers to November 2014 Course questions:
Topic : CVS registration
Question: In the advanced registration talk, it was mentioned that a new template had been created for CVS registration because the existing template wasn't suitable for the method. Are the methods available/published on how one would go about making a custom CVS template for a specific population?
Answer: The new CVS atlas is an intensity-based image that has all recon-style generated files (thus surfaces as well as volumes). There is no paper that explains the atlas-creation per se. A set of 40 manually segmented data sets were registered together using CVS in several rounds (in order to avoid biasing the atlas-space) and then the resulting subject was run through recon-all.
Answers to August 2014 Copenhagen Course questions:
1. Topic: Best method to calculate Cohen's d or any other map of effect size with lme
Question: I have calculated the residuals as y-x*beta as described in (Bernal-Rusiel et al., 2012). What is the best method to calculate Cohen's d or any other map of effect size with lme?
Answer: I don't think there is a standard way to calculate a Cohen's d effect size for lme. We usually just report the rate of change per year (or per any other time scale used for the time variable) as the effect size. If you are comparing two groups, then you can report the rate of change over time of each group. Once you have the lme model parameters estimated, you can compute the rate of change over time for each group from the model. For example, given the following model, where
Yij -> cortical thickness of the ith subject at the jth measurement occasion
tij -> time (in years) from baseline for the ith subject at the jth measurement occasion
Gi -> group of the ith subject (zero or one)
BslAgei -> baseline age of the ith subject
Yij = ß1 + ß2*tij + ß3*Gi + ß4*Gi*tij + ß5*BslAgei + b1i + b2i*tij + eij
then you can report the effect sizes:
Group 0 : ß2 rate of change per year
Group 1: ß2 + ß4 rate of change per year.
The difference in the rate of change over time can also be reported. In this case it is given by ß4.
-Jorge
2. Topic : Dealing with missing clinical data in a linear mixed model
Question: In my dataset, I am missing some of the clinical variables for some of the subjects. How do I represent missing variables to the linear mixed model? Should I use zero, or use the mean?
Answer: We don't have a way to deal with missing clinical variables. lme expects you to specify a value for all the predictor variables in the model at each measurement occasion. There are certainly techniques in statistics to deal with that problem, such as imputation models, but we don't provide software for that.
-Jorge
3. Topic : LME
Question: Is it possible to test the model fit of an intercept-only model compared to one that also includes time with lme_mass_LR, when both models have one random effect?
Answer: lme_mass_LR can be used to compare a model with a single random effect for the intercept term against a model with two random effects including both intercept and time. If you want to select the best model among several models each one with just a single random effect then you just need to compare their maximum likelihood values after the estimation.
-Jorge
4. Topic : Qatools
Question: Can Qatools prescreen for subjects with problems and then look at them in freeview? Do you use Qatools?
Answer: You can use the QATools scripts for a variety of things. They can check that all your subjects successfully completed recon-all and that no files are missing. They can also be used to detect potential outlier regions in the aseg.mgz within a dataset, calculate SNR and WM intensity values, and collect detailed snapshots of various volumes that are put on a webpage for quick viewing. These features can certainly be useful in identifying problem subjects quickly; however, the tools cannot tell you definitively which subjects are bad and which are good. They can only give you information that helps you assess data quality. It is certainly possible that there may be an error on a slice of data that is not visible in the snapshots. It is also possible that cases with low SNR or WM intensities may still have accurate surfaces. Again, the information the QATools provide can be very useful in identifying problem subjects quickly, but it cannot do all the data checking work for you :).
We do use these tools in the lab. Email the mailing list if you have questions about the scripts so the person or people currently working on the scripts can answer your questions.
-Allison
5. Topic: Area maps, mris_preproc and --meas
Question: If you create a map, the values are generally from .2 to 1.6. What do the values represent? How can you go from these relative values to mm2? Is there a conversion method?
Answer: FreeSurfer creates many different maps. Thickness maps have values from 0 to probably 5 or 6 mm (e.g., you can look at any subject's lh.thickness). Other maps contain area or curvature information and have different ranges. You need to let us know what map you were looking at and what you are trying to do (mm^2 indicates you are interested in area?).
Updated Question: If you create a surface area map, the values are generally from .2 to 1.6. What do the values represent in FreeSurfer? How can you go from these relative values to mm2? Is there a conversion method? The aim would be to compare surface area values using a standard unit of measurement.
Answer: The values in ?h.area are in mm2 and represent the surface area associated with each vertex (basically the area that the vertex represents). It is the average area of the triangles that each vertex is part of (in mm2) on the white surface. At the vertex level this information is not too interesting, because it depends on the triangle mesh, but it is used, e.g., to compute the ROI areas in the stats files.
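If you want the per-ROI areas recomputed for a given annotation, mris_anatomical_stats generates the same kind of stats file; a sketch (assuming the tutorial subject bert):
mris_anatomical_stats -a $SUBJECTS_DIR/bert/label/lh.aparc.annot -f lh.aparc.area.stats bert lh
The SurfArea column of the output is in mm^2.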
6. Topic : FDR
Question: Is the two-step (step-up) FDR procedure that is implemented in the longitudinal linear mixed models analysis stream much better than the old FDR version?
Answer: Yes. FDR2 (the step-up procedure that we have in Matlab) is less conservative than the regular FDR correction. A description of the method can be found in: "Adaptive linear step-up procedures that control the false discovery rate", Benjamini, Krieger, Yekutieli. Biometrika 93(3):491-501, 2006. http://biomet.oxfordjournals.org/content/93/3/491.short
7. Topic : command lines for 5.1.0 vs. 5.3.0
Question: I have been using FreeSurfer v.5.1.0, which comes with Tracula/freeview and longitudinal analysis features. I have done recon 1, 2, 3 and LGI (with FreeSurfer communicating with MATLAB correctly), and now I would like to proceed to Tracula analysis (with FSL working), as well as group analysis, longitudinal analysis, and multi-modal analysis. Can I use the command lines presented in the workshop tutorials in v.5.1.0, or do I need to install v.5.3.0? If it depends, could you distinguish which command lines can be used in v.5.1.0 and which can be used only in v.5.3.0?
Answer: Most of the commands in the tutorials will work with 5.1. Differences in tutorial commands are mainly related to Freeview (visualization) and the use of lta transformation files. In case a command does not work with 5.1: some tutorials have older versions using the TK tools (tkmedit, tksurfer) that will probably work; you can also look at older versions (history) on the wiki pages and pull up a historic version of the tutorial.
You can also think about switching to version 5.3 for your processing now. Regarding longitudinal analysis, you could run v5.3 for the longitudinal processing based on cross-sectional data that was reconned in v5.1. It is quite possible that you would get better results overall, and in the cross-sectional time points, if you rerun the cross-sectional data with v5.3. You may have to do fewer manual edits.
8. Topic : Tracula for Philips Achieva dcm files
Question: For Tracula analysis of non-Siemens data, the webpage says, "Put all the DTI files in one folder," and "Create a table for b-vals and b-vecs." My Philips Achieva DTI dcm files are stored in 6 different folders per subject. Can I put these 6 folders in one folder, or do I need to put only the DTI files into one folder? In the former case, in what order should I create the b-val/b-vec table when the order in which FreeSurfer accesses the 6 folders is unknown? In the latter case, I need to rename the DTI dcm files because files with identical names (e.g., IM_00001, etc.) exist across the 6 folders. Would the order of folders matter in renaming the DTI files?
Answer: STILL NEEDS TO BE ANSWERED
9. Topic: group analysis with missing values
Question: Some of my patients' data sets have missing values for some structures in the individual stats output files. These files have fewer rows than those with values for all the structures. When FreeSurfer creates a combined file, or when it computes averages, would it check the indices/labels of the structures so the missing rows would not result in averaging values of different structures, or would it average from the top row and end up averaging values of different structures when some individual stats files have missing rows?
Answer: FreeSurfer's asegstats2table and aparcstats2table commands can deal with missing values in the stats files. You can specify whether you want to select only those structures where all subjects have values (I think that is the default), whether you want the command to stop and print an error, etc. (see the command-line help).
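For example, a sketch (subject IDs are placeholders):
asegstats2table --subjects subj01 subj02 --meas volume --common-segs --tablefile aseg.vol.txt
With --common-segs, only structures present in every subject's stats file are tabulated; --all-segs keeps the union instead.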
10. Topic: Installing more than one FreeSurfer version
Question: When installing a newer version of FreeSurfer (e.g., v. 5.3.0 with pre-existing v.5.1.0), where would you recommend to set FREESURFER_HOME for the newer version? Thank you.
Answer: You would change the FREESURFER_HOME variable each time you want to switch which version you use to view or analyze data.
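For example, in bash (a sketch; the install paths are hypothetical):
export FREESURFER_HOME=/usr/local/freesurfer-5.3.0
source $FREESURFER_HOME/SetUpFreeSurfer.sh
Switching back to 5.1.0 is the same two lines with the other install path, in a fresh shell.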
11. Topic : Vertex Parcellation
Question: Previously you could load aparc.annot with read_annotation.m and then you'd get a 163842 x 1 vector where each scalar had a number from 0 to 34, indicating that vertex's membership in one of the Desikan-Killiany parcellations. Then you could select vertices according to parcellation membership. In FS 5.3.0, when you use read_annotation to load aparc.annot, you get a vector with numbers from 1 to 163842, simply the vertex number. How can one get the same information about parcellation membership now?
Answer: Each component of the <label> vector has a structureID value. To match the structureID value with a structure name, look up the row index of the structureID in the 5th column of the colortable.table matrix. Use this index as an offset into the struct_names field to match the structureID with a string name.
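A Matlab sketch of that lookup (the annotation path and structure name are placeholders):
[vertices, label, ctab] = read_annotation('lh.aparc.annot');
row = find(strcmp(ctab.struct_names, 'superiortemporal')); % row of the structure in the color table
structID = ctab.table(row, 5);                             % 5th column holds the structureID
roi_vertices = find(label == structID);                    % vertices in that parcellation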
12. Topic: Aliases
Question: Where can we find information about setting up aliases like Allison mentioned in the Unix tutorial?
Answer: You can find that here: https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/Scripts
-Allison
13. Topic: Using different annotations
Question: How do I take the PALS Brodmann annotation and map it onto my individual subjects?
Answer: You can find information on how to do this here: https://surfer.nmr.mgh.harvard.edu/fswiki/PALS_B12
-Allison
14. Topic: Piecewise in Group analysis tutorial
Question: What does the piecewise option do when thresholding in the group analysis tutorial?
Answer: It's a piecewise-linear color map instead of a single linear scale (so two linear ones). You can also exclude certain values in the middle (i.e., show only values greater than one threshold and smaller than another).
-Melanie
15. Question: Why is cortical thickness a worthwhile measure to study?
Answer: Cortical thickness is interesting because it is a sensitive surrogate biomarker for a wide variety of conditions and diseases (AD, HD, schizophrenia, dyslexia, aging, ...). It is also a point measure of the amount of gray matter at a particular location that has a straightforward neurobiological interpretation, as opposed to measures like gray matter density. Also, cortical folding patterns are robust across age groups and pathologies.
16. Topic: longitudinal design
Question: I have two questions about a longitudinal design, specifically about how to handle missing time points. * During the talk it was mentioned that since FS 5.2 it is possible to do the analysis with participants that have only 1 measurement. Is it also possible to do the analysis in version 5.1 and specify the same scan twice, or does that cause problems (e.g., recon-all -base baseID -tp MRI1st_scan -tp MRI1st_scan -all)? This is because the entire department works with version 5.1 on the server, which makes it difficult to get a newer version installed. * If we want to include 3 measurements in our study and see the effect of "non-linear" brain change instead of linear brain change, can we still use the longitudinal stream with participants with missing time points?
Answer: 1. No, single time point data in the longitudinal stream is not supported in 5.1. Specifying the same time point twice is not the same and would not be sufficient. 2. Yes. The non-linear modeling would be done after the longitudinal image processing (e.g. using linear mixed effects models, although they are called linear, you can model non-linear trajectories, only the parameter estimation is done using linear approaches).
Answers to May 2014 Course questions:
Topic: Gray Matter Volume
Question: In v5.0 and above, there is a stat within each subject's stats folder that is a single value for whole-brain gray matter volume. Two questions regarding that metric: 1) Does that include the cerebellum? 2) Is there a way to access that statistic using version 4.5?
Answer: 1) yes 2) The easiest way to do this is to download version 5.3 and just run the program that generates the stats files, e.g.:
cd $SUBJECTS_DIR/subject
mri_segstats --seg mri/aseg.mgz --sum stats/aseg.53.stats --pv mri/norm.mgz --empty --brainmask mri/brainmask.mgz --brain-vol-from-seg --excludeid 0 --excl-ctxgmwm --supratent --subcortgray --in mri/norm.mgz --in-intensity-name norm --in-intensity-units MR --etiv --surf-wm-vol --surf-ctx-vol --totalgray --euler --ctab $FREESURFER_HOME/ASegStatsLUT.txt --subject subject
The data will be in stats/aseg.53.stats
Doug Greve
Topic: Control points
Question: I use control points followed by autorecon to include areas that are obviously part of the brain but were not included in the first run. How many times should I do this? Sometimes I do it 8 or 10 times (with added control points) and the area is still excluded. Is there another way of doing this?
Answer: There are a couple of possibilities. Have you checked the wm.mgz to make sure that is filled in for every voxel you think is white matter? If not, fill that in and try rerunning (possibly without the control points).
Also, if control points are not placed on voxels that are less than 110, then you might not see the desired effect. It could also be that you are not placing enough control points on enough slices.
Without seeing your case, I can't say for sure what you need but most cases need 3-4 control points on every other slice for several slices. Since control points are used for fixing intensity problems, you'll need to place them on most of the slices that have low intensity in that region and not necessarily just on the slices that have problems with the surface.
If control points were placed in gray matter, partial volumed voxels, or white matter bordering gray matter, that could be causing the problem. You should delete those control points.
If after trying the above, control points still don't seem to be working, let us know which version you are working with so we can investigate.
Allison
Topic: ROIs & volume
Question: I used the AnatomicalROI/FreeSurferColorLUT to determine my ROI; I then used fslmaths to get the volume of this ROI. I wanted to compare between healthy controls and patients. Is it OK to publish my results using the above as a reliable method for volume assessment?
Answer: It is probably OK, though you do not need to do it this way. The recommended method is to get this information from the statistics files (e.g., subject/stats/aseg.stats or lh.aparc.stats). These statistics are computed using partial volume correction and surface-based volume correction, so they should be more accurate.
Doug Greve
Topic: Editing in multiple views & using inflated surface for QA
Question: After running recon-all, which volumes/surfaces should be checked? During the tutorial we focused on wm and pial edits; however, it was not clear how to utilize the surface reconstructions/inflations, etc., and at what point in the data checking each of these volumes/surfaces should be referenced. We also only monitored the data from a coronal view. Do you suggest also making edits in sagittal/axial views? Should you only work in one view at a time?
Answer: You would typically view brainmask.mgz, wm.mgz, and aseg.mgz to check & fix recons. Brainmask is used to correct pial errors. wm is used to fix most white surface errors. Control points can be placed using the brainmask as a guide to fix white surface errors caused by intensity normalization problems.
The inflated surface can be useful for looking for obvious errors (holes in the surface or extra bumps on the surface), however, not all errors will be obvious on the inflated surface AND not all bumps on the surface need to be fixed. Thus, looking at the inflated surface when you are just getting started or have a recurring problem that shows up on the surface could be useful. But it could also take up a lot of time, especially if data quality is poor and the entire surface will look bumpy.
The error should be fixed completely. In other words, the error should not be visible in any view. However, you should edit in whichever view you feel most comfortable with and check the other views to be sure you got everything.
Allison
Topic: Editing the aseg Question: How should the aseg file be checked after recon-all? How do you make edits? Is this ever necessary?
Answer: The best way to check the aseg is to overlay it on the norm.mgz and make sure it accurately follows the intensity boundaries of the subcortical structure underneath. In practice, I view it on the brainmask.mgz as I usually have that loaded in freeview anyway. For the most part, you'll catch any problems while viewing it on the brainmask but for some areas I'm not certain about, I have to check it on the norm.mgz. There are some regions, such as pallidum, where you can't really see the boundary of the structure. In these cases, there isn't much that can be done other than to look for obvious outliers from the expected shape & size of the structure.
To edit the aseg, you can do this in freeview with the voxel edit button and the Color LUT. More information can be found on the Freeview wiki page.
It is rarely necessary but we do see inaccuracies in some studies in hippocampus, amygdala, and putamen.
Questions from the Nov 2013 course:
Topic : 1.5 and 3T longitudinal data
Question: I have a set of longitudinal data that began on a 1.5T scanner but then our research group moved to a 3T scanner exclusively. Is it possible to use longitudinal analysis in FreeSurfer to analyze both 3T and 1.5T data in the same analysis or possibly smooth across both to accomplish this analysis?
Answer: Yes, you can add the scanner type as a time-varying covariate in a linear mixed effects model. It would be good to have several time points before and after the change to get more reliable slope estimates on both sides. See also LinearMixedEffectsModels.
Topic : cerebellar vermis and cerebellum parcellation
Question: Hello, I would like to know how to measure the cerebellar vermis volume using FreeSurfer. As far as I know, FreeSurfer provides cerebellum cortex volume and cerebellum white matter volume in the aseg.stats file, but I could not find cerebellar vermis volume. Can FreeSurfer provide cerebellar vermis volume? I found a web page in the FreeSurfer wiki that describes cerebellum parcellation: http://surfer.nmr.mgh.harvard.edu/fswiki/CerebellumParcellation_Buckner2011 I hope to obtain the volume of each subregion of the cerebellum, but I have not understood the procedure. I know the MNI152 atlas is in the $FREESURFER_HOME/average/ directory. Could you tell me how to warp the MNI152 parcellations to subjects' native structural MRI space and extract the volume of each subregion? Thanks in advance.
Answer: We do not segment the cerebellar vermis. If you want to measure or label it, you will need to manually draw it on each of the subjects, or, if you find another atlas with the required label, register that to your subject and project the label to that space. The cited URL provides cerebellar clustering in the MNI152 nonlinear space. You need to use the talairach.m3z transformation to map the labels back to the subject's native space.
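An untested sketch of that mapping (the atlas file name is hypothetical, the Buckner atlas is distributed in MNI152 rather than MNI305 space so an extra alignment step may be needed, and the mri_vol2vol flags should be checked against your version's --help):
mri_vol2vol --mov cereb_parc_mni.nii.gz --targ $SUBJECTS_DIR/subj/mri/orig.mgz --m3z $SUBJECTS_DIR/subj/mri/transforms/talairach.m3z --inv-morph --interp nearest --o cereb_parc_native.mgz --noDefM3zPath
mri_segstats --seg cereb_parc_native.mgz --sum cereb.vol.stats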
Topic : FreeSurfer in young children
Question: I know FreeSurfer can process the cortical surface correctly for subjects who are over 5 years old. What do you think of the reliability of the FreeSurfer outputs in very young children (e.g., 3-4 years old)?
Answer: Not good. We do not encourage people to use the FS recon for that age group. We do not have a quantifiable performance measure, though.