Longitudinal Processing - Tutorial
This page will take you through the steps of processing your longitudinal data: first running the cross-sectional runs, then creating the unbiased within-subject base and the longitudinal runs. We will then take a look at the results, learn how to do some post-processing, and see how to edit the data and rerun the different streams with the new edits. You can read more about the longitudinal stream at LongitudinalProcessing and about edits at LongitudinalEdits.
Preparations
If you are taking one of the formally organized courses, the tutorial data is already installed on the computer provided to you, so please skip ahead to the Viewing Volumes with Tkmedit section. If not, then to follow this exercise exactly, be sure you've downloaded the tutorial data set before you begin. If you choose not to download the data set, you can follow these instructions on your own data, but you will have to substitute your own specific paths and subject names. If you are using the tutorial data, please set the environment variable TUTORIAL_DATA to the location where you downloaded the data (here, it has been copied to $FREESURFER_HOME/subjects):
tcsh
setenv TUTORIAL_DATA $FREESURFER_HOME/subjects/buckner_data/tutorial_subjs
Notice the command to open tcsh. If you are already running the tcsh command shell, then the 'tcsh' command is not necessary.
First you need to set your SUBJECTS_DIR to the appropriate place:
setenv SUBJECTS_DIR $TUTORIAL_DATA
cd $SUBJECTS_DIR
This will set your SUBJECTS_DIR to the location of the tutorial data, provided you have defined the variable TUTORIAL_DATA as indicated at the top of this tutorial. If you are not using the tutorial data, set SUBJECTS_DIR to the directory containing the subject you will use for this tutorial.
Alternatively, you can set SUBJECTS_DIR to the directory where your cross-sectionals reside (the different time points for each subject). And don't forget to source the FreeSurfer setup script.
cd /path/to/your/data
setenv SUBJECTS_DIR $PWD
source $FREESURFER_HOME/SetUpFreeSurfer.csh
Creating Longitudinal Data
Processing your data currently consists of three steps:
First, run all your cross sectionals. Run recon-all -all for all tpNs (i.e. all time points for all subjects):
recon-all -subjid <tpNid> -all
Second, create your template/base from the tpNs. Here you can choose a name for the templateID, e.g. 'bert' or 'bert_base' if 'bert' is already used for the first time point of this subject:
recon-all -base <templateID> -tp <tp1id> -tp <tp2id> ... -all
Finally, create the longitudinals using the template and tpNs. Repeat the following step for all tpNs. The resulting directories will be named in the format tpNid.long.templateID
recon-all -long <tpNid> <templateID> -all
So for example, for a subject with two time points OAS2_0001_MR1 and OAS2_0001_MR2 you would run (don't do it, it has already been done for you):
recon-all -subjid OAS2_0001_MR1 -all
recon-all -subjid OAS2_0001_MR2 -all
(Here you can specify -i path/to/dicomfile -i ... to import DICOMs if the input is not available in OAS2_0001_MR1/mri/orig/001.mgz; see ??HOWTORUNDATA-link???.) We call these runs the cross-sectionals (or cross runs) because the two time points are processed completely independently, as if they were from different subjects.
Once the norm.mgz is available on both time points, you can create the unbiased template/base. We decided to name it OAS2_0001 :
recon-all -base OAS2_0001 -tp OAS2_0001_MR1 -tp OAS2_0001_MR2 -all
This will create the within-subject template (we will call it the base) and run it through recon-all (so it will take approximately the same time as a regular recon-all run). A directory OAS2_0001 will be created.
Finally once the base and the two cross sectionally processed time points are fully completed, you can run the longitudinal runs:
recon-all -long OAS2_0001_MR1 OAS2_0001 -all
recon-all -long OAS2_0001_MR2 OAS2_0001 -all
These runs create the directories OAS2_0001_MR1.long.OAS2_0001 and OAS2_0001_MR2.long.OAS2_0001 containing the final results. These are complete subject directories, and we will use them for any post-processing or analysis, as their results are more sensitive and repeatable than the independent cross runs. These longitudinal runs are much faster than the cross and base runs above. We call them the longitudinal or simply long runs because they make use of common information taken from the template.
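Since the three stages repeat for every subject, they are easy to script. The sketch below is a dry run of our own devising (the gen_long_cmds helper and the assumption that every subject has exactly two time points named <subj>_MR1 and <subj>_MR2 are ours, not part of FreeSurfer): it only prints the recon-all command lines, which you could then inspect and run one by one or submit to a cluster.

```shell
# Sketch (dry run): emit the recon-all commands for the three stages of the
# longitudinal stream. Assumes two time points per subject, named
# <subj>_MR1 and <subj>_MR2 -- adjust for your own naming scheme.
gen_long_cmds () {
  for subj in "$@"; do
    # 1. independent cross-sectional runs
    for tp in "${subj}_MR1" "${subj}_MR2"; do
      echo "recon-all -subjid $tp -all"
    done
    # 2. unbiased within-subject template (the base)
    echo "recon-all -base $subj -tp ${subj}_MR1 -tp ${subj}_MR2 -all"
    # 3. longitudinal runs using the base
    for tp in "${subj}_MR1" "${subj}_MR2"; do
      echo "recon-all -long $tp $subj -all"
    done
  done
}

gen_long_cmds OAS2_0001 OAS2_0004
```

Printing first and executing later makes it easy to spot a wrong subject name before burning hours of processing time.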
Inspecting Longitudinal Data
Once your results are there (and they are in this tutorial), you can take a look at different things. First let's check the base. Open the norm.mgz in freeview:
freeview -v OAS2_0001/mri/norm.mgz -f OAS2_0001/surf/lh.pial:edgecolor=red OAS2_0001/surf/rh.pial:edgecolor=red OAS2_0001/surf/lh.white:edgecolor=blue OAS2_0001/surf/rh.white:edgecolor=blue
(we'll try to have a shorter command for this in future versions ...)
alternatively you can use good-ole tkmedit:
tkmedit OAS2_0001 norm.mgz --surfs
This will show you a synthesized image, basically the average anatomy of this subject across time. If the across-time registration failed you would see a blurry image or ghosts (usually never happens, but if it does, report it). You can also inspect the surfaces on the average anatomy. This will be important later in case of edits as the surfaces are transferred into the longitudinal runs and therefore should be accurate in the base.
Now it is time to look at the longitudinal results. Starting with FreeSurfer 5.1 the base and the long runs are all in the same voxel space, therefore the images are registered and can be directly compared if opened on top of each other:
freeview -v OAS2_0001_MR1.long.OAS2_0001/mri/norm.mgz OAS2_0001_MR2.long.OAS2_0001/mri/norm.mgz
(you can additionally load the surfaces of each time point with -f, as above, and annotations or labels if you like)
Alternatively you could use tkmedit, but note that tkmedit cannot open more than two images at the same time, so we recommend freeview.
You can inspect the surfaces, annotations, and labels of each time point. Toggle the visibility of the top volume to switch back and forth between the images and directly see any longitudinal change.
Post-Processing Longitudinal Data
In order to analyze your longitudinal data, you have different options. You could, e.g., open the stats text files in each tp's /stats/ dir, which contain statistics such as the volume of subcortical structures or thickness averages for cortical regions. These statistics can be fed (after conversion) into statistical packages to run whatever analysis you are interested in, such as linear mixed models. In the GLM and QDEC group analysis tutorials you learn how to run some (simple) statistical analyses on thickness maps (e.g. to find cortical regions with different thickness across groups). We won't do any statistical analysis in this tutorial, but we will discuss how to prepare and view your longitudinal data. Here we prepare for a simple statistical model consisting of 2 stages:
- we compute some measure for each subject, for example the rate of change (mm/year thinning, or percent change)
- this measure can then be compared across groups (e.g. with QDEC) to detect disease or treatment effects.
Longitudinal QDEC Table
To get the longitudinal data ready for this you need to create a table (space separated as a text file) in the following format:
fsid           fsid-base   years  ...
OAS2_0001_MR1  OAS2_0001   0
OAS2_0001_MR2  OAS2_0001   1.25
OAS2_0004_MR1  OAS2_0004   0
OAS2_0004_MR2  OAS2_0004   1.47
...
where the first column is called fsid (containing each time point) and the second column is fsid-base, containing the base name, to group time points within subject. You can have many more columns, such as gender, age, group ... Make sure there is a column containing an accurate time variable (optimally measured in years if you are interested in yearly change), such as age or the time from the first time point. Here we use years to measure the time from the baseline scan (= tp1; note that baseline is not the same as base!). You can see that the two subjects OAS2_0001 and OAS2_0004 each have two time points that are not equally spaced (approx. 15 and 18 months apart). We have created this table for you in the subjects directory, so we can get started. You can look at the file by opening it in your favorite text editor, e.g.:
gedit long.qdec.table.dat
or as a spreadsheet in OpenOffice (select 'space' for separation):
ooffice -calc long.qdec.table.dat
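Before feeding the table to the tools below, it is worth sanity-checking that every fsid-base really has at least two time points. Here is a small awk sketch of our own (the heredoc just recreates a toy copy of the table so the example is self-contained; on your own data, point awk at the real file instead):

```shell
# Recreate a toy copy of the table (on real data, skip this step and use
# your existing long.qdec.table.dat directly).
cat > toy.qdec.table.dat <<'EOF'
fsid fsid-base years
OAS2_0001_MR1 OAS2_0001 0
OAS2_0001_MR2 OAS2_0001 1.25
OAS2_0004_MR1 OAS2_0004 0
OAS2_0004_MR2 OAS2_0004 1.47
EOF

# Count time points per fsid-base (column 2) and flag subjects with fewer
# than two; no output means the table looks fine.
awk 'NR > 1 { n[$2]++ }
     END { for (b in n) if (n[b] < 2) print b, "has only", n[b], "time point(s)" }' \
    toy.qdec.table.dat
```

A subject with a single time point would not crash the stream, but its rate and percent-change measures would be meaningless, so it is cheap insurance to check.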
Preparing the Data - QCACHE
The following command can be used to prepare the data (don't run it, it takes a while and has already been done for you):
long_mris_slopes --qdec ./long.qdec.table.dat --meas thickness --hemi lh --do-avg --do-rate --do-pc1 --do-spc --do-stack --do-label --time years --qcache fsaverage
This will:
- (--qdec) read in the qdec table
- (--meas) take the thickness measure of each time point
- (--hemi) work on left hemisphere
- (--do-avg) compute the temporal average (thickness at midpoint of linear fit, here it is just the average)
- (--do-rate) compute the rate of change (thickening in mm/year)
- (--do-pc1) compute the percent change (with respect to time point 1)
- (--do-spc) compute a symmetrized percent change (with respect to the temporal average)
- (--do-stack) output a stacked thickness file for each subject (time series)
- (--do-label) intersect the cortex label to ensure we don't include non cortex regions
- (--time) specify the column in long.qdec.table.dat that contains the time variable (here 'years')
- (--qcache) and automatically smooth everything and map it to fsaverage for a potential group analysis using qdec
You would then run the same command for the right hemisphere (--hemi rh). Note that if you split your table into smaller tables, each containing only the information for a single subject, you can run this on a cluster in parallel for each subject to speed things up (use long_qdec_table --qdec ./long.qdec.table.dat --split fsid-base to split the table into individual subjects).
Now before continuing, let's get an idea about what the above 4 measures mean in our setting (2 time points):
The temporal average is simply the average thickness: avg = 0.5 * (thick1 + thick2)
The rate of change is the difference per time unit: rate = (thick2 - thick1) / (time2 - time1), here in mm/year. For thinning we expect it to be negative.
The percent change (pc1) is the rate with respect to the thickness at the first time point: pc1 = rate / thick1. We also expect it to be negative, and it tells you how much thinning (in percent) we have at a given cortical location.
The symmetrized percent change (spc) is the rate with respect to the average thickness: spc = rate / avg. This is a more robust measure than pc1, because the thickness at time point 1 could be an outlier. It is also symmetric: reversing the order of tp1 and tp2 only switches its sign, which is not true for pc1. For this and other reasons related to statistical power, we recommend using spc.
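To make the four formulas concrete, here is a toy computation for a single vertex with two time points (the numbers are made up, not from the tutorial data, and the factor of 100 is ours, just to express pc1 and spc in percent):

```shell
# Toy numbers: thickness 2.8 mm at year 0, 2.6 mm at year 1.25.
echo "2.8 2.6 0 1.25" | awk '{
  thick1 = $1; thick2 = $2; time1 = $3; time2 = $4
  avg  = 0.5 * (thick1 + thick2)               # temporal average
  rate = (thick2 - thick1) / (time2 - time1)   # mm/year, negative = thinning
  pc1  = 100 * rate / thick1                   # percent change w.r.t. tp1
  spc  = 100 * rate / avg                      # symmetrized percent change
  printf "avg=%.2f rate=%.3f pc1=%.2f spc=%.2f\n", avg, rate, pc1, spc
}'
```

With these toy numbers the vertex thins by 0.16 mm/year, i.e. roughly 5.7% of its tp1 thickness per year (pc1) and 5.9% of its average thickness per year (spc).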
So let's investigate (some of) the results, which can be found in each base surf/qcache directory. Call for example:
tksurfer OAS2_0001 lh pial -overlay $SUBJECTS_DIR/OAS2_0001/surf/qcache/lh.long.thickness-avg.fwhm15.mgh -timecourse $SUBJECTS_DIR/OAS2_0001/surf/qcache/lh.long.thickness-stack.mgh -aparc
to open up the pial surface (left hemi) of subject OAS2_0001. You are looking at the smoothed average thickness (color overlay). If you click on any location on the surface, you can see a plot of the thickness values at the two time points. The -aparc flag opens the cortical parcellation, which can be switched on and off; it helps you find out which region you are inspecting when clicking on the surface. For a single subject, values are often noisy even after smoothing. That is why a group analysis needs several subjects in each group.
In a similar fashion you can open, for example, the symmetrized percent change. This time we open it on fsaverage, an average subject provided with FreeSurfer and often used as the common target to compare results across subjects. The --qcache flag of long_mris_slopes has conveniently registered and mapped all results to this average subject:
tksurfer fsaverage lh pial -overlay $SUBJECTS_DIR/OAS2_0001/surf/qcache/lh.long.thickness-spc.fwhm15.fsaverage.mgh -aparc
note the 'fsaverage' in the filename.
Additional QDEC Info
In order to run a QDEC group analysis (you will learn exactly how to do this later), you need a qdec table. Since qdec cannot yet work with longitudinal qdec tables, you have to collapse the table into a cross-sectional form. Use
long_qdec_table --qdec ./long.qdec.table.dat --cross --out ./cross.qdec.table.dat
to create a table with only a single line for each subject. For each subject, numerical values such as age or height will be averaged across time, other values will be copied from the first tp for each subject as ordered in the input table.
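To see what this collapse does to a numeric column, here is a toy mimic of our own using awk (not the real tool): it averages the years column within each fsid-base, as long_qdec_table --cross does for numerical values. Non-numeric columns, which the real tool copies from the first time point, are not handled here.

```shell
# Toy copy of the table, so the example is self-contained.
cat > toy.qdec.table.dat <<'EOF'
fsid fsid-base years
OAS2_0001_MR1 OAS2_0001 0
OAS2_0001_MR2 OAS2_0001 1.25
OAS2_0004_MR1 OAS2_0004 0
OAS2_0004_MR2 OAS2_0004 1.47
EOF

# Average the numeric 'years' column (column 3) within each fsid-base,
# printing one line per subject.
awk 'NR > 1 { sum[$2] += $3; n[$2]++ }
     END { for (b in n) printf "%s %.3f\n", b, sum[b] / n[b] }' toy.qdec.table.dat
```

Averaging across time keeps one representative value per subject, which is what a cross-sectional tool like qdec expects.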
Editing Longitudinal Data
Editing longitudinal data can be complicated, but in some cases you actually save time, as some edits are only necessary in the base. You should be familiar with Edits and might also want to check the page about LongitudinalEdits.
Here are some examples of some common errors you can encounter with your data:
Skullstrip Error
Take a look at subject OAS2_0004_before in tkmedit. First open the base.
tkmedit OAS2_0004 brainmask.mgz -aux T1.mgz -surfs
As always, you should also check out the cross-sectionals and longitudinals for the same subject to see if the problem is present in all parts of the streams.
To open the cross (in separate terminals):
tkmedit OAS2_0004_MR1 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0004_MR2 brainmask.mgz -aux T1.mgz -surfs
To open the longs (in separate terminals):
tkmedit OAS2_0004_MR1.long.OAS2_0004 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0004_MR2.long.OAS2_0004 brainmask.mgz -aux T1.mgz -surfs
This will open the brainmask.mgz volume, the T1.mgz loaded as aux, and the surfaces for both hemispheres.
The trouble with this subject has occurred in the skull stripping step. Check the brainmask.mgz volume carefully, comparing it to the T1.mgz volume (loaded in aux) to make sure that the skull has been completely stripped away, leaving behind the complete cortex and the cerebellum.
Skullstrip failures are best fixed by adjusting the watershed parameter of the cross-sectionals (this can only be done at that stage of the stream) and then recreating the base and longitudinals from there. Click here for detailed instructions on how to do this.
Pial/Brainmask Edits
There are situations where you will need to make edits to the brainmask.mgz. You can open subject OAS2_0057_before to see where and how the edits should be done.
tkmedit OAS2_0057 brainmask.mgz -aux T1.mgz -surfs
This will open the brainmask.mgz volume, T1.mgz volume as aux, and all surfaces. You should also check out the cross-sectionals and the longitudinals of this subject to check if the problem is present in all parts of the stream:
tkmedit OAS2_0057_MR1 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0057_MR2 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0057_MR1.long.OAS2_0057 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0057_MR2.long.OAS2_0057 brainmask.mgz -aux T1.mgz -surfs
See if you can find the slices where you will need to edit the brainmask.mgz in order to correct the surfaces. You can click here for a detailed description of how and at which stage you can fix this problem.
Control Points Edits
Sometimes there are areas in the white matter with intensity lower than 110 where the wm is not included in the surfaces. The best way to fix this problem is to add control points.
Take a look at the next subject, OAS2_0121_before, in all of the cross, base, and long runs to see if you can find the places that need control points.
tkmedit OAS2_0121_MR1 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0121_MR2 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0121 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0121_MR1.long.OAS2_0121 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0121_MR2.long.OAS2_0121 brainmask.mgz -aux T1.mgz -surfs
Click here if you want to see where to place the control points and what effect they have on the longitudinals.
White Matter Edits
White matter edits are possibly the most common type of edits you will come across in data. In many cases, even a difference of 1 wm voxel can cause a huge defect in the surfaces. Take a look at the following subject in tkmedit. For this one, we will open the brainmask.mgz volume and the wm.mgz volume as aux instead of T1.mgz.
tkmedit OAS2_0185 brainmask.mgz -aux wm.mgz -surfs
tkmedit OAS2_0185_MR1 brainmask.mgz -aux wm.mgz -surfs
tkmedit OAS2_0185_MR2 brainmask.mgz -aux wm.mgz -surfs
tkmedit OAS2_0185_MR1.long.OAS2_0185 brainmask.mgz -aux wm.mgz -surfs
tkmedit OAS2_0185_MR2.long.OAS2_0185 brainmask.mgz -aux wm.mgz -surfs
Toggle between the two volumes (use alt-c) and see if you can spot an area where a simple wm edit will make a huge difference to the surfaces. Once you find it, or if you give up, click here to see how the edits can be done.
