
Longitudinal Processing - Tutorial

This page will take you through the steps of processing your longitudinal data: first running the cross-sectional streams, then creating the unbiased within-subject base and the longitudinal runs. After that we will take a look at the results, learn how to do some post-processing, and cover editing the data and rerunning the different streams with the new edits. You can read more about the longitudinal stream at LongitudinalProcessing and about edits at LongitudinalEdits.

Preparations

If you are taking one of the formally organized courses, the tutorial data is already installed on the computer provided to you, so please skip ahead to the Viewing Volumes with Tkmedit section. If not, then to follow this exercise exactly be sure you've downloaded the tutorial data set before you begin. If you choose not to download the data set you can follow these instructions on your own data, but you will have to substitute your own specific paths and subject names. If you are using the tutorial data please set the environment variable TUTORIAL_DATA to the location that you have downloaded the data to (here, it has been copied to $FREESURFER_HOME/subjects):

tcsh
setenv TUTORIAL_DATA $FREESURFER_HOME/subjects/buckner_data/tutorial_subjs

Notice the command to open tcsh. If you are already running the tcsh command shell, then the 'tcsh' command is not necessary.

First you need to set your SUBJECTS_DIR to the appropriate place:

setenv SUBJECTS_DIR $TUTORIAL_DATA
cd $SUBJECTS_DIR

This will set your SUBJECTS_DIR to the location where your tutorial data is, if you have defined the variable TUTORIAL_DATA as indicated at the top of this tutorial. If you are not using the tutorial data you should set your SUBJECTS_DIR to the directory in which the subject you will use for this tutorial is located.

Alternatively, you can set SUBJECTS_DIR to the directory where your cross-sectional runs reside (the different time points for each subject). And don't forget to source the FreeSurfer setup script:

cd /path/to/your/data
setenv SUBJECTS_DIR $PWD
source $FREESURFER_HOME/SetUpFreeSurfer.csh


Creating Longitudinal Data

Processing your data currently consists of three steps:

First, run all your cross sectionals. Run recon-all -all for all tpNs (i.e. all time points for all subjects):

recon-all -subjid <tpNid> -all

Second, create your template/base from the tpNs. Here you can choose a name for the templateID, e.g. 'bert' or 'bert_base' if 'bert' is already used for the first time point of this subject:

recon-all -base <templateID> -tp <tp1id> -tp <tp2id> ... -all

Finally, create the longitudinal runs using the template and the tpNs. Repeat the following step for all tpNs. The resulting directories will be named in the format tpNid.long.templateID:

recon-all -long <tpNid> <templateID> -all

So for example, for a subject with two time points OAS2_0001_MR1 and OAS2_0001_MR2 you would run (don't run these commands; they have already been done for you):

recon-all -subjid OAS2_0001_MR1 -all 
recon-all -subjid OAS2_0001_MR2 -all

(Here you can specify -i path/to/dicomfile -i ... to import DICOMs if the input is not already available in OAS2_0001_MR1/mri/orig/001.mgz; see ??HOWTORUNDATA???.) We call these runs the cross-sectional runs (or cross runs) because the two time points are processed completely independently, as if they were from different subjects.

Once the norm.mgz is available for both time points, you can create the unbiased template/base. We will call it OAS2_0001:

recon-all -base OAS2_0001 -tp OAS2_0001_MR1 -tp OAS2_0001_MR2 -all

This will create the within-subject template (we will call it the base) and run it through recon-all (so it will take approximately the same time as a regular recon-all run). A directory OAS2_0001 will be created.

Finally, once the base and the two cross-sectionally processed time points are fully complete, you can run the longitudinal runs:

recon-all -long OAS2_0001_MR1 OAS2_0001 -all
recon-all -long OAS2_0001_MR2 OAS2_0001 -all

These runs create the directories OAS2_0001_MR1.long.OAS2_0001 and OAS2_0001_MR2.long.OAS2_0001 containing the final results. These are complete subject directories and we will use them for any post-processing or analysis, as the results are more sensitive and repeatable than the independent cross runs. These longitudinal processes run much faster than the cross and base runs above. We call them the long runs, because they make use of common information taken from the template.
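The three steps above can be sketched as a small helper that generates the recon-all command lines for one subject. This is a hypothetical Python helper for illustration only; it builds the command strings described above but does not run FreeSurfer itself:

```python
# Sketch: generate the three-step longitudinal recon-all command sequence
# for one subject. The helper name is hypothetical; recon-all is not
# invoked here, only the command strings are assembled.

def longitudinal_commands(tp_ids, template_id):
    """Return the recon-all commands for the cross, base and long steps."""
    # Step 1: independent cross-sectional runs, one per time point
    cross = ["recon-all -subjid %s -all" % tp for tp in tp_ids]
    # Step 2: unbiased within-subject template (the "base")
    base = "recon-all -base %s %s -all" % (
        template_id, " ".join("-tp %s" % tp for tp in tp_ids))
    # Step 3: longitudinal runs, one per time point, using the base
    long_runs = ["recon-all -long %s %s -all" % (tp, template_id)
                 for tp in tp_ids]
    return cross + [base] + long_runs

cmds = longitudinal_commands(["OAS2_0001_MR1", "OAS2_0001_MR2"], "OAS2_0001")
for cmd in cmds:
    print(cmd)
```

Note that step 2 depends on all cross runs and step 3 on the base, so the three stages must run in order, while the commands within each stage are independent of each other.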


Inspecting Longitudinal Data

Once your results are there (and they are in this tutorial), you can take a look at different things. First let's check the base. Open the norm.mgz in freeview:

freeview -v OAS2_0001/mri/norm.mgz -f OAS2_0001/surf/lh.pial:edgecolor=blue OAS2_0001/surf/rh.pial:edgecolor=blue OAS2_0001/surf/lh.white:edgecolor=red OAS2_0001/surf/rh.white:edgecolor=red


Alternatively, you can use the old-fashioned tkmedit:

tkmedit OAS2_0001 norm.mgz --surfs

This will show you a synthesized image, basically the average anatomy of this subject across time. If the across-time registration failed you would see a blurry image or ghosting artifacts (this rarely happens, but if it does, report it). You can also inspect the surfaces on the average anatomy. This will be important later in case of edits, as the surfaces are transferred into the longitudinal runs and therefore should be accurate in the base.

Now it is time to look at the longitudinal results. Starting with FreeSurfer 5.1 the base and the long runs are all in the same voxel space, so the images are registered and can be directly compared when opened on top of each other:

freeview -v OAS2_0001_MR1.long.OAS2_0001/mri/norm.mgz OAS2_0001_MR2.long.OAS2_0001/mri/norm.mgz

(you can additionally load the surfaces, annotations and labels of each time point on top)

Note that tkmedit cannot open more than two images at the same time, so for comparing several time points we recommend freeview.

Switch back and forth between the time point images (e.g. by toggling the visibility of the top volume) to directly see any longitudinal change.


Post-Processing Longitudinal Data

In order to analyze your longitudinal data, you have different options. You could, for example, look at the files in each time point's /stats/ dir, containing statistics such as volumes of subcortical structures or thickness averages for cortical regions. These statistics can be fed (after conversion) into statistical packages to run your analysis, such as linear mixed models. In the GLM and QDEC tutorials you learn how to run some (simple) statistical analyses on thickness maps (e.g. to find cortical regions with different thickness across groups). We won't do any statistical analysis in this tutorial, but we will discuss how to prepare and view your longitudinal data. A simple statistical model is a two-stage setting, where we first compute some measure for each subject, for example the rate of change (mm/year thinning, or percent change), and then compare this measure across groups to detect disease or treatment effects.
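The two-stage idea can be sketched in a few lines of Python. The thickness values below are made-up illustration numbers, not tutorial data, and stage 2 here just compares group means, where a real analysis would use a proper statistical test:

```python
# Sketch of the two-stage model: stage 1 computes a per-subject rate of
# change (mm/year), stage 2 compares that rate across groups. All values
# below are hypothetical illustration numbers.

def rate_of_change(thickness, years):
    """Stage 1: least-squares slope of thickness over time (mm/year)."""
    n = len(years)
    mean_t = sum(years) / n
    mean_y = sum(thickness) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in zip(years, thickness))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den

# Hypothetical subjects: (thickness per time point, years per time point)
controls = [([2.50, 2.48], [0, 1.25]), ([2.61, 2.60], [0, 1.10])]
patients = [([2.40, 2.30], [0, 1.00]), ([2.55, 2.42], [0, 1.30])]

# Stage 2: compare the group means of the per-subject rates
ctrl_rates = [rate_of_change(th, yr) for th, yr in controls]
pat_rates = [rate_of_change(th, yr) for th, yr in patients]
ctrl_mean = sum(ctrl_rates) / len(ctrl_rates)
pat_mean = sum(pat_rates) / len(pat_rates)
print("control mean rate: %.4f mm/year" % ctrl_mean)
print("patient mean rate: %.4f mm/year" % pat_mean)
```

With only two time points per subject the slope reduces to the simple difference divided by the time interval; the least-squares form also covers subjects with more time points.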

To get the longitudinal data ready for this you need to create a table (space separated as a text file) in the following format:

|| fsid || fsid-base || years ||
|| OAS2_0001_MR1 || OAS2_0001 || 0 ||
|| OAS2_0001_MR2 || OAS2_0001 || 1.25 ||
|| OAS2_0004_MR1 || OAS2_0004 || 0 ||
|| OAS2_0004_MR2 || OAS2_0004 || 1.47 ||
|| ... || || ||

where the first column is called fsid (containing each time point) and the second column is fsid-base, containing the base name, to group time points within subject. You can have many more columns such as gender, age, group, etc. Make sure there is one column containing the time variable (optimally in years), such as age or the time from the first time point. Here we use years to measure the time from start. You can see that the two subjects OAS2_0001 and OAS2_0004 each have two time points that are not equally spaced. We have created this table for you, so we can get started.
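To illustrate the format, here is a Python sketch that reads such a table and groups the time points by their base, similar to the grouping long_mris_slopes needs to do internally (the parsing details are an assumption for illustration):

```python
# Sketch: read a space-separated longitudinal qdec table (format shown
# above) and group time points by fsid-base. The sample text mirrors the
# tutorial table; the reader function itself is a hypothetical helper.

import io

SAMPLE = """fsid fsid-base years
OAS2_0001_MR1 OAS2_0001 0
OAS2_0001_MR2 OAS2_0001 1.25
OAS2_0004_MR1 OAS2_0004 0
OAS2_0004_MR2 OAS2_0004 1.47
"""

def read_long_table(fobj):
    """Return {fsid-base: [(fsid, time), ...]} from a long qdec table."""
    header = fobj.readline().split()
    fsid_col = header.index("fsid")
    base_col = header.index("fsid-base")
    time_col = header.index("years")  # the column passed via --time
    groups = {}
    for line in fobj:
        cols = line.split()
        if not cols:
            continue  # skip blank lines
        groups.setdefault(cols[base_col], []).append(
            (cols[fsid_col], float(cols[time_col])))
    return groups

groups = read_long_table(io.StringIO(SAMPLE))
print(groups["OAS2_0001"])
```

Extra columns (gender, age, group, ...) do not disturb this scheme, since the columns of interest are located by the header names rather than by position.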

The following command can be used to prepare the data (don't run it, it will take a while and has already been done for you):

long_mris_slopes --qdec ./longqdec.table.dat --meas thickness --hemi lh --do-avg --do-rate --do-pc1 --do-spc --do-stack --do-label --time years --qcache fsaverage

This will:

  • (--qdec) read in the qdec table
  • (--meas) take the thickness measure of each time point
  • (--hemi) work on left hemisphere
  • (--do-avg) compute the temporal average (average thickness at the middle time, here it is just the average)
  • (--do-rate) compute the rate of change (thickening in mm/year)
  • (--do-pc1) compute the percent change (with respect to time point 1)
  • (--do-spc) compute a symmetrized percent change (with respect to the temporal average)
  • (--do-stack) output a stacked thickness file for each subject (time series)
  • (--do-label) intersect with the cortex label to ensure we don't include non-cortex regions
  • (--time) specify the column in the longqdec.table.dat that contains the time variable (here 'years')
  • (--qcache) and automatically smooth everything and map it to fsaverage for a potential group analysis using qdec

You would then run the same command for the right hemisphere (--hemi rh). Note: if you split your table into smaller tables, each containing only the information for a single subject, you can run this in parallel on a cluster, one job per subject, to speed things up.
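The per-subject split suggested above can be sketched as follows. This is a hypothetical helper for illustration; it splits the table text in memory, and writing each piece to its own file for the cluster jobs is left to you:

```python
# Sketch: break one long qdec table into per-subject tables (one per
# fsid-base), each keeping the original header row, so long_mris_slopes
# can be run on each small table in parallel.

def split_table(text):
    """Return {fsid-base: table text with header + that subject's rows}."""
    lines = text.strip().splitlines()
    header = lines[0]
    base_col = header.split().index("fsid-base")
    pieces = {}
    for line in lines[1:]:
        base = line.split()[base_col]
        # setdefault seeds each new subject's table with the header row
        pieces.setdefault(base, [header]).append(line)
    return {base: "\n".join(rows) + "\n" for base, rows in pieces.items()}

table = """fsid fsid-base years
OAS2_0001_MR1 OAS2_0001 0
OAS2_0001_MR2 OAS2_0001 1.25
OAS2_0004_MR1 OAS2_0004 0
OAS2_0004_MR2 OAS2_0004 1.47
"""
for base, piece in sorted(split_table(table).items()):
    print("---", base)
    print(piece, end="")
```

Each resulting small table is a valid qdec table on its own, so the long_mris_slopes command shown above can be pointed at it unchanged.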

So let's investigate (some of) the results. Call for example:

tksurfer OAS2_0001 lh pial -overlay $SUBJECTS_DIR/OAS2_0001/surf/qcache/lh.long.thickness-avg.fwhm15.mgh -timecourse $SUBJECTS_DIR/OAS2_0001/surf/qcache/lh.long.thickness-stack.mgh -aparc

to open up the pial surface (left hemisphere) of subject OAS2_0001. You are looking at the smoothed average thickness (color overlay). If you click on any location on the surface you can see a plot of the thickness values at the two time points. For a single subject, values are often noisy even after smoothing. That is why for a group analysis we need several subjects in each group.


Editing Longitudinal Data


MartinReuter

FsTutorial/LongitudinalTutorial_tktools (last edited 2013-11-01 14:33:07 by MaritzaEbling)