Longitudinal Processing - Tutorial
This page will take you through the steps of processing your longitudinal data: first running the cross sectionals and creating the unbiased within-subject base and the longitudinals. Then we will take a look at the results, learn how to do some post-processing, and learn about editing the data and rerunning the different streams with the new edits. You can read more about the longitudinal stream at LongitudinalProcessing and about edits at LongitudinalEdits.
Preparations
If you are taking one of the formally organized courses, the tutorial data is already installed on the computer provided to you, so please skip ahead to the Creating Longitudinal Data section. If not, then to follow this exercise exactly, be sure you've downloaded the tutorial data set before you begin. If you choose not to download the data set you can follow these instructions on your own data, but you will have to substitute your own specific paths and subject names. If you are using the tutorial data please set the environment variable TUTORIAL_DATA to the location that you have downloaded the data to (here, it has been copied to $FREESURFER_HOME/subjects):
tcsh
setenv TUTORIAL_DATA $FREESURFER_HOME/subjects/buckner_data/tutorial_subjs
Notice the command to open tcsh. If you are already running the tcsh command shell, then the 'tcsh' command is not necessary.
First you need to set your SUBJECTS_DIR to the appropriate place:
setenv SUBJECTS_DIR $TUTORIAL_DATA
cd $SUBJECTS_DIR
this will set your SUBJECTS_DIR to the location where your tutorial data is if you have defined the variable TUTORIAL_DATA as indicated at the top of this tutorial. If you are not using the tutorial data you should set your SUBJECTS_DIR to the directory in which the subject you will use for this tutorial is located.
Alternatively you can set SUBJECTS_DIR to the directory where your cross sectionals reside (the different time points for each subject). And don't forget to source FREESURFER.
cd /path/to/your/data
setenv SUBJECTS_DIR $PWD
source $FREESURFER_HOME/SetUpFreeSurfer.csh
Creating Longitudinal Data
Processing your data currently consists of three steps:
First, run all your cross sectionals. Run recon-all -all for all tpNs (i.e. all time points for all subjects):
recon-all -subjid <tpNid> -all
Second, create your template/base from the tpNs. Here you can choose a name for the templateID, e.g. 'bert' or 'bert_base' if 'bert' is already used for the first time point of this subject:
recon-all -base <templateID> -tp <tp1id> -tp <tp2id> ... -all
Finally, create the longitudinals using the template and tpNs. Repeat the following step for all tpNs. The resulting directories will be named in the format tpNid.long.templateID
recon-all -long <tpNid> <templateID> -all
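The per-time-point long step above can be sketched as a small loop. This is only an illustration: the names bert_tp1, bert_tp2 and bert_base are hypothetical, and we only print the commands instead of running them (POSIX sh shown; in the tcsh shell used earlier you would write a foreach loop):

```shell
# Illustration only: print the per-time-point long commands for one subject.
# bert_tp1, bert_tp2 and bert_base are hypothetical names; substitute your own.
# In tcsh:  foreach tp (bert_tp1 bert_tp2) ... end
for tp in bert_tp1 bert_tp2; do
  echo "recon-all -long $tp bert_base -all"
done > long_cmds.txt
cat long_cmds.txt
```

Replacing echo with the real call (once the cross runs and the base are complete) runs the long stream for every time point.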
So for example, for a subject with two time points OAS2_0001_MR1 and OAS2_0001_MR2 you would run (don't run it, it has already been done for you):
recon-all -subjid OAS2_0001_MR1 -all
recon-all -subjid OAS2_0001_MR2 -all
(here you can specify -i path/to/dicomfile -i ... to import dicoms, if the input is not available in OAS2_0001_MR1/mri/orig/001.mgz ... see ??HOWTORUNDATA-link???). We call these runs the cross sectionals (or cross runs) because the two time points are processed completely independently as if they were from different subjects.
Once the norm.mgz is available on both time points, you can create the unbiased template/base. We will call it OAS2_0001 :
recon-all -base OAS2_0001 -tp OAS2_0001_MR1 -tp OAS2_0001_MR2 -all
This will create the within-subject template (we will call it the base) and run it through recon-all (so it will take approximately the same time as a regular recon-all run). A directory OAS2_0001 will be created.
Finally once the base and the two cross sectionally processed time points are fully completed, you can run the longitudinal runs:
recon-all -long OAS2_0001_MR1 OAS2_0001 -all
recon-all -long OAS2_0001_MR2 OAS2_0001 -all
These runs create the directories OAS2_0001_MR1.long.OAS2_0001 and OAS2_0001_MR2.long.OAS2_0001 containing the final results. These are complete subjects directories and we will use them for any postprocessing or analysis as the results are more sensitive and repeatable than the independent cross runs. These longitudinal processes run much faster than the cross and base above. We call them the long runs, because they make use of common information taken from the template.
Inspecting Longitudinal Data
Once your results are there (and they are in this tutorial), you can take a look at different things. First let's check the base. Open the norm.mgz into the freeview:
freeview -v OAS2_0001/mri/norm.mgz -f OAS2_0001/surf/lh.pial:edgecolor=blue OAS2_0001/surf/rh.pial:edgecolor=blue OAS2_0001/surf/lh.white:edgecolor=red OAS2_0001/surf/rh.white:edgecolor=red
alternatively you can use good-ole tkmedit:
tkmedit OAS2_0001 norm.mgz -surfs
This will show you a synthesized image, basically the average anatomy of this subject across time. If the across-time registration failed you would see a blurry image or ghosts (usually never happens, but if it does, report it). You can also inspect the surfaces on the average anatomy. This will be important later in case of edits as the surfaces are transferred into the longitudinal runs and therefore should be accurate in the base.
Now it is time to look at the longitudinal results. Starting with FreeSurfer 5.1 the base and the long runs are all in the same voxel space, therefore the images will be registered and can be directly compared if opened on top of each other:
For example, to load the norm.mgz volumes of the two long runs on top of each other in freeview:

freeview -v OAS2_0001_MR1.long.OAS2_0001/mri/norm.mgz OAS2_0001_MR2.long.OAS2_0001/mri/norm.mgz

Note that tkmedit cannot open more than 2 images at the same time, therefore we recommend freeview. You can additionally load the corresponding surfaces, annotations and labels from each long directory. Switch back and forth between the two time points (e.g. by toggling the visibility of the top volume layer) to directly see any longitudinal change.
Post-Processing Longitudinal Data
In order to analyze your longitudinal data, you have different options. You could, e.g., open the stats text files in each tp's /stats/ dir, containing statistics such as volumes of subcortical structures or thickness averages for cortical regions. These statistics can be fed (after conversion) into statistical packages to run whatever analysis you are interested in, such as linear mixed models. In the GLM and QDEC group analysis tutorials you learn how to run some (simple) statistical analyses on thickness maps (e.g. to find cortical regions with different thickness across groups). We won't do any statistical analysis in this tutorial, but we will discuss how to prepare and view your longitudinal data. Here we prepare for a simple statistical model consisting of 2 stages:
1. we compute some measure for each subject, for example the rate of change (mm/year thinning, or percent change)
2. this measure can then be compared across groups (e.g. with QDEC) to detect disease or treatment effects.
To get the longitudinal data ready for this you need to create a table (space separated as a text file) in the following format:
fsid           fsid-base  years
OAS2_0001_MR1  OAS2_0001  0
OAS2_0001_MR2  OAS2_0001  1.25
OAS2_0004_MR1  OAS2_0004  0
OAS2_0004_MR2  OAS2_0004  1.47
...
where the first column is called fsid (containing each time point) and the second column is fsid-base, containing the base name, to group time points within subject. You can have many more columns, such as gender, age, group ... Make sure there is one column containing a time variable (optimally measured in years if you are interested in yearly change) such as age or the time from the first time point. Here we use years to measure the time from the baseline scan (= tp1; note the baseline is not the same as the base!). You can see that the two subjects OAS2_0001 and OAS2_0004 each have two time points that are not equally spaced. We have created this table for you in the subjects directory, so we can get started.
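As a sketch, the table above could be written out with a here-document; the file name long.qdec.table.dat matches the one used by the command in this tutorial, and the rows are the ones shown above:

```shell
# Sketch: write the space-separated long qdec table described above.
cat > long.qdec.table.dat <<'EOF'
fsid fsid-base years
OAS2_0001_MR1 OAS2_0001 0
OAS2_0001_MR2 OAS2_0001 1.25
OAS2_0004_MR1 OAS2_0004 0
OAS2_0004_MR2 OAS2_0004 1.47
EOF
```

Any method that produces a plain space-separated text file in this format works just as well (a spreadsheet export, a small script, etc.).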
The following commands can be used to prepare the data (don't run them; it will take a while and has already been done for you):
long_mris_slopes --qdec ./long.qdec.table.dat --meas thickness --hemi lh --do-avg --do-rate --do-pc1 --do-spc --do-stack --do-label --time years --qcache fsaverage
This will:
- (--qdec) read in the qdec table
- (--meas) take the thickness measure of each time point
- (--hemi) work on left hemisphere
- (--do-avg) compute the temporal average (average thickness at the middle time, here it is just the average)
- (--do-rate) compute the rate of change (thickening in mm/year)
- (--do-pc1) compute the percent change (with respect to time point 1)
- (--do-spc) compute a symmetrized percent change (with respect to the temporal average)
- (--do-stack) output a stacked thickness file for each subject (time series)
- (--do-label) intersect the cortex label to ensure we don't include non cortex regions
- (--time) specify the column in the long.qdec.table.dat that contains the time variable (here 'years')
- (--qcache) and automatically smooth everything and map it to fsaverage for a potential group analysis using qdec
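To make the rate and percent-change flags concrete, here is a small numeric sketch for a single vertex with two time points (the thickness values are made up, and the actual tool fits a line across all time points of a subject; the formulas below reflect our understanding of rate, pc1 and spc):

```shell
# Sketch of what --do-rate, --do-pc1 and --do-spc compute at one vertex
# (made-up values: 2.50 mm at year 0, 2.40 mm at year 1.25).
awk 'BEGIN {
  t1 = 0;    t2 = 1.25      # time (years)
  y1 = 2.50; y2 = 2.40      # thickness (mm) at tp1 and tp2
  rate = (y2 - y1) / (t2 - t1)   # slope: mm/year
  avg  = (y1 + y2) / 2           # temporal average
  printf "rate=%.4f pc1=%.4f spc=%.4f\n", rate, 100*rate/y1, 100*rate/avg
}'
```

So this vertex thins by 0.08 mm/year, i.e. 3.2 percent of the tp1 thickness per year (pc1), or about 3.27 percent of the average thickness per year (spc).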
You would then run the same command for the right hemisphere (--hemi rh). Note: if you split your table into smaller tables, each containing only the information for one subject, you can run this in parallel for each subject on a cluster to speed things up.
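The per-subject split mentioned above can be sketched with awk, keyed on the fsid-base column. The output file names of the form <base>.qdec.dat are hypothetical, and a tiny demo table is created only if long.qdec.table.dat is not already present:

```shell
# Sketch: split long.qdec.table.dat into one small table per subject
# (keyed on the fsid-base column) for parallel processing on a cluster.
# Create a tiny demo table only if the real one is not present.
[ -f long.qdec.table.dat ] || printf 'fsid fsid-base years\nOAS2_0001_MR1 OAS2_0001 0\nOAS2_0001_MR2 OAS2_0001 1.25\n' > long.qdec.table.dat
awk 'NR == 1 { hdr = $0; next }       # remember the header line
     { f = $2 ".qdec.dat"             # one output file per fsid-base
       if (!(f in seen)) { print hdr > f; seen[f] = 1 }
       print $0 > f }' long.qdec.table.dat
```

Each resulting <base>.qdec.dat can then be passed to long_mris_slopes in a separate cluster job.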
So let's investigate (some of) the results. Call for example:
tksurfer OAS2_0001 lh pial -overlay $SUBJECTS_DIR/OAS2_0001/surf/qcache/lh.long.thickness-avg.fwhm15.mgh -timecourse $SUBJECTS_DIR/OAS2_0001/surf/qcache/lh.long.thickness-stack.mgh -aparc
to open up the pial surface (left hemi) of subject OAS2_0001. You are looking at the smoothed average thickness (color overlay). If you click at any location on the surface you can see a plot of the thickness values at the two time points. The -aparc flag opens the cortical parcellation, which can be switched on and off; it helps you find out which region you are inspecting when clicking on the surface. For a single subject, values are often noisy even after smoothing; that is why a group analysis needs several subjects in each group.
In a similar fashion you can open, for example, the symmetrized percent change. This time we open it on fsaverage, a subject used as the target to compare things across subjects. The --qcache flag has conveniently registered and mapped all results to this average subject (provided with FreeSurfer):
tksurfer fsaverage lh pial -overlay $SUBJECTS_DIR/OAS2_0001/surf/qcache/lh.long.thickness-spc.fwhm15.fsaverage.mgh -aparc
note the .fsaverage part of the filename.
Editing Longitudinal Data
Editing longitudinal data can be complicated, but in some cases you actually save time, as some edits are only necessary in the base. You should be familiar with Edits and might also want to check the LongitudinalEdits page.
Here are some examples of some major errors you can encounter with your data:
Skullstrip Error
Take a look at subject OAS2_0004_before in tkmedit. First open the base.
tkmedit OAS2_0004_before brainmask.mgz -aux T1.mgz -surfs
As always, you should also check out the cross-sectionals and longitudinals for the same subject to see if the problem is present in all parts of the streams.
To open the cross:
tkmedit OAS2_0004_MR1_before brainmask.mgz -aux T1.mgz -surfs
and
tkmedit OAS2_0004_MR2_before brainmask.mgz -aux T1.mgz -surfs
To open the long:
tkmedit OAS2_0004_MR1.long.OAS2_0004_before brainmask.mgz -aux T1.mgz -surfs
and
tkmedit OAS2_0004_MR2.long.OAS2_0004_before brainmask.mgz -aux T1.mgz -surfs
This will open the brainmask.mgz volume, the T1.mgz loaded as aux, and the surfaces for both hemispheres.
The trouble with this subject has occurred in the skull stripping step. Check the brainmask.mgz volume carefully, comparing it to the T1.mgz volume (loaded in aux) to make sure that the skull has been completely stripped away, leaving behind the complete cortex and the cerebellum.
For problems with a skullstrip failure, it is best to fix it by adjusting the watershed parameter of the cross-sectionals (this can only be done at that part of the stream), and then to recreate the base and longitudinals from there. See FsTutorial/LongSkullStripFix for detailed instructions on how to do this.