Longitudinal Processing - Tutorial
This page will take you through the steps of processing your longitudinal data: first running the cross-sectionals and creating the unbiased within-subject base and the longitudinals. Then we will take a look at the results, learn how to do some post-processing, and see how to edit the data and rerun the different streams with the new edits. You can read more about the longitudinal stream at LongitudinalProcessing and about edits at LongitudinalEdits.
Preparations
If you are taking one of the formally organized courses, the tutorial data is already installed on the computer provided to you, so please skip ahead to the Viewing Volumes with Tkmedit section. If not, then to follow this exercise exactly, be sure you've downloaded the tutorial data set before you begin. If you choose not to download the data set, you can follow these instructions on your own data, but you will have to substitute your own specific paths and subject names. If you are using the tutorial data, please set the environment variable TUTORIAL_DATA to the location to which you downloaded the data (here, it has been copied to $FREESURFER_HOME/subjects):
tcsh
setenv TUTORIAL_DATA $FREESURFER_HOME/subjects/buckner_data/tutorial_subjs
Notice the command to open tcsh. If you are already running the tcsh command shell, then the 'tcsh' command is not necessary.
First you need to set your SUBJECTS_DIR to the appropriate place:
setenv SUBJECTS_DIR $TUTORIAL_DATA
cd $SUBJECTS_DIR
This will set your SUBJECTS_DIR to the location of your tutorial data, provided you have defined the variable TUTORIAL_DATA as indicated at the top of this tutorial. If you are not using the tutorial data, you should set your SUBJECTS_DIR to the directory in which the subject you will use for this tutorial is located.
Alternatively, you can set SUBJECTS_DIR to the directory where your cross-sectionals reside (the different time points for each subject). And don't forget to source FreeSurfer:
cd /path/to/your/data
setenv SUBJECTS_DIR $PWD
source $FREESURFER_HOME/SetUpFreeSurfer.csh
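Before launching any runs, it can save time to confirm that FreeSurfer is actually on your PATH. This is only a minimal sanity-check sketch (the message strings are our own, not FreeSurfer output):

```shell
# Minimal sanity check: warn if FreeSurfer has not been sourced yet.
# The echoed messages are just examples; adapt as needed.
if command -v recon-all >/dev/null 2>&1; then
  echo "recon-all found"
else
  echo "recon-all not found: source FreeSurfer first"
fi
```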
Creating Longitudinal Data
Processing your data currently consists of three steps:
First, run all your cross-sectionals: run recon-all -all for all tpNs (i.e. all time points of all subjects):
recon-all -subjid <tpNid> -all
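If you have many time points, the cross runs can be scripted with a simple loop. This is only a dry-run sketch with hypothetical IDs tp1 and tp2; it prints the commands instead of executing them:

```shell
# Dry-run sketch: print one cross-sectional recon-all command per time point.
# tp1/tp2 are hypothetical IDs; remove 'echo' to actually launch the runs.
CROSS_CMDS=$(for tp in tp1 tp2; do
  echo "recon-all -subjid $tp -all"
done)
printf '%s\n' "$CROSS_CMDS"
```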
Second, create your template/base from the tpNs. Here you can choose a name for the templateID, e.g. 'bert' or 'bert_base' if 'bert' is already used for the first time point of this subject:
recon-all -base <templateID> -tp <tp1id> -tp <tp2id> ... -all
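With more than a few time points, typing the -tp argument list by hand gets error-prone. A minimal sketch (hypothetical IDs tp1..tp3 and template subj_base) that assembles the base command as a string before running it:

```shell
# Sketch: assemble the base command for an arbitrary list of time points.
# subj_base and tp1..tp3 are hypothetical names; substitute your own.
TPS="tp1 tp2 tp3"
BASE_CMD="recon-all -base subj_base"
for tp in $TPS; do
  BASE_CMD="$BASE_CMD -tp $tp"
done
BASE_CMD="$BASE_CMD -all"
echo "$BASE_CMD"
# prints: recon-all -base subj_base -tp tp1 -tp tp2 -tp tp3 -all
```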
Finally, create the longitudinals using the template and the tpNs. Repeat the following command for each tpN. The resulting directories will be named in the format tpNid.long.templateID:
recon-all -long <tpNid> <templateID> -all
So, for example, for a subject with two time points, OAS2_0001_MR1 and OAS2_0001_MR2, you would run:
recon-all -subjid OAS2_0001_MR1 -all
recon-all -subjid OAS2_0001_MR2 -all
(Here you can specify -i path/to/dicomfile -i ... to import DICOMs if the input is not yet available in OAS2_0001_MR1/mri/orig/001.mgz ... see ??HOWTORUNDATA???.) We call these runs the cross-sectionals (or cross runs) because the two time points are processed completely independently, as if they came from different subjects.
Once the norm.mgz is available for both time points, you can create the unbiased template/base. We will call it OAS2_0001:
recon-all -base OAS2_0001 -tp OAS2_0001_MR1 -tp OAS2_0001_MR2 -all
This will create the within-subject template and run it through recon-all (so it will take approximately the same time as a regular recon-all run). A directory OAS2_0001 will be created.
Finally, once the base and the two cross-sectionally processed time points have fully completed, you can start the longitudinal runs:
recon-all -long OAS2_0001_MR1 OAS2_0001 -all
recon-all -long OAS2_0001_MR2 OAS2_0001 -all
These runs will create the directories OAS2_0001_MR1.long.OAS2_0001 and OAS2_0001_MR2.long.OAS2_0001, which contain the final results. These are complete subject directories, and we will use them for any post-processing or analysis, as their results are more sensitive and repeatable than those of the independent cross runs. The longitudinal runs complete much faster than the cross and base runs above. We call them the long runs because they make use of common information taken from the template.
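Once the long runs finish, you can quickly verify that the expected .long. directories exist under SUBJECTS_DIR. A minimal sketch using the example IDs above (the ok/missing messages are our own):

```shell
# Check that each expected longitudinal directory is present; report any missing.
for d in OAS2_0001_MR1.long.OAS2_0001 OAS2_0001_MR2.long.OAS2_0001; do
  if [ -d "$SUBJECTS_DIR/$d" ]; then
    echo "ok: $d"
  else
    echo "missing: $d"
  fi
done
```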
Inspecting Longitudinal Data
Post-Processing Longitudinal Data
