FreeSurfer Scripts
Aliases
If you find yourself typing the same command over and over, a nice shortcut is to make an alias for it. An alias is a short nickname for the full command. For example, instead of typing out the command ls -l each time you want to use it, you can create an alias for it like this:
alias l "ls -l"
Now, if you type l and hit enter in a terminal, you'll get the same response as you would if you typed ls -l.
For every new terminal window you open, you will have to type the above command again in order for it to work. To avoid this, you can create a text file with all the aliases you commonly use. This file will need to be saved in your home directory with the name:
.alias
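For example, the contents of a .alias file might simply be one alias definition per line (the aliases here are only examples; use whichever ones you find useful):

alias l "ls -l"
alias ll "ls -lh"
alias h "history 25"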
You're not done yet. You also need to source that text file within your configuration file (i.e. .cshrc or .bashrc found in the home directory). You would add this line to that file:
source ~/.alias
Now, every time you open a new terminal window, it will automatically source your alias file.
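As an optional refinement (csh/tcsh syntax), you can guard the source line so a new shell does not complain if the file is ever missing:

if ( -e ~/.alias ) source ~/.alias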
Note: If you are using bash and not tcsh or csh, you will have to set your aliases somewhat differently. Aliases that use input from the command line will need to be set up as functions. Here are some examples of how to set up aliases in bash:
alias fv='freeview'
alias data='cd $SUBJECTS_DIR'
function fvd () { command freeview -v "$@"/mri/brainmask.mgz; }
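With those bash definitions in place, and assuming a subject directory named subj001 exists under your SUBJECTS_DIR (the name is just an example), you could type:

data            # cd $SUBJECTS_DIR
fvd subj001     # runs: freeview -v subj001/mri/brainmask.mgz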
Below are some ideas for aliases you may find useful:
alias ../.. 'cd ../..'
alias ../../.. 'cd ../../..'
alias e emacs
alias .. cd ..
alias unpack 'unpacksdcmdir -src . -targ . -scanonly scan.log'
alias fvwm "freeview -v brainmask.mgz wm.mgz:colormap=heatscale -f ../surf/lh.white:edgecolor='blue' ../surf/rh.white:edgecolor='blue' ../surf/lh.pial:edgecolor='red' ../surf/rh.pial:edgecolor='red'"
(Note: That last alias, fvwm, is intended to be entered all on one line.)
In the aliases below, \!* tells the shell to substitute whatever is typed on the command line after the alias.
alias fv 'freeview \!*'
So if you typed:
fv orig.mgz
The alias will work as if you typed freeview orig.mgz. Similarly, you could use:
alias tkm "tkmedit \!* brainmask.mgz rh.white -aux T1.mgz -aux-surface lh.white -segmentation aseg.mgz -segmentation-opacity 0.2 -tcl ~/surfaces.tcl"
And only have to type tkm subj001 to open a subject with tkmedit.
This alias will set the SUBJECTS_DIR variable to whichever directory you are currently in.
alias sd 'setenv SUBJECTS_DIR `pwd`'
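For example, assuming your study data live in /home/me/mystudy (a made-up path), the following would leave SUBJECTS_DIR pointing at that directory:

cd /home/me/mystudy
sd
echo $SUBJECTS_DIR

The echo should print /home/me/mystudy.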
The alias below illustrates that one alias can do multiple things. In this case, simply typing fs will source FreeSurfer, change directories to where your subjects are located, and set the SUBJECTS_DIR variable.
alias fs 'setenv FREESURFER_HOME /home/apps/freesurfer; source $FREESURFER_HOME/SetUpFreeSurfer.csh; cd /home/apps/freesurfer/subjects; setenv SUBJECTS_DIR /home/apps/freesurfer/subjects'
Finally, the aliases below may be useful but they will only work if they always remain at the bottom of your alias file:
alias subdir 'echo $SUBJECTS_DIR'
alias fshome 'echo $FREESURFER_HOME'
alias csubdir 'cd $SUBJECTS_DIR'
alias cfshome 'cd $FREESURFER_HOME'
Batch Processing with recon-all
The script below is an example of how you can search for a subject's MPRAGE among several dicoms and then automatically create the recon-all command with its name and location. This script works with the way dicoms at the Martinos Center used to be named. It assumes the dicoms have been unpacked and searches the unpack log file for the typical MPRAGE dimensions, 256 x 256 x 128. From that location, it can find the name of the MPRAGE dicom (the 8th field in the log file, hence awk '{print $8}') and prints it to a log file with the specified recon-all command.
#!/bin/tcsh -f
set s = $1
setenv SUBJECTS_DIR /path/to/your/data
set log = $SUBJECTS_DIR/recon-all-commands.log
set dcmdir = /path/to/your/dicoms
set subjid = `echo $s | gawk -F- '{print $2}'`
if (-e $dcmdir/$s/scan.log) then
  echo "found scan.log, finding mprages"
  set dat = $dcmdir/$s/scan.log
else
  echo "no scan.log"
endif
set mpr = (`cat $dat | grep "256 256 128" | grep ok | awk '{print $8}'`)
echo "found mprages, $mpr"
echo recon-all -i $dcmdir/$s/$mpr -all -s $subjid >> $log
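As a sketch of how you might drive this script, save it as, say, make_recon_cmd.csh (the name is arbitrary), make it executable, and run it once per session directory; the hypothetical directory names below assume the old <scan>-<subject> naming so that gawk -F- '{print $2}' picks out the subject ID:

chmod +x make_recon_cmd.csh
foreach s ( scan1-subj001 scan2-subj002 )
  ./make_recon_cmd.csh $s
end

Each run appends one recon-all command to $SUBJECTS_DIR/recon-all-commands.log, which you can then review and execute.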
Longitudinal Batch Processing on Launchpad
The script long_submit_jobs can help with submitting jobs to the cluster (launchpad) for a longitudinal study. It lets you place a limit on the maximum number of jobs. Furthermore, it starts the base (subject template) once the norm.mgz is available at all time points, and starts the longitudinal runs once the cross and base are through. It can also be used to check completeness and to resubmit jobs that crashed.
Longitudinal Qdec Table
The input is a longitudinal qdec table, i.e., an ASCII text file with one row of headers and several rows of data. Columns are separated by a space. For processing, the column headers need to be fsid and fsid-base. More columns can exist and should be added for statistical analysis, but for processing these two columns are sufficient and should always be the first columns in the file. fsid contains the cross-sectional ID of the subject time point; fsid-base contains the subject's base ID (the same for all time points of this subject, and it cannot be identical to any time point). Here is an example:
fsid           fsid-base   years  ...
OAS2_0001_MR1  OAS2_0001   0
OAS2_0001_MR2  OAS2_0001   1.25
OAS2_0004_MR1  OAS2_0004   0
OAS2_0004_MR2  OAS2_0004   1.47
...
where the first column, fsid, contains each time point and the second column, fsid-base, contains the base name that groups time points within a subject. You can have many more columns such as gender, age, group, etc. Usually this table is placed in a directory called scripts inside the subjects directory, which will also contain the processed results from all subjects and all time points once processing is done.
Input Directory Structure
The script requires a directory containing all subjects' and all time points' data. This directory will be called the subjects directory and needs to be set up for processing in the following way (an example layout is sketched after this list):
- the subjects directory needs to contain a subdirectory named after each subject's time point (e.g. OAS2_0001_MR1)
- underneath each subject's time point directory there needs to be a directory mri/orig/
- inside mri/orig/ there needs to be at least a 001.mgz file (or several inputs, 002.mgz etc., if within-session scans should be averaged)
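Using the OASIS time points from the qdec table above as an example, the subjects directory would look roughly like this before processing:

<subjects_dir>/
   scripts/                        (contains long.qdec.table.dat)
   OAS2_0001_MR1/mri/orig/001.mgz
   OAS2_0001_MR2/mri/orig/001.mgz
   OAS2_0004_MR1/mri/orig/001.mgz
   OAS2_0004_MR2/mri/orig/001.mgz
   ...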
This structure can be created by invoking
recon-all -sd <subjects_dir> -s <subject_tp_id> -i <input_path_to_dicom> [-i <input_path_to_dicom2>]
Note that <subjects_dir> is fixed for the whole study, while <subject_tp_id> is, for example, me_1, me_2, you1, ... (as specified in the first column, fsid, of the longitudinal qdec table). This recon-all command will take the dicom (other formats are possible) and create the file <subjects_dir>/<subject_tp_id>/mri/orig/001.mgz, and potentially 002.mgz etc. if more input images are passed. It is recommended to use only the best MPRAGE (or multi-echo MPRAGE) image instead of averaging several within-session scans.
Make sure that the subject IDs passed to recon-all are identical to the IDs specified in the longitudinal qdec table (first column)!
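For example, to import the first OASIS time point from the table above (the dicom path is hypothetical):

recon-all -sd /path/to/my_study -s OAS2_0001_MR1 -i /path/to/dicoms/OAS2_0001_MR1/001.dcm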
Processing
The processing script long_submit_jobs is designed to work with our local cluster, launchpad. It can be adapted to other clusters by anyone with a little scripting experience. Here we will only describe usage, assuming you have access to launchpad.
- Log into launchpad
- Source the FreeSurfer version you are using (probably a version that you copied and fixed for this study).
- Change into the <subjects_dir>/scripts directory (where the longitudinal qdec table was prepared)
- Make sure that the <subjects_dir> contains all data (all subjects, all time points, named as in the long-qdec table file)
- Make sure that the second column in the long-qdec table file (fsid-base) has a name for each subject that is different from the time points and that groups the time points into subjects.
- Make sure you have enough disk space and write permission on the <subjects_dir>
- If you haven't done so, familiarize yourself with the 3 processing stages (cross, base, long) for longitudinal studies: LongitudinalProcessing
- Run long_submit_jobs as described below to start submitting jobs to the cluster.
You can use long_submit_jobs to run only the cross-sectional streams, only the bases (subject templates), only the longitudinal time points, or any combination. While it is possible to split the cross, base and long outputs across different <subjects_directories>, we recommend keeping everything in the same directory.
Here is an example command:
long_submit_jobs --qdec long.qdec.table.dat --cdir $PWD/../ --cross --base --long --pause 5 --max 200 --queue max100
This will use the long.qdec.table.dat text file for processing. Cross-sectional input images and all output (cross, base and long) will be placed in the parent folder ($PWD/../ ; here we assume you are currently in the scripts subfolder). We specify a pause of 5 seconds between submissions (to avoid IO difficulties). --max 200 will force the script to wait once you have 200 jobs on the cluster (to avoid filling up the queue too much). With --queue you can select a cluster queue; here we select max100, which ensures that no more than 100 jobs will actually be running on the cluster at any time. The same could be achieved by specifying --max 100. Run long_submit_jobs --help to learn about these and more options.
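As another sketch, using only the flags already shown above: once all cross-sectional and base runs are done, you could submit just the longitudinal time points with

long_submit_jobs --qdec long.qdec.table.dat --cdir $PWD/../ --long --pause 5 --max 100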
Have fun ...
tkmedit tcl script
As mentioned in the course, you can create a tcl script to use with tkmedit or tksurfer so that they open with your preferences already selected (such as surface color, surface thickness, etc.). Below is an example of a tcl script that interacts with tkmedit:
SetZoomLevel 2
RedrawScreen
SetSurfaceLineColor 0 0 0 0 1
SetSurfaceLineColor 1 0 0 0 1
SetSurfaceLineWidth 0 0 2
SetSurfaceLineWidth 1 0 2
SetSurfaceLineWidth 0 2 2
SetSurfaceLineWidth 1 2 2
SetDisplayFlag 5 0
SetCursorColor .5 0 .5
The above script does the following things (the order below corresponds to the order of each line in the script above):
- Zooms in once
- Redraws the screen to apply the zoom
- Sets one hemisphere of the white surface to blue
- Sets the other hemisphere of the white surface to blue
- Sets one hemisphere of the white surface to a thickness of 2
- Sets the other hemisphere of the white surface to a thickness of 2
- Sets one hemisphere of the pial surface to a thickness of 2
- Sets the other hemisphere of the pial surface to a thickness of 2
- Turns off the original surface
- Changes the cursor (+) to purple
The script can be created in any text editor (e.g. emacs, vi, gedit) and should be saved with the .tcl extension (e.g. surfaces.tcl) in a convenient location.
A list of other options that can be added to the script can be found here: https://surfer.nmr.mgh.harvard.edu/fswiki/TkMeditGuide/TkMeditReference/TkMeditScripting
In order to use the script, you would call it using the -tcl flag with your tkmedit command. For example:
tkmedit subj001 brainmask.mgz -aux T1.mgz -surfs -tcl /path/to/surfaces.tcl