SuperSynth: Multi-task 3D U-Net for scans of any resolution and contrast (including low field and ex vivo)


Author: Juan Eugenio Iglesias
E-mail: jiglesiasgonzalez [at] mgh.harvard.edu

Rather than directly contacting the author, please post your questions on this module to the FreeSurfer mailing list at freesurfer [at] nmr.mgh.harvard.edu

Relevant publications:
"A Modality-agnostic Multi-task Foundation Model for Human Brain Imaging" Liu et al., in preparation.

Contents

  1. General Description
  2. Installation
  3. Usage
  4. FAQ


1. General Description

This is a U-Net trained to make a set of useful predictions from any 3D brain image (in vivo, ex vivo, single hemispheres, etc.) using a common backbone. Like our other "Synth" tools (SynthSeg, SynthSR, SynthMorph...), it is trained on synthetic data to support inputs of any resolution and contrast (including low-field scans). It also supports ex vivo scans that are missing the cerebellum and brainstem, as well as single hemispheres. It predicts:



example.png

2. Installation

The first time you run this module, it will prompt you to download the machine learning model files. Follow the instructions on the screen to obtain them.

3. Usage

The entry point / main script is mri_super_synth. There are two ways of running the code:

  1. For a single scan: provide the input file with --i, the output directory with --o, and the type of volume with --mode.
  2. For a set of scans: prepare a CSV file where each row has three comma-separated columns:

-Column 1: input file

-Column 2: output directory

-Column 3: mode (must be invivo, exvivo, cerebrum, left-hemi, or right-hemi)

Please note that the CSV file has no header row; the first row already corresponds to an input volume. Tip: you can comment out a line by starting it with #.
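As an illustration, the two ways of running the tool might look like the following sketch. All file names and paths are placeholders, and the batch-mode flag is not specified above, so check mri_super_synth --help for the actual option name:

```shell
# Single scan (illustrative paths; run manually):
#   mri_super_synth --i /data/raw/subject01.nii.gz --o /data/out/subject01 --mode invivo

# Batch mode: build a CSV with one scan per row and no header row.
# Columns: input file, output directory, mode
# (mode must be invivo, exvivo, cerebrum, left-hemi, or right-hemi).
cat > scans.csv << 'EOF'
# lines starting with # are comments and are skipped
/data/raw/subject01.nii.gz,/data/out/subject01,invivo
/data/raw/specimen02.nii.gz,/data/out/specimen02,exvivo
/data/raw/hemi03.nii.gz,/data/out/hemi03,left-hemi
EOF

wc -l < scans.csv   # 4 lines: one comment plus three scans
```

The comment line in the CSV demonstrates the # convention mentioned above; the hypothetical paths would be replaced with your own data.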

The command line options are:

4. FAQ

Q: Does this tool inpaint lesions?

No, this version does not inpaint lesions. Lesions are actually segmented, following a strategy similar to that of WMHSynthSeg.

Q: Why are the cerebellum and brainstem missing from my output?

If you use a hemi mode, you will not get the cerebellum or brainstem. Use the exvivo mode instead (with the caveat that you may lose some voxels around the medial wall, which may get assigned to the contralateral hemisphere).

Q: Why do CPU and GPU results differ slightly?

For 32- vs 64-bit reasons, inference is tiled on the GPU but not on the CPU, so results are expected to differ slightly between the two platforms. You can use the --force_tiling option on the CPU to force tiling and get the same results as on the GPU.