
SuperSynth: Multi-task 3D U-Net for scans of any resolution and contrast (including low-field and ex vivo)


This functionality is available in development versions newer than October 3rd, 2025


Author: Juan Eugenio Iglesias
E-mail: jiglesiasgonzalez [at] mgh.harvard.edu

Rather than directly contacting the author, please post your questions on this module to the FreeSurfer mailing list at freesurfer [at] nmr.mgh.harvard.edu

Relevant publications:
"A Modality-agnostic Multi-task Foundation Model for Human Brain Imaging" Liu et al., in preparation.

Contents

  1. General Description
  2. Installation
  3. Usage
  4. FAQ


1. General Description

This is a U-Net trained to make a set of useful predictions from any 3D brain image (in vivo, ex vivo, single hemispheres, etc.) using a common backbone. Like our other "Synth" tools (e.g., SynthSeg or SynthSR), it is trained on synthetic data to support inputs of any resolution and contrast (including low-field scans). It also supports ex vivo scans that are missing the cerebellum and brainstem, as well as single hemispheres. It predicts:

  • Segmentation of brain and extra-cerebral regions of interest, as well as white matter lesions.
  • Registration to the MNI atlas.
  • Joint super-resolution and synthesis of 1mm isotropic T1w, T2w, and FLAIR scans.
  • Extraction of the cortical ribbon.


example.png

2. Installation

The first time you run this module, it will prompt you to download a machine learning model file (unless you have already installed NextBrain, in which case, the model will already be there). Follow the instructions on the screen to obtain the file.

3. Usage

The entry point / main script is mri_super_synth. There are two ways of running the code:

  1. For a single scan: just provide the input file with --i, the output directory with --o, and the type of volume with --mode.
  2. For a set of scans: prepare a CSV file, where each row has 3 columns separated by commas:

  • Column 1: input file
  • Column 2: output directory
  • Column 3: mode (must be invivo, exvivo, cerebrum, left-hemi, or right-hemi)

Please note that there is no header row in the CSV file: the first row already corresponds to an input volume. Tip: you can comment out a line by starting it with #.
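As a sketch, a batch CSV could be assembled like this (all paths below are hypothetical placeholders):

```shell
# Build a CSV for batch mode: input file, output directory, mode,
# one scan per row, with no header row. A line starting with # is a comment.
# All paths here are hypothetical placeholders.
cat > subjects.csv <<'EOF'
# sub03 excluded for now
/data/sub01/t2_axial_5mm.nii.gz,/data/sub01/supersynth,invivo
/data/sub02/left_hemi.nii.gz,/data/sub02/supersynth,left-hemi
EOF

# Batch run (assumes a FreeSurfer dev build with mri_super_synth on PATH):
# mri_super_synth --i subjects.csv
```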

The command line options are:

  • --i [IMAGE_OR_CSV_FILE] Input image to segment (single-scan mode) or CSV file with the list of scans (CSV mode) (required argument)

  • --o [OUTPUT_DIRECTORY] Directory where outputs will be written (ignored in CSV mode, where output directories are read from the CSV file)

  • --mode [MODE] Type of input. Must be invivo, exvivo, cerebrum, left-hemi, or right-hemi (ignored in CSV mode, where the mode is read from the CSV file)

  • --threads [THREADS] Number of cores to be used. You can use -1 to use all available cores. Default is -1 (optional)
  • --device [DEV] Device used for computations (cpu or cuda). The default is to use cuda if a GPU is available (optional)
  • --force_tiling Use this flag to force tiling on CPU and get the same results as on GPU, as explained in the FAQ below (optional)
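Putting the options together, a single-scan invocation might look like the following. File names are hypothetical placeholders; the leading echo just prints the command so the sketch is safe to copy, and should be dropped to actually run it:

```shell
# Single-scan mode: one input, one output directory, one mode.
# The leading "echo" prints the command instead of executing it;
# remove it to run for real. Paths are hypothetical placeholders.
echo mri_super_synth \
    --i /data/sub01/flair_3mm.nii.gz \
    --o /data/sub01/supersynth \
    --mode invivo \
    --threads -1 \
    --device cuda
```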

4. FAQ

  • Do the synthetic 1mm T1/T2/FLAIR inpaint lesions like SynthSR?

No, this version does not inpaint lesions. The lesions are actually segmented following a strategy similar to WMHSynthSeg.

  • I have an ex vivo hemisphere that includes the cerebellum and/or brainstem

If you use the left-hemi or right-hemi mode, you will not get the cerebellum or brainstem. Use the exvivo mode instead (with the caveat that you may lose some voxels around the medial wall, which may get assigned to the contralateral hemisphere).

  • What is the deal with the --force_tiling option?

For 32 vs 64-bit reasons, inference is tiled on the GPU but not on the CPU, so results are expected to differ slightly between the two platforms. You can use the --force_tiling option on the CPU to force tiling and obtain the same results as on the GPU.
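For example, the same scan could be processed on both devices like this (hypothetical paths; the echo prefix prints the commands so the sketch is safe to copy without a GPU, and should be removed to actually run them):

```shell
# Same scan, two devices: the CPU run adds --force_tiling so that it
# follows the same tiled inference path as the GPU run, and the two
# outputs should then match. Paths are hypothetical placeholders.
echo mri_super_synth --i scan.nii.gz --o out_gpu --mode invivo --device cuda
echo mri_super_synth --i scan.nii.gz --o out_cpu --mode invivo --device cpu --force_tiling
```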

SuperSynth (last edited 2025-10-02 08:43:44 by JuanIglesias)