15 T scanner
15 T data is stored here:
In February 2016, Andre ran some initial tests using a new 15 T birdcage built by Azma (called InitialTestsBirdcage). There were shifts between some of the runs, and those runs were discarded, but initial results looked ok at 100um, 75um, and 50um. Averages were created using mri_robust_template:
mri_robust_template --mov <path to all input runs> --template <output_file_name.mgh> --satit --average 0
More runs were collected at 50um to increase SNR (I36_lh_mtl_15T), but the contrast wasn't great even with this large number of runs. We also discovered that higher resolution isn't possible without software changes: the matrix size (or number of slices) appears to be capped at 512, and increasing it further produces random noise instead of images, while going to higher resolution (e.g. 40um) without increasing the matrix size causes too much wrapping.
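The wrapping constraint follows directly from the matrix cap: the achievable field of view along one axis is matrix size times voxel size, so shrinking the voxel without growing the matrix shrinks the FOV. A quick sanity check (pure arithmetic; the only number taken from our scans is the 512 cap, and the resolutions are the ones tried above):

```python
# FOV achievable with a fixed 512-point matrix at the resolutions we tried.
# This is just matrix_size * voxel_size; no scanner-specific details assumed.

def fov_mm(matrix_size: int, voxel_um: float) -> float:
    """One-axis field of view in mm: matrix size times voxel size."""
    return matrix_size * voxel_um / 1000.0

for voxel in (100, 75, 50, 40):
    print(f"{voxel:>3} um with 512 matrix -> {fov_mm(512, voxel):.1f} mm FOV")
```

At 50um the one-axis FOV is 25.6 mm, and dropping to 40um shrinks it to about 20.5 mm, so any sample dimension beyond that aliases (wraps) back into the image, which matches what we saw.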
The same sample (I36_lh_mtl_15T_070716) was rescanned with lower bandwidth and higher TR/TE to see if this would lead to better contrast, as evaluated by Jean. 18 runs were acquired at 50um resolution, but this didn't seem to be enough (we had to cut the scan short to put the sample in cryo for sectioning). The results were inconclusive, so we scanned another sample for a ridiculously long amount of time (I40_hp_15T_50um, 72 runs). The results were averaged in several ways (all with mri_robust_template): averaging all 72 runs directly; averaging every four consecutive runs (1-4, 5-8, ..., 69-72) so that motion between runs is less, then averaging all 18 of those 4-run averages together; and averaging every two 4-run averages into nine 8-run averages, then averaging those. One of the first two averages was better, but still disappointing considering the amount of scan time. Images are located here:
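The hierarchical averaging above can be sketched as a dry run that prints the mri_robust_template commands instead of executing them (the flags match the command listed earlier in this section; the run file names like run_01.mgh are hypothetical placeholders, not our actual paths):

```python
# Dry-run sketch of the 72-run hierarchical averaging: 72 runs -> 18 four-run
# averages -> (a) one overall average, or (b) nine 8-run averages -> one average.
# File names are placeholders; swap in the real run paths before using.

def chunk(items, size):
    """Split a list into consecutive groups of `size` (1-4, 5-8, ...)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def template_cmd(movs, out):
    """Build an mri_robust_template call with the flags used in this section."""
    return f"mri_robust_template --mov {' '.join(movs)} --template {out} --satit --average 0"

runs = [f"run_{i:02d}.mgh" for i in range(1, 73)]

# Stage 1: 18 four-run averages (runs 1-4, 5-8, ..., 69-72).
four_run = [f"avg4_{i:02d}.mgh" for i in range(1, 19)]
for group, out in zip(chunk(runs, 4), four_run):
    print(template_cmd(group, out))

# Stage 2a: average all 18 four-run averages together.
print(template_cmd(four_run, "avg_from_4run.mgh"))

# Stage 2b: pair the 4-run averages into nine 8-run averages, then average those.
eight_run = [f"avg8_{i}.mgh" for i in range(1, 10)]
for group, out in zip(chunk(four_run, 2), eight_run):
    print(template_cmd(group, out))
print(template_cmd(eight_run, "avg_from_8run.mgh"))
```

Grouping before averaging keeps each registration problem small (motion between four consecutive runs is less than across all 72), at the cost of extra intermediate volumes.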
It seemed that even with high SNR (from 72 runs) on a sample with good contrast (I40 has good MR contrast in hemi and 7T solenoid scans), the contrast and structures Jean was looking for weren't apparent. We decided to try lower resolution scans with different TR/TE values to get a better idea of what parameters would give good contrast. This was done at 100um first, though the echo train and the range of flip angles tried weren't sufficient:
Bruce wanted to collect lower res data (200um) with a longer echo train (8 echoes) and a larger set of flip angles (2-30 degrees, in steps of 2 degrees). This was acquired on the same sample:
Higher flip angles seem to have better contrast, though Bruce wanted to do some optimization to see which parameters would be best. As of December 2016, Bruce wanted some gm/wm labels on this 200um data set so he could run the optimization (this still needs to be done as of Feb 2017). The goal is to find a good set of parameters for higher res scans, not to continue collecting 200um data on the 15 T scanner.
Since October 2016, no new data has been acquired. Andre tried to set up trufi diffusion, but ran into issues due to outdated Siemens software. Andre also wanted to investigate point scanning at the 15 T (which is highly inefficient but would lead to interesting results), but this requires setting up streaming and using the portable RAID. Andre is still interested in pursuing these scans, so it may be worth setting up a time with him in March to try it out.
4.7 T scanner
In November 2016, we started to investigate using the 4.7 T scanner for hemi or whole brain scans. Lee met with Chris Farrar to try to set this up, but there were many issues (mostly since we have to use the largest gradient set, which nobody else uses). After discussing with Bruce, we put using this scanner on hold until we get support from the center. Here are some notes from this setup:
- There is currently no receive array, only a birdcage. We would need to build our own if we want to acquire multi-channel data.
- The amount of power available for the coil was hard-set to a number below what we need, so the transmitter voltage (or its Bruker equivalent) did not converge. Chris said he could contact Bruker to have this changed, since he thought it was set at a conservative level. For now we had to set it to a lower, non-ideal value to scan anything.
- Chris recommended using only ~64 slices (128x128 matrix size) when running a 1mm FLASH scan. That isn't enough to cover a whole hemi (64 slices at 1mm is only a 6.4cm slab). I'm not sure whether he thought a larger FOV would put too much strain on the system, or whether he meant to use that just for testing. We initially tried 128 slices and it wouldn't run. I can follow up with him and/or Joe about that.
- I couldn't import Giorgia's/AY's diffusion protocols to edit: we use a different gradient coil, so they wouldn't load in the program. I could follow up with them to get a printout or some other usable form so I can see what they run.
- The Bruker software (ParaVision) is somewhat Siemens-like, but still very different. It was difficult to set up a FLASH scan, and I wasn't sure what to do for a diffusion protocol at all without an example. It would probably take some help to get hemi/whole-brain scans set up, or at least some time reading the manuals and getting familiar with the software.
- The viewing software is hard to use, even to quickly check if the sample fits inside the FOV. ParaVision also crashed and closed out completely twice while I was trying to view an image.
- You need to wear earplugs in the room while a scan is running, since the back of the scanner doesn't close completely (or it's just very loud). This makes it hard to try to debug/work on a protocol with someone while a scan is going on.
- We are one of the only groups (possibly the only one) that uses a large birdcage without any gradient inserts, so we would need Chris to take out/put in the gradients when we start/finish. Eventually we could do this ourselves, but he said only experienced users are allowed to do that.