Prepare your Dataset

1. Prepare the folder structure manually

The structure should look something like this:

nipype_tutorial
|-- data
    |-- demographics.txt
    |-- sub001
    |   |-- behavdata_run001.txt
    |   |-- behavdata_run002.txt
    |   |-- onset_run001_cond001.txt
    |   |-- onset_run001_cond002.txt
    |   |-- onset_run001_cond003.txt
    |   |-- onset_run001_cond004.txt
    |   |-- onset_run002_cond001.txt
    |   |-- onset_run002_cond002.txt
    |   |-- onset_run002_cond003.txt
    |   |-- onset_run002_cond004.txt
    |   |-- run001.nii.gz
    |   |-- run002.nii.gz
    |   |-- struct.nii.gz
    |-- sub0..
    |-- sub010
        |-- behav...
        |-- onset_...
        |-- run...
        |-- struct.nii.gz


2. Or create the structure with the following code:

# Specify important variables
ZIP_FILE=~/Downloads/ds102_raw.tgz   #location of download file
TUTORIAL_DIR=~/nipype_tutorial       #location of experiment folder
TMP_DIR=$TUTORIAL_DIR/tmp            #location of temporary folder
DATA_DIR=$TUTORIAL_DIR/data          #location of data folder

# Unzip ds102 dataset into TMP_DIR
mkdir -p $TMP_DIR
tar -zxvf $ZIP_FILE -C $TMP_DIR

# Copy data of first ten subjects into DATA_DIR
for id in $(seq -w 1 10)
do
    echo "Creating dataset for subject: sub0$id"
    mkdir -p $DATA_DIR/sub0$id
    cp $TMP_DIR/ds102/sub0$id/anatomy/highres001.nii.gz \
       $DATA_DIR/sub0$id/struct.nii.gz

    for session in run001 run002
    do
        cp $TMP_DIR/ds102/sub0$id/BOLD/task001_$session/bold.nii.gz \
           $DATA_DIR/sub0$id/$session.nii.gz
        cp $TMP_DIR/ds102/sub0$id/behav/task001_$session/behavdata.txt \
           $DATA_DIR/sub0$id/behavdata_$session.txt

        for con_id in {1..4}
        do
            cp $TMP_DIR/ds102/sub0$id/model/model001/onsets/task001_$session/cond00$con_id.txt \
               $DATA_DIR/sub0$id/onset_${session}_cond00$con_id.txt
        done
    done

    echo "sub0$id done."
done

# Copy information about demographics, conditions and tasks into DATA_DIR
cp $TMP_DIR/ds102/demographics.txt $DATA_DIR/demographics.txt
cp $TMP_DIR/ds102/models/model001/* $DATA_DIR/.

# Delete the temporary folder
rm -rf $TMP_DIR
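Once the script has finished, a quick sanity check can confirm that every subject folder contains the 13 expected files. This is a minimal sketch (not part of the original tutorial); it only assumes the folder layout shown above:

```python
import os

def expected_files():
    """Return the 13 filenames each subject folder should contain."""
    files = ['struct.nii.gz']
    for run in ['run001', 'run002']:
        files.append('%s.nii.gz' % run)
        files.append('behavdata_%s.txt' % run)
        for cond in range(1, 5):
            files.append('onset_%s_cond%03d.txt' % (run, cond))
    return files

def missing_files(data_dir, sub_id):
    """List the expected files that are absent from one subject folder."""
    sub_dir = os.path.join(data_dir, sub_id)
    return [f for f in expected_files()
            if not os.path.exists(os.path.join(sub_dir, f))]

# Report missing files for all ten subjects
data_dir = os.path.expanduser('~/nipype_tutorial/data')
for i in range(1, 11):
    sub_id = 'sub%03d' % i
    missing = missing_files(data_dir, sub_id)
    if missing:
        print('%s is missing: %s' % (sub_id, ', '.join(missing)))
```

If everything went well, the loop prints nothing.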

For those who use their own dataset

If you want to use your own dataset, make sure that you know the following parameters:

  • Number of volumes, number of slices per volume, slice order and TR of the functional scan.
  • Number of conditions during a session, as well as onset and duration of stimulation during each condition.
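Most of these scan parameters can be read straight from the NIfTI header. The sketch below (not from the original guide) parses just the fields we need using only the Python standard library; it assumes a little-endian, single-file NIfTI-1 image. In practice you would more likely use nibabel or a tool like fslinfo, and note that slice order is not reliably stored in the header, so take it from your scanning protocol.

```python
import gzip
import struct

def nifti_info(path):
    """Return slice count, volume count and TR from a NIfTI-1 header.

    Minimal sketch: assumes a little-endian single-file NIfTI-1
    image (.nii or .nii.gz). For real work, prefer nibabel.
    """
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rb') as f:
        hdr = f.read(348)                       # NIfTI-1 header is 348 bytes
    dim = struct.unpack('<8h', hdr[40:56])      # dim[3]=slices, dim[4]=volumes
    pixdim = struct.unpack('<8f', hdr[76:108])  # pixdim[4] holds the TR
    return {'n_slices': dim[3], 'n_volumes': dim[4], 'TR': pixdim[4]}
```

For example, `nifti_info('run001.nii.gz')` on the tutorial data reports the slice count, volume count and TR of the first functional run.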

Make the dataset ready for Nipype

Convert your data into NIfTI format

There are many different tools that you can use to convert your files. For example, if you'd like a nice GUI, use MRICron's MRIConvert. For this Beginner's Guide, however, we will use FreeSurfer's mri_convert function, as it is rather easy to use and doesn't require many steps.

But first, as always, be aware of your folder structure. So let's assume that we've stored our dicoms in a folder called raw_dicom and that the folder structure looks something like this:

raw_dicom
|-- sub001
|   |-- t1w_3d_MPRAGE
|   |   |-- 00001.dcm
|   |   |-- ...
|   |   |-- 00176.dcm
|   |-- fmri_run1_long
|   |   |-- 00001.dcm
|   |   |-- ...
|   |   |-- 00240.dcm
|   |-- fmri_run2_long
|       |-- ...
|-- sub0..
|-- sub010

This means that we have one folder per subject, each containing three subfolders: one for the structural T1-weighted image and two for the functional T2-weighted images. The conversion of the dicom files in those folders is rather easy. If you use FreeSurfer's mri_convert function, the command is as follows: mri_convert <in volume> <out volume>. You have to replace <in volume> with the actual path to any one dicom file in the folder and <out volume> with the name of your output file.

So, to accomplish this with a few terminal commands, we first have to tell the system the paths and names of the folders that we later want to feed to the mri_convert function. This is done by the variables at the top of the following code. Once that is done, we only have to run the loop below them to actually run mri_convert for each subject and each scanner image.

TUTORIAL_DIR=~/nipype_tutorial     # location of experiment folder
RAW_DIR=$TUTORIAL_DIR/raw_dicom    # location of raw data folder
T1_FOLDER=t1w_3d_MPRAGE            # dicom folder containing anatomical scan
FUNC_FOLDER1=fmri_run1_long        # dicom folder containing 1st functional scan
FUNC_FOLDER2=fmri_run2_long        # dicom folder containing 2nd functional scan
DATA_DIR=$TUTORIAL_DIR/data        # location of output folder

for id in $(seq -w 1 10)
do
    mkdir -p $DATA_DIR/sub0$id
    mri_convert $RAW_DIR/sub0$id/$T1_FOLDER/00001.dcm    $DATA_DIR/sub0$id/struct.nii.gz
    mri_convert $RAW_DIR/sub0$id/$FUNC_FOLDER1/00001.dcm $DATA_DIR/sub0$id/run001.nii.gz
    mri_convert $RAW_DIR/sub0$id/$FUNC_FOLDER2/00001.dcm $DATA_DIR/sub0$id/run002.nii.gz
done
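If the dicom folder names differ between subjects, the same loop is easy to express in Python. This is a hypothetical variant (not from the original guide) that only assumes FreeSurfer's mri_convert is on the PATH; the command-building helper is split out so you can inspect the calls before anything runs:

```python
import os
import subprocess

def convert_cmd(raw_dir, data_dir, sub, dicom_folder, out_name):
    """Build the mri_convert call for one scan of one subject."""
    src = os.path.join(raw_dir, sub, dicom_folder, '00001.dcm')
    dst = os.path.join(data_dir, sub, out_name + '.nii.gz')
    return ['mri_convert', src, dst]

def convert_all(raw_dir, data_dir, scans, n_subjects=10):
    """Run mri_convert for every subject and every scan in the mapping."""
    for i in range(1, n_subjects + 1):
        sub = 'sub%03d' % i
        sub_dir = os.path.join(data_dir, sub)
        if not os.path.isdir(sub_dir):
            os.makedirs(sub_dir)
        for dicom_folder, out_name in scans:
            subprocess.call(convert_cmd(raw_dir, data_dir, sub,
                                        dicom_folder, out_name))

# Same mapping as the shell variables above (adjust per subject if needed):
# convert_all(os.path.expanduser('~/nipype_tutorial/raw_dicom'),
#             os.path.expanduser('~/nipype_tutorial/data'),
#             [('t1w_3d_MPRAGE', 'struct'),
#              ('fmri_run1_long', 'run001'),
#              ('fmri_run2_long', 'run002')])
```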

Run FreeSurfer’s recon-all

Not mandatory but highly recommended is to run FreeSurfer's recon-all process on the anatomical scans of your subjects. recon-all is FreeSurfer's cortical reconstruction process, which automatically creates a parcellation of cortical and a segmentation of subcortical regions. A more detailed description of the recon-all process can be found on the official homepage.

As I said, you don't have to use FreeSurfer's recon-all process, but you want to! Many of FreeSurfer's other algorithms require the output of recon-all. The only downside of recon-all is that it takes rather long to process a single subject. My average times are between 12-24h, but the process can take up to 40h, depending on the system you are using. So far, a single recon-all run can't be parallelized. Luckily, if you have an 8-core processor with enough memory, you should be able to process 8 subjects in parallel.

Run recon-all on the tutorial dataset (terminal version)

The code to run recon-all on a single subject is rather simple, i.e. recon-all -all -subjid sub001. The only things you need to keep in mind are to tell your system the path to the freesurfer folder by specifying the variable SUBJECTS_DIR, and that each subject you want to process has a corresponding anatomical scan in this freesurfer folder under SUBJECTS_DIR.

         To run recon-all on the 10 subjects of the tutorial dataset you can run the following code:

# Specify important variables
export TUTORIAL_DIR=~/nipype_tutorial         #location of experiment folder
export DATA_DIR=$TUTORIAL_DIR/data            #location of data folder
export SUBJECTS_DIR=$TUTORIAL_DIR/freesurfer  #location of freesurfer folder

for id in $(seq -w 1 10)
do
    echo "working on sub0$id"
    mkdir -p $SUBJECTS_DIR/sub0$id/mri/orig
    mri_convert $DATA_DIR/sub0$id/struct.nii.gz \
                $SUBJECTS_DIR/sub0$id/mri/orig/001.mgz
    recon-all -all -subjid sub0$id
    echo "sub0$id finished"
done

This code will process the subjects in sequential order. If you want to process the 10 subjects in (manually) parallel order, delete the line recon-all -all -subjid sub0$id from the code above, run it, and then run the following code, each line in its own terminal:

export SUBJECTS_DIR=~/nipype_tutorial/freesurfer; recon-all -all -subjid sub001
export SUBJECTS_DIR=~/nipype_tutorial/freesurfer; recon-all -all -subjid sub002
...
export SUBJECTS_DIR=~/nipype_tutorial/freesurfer; recon-all -all -subjid sub010
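While those terminals are running, a small helper can tell you which subjects have already finished. FreeSurfer writes a scripts/recon-all.done file into a subject's folder when recon-all completes successfully (and scripts/recon-all.error on failure); the sketch below (not part of the original tutorial) simply checks for those marker files:

```python
import os

def reconall_status(subjects_dir, subject):
    """Report whether recon-all finished, failed, or is still pending."""
    scripts = os.path.join(subjects_dir, subject, 'scripts')
    if os.path.exists(os.path.join(scripts, 'recon-all.done')):
        return 'done'
    if os.path.exists(os.path.join(scripts, 'recon-all.error')):
        return 'error'
    return 'running or not started'

# Print the status of all ten tutorial subjects
subjects_dir = os.path.expanduser('~/nipype_tutorial/freesurfer')
for i in range(1, 11):
    sub = 'sub%03d' % i
    print('%s: %s' % (sub, reconall_status(subjects_dir, sub)))
```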
Note: If your MRI data was recorded on a 3T scanner, I highly recommend using the -nuintensitycor-3T flag with the recon-all command, e.g. recon-all -all -subjid sub0$id -nuintensitycor-3T. This flag was created specifically for 3T scans and improves brain segmentation accuracy by optimizing the non-uniformity correction using N3.

Run recon-all on the tutorial dataset (Nipype version)

If you only want to run recon-all by itself, I recommend the terminal version shown above. But of course, you can also create a pipeline and use Nipype to do the same steps. This is preferable if you want to make use of the parallelization implemented in Nipype or if you want to put recon-all into a bigger workflow.

I won't explain too much how this workflow actually works, as the structure and creation of a common pipeline is covered in more detail in the next section. But to use Nipype to run FreeSurfer's recon-all process, do as follows:


# Import modules
import os
from os.path import join as opj
from nipype.interfaces.freesurfer import ReconAll
from nipype.interfaces.utility import IdentityInterface
from nipype.pipeline.engine import Workflow, Node

# Specify important variables
experiment_dir = os.path.expanduser('~/nipype_tutorial')  # location of experiment folder
data_dir = opj(experiment_dir, 'data')                    # location of data folder
fs_folder = opj(experiment_dir, 'freesurfer')             # location of freesurfer folder
subject_list = ['sub001', 'sub002', 'sub003',
                'sub004', 'sub005', 'sub006',
                'sub007', 'sub008', 'sub009',
                'sub010']                        # subject identifier
T1_identifier = 'struct.nii.gz'                  # Name of T1-weighted image

# Create the output folder - FreeSurfer can only run if this folder exists
os.system('mkdir -p %s'%fs_folder)

# Create the pipeline that runs the recon-all command
reconflow = Workflow(name="reconflow")
reconflow.base_dir = opj(experiment_dir, 'workingdir_reconflow')

# Some magical stuff happens here (not important for now)
infosource = Node(IdentityInterface(fields=['subject_id']),
                  name="infosource")
infosource.iterables = ('subject_id', subject_list)

# This node represents the actual recon-all command
reconall = Node(ReconAll(directive='all',
                         #flags='-nuintensitycor-3T',
                         subjects_dir=fs_folder),
                name="reconall")

# This function returns for each subject the path to struct.nii.gz
def pathfinder(subject, foldername, filename):
    from os.path import join as opj
    struct_path = opj(foldername, subject, filename)
    return struct_path

# This section connects all the nodes of the pipeline to each other
reconflow.connect([(infosource, reconall, [('subject_id', 'subject_id')]),
                   (infosource, reconall, [(('subject_id', pathfinder,
                                             data_dir, T1_identifier),
                                            'T1_files')]),
                   ])

# This command runs the recon-all pipeline in parallel (using 8 cores)
reconflow.run('MultiProc', plugin_args={'n_procs': 8})

After this script has run, all important outputs will be stored directly under ~/nipype_tutorial/freesurfer. But running the reconflow pipeline also created some temporary files. As defined by the script above, those files were stored under ~/nipype_tutorial/workingdir_reconflow. Now that the script has finished, you can delete this folder again. Either do this manually, use the shell command rm -rf ~/nipype_tutorial/workingdir_reconflow, or add the following lines to the end of the Python script above:

# Delete all temporary files stored under the 'workingdir_reconflow' folder
os.system('rm -rf %s'%reconflow.base_dir)


In the code above, if we didn't create the freesurfer output folder (the os.system('mkdir -p %s'%fs_folder) line), we would get an error like the following:


TraitError: The 'subjects_dir' trait of a ReconAllInputSpec instance must be an existing
directory name, but a value of '~/nipype_tutorial/freesurfer' <type 'str'> was specified.

Also, if your data was recorded on a 3T scanner and you want to use the mentioned -nuintensitycor-3T flag, just uncomment the corresponding line in the ReconAll node, i.e. delete the # sign before flags='-nuintensitycor-3T'.

You can download this code as a script here: tutorial_2_recon_python.py



Original article: miykael.github.io/nipype-beginner-s-guide/prepareData.html
