nnU-Net v2 Official Tutorial (Excerpt of Key Content)

Datasets consist of three components: raw images, corresponding segmentation maps and a dataset.json file specifying
some metadata.

If you are migrating from nnU-Net v1, read this to convert your existing Tasks.

What do training cases look like?

Each training case is associated with an identifier = a unique name for that case. This identifier is used by nnU-Net to
connect images with the correct segmentation.

A training case consists of images and their corresponding segmentation.

Images is plural because nnU-Net supports arbitrarily many input channels. In order to be as flexible as possible,
nnU-Net requires each input channel to be stored in a separate image (with the sole exception being RGB natural
images). So these images could for example be a T1 and a T2 MRI (or whatever else you want). The different input
channels MUST have the same geometry (same shape, spacing (if applicable) etc.) and must be co-registered
(if applicable). Input channels are identified by nnU-Net by their suffix: a four-digit integer at the end
of the filename. Image files must therefore follow the naming convention {CASE_IDENTIFIER}_{XXXX}.{FILE_ENDING}.
Hereby, XXXX is the 4-digit modality/channel identifier (should be unique for each modality/channel, e.g., "0000" for T1, "0001" for
T2 MRI, ...) and FILE_ENDING is the file extension used by your image format (.png, .nii.gz, ...). See below for concrete examples.
The dataset.json file connects channel names with the channel identifiers in the 'channel_names' key (see below for details).

Side note: Typically, each channel/modality needs to be stored in a separate file and is accessed with the XXXX channel identifier.
The exception is natural images (RGB; .png), where the three color channels can all be stored in one file (see the road segmentation dataset as an example).
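As a quick illustration of the naming convention, here is a minimal sketch for a hypothetical two-channel MRI case (the case identifier, channel order and file ending below are invented for this example):

```python
# Hypothetical example: one case with two MRI channels (T1, T2) stored as .nii.gz.
case_identifier = "prostate_04"
channel_names = ["T1", "T2"]   # position in this list defines the 4-digit channel identifier
file_ending = ".nii.gz"

for channel_id, name in enumerate(channel_names):
    print(f"channel {channel_id:04d} ({name}): {case_identifier}_{channel_id:04d}{file_ending}")

# channel 0000 (T1): prostate_04_0000.nii.gz
# channel 0001 (T2): prostate_04_0001.nii.gz
# The corresponding segmentation would simply be prostate_04.nii.gz (no channel identifier).
```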

Segmentations must share the same geometry with their corresponding images (same shape etc.). Segmentations are
integer maps with each value representing a semantic class. The background must be 0. If there is no background, then
do not use the label 0 for something else! Integer values of your semantic classes must be consecutive (0, 1, 2, 3,
…). Of course, not all labels have to be present in each training case. Segmentations are saved as {CASE_IDENTIFIER}.{FILE_ENDING}.
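Because wrong label values are a common pitfall, a small sanity check along these lines can help. This is only a sketch: it assumes the segmentation has already been loaded into a numpy array (how you load it depends on your file format) and that you know which labels your dataset.json declares:

```python
import numpy as np

def check_segmentation(seg: np.ndarray, expected_labels: set[int]) -> None:
    """Rough per-case sanity check against the labels declared in dataset.json."""
    assert np.issubdtype(seg.dtype, np.integer), "segmentation must be an integer map"
    present = set(np.unique(seg).tolist())
    unexpected = present - expected_labels
    assert not unexpected, f"found labels not declared in dataset.json: {unexpected}"
    # Note: a single case does not have to contain every label, so no check the other way round.

# Toy example: background (0) plus two foreground classes (1, 2)
seg = np.array([[0, 0, 1],
                [0, 2, 2]], dtype=np.uint8)
check_segmentation(seg, expected_labels={0, 1, 2})
```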

Within a training case, all image geometries (input channels, corresponding segmentation) must match. Between training
cases, they can of course differ. nnU-Net takes care of that.

Important: The input channels must be consistent! Concretely, all images need the same input channels in the same
order, and all input channels have to be present every time. This is also true for inference!
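One way to catch violations early is to scan the raw images folder and compare the channel identifiers found for each case. The sketch below assumes the standard imagesTr layout; the dataset path and file ending are placeholders to adjust for your data:

```python
from collections import defaultdict
from pathlib import Path

# Placeholder path and file ending; adjust to your dataset.
images_dir = Path("nnUNet_raw/Dataset005_Prostate/imagesTr")
file_ending = ".nii.gz"

channels_per_case = defaultdict(set)
for f in images_dir.glob(f"*{file_ending}"):
    stem = f.name[:-len(file_ending)]        # strip ".nii.gz"
    case_id, channel = stem.rsplit("_", 1)   # e.g. ("prostate_04", "0000")
    channels_per_case[case_id].add(channel)

all_channel_sets = set(frozenset(c) for c in channels_per_case.values())
assert len(all_channel_sets) <= 1, f"inconsistent channels across cases: {all_channel_sets}"
```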

Supported file formats

nnU-Net expects the same file format for images and segmentations! These will also be used for inference. For now, it
is thus not possible to train .png and then run inference on .jpg.

One big change in nnU-Net V2 is the support of multiple input file types. Gone are the days of converting everything to .nii.gz!
This is implemented by abstracting the input and output of images + segmentations through BaseReaderWriter. nnU-Net
comes with a broad collection of Readers+Writers and you can even add your own to support your data format!
See here.
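For orientation, a custom reader/writer might look roughly like the sketch below for a hypothetical .npy-based format. The import path, method names, the (c, x, y, z) array layout and the 'spacing' property reflect my reading of the BaseReaderWriter interface and the bundled readers; verify them against the documentation linked above before building on this:

```python
from typing import List, Tuple, Union

import numpy as np
from nnunetv2.imageio.base_reader_writer import BaseReaderWriter


class NumpyIO(BaseReaderWriter):
    """Hypothetical reader/writer for plain .npy volumes (illustration only)."""
    supported_file_endings = ['.npy']

    def read_images(self, image_fnames: Union[List[str], Tuple[str, ...]]) -> Tuple[np.ndarray, dict]:
        # One file per channel -> stack to (c, x, y, z); spacing is assumed isotropic here.
        images = np.stack([np.load(f) for f in image_fnames])
        return images.astype(np.float32), {'spacing': (1.0, 1.0, 1.0)}

    def read_seg(self, seg_fname: str) -> Tuple[np.ndarray, dict]:
        seg = np.load(seg_fname)[None]  # add channel dimension
        return seg.astype(np.int8), {'spacing': (1.0, 1.0, 1.0)}

    def write_seg(self, seg: np.ndarray, output_fname: str, properties: dict) -> None:
        np.save(output_fname, seg)
```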

As a nice bonus, nnU-Net now also natively supports 2D input images and you no longer have to mess around with
conversions to pseudo 3D niftis. Yuck. That was disgusting.

Note that internally (for storing and accessing preprocessed images) nnU-Net will use its own file format, irrespective
of what the raw data was provided in! This is for performance reasons.

By default, the following file formats are supported:

  • NaturalImage2DIO: .png, .bmp, .tif
  • NibabelIO: .nii.gz, .nrrd, .mha
  • NibabelIOWithReorient: .nii.gz, .nrrd, .mha. This reader will reorient images to RAS!
  • SimpleITKIO: .nii.gz, .nrrd, .mha
  • Tiff3DIO: .tif, .tiff. 3D tif images! Since TIF does not have a standardized way of storing spacing information,
    nnU-Net expects each TIF file to be accompanied by an identically named .json file that contains three numbers
    (no units, no comma. Just separated by whitespace), one for each dimension.

The file extension lists are not exhaustive and depend on what the backend supports. For example, nibabel and SimpleITK
support more than the three given here. The file endings given here are just the ones we tested!

IMPORTANT: nnU-Net can only be used with file formats that use lossless (or no) compression! Because the file
format is defined for an entire dataset (and not separately for images and segmentations, this could be a todo for
the future), we must ensure that there are no compression artifacts that destroy the segmentation maps. So no .jpg and
the likes!

Dataset folder structure

Datasets must be located in the nnUNet_raw folder (which you either define when installing nnU-Net or export/set every
time you intend to run nnU-Net commands!).
Each segmentation dataset is stored as a separate 'Dataset'. Datasets are associated with a dataset ID, a three-digit
integer, and a dataset name (which you can freely choose): for example, Dataset005_Prostate has 'Prostate' as dataset name and
the dataset ID is 5. Datasets are stored in the nnUNet_raw folder.
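To tie the naming and folder conventions together, here is a hedged sketch that creates the usual skeleton for the Dataset005_Prostate example. The imagesTr/labelsTr/imagesTs folder names and the dataset.json keys follow the conventions from the full documentation; the channel names, labels and numTraining value are placeholders:

```python
import json
from pathlib import Path

# Placeholder values for illustration only.
nnUNet_raw = Path("nnUNet_raw")
dataset_dir = nnUNet_raw / "Dataset005_Prostate"

# Standard sub-folders: training images, training labels, optional test images.
for sub in ("imagesTr", "labelsTr", "imagesTs"):
    (dataset_dir / sub).mkdir(parents=True, exist_ok=True)

# Minimal dataset.json; channel_names maps channel identifiers (as strings without
# leading zeros) to human-readable names, labels maps class names to integer values.
dataset_json = {
    "channel_names": {"0": "T2", "1": "ADC"},
    "labels": {"background": 0, "PZ": 1, "TZ": 2},
    "numTraining": 32,
    "file_ending": ".nii.gz",
}
(dataset_dir / "dataset.json").write_text(json.dumps(dataset_json, indent=4))
```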
