nnunetv2 (Part 1): Project Structure, Configuration Files, and the nnUNetv2_convert_MSD_dataset Command

Corrections are welcome in the comments.

Project Structure

## 0.0 Documentation
./documentation/
├── benchmarking.md
├── changelog.md
├── convert_msd_dataset.md
├── dataset_format_inference.md
├── dataset_format.md                   # dataset format
├── explanation_normalization.md        # normalization schemes
├── explanation_plans_files.md          # explanation of the plans files (experiment design)
├── extending_nnunet.md
├── how_to_use_nnunet.md                # how to use nnU-Net
├── __init__.py
├── installation_instructions.md        # how to install nnU-Net
├── manual_data_splits.md               # manual train/validation splits
├── pretraining_and_finetuning.md
├── region_based_training.md
├── run_inference_with_pretrained_models.md
├── set_environment_variables.md
├── setting_up_paths.md
└── tldr_migration_guide_from_v1.md
./setup.py
./nnunetv2/paths.py
./nnunetv2/configuration.py
## 1.0 Dataset conversion scripts: the code below reorganizes other datasets' directory layouts into the structure nnU-Net expects
./nnunetv2/dataset_conversion/__init__.py
./nnunetv2/dataset_conversion/generate_dataset_json.py
./nnunetv2/dataset_conversion/datasets_for_integration_tests/__init__.py
./nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset998_IntegrationTest_Hippocampus_ignore.py
./nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset996_IntegrationTest_Hippocampus_regions_ignore.py
./nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset999_IntegrationTest_Hippocampus.py
./nnunetv2/dataset_conversion/datasets_for_integration_tests/Dataset997_IntegrationTest_Hippocampus_regions.py
./nnunetv2/dataset_conversion/convert_raw_dataset_from_old_nnunet_format.py
./nnunetv2/dataset_conversion/convert_MSD_dataset.py # backs the nnUNetv2_convert_MSD_dataset command (covered below)
./nnunetv2/dataset_conversion/Dataset137_BraTS21.py
./nnunetv2/dataset_conversion/Dataset120_RoadSegmentation.py
./nnunetv2/dataset_conversion/Dataset114_MNMs.py
./nnunetv2/dataset_conversion/Dataset218_Amos2022_task1.py
./nnunetv2/dataset_conversion/Dataset988_dummyDataset4.py
./nnunetv2/dataset_conversion/Dataset027_ACDC.py
./nnunetv2/dataset_conversion/Dataset220_KiTS2023.py
./nnunetv2/dataset_conversion/Dataset073_Fluo_C3DH_A549_SIM.py
./nnunetv2/dataset_conversion/Dataset115_EMIDEC.py
./nnunetv2/dataset_conversion/Dataset219_Amos2022_task2.py

## 1.1 Image I/O
./nnunetv2/imageio/nibabel_reader_writer.py
./nnunetv2/imageio/simpleitk_reader_writer.py
./nnunetv2/imageio/tif_reader_writer.py
./nnunetv2/imageio/__init__.py
./nnunetv2/imageio/natural_image_reager_writer.py
./nnunetv2/imageio/reader_writer_registry.py
./nnunetv2/imageio/base_reader_writer.py

## 1.2 Utilities
./nnunetv2/utilities/get_network_from_plans.py
./nnunetv2/utilities/default_n_proc_DA.py
./nnunetv2/utilities/overlay_plots.py
./nnunetv2/utilities/helpers.py
./nnunetv2/utilities/collate_outputs.py
./nnunetv2/utilities/ddp_allgather.py
./nnunetv2/utilities/__init__.py
./nnunetv2/utilities/json_export.py
./nnunetv2/utilities/label_handling/__init__.py
./nnunetv2/utilities/label_handling/label_handling.py
./nnunetv2/utilities/network_initialization.py
./nnunetv2/utilities/dataset_name_id_conversion.py
./nnunetv2/utilities/file_path_utilities.py
./nnunetv2/utilities/find_class_by_name.py
./nnunetv2/utilities/utils.py
./nnunetv2/utilities/plans_handling/plans_handler.py
./nnunetv2/utilities/plans_handling/__init__.py


## 2.0 Experiment planning: convert the data into nnU-Net's format, analyze it statistically, and derive the hyperparameters the models need
./nnunetv2/experiment_planning/plan_and_preprocess_api.py
./nnunetv2/experiment_planning/plan_and_preprocess_entrypoints.py
./nnunetv2/experiment_planning/__init__.py
./nnunetv2/experiment_planning/dataset_fingerprint/__init__.py
./nnunetv2/experiment_planning/dataset_fingerprint/fingerprint_extractor.py
./nnunetv2/experiment_planning/plans_for_pretraining/__init__.py
./nnunetv2/experiment_planning/plans_for_pretraining/move_plans_between_datasets.py
./nnunetv2/experiment_planning/experiment_planners/__init__.py
./nnunetv2/experiment_planning/experiment_planners/network_topology.py
./nnunetv2/experiment_planning/experiment_planners/resencUNet_planner.py
./nnunetv2/experiment_planning/experiment_planners/default_experiment_planner.py
./nnunetv2/experiment_planning/verify_dataset_integrity.py

## 2.1 Preprocessing
./nnunetv2/preprocessing/cropping/cropping.py
./nnunetv2/preprocessing/cropping/__init__.py
./nnunetv2/preprocessing/resampling/__init__.py
./nnunetv2/preprocessing/resampling/default_resampling.py
./nnunetv2/preprocessing/resampling/utils.py
./nnunetv2/preprocessing/__init__.py
./nnunetv2/preprocessing/normalization/map_channel_name_to_normalization.py
./nnunetv2/preprocessing/normalization/default_normalization_schemes.py
./nnunetv2/preprocessing/normalization/__init__.py
./nnunetv2/preprocessing/preprocessors/__init__.py
./nnunetv2/preprocessing/preprocessors/default_preprocessor.py

## 2.2 Postprocessing
./nnunetv2/postprocessing/remove_connected_components.py
./nnunetv2/postprocessing/__init__.py

## 2.3 Model sharing (download, import, export)
./nnunetv2/__init__.py
./nnunetv2/model_sharing/model_download.py
./nnunetv2/model_sharing/__init__.py
./nnunetv2/model_sharing/model_import.py
./nnunetv2/model_sharing/entry_points.py
./nnunetv2/model_sharing/model_export.py

## 3.0 Training
./nnunetv2/training/__init__.py
# 3.1 Data loading
./nnunetv2/training/dataloading/data_loader_3d.py
./nnunetv2/training/dataloading/__init__.py
./nnunetv2/training/dataloading/utils.py
./nnunetv2/training/dataloading/nnunet_dataset.py
./nnunetv2/training/dataloading/data_loader_2d.py
./nnunetv2/training/dataloading/base_data_loader.py
# 3.2 Data augmentation
./nnunetv2/training/data_augmentation/__init__.py
./nnunetv2/training/data_augmentation/compute_initial_patch_size.py
./nnunetv2/training/data_augmentation/custom_transforms/region_based_training.py
./nnunetv2/training/data_augmentation/custom_transforms/deep_supervision_donwsampling.py
./nnunetv2/training/data_augmentation/custom_transforms/__init__.py
./nnunetv2/training/data_augmentation/custom_transforms/transforms_for_dummy_2d.py
./nnunetv2/training/data_augmentation/custom_transforms/masking.py
./nnunetv2/training/data_augmentation/custom_transforms/cascade_transforms.py
./nnunetv2/training/data_augmentation/custom_transforms/manipulating_data_dict.py
./nnunetv2/training/data_augmentation/custom_transforms/limited_length_multithreaded_augmenter.py
# 3.3 Loss functions
./nnunetv2/training/loss/deep_supervision.py
./nnunetv2/training/loss/__init__.py
./nnunetv2/training/loss/dice.py
./nnunetv2/training/loss/robust_ce_loss.py
./nnunetv2/training/loss/compound_losses.py
# 3.4 Learning-rate schedulers
./nnunetv2/training/lr_scheduler/__init__.py
./nnunetv2/training/lr_scheduler/polylr.py
# 3.5 Logging
./nnunetv2/training/logging/nnunet_logger.py
./nnunetv2/training/logging/__init__.py
# 3.6 Trainers (nnUNetTrainer and its variants)
./nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py
./nnunetv2/training/nnUNetTrainer/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/optimizer/nnUNetTrainerAdan.py
./nnunetv2/training/nnUNetTrainer/variants/optimizer/nnUNetTrainerAdam.py
./nnunetv2/training/nnUNetTrainer/variants/optimizer/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/network_architecture/nnUNetTrainerNoDeepSupervision.py
./nnunetv2/training/nnUNetTrainer/variants/network_architecture/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/network_architecture/nnUNetTrainerBN.py
./nnunetv2/training/nnUNetTrainer/variants/loss/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/loss/nnUNetTrainerTopkLoss.py
./nnunetv2/training/nnUNetTrainer/variants/loss/nnUNetTrainerDiceLoss.py
./nnunetv2/training/nnUNetTrainer/variants/loss/nnUNetTrainerCELoss.py
./nnunetv2/training/nnUNetTrainer/variants/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/lr_schedule/nnUNetTrainerCosAnneal.py
./nnunetv2/training/nnUNetTrainer/variants/lr_schedule/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/sampling/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/sampling/nnUNetTrainer_probabilisticOversampling.py
./nnunetv2/training/nnUNetTrainer/variants/data_augmentation/nnUNetTrainerNoMirroring.py
./nnunetv2/training/nnUNetTrainer/variants/data_augmentation/nnUNetTrainerDA5.py
./nnunetv2/training/nnUNetTrainer/variants/data_augmentation/nnUNetTrainerNoDA.py
./nnunetv2/training/nnUNetTrainer/variants/data_augmentation/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/data_augmentation/nnUNetTrainerDAOrd0.py
./nnunetv2/training/nnUNetTrainer/variants/benchmarking/nnUNetTrainerBenchmark_5epochs.py
./nnunetv2/training/nnUNetTrainer/variants/benchmarking/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/benchmarking/nnUNetTrainerBenchmark_5epochs_noDataLoading.py
./nnunetv2/training/nnUNetTrainer/variants/training_length/nnUNetTrainer_Xepochs_NoMirroring.py
./nnunetv2/training/nnUNetTrainer/variants/training_length/__init__.py
./nnunetv2/training/nnUNetTrainer/variants/training_length/nnUNetTrainer_Xepochs.py



## 4.0 Inference (plus ensembling, training entry scripts, and batch running)
./nnunetv2/inference/examples.py
./nnunetv2/inference/__init__.py
./nnunetv2/inference/sliding_window_prediction.py
./nnunetv2/inference/predict_from_raw_data.py
./nnunetv2/inference/export_prediction.py
./nnunetv2/inference/data_iterators.py
./nnunetv2/ensembling/ensemble.py
./nnunetv2/ensembling/__init__.py
./nnunetv2/run/run_training.py
./nnunetv2/run/load_pretrained_weights.py
./nnunetv2/run/__init__.py
./nnunetv2/batch_running/generate_lsf_runs_customDecathlon.py
./nnunetv2/batch_running/__init__.py
./nnunetv2/batch_running/release_trainings/__init__.py
./nnunetv2/batch_running/release_trainings/nnunetv2_v1/__init__.py
./nnunetv2/batch_running/release_trainings/nnunetv2_v1/generate_lsf_commands.py
./nnunetv2/batch_running/release_trainings/nnunetv2_v1/collect_results.py
./nnunetv2/batch_running/benchmarking/__init__.py
./nnunetv2/batch_running/benchmarking/generate_benchmarking_commands.py
./nnunetv2/batch_running/benchmarking/summarize_benchmark_results.py
./nnunetv2/batch_running/collect_results_custom_Decathlon_2d.py
./nnunetv2/batch_running/collect_results_custom_Decathlon.py

## 5.0 Tests
./nnunetv2/tests/__init__.py
./nnunetv2/tests/integration_tests/__init__.py
./nnunetv2/tests/integration_tests/run_integration_test_bestconfig_inference.py
./nnunetv2/tests/integration_tests/add_lowres_and_cascade.py
./nnunetv2/tests/integration_tests/cleanup_integration_test.py


## 6.0 Evaluation
./nnunetv2/evaluation/accumulate_cv_results.py
./nnunetv2/evaluation/__init__.py
./nnunetv2/evaluation/find_best_configuration.py
./nnunetv2/evaluation/evaluate_predictions.py

Reference: Zhihu

setup.py

You can install nnU-Net v2 by running pip install . in the repository root.

pyproject.toml

The project configuration file:

[project]
name = "nnunetv2" # 项目名称
version = "2.5" # 版本
requires-python = ">=3.9" # python版本3.9以上
description = "nnU-Net is a framework for out-of-the box image segmentation." # 描述:开箱即用
readme = "readme.md" # 指定README文件
license = { file = "LICENSE" } # 许可证
authors = [ # 作者信息
    { name = "Fabian Isensee", email = "f.isensee@dkfz-heidelberg.de"},
    { name = "Helmholtz Imaging Applied Computer Vision Lab" }
]
classifiers = [ # classifiers: development status, intended audience, language, license, and topics
    "Development Status :: 5 - Production/Stable",
    "Intended Audience :: Developers",
    "Intended Audience :: Science/Research",
    "Intended Audience :: Healthcare Industry",
    "Programming Language :: Python :: 3",
    "License :: OSI Approved :: Apache Software License",
    "Topic :: Scientific/Engineering :: Artificial Intelligence",
    "Topic :: Scientific/Engineering :: Image Recognition",
    "Topic :: Scientific/Engineering :: Medical Science Apps.",
]
keywords = [ # keywords
    'deep learning',
    'image segmentation',
    'semantic segmentation',
    'medical image analysis',
    'medical image segmentation',
    'nnU-Net',
    'nnunet'
]
dependencies = [ # dependencies
    "torch>=2.1.2",
    "acvl-utils>=0.2,<0.3",  # 0.3 may bring breaking changes. Careful!
    "dynamic-network-architectures>=0.3.1,<0.4",  # 0.3.1 and lower are supported, 0.4 may have breaking changes. Let's be careful here
    "tqdm",
    "dicom2nifti",
    "scipy",
    "batchgenerators>=0.25",
    "numpy",
    "scikit-learn",
    "scikit-image>=0.19.3",
    "SimpleITK>=2.2.1",
    "pandas",
    "graphviz",
    'tifffile',
    'requests',
    "nibabel",
    "matplotlib",
    "seaborn",
    "imagecodecs",
    "yacs",
    "batchgeneratorsv2",
    "einops"
]

[project.urls] # project homepage and repository
homepage = "https://github.com/MIC-DKFZ/nnUNet"
repository = "https://github.com/MIC-DKFZ/nnUNet"

[project.scripts] # console script entry points (enumerated in the sketch after this file)
# dataset planning and preprocessing
nnUNetv2_plan_and_preprocess = "nnunetv2.experiment_planning.plan_and_preprocess_entrypoints:plan_and_preprocess_entry"
# extract the dataset fingerprint
nnUNetv2_extract_fingerprint = "nnunetv2.experiment_planning.plan_and_preprocess_entrypoints:extract_fingerprint_entry"
# plan the experiment (without running preprocessing yet)
nnUNetv2_plan_experiment = "nnunetv2.experiment_planning.plan_and_preprocess_entrypoints:plan_experiment_entry"
# preprocess the data
nnUNetv2_preprocess = "nnunetv2.experiment_planning.plan_and_preprocess_entrypoints:preprocess_entry"
# train a model
nnUNetv2_train = "nnunetv2.run.run_training:run_training_entry"
# predict from an explicitly specified model folder
nnUNetv2_predict_from_modelfolder = "nnunetv2.inference.predict_from_raw_data:predict_entry_point_modelfolder"
# predict
nnUNetv2_predict = "nnunetv2.inference.predict_from_raw_data:predict_entry_point"
# convert an old-format (v1) nnU-Net dataset to the new format
nnUNetv2_convert_old_nnUNet_dataset = "nnunetv2.dataset_conversion.convert_raw_dataset_from_old_nnunet_format:convert_entry_point"
# find the best model configuration from the cross-validation results
nnUNetv2_find_best_configuration = "nnunetv2.evaluation.find_best_configuration:find_best_configuration_entry_point"
# postprocessing
nnUNetv2_determine_postprocessing = "nnunetv2.postprocessing.remove_connected_components:entry_point_determine_postprocessing_folder"
nnUNetv2_apply_postprocessing = "nnunetv2.postprocessing.remove_connected_components:entry_point_apply_postprocessing"
# ensemble multiple models
nnUNetv2_ensemble = "nnunetv2.ensembling.ensemble:entry_point_ensemble_folders"
# accumulate cross-validation results
nnUNetv2_accumulate_crossval_results = "nnunetv2.evaluation.find_best_configuration:accumulate_crossval_results_entry_point"
# plot overlays of images and their segmentations
nnUNetv2_plot_overlay_pngs = "nnunetv2.utilities.overlay_plots:entry_point_generate_overlay"
# download a pretrained model from a URL
nnUNetv2_download_pretrained_model_by_url = "nnunetv2.model_sharing.entry_points:download_by_url"
nnUNetv2_install_pretrained_model_from_zip = "nnunetv2.model_sharing.entry_points:install_from_zip_entry_point"
# export a pretrained model as a zip
nnUNetv2_export_model_to_zip = "nnunetv2.model_sharing.entry_points:export_pretrained_model_entry"
# move plans between datasets
nnUNetv2_move_plans_between_datasets = "nnunetv2.experiment_planning.plans_for_pretraining.move_plans_between_datasets:entry_point_move_plans_between_datasets"
# evaluate predictions
nnUNetv2_evaluate_folder = "nnunetv2.evaluation.evaluate_predictions:evaluate_folder_entry_point"
nnUNetv2_evaluate_simple = "nnunetv2.evaluation.evaluate_predictions:evaluate_simple_entry_point"
# convert an MSD Task into an nnU-Net Dataset
nnUNetv2_convert_MSD_dataset = "nnunetv2.dataset_conversion.convert_MSD_dataset:entry_point"

[project.optional-dependencies] # optional dependencies
dev = [
    "black",
    "ruff",
    "pre-commit"
]

[build-system] # build system requirements
requires = ["setuptools>=67.8.0"]
build-backend = "setuptools.build_meta" # build backend

[tool.codespell] # files skipped by codespell
skip = '.git,*.pdf,*.svg'
#
# ignore-words-list = ''
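All of the commands above are ordinary console scripts. Once nnU-Net v2 is installed they can be enumerated programmatically, as in this minimal sketch (requires Python 3.10+ for the group keyword of entry_points):

from importlib.metadata import entry_points

# print every nnU-Net v2 command and the function it maps to
for ep in entry_points(group='console_scripts'):
    if ep.name.startswith('nnUNetv2_'):
        print(f'{ep.name} -> {ep.value}')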

configuration.py

nnunetv2.configuration

import os

from nnunetv2.utilities.default_n_proc_DA import get_allowed_n_proc_DA
# read nnUNet_def_n_proc from the environment; if it is unset, default to 8 processes
default_num_processes = 8 if 'nnUNet_def_n_proc' not in os.environ else int(os.environ['nnUNet_def_n_proc'])
# a sample counts as anisotropic when the spacing along its lowest-resolution axis is at least 3x the next-largest spacing
ANISO_THRESHOLD = 3  # determines when a sample is considered anisotropic (3 means that the spacing in the low
# resolution axis must be 3x as large as the next largest spacing)
# number of processes used for data augmentation, chosen based on the runtime environment
default_n_proc_DA = get_allowed_n_proc_DA()
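For example, the default process count can be overridden by setting the environment variable before the module is first imported (a minimal sketch; the value 4 is arbitrary):

import os

# must happen before nnunetv2.configuration is imported
os.environ['nnUNet_def_n_proc'] = '4'

from nnunetv2.configuration import default_num_processes
print(default_num_processes)  # 4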

nnUNetv2_convert_MSD_dataset

nnunetv2.dataset_conversion.convert_MSD_dataset

entry_point(): the command-line entry point. It parses the command-line arguments and then calls the conversion function, for example:
nnUNetv2_convert_MSD_dataset -i nnUNetFrame/DATASET/nnUNet_raw/Task04_Hippocampus

# module-level imports used by the excerpts in this post (reconstructed from the source, abridged)
import argparse
import multiprocessing
import os
import shutil
from typing import Optional
import numpy as np
import SimpleITK as sitk
from batchgenerators.utilities.file_and_folder_operations import *  # join, subfiles, load_json, save_json, ...
from nnunetv2.configuration import default_num_processes
from nnunetv2.paths import nnUNet_raw
from nnunetv2.utilities.dataset_name_id_conversion import find_candidate_datasets


def entry_point():
    parser = argparse.ArgumentParser()
    # path to the downloaded and extracted MSD dataset folder; required
    parser.add_argument('-i', type=str, required=True,
                        help='Downloaded and extracted MSD dataset folder. CANNOT be nnUNetv1 dataset! Example: '
                             '/home/fabian/Downloads/Task05_Prostate')
    # override the dataset id with the given integer; optional
    parser.add_argument('-overwrite_id', type=int, required=False, default=None,
                        help='Overwrite the dataset id. If not set we use the id of the MSD task (inferred from '
                             'folder name). Only use this if you already have an equivalently numbered dataset!')
    # number of processes; defaults to default_num_processes
    parser.add_argument('-np', type=int, required=False, default=default_num_processes,
                        help=f'Number of processes used. Default: {default_num_processes}')
    args = parser.parse_args()
    convert_msd_dataset(args.i, args.overwrite_id, args.np)
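Since entry_point only parses arguments and forwards them, the same conversion can also be run directly from Python (a sketch reusing the example path from above):

from nnunetv2.dataset_conversion.convert_MSD_dataset import convert_msd_dataset

# equivalent to: nnUNetv2_convert_MSD_dataset -i nnUNetFrame/DATASET/nnUNet_raw/Task04_Hippocampus
convert_msd_dataset('nnUNetFrame/DATASET/nnUNet_raw/Task04_Hippocampus',
                    overwrite_target_id=None, num_processes=8)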

split_4d_nifti: the per-file operation that converts Task files into the Dataset format

def split_4d_nifti(filename, output_folder):  # in a 4D image, one dimension may be time or modality
    """Split a 4D image into 3D images; a 3D image is kept as-is and copied to the output folder."""
    # read the image
    img_itk = sitk.ReadImage(filename)
    # get the image dimensionality
    dim = img_itk.GetDimension()
    # extract the file name
    file_base = os.path.basename(filename)
    if dim == 3:
        # strip the trailing .nii.gz and append the channel suffix
        shutil.copy(filename, join(output_folder, file_base[:-7] + "_0000.nii.gz"))
        return
    elif dim != 4:
        raise RuntimeError("Unexpected dimensionality: %d of file %s, cannot split" % (dim, filename))
    else:
        # convert to a numpy array
        img_npy = sitk.GetArrayFromImage(img_itk)
        # voxel spacing
        spacing = img_itk.GetSpacing()
        # image origin
        origin = img_itk.GetOrigin()
        # direction (orientation) matrix
        direction = np.array(img_itk.GetDirection()).reshape(4,4)
        # now modify these to remove the fourth dimension
        spacing = tuple(list(spacing[:-1]))
        origin = tuple(list(origin[:-1]))
        direction = tuple(direction[:-1, :-1].reshape(-1))
        # iterate over the fourth (time/channel) dimension
        for i, t in enumerate(range(img_npy.shape[0])):
            img = img_npy[t]  # extract the 3D volume at index t
            img_itk_new = sitk.GetImageFromArray(img)  # convert the array back to an image
            img_itk_new.SetSpacing(spacing)  # set the spacing
            img_itk_new.SetOrigin(origin)  # set the origin
            img_itk_new.SetDirection(direction)  # set the direction
            sitk.WriteImage(img_itk_new, join(output_folder, file_base[:-7] + "_%04.0d.nii.gz" % i))  # save the image
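To see the naming convention in action, here is a self-contained sketch (synthetic data; the file name prostate_00.nii.gz is hypothetical) that builds a 2-channel 4D NIfTI and splits it:

import numpy as np
import SimpleITK as sitk
from nnunetv2.dataset_conversion.convert_MSD_dataset import split_4d_nifti

# synthetic 4D image: 2 channels of a 4x5x6 volume (channel axis first in numpy order)
arr = np.random.rand(2, 4, 5, 6).astype(np.float32)
sitk.WriteImage(sitk.GetImageFromArray(arr), 'prostate_00.nii.gz')

# writes prostate_00_0000.nii.gz and prostate_00_0001.nii.gz to the current directory
split_4d_nifti('prostate_00.nii.gz', '.')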

convert_msd_dataset: the main function that turns a Task into a Dataset

def convert_msd_dataset(source_folder: str, overwrite_target_id: Optional[int] = None,
                        num_processes: int = default_num_processes) -> None:
    """将task变成dataset的主要函数"""
    if source_folder.endswith('/') or source_folder.endswith('\\'):
        source_folder = source_folder[:-1]

    # the source folder must contain three subfolders (labelsTr, imagesTs, imagesTr) and a dataset.json file
    labelsTr = join(source_folder, 'labelsTr')
    imagesTs = join(source_folder, 'imagesTs')
    imagesTr = join(source_folder, 'imagesTr')
    assert isdir(labelsTr), f"labelsTr subfolder missing in source folder"
    assert isdir(imagesTs), f"imagesTs subfolder missing in source folder"
    assert isdir(imagesTr), f"imagesTr subfolder missing in source folder"
    dataset_json = join(source_folder, 'dataset.json')
    assert isfile(dataset_json), f"dataset.json missing in source_folder"

    # infer the source dataset id and name from the folder name
    task, dataset_name = os.path.basename(source_folder).split('_')
    task_id = int(task[4:])

    # check if target dataset id is taken
    target_id = task_id if overwrite_target_id is None else overwrite_target_id
    # look up whether a dataset with this id already exists
    existing_datasets = find_candidate_datasets(target_id)
    # the target id must be unused; otherwise abort with a descriptive error
    assert len(existing_datasets) == 0, f"Target dataset id {target_id} is already taken, please consider changing " \
                                        f"it using overwrite_target_id. Conflicting dataset: {existing_datasets} (check nnUNet_results, nnUNet_preprocessed and nnUNet_raw!)"

    # folder structure and naming of the converted dataset
    target_dataset_name = f"Dataset{target_id:03d}_{dataset_name}"
    target_folder = join(nnUNet_raw, target_dataset_name)
    target_imagesTr = join(target_folder, 'imagesTr')
    target_imagesTs = join(target_folder, 'imagesTs')
    target_labelsTr = join(target_folder, 'labelsTr')
    # create the target directories
    maybe_mkdir_p(target_imagesTr)
    maybe_mkdir_p(target_imagesTs)
    maybe_mkdir_p(target_labelsTr)

    with multiprocessing.get_context("spawn").Pool(num_processes) as p:
        results = []

        # convert 4d train images
        # .nii.gz files whose names do not start with '.' or '_'
        source_images = [i for i in subfiles(imagesTr, suffix='.nii.gz', join=False) if
                         not i.startswith('.') and not i.startswith('_')]
        source_images = [join(imagesTr, i) for i in source_images]

        results.append(
            p.starmap_async(  # asynchronous map over the worker pool
                # split 4D images into per-channel 3D images; 3D images are copied unchanged
                split_4d_nifti, zip(source_images, [target_imagesTr] * len(source_images))
            )
        )

        # convert 4d test images
        source_images = [i for i in subfiles(imagesTs, suffix='.nii.gz', join=False) if
                         not i.startswith('.') and not i.startswith('_')]
        source_images = [join(imagesTs, i) for i in source_images]

        results.append(
            p.starmap_async(
                split_4d_nifti, zip(source_images, [target_imagesTs] * len(source_images))
            )
        )

        # copy segmentations
        source_images = [i for i in subfiles(labelsTr, suffix='.nii.gz', join=False) if
                         not i.startswith('.') and not i.startswith('_')]
        for s in source_images:
            shutil.copy(join(labelsTr, s), join(target_labelsTr, s))

        [i.get() for i in results]

    dataset_json = load_json(dataset_json)
    # MSD stores labels as {value: name}; nnU-Net v2 expects {name: value}
    dataset_json['labels'] = {j: int(i) for i, j in dataset_json['labels'].items()}
    dataset_json['file_ending'] = ".nii.gz"
    # 'modality' is renamed to 'channel_names'; the training/test lists are no longer needed
    dataset_json["channel_names"] = dataset_json["modality"]
    del dataset_json["modality"]
    del dataset_json["training"]
    del dataset_json["test"]
    save_json(dataset_json, join(nnUNet_raw, target_dataset_name, 'dataset.json'), sort_keys=False)
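Applied to Task04_Hippocampus, for instance, this rewrite turns the MSD-style metadata into the v2 format. A sketch of just the dictionary transformation (label and modality values are illustrative):

msd = {
    'labels': {'0': 'background', '1': 'Anterior', '2': 'Posterior'},
    'modality': {'0': 'MRI'},
    'training': ['...'], 'test': ['...'],
}

msd['labels'] = {name: int(v) for v, name in msd['labels'].items()}
msd['file_ending'] = '.nii.gz'
msd['channel_names'] = msd.pop('modality')
for k in ('training', 'test'):
    msd.pop(k)

print(msd)
# {'labels': {'background': 0, 'Anterior': 1, 'Posterior': 2},
#  'file_ending': '.nii.gz', 'channel_names': {'0': 'MRI'}}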