The effect of neural network architecture on virtual H&E staining — README (personal notes)

Introduction

The codebase was developed using Python 3.7.3. All dependencies can be installed from requirements.txt. There are five main steps: tiling, training, inference, merge, and evaluation. All of these steps are configuration-driven, and the config files already contain almost all the settings except for some directories and epoch information. Instructions for setting these variables are given below in the respective sections.
Note: The main repository folder will be referred to as root in the instructions below.
Key points:

  1. Use Python 3.7.3.
  2. All required packages are listed in requirements.txt.

Tiling:

Tiling is the process of splitting a full-resolution tissue image into smaller patches for training and inference. Tiling configs for both the train and test sets can be found in the folder root/configs/tiling. For the train set, use config_tiling_paired_train.json. Set "data_source" to the directory containing the full-resolution tiff images and "output_root" to the tiles output directory. To tile the full-resolution images, use the following commands:

python execute.py tile --config root/configs/tiling/config_tiling_paired_train.json --multiprocess
python execute.py tile --config root/configs/tiling/config_tiling_paired_test.json --multiprocess
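
The config fields can also be set programmatically. Below is a minimal sketch, assuming the tiling config is plain JSON with top-level "data_source" and "output_root" keys as described above; all paths are placeholders.

import json

config_path = "root/configs/tiling/config_tiling_paired_train.json"
with open(config_path) as f:
    config = json.load(f)

# Point "data_source" at the full-resolution tiff images and
# "output_root" at the directory where the tiles should be written.
config["data_source"] = "/path/to/full_resolution_tiffs"  # placeholder
config["output_root"] = "/path/to/tiles"                   # placeholder

with open(config_path, "w") as f:
    json.dump(config, f, indent=4)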

Key points:

  1. Tiling splits the images; the originals are very large whole slide images (WSIs).
  2. Edit the corresponding config file for both the train and test sets before tiling.
  3. Run the two commands above directly in the terminal.

Training:

Six training config files can be found in the folder root/configs/train, two per pix2pix variant. Each training run generates an experiment folder such as Exp-Pix2Pix-02092021-124502 with the following folders and files:
checkpoint: model weights for each epoch
config.json: the config with which the training was started
ds_wsi: downsampled whole slide images
evaluation: evaluation files
inference: epoch- and sample-wise virtually stained tiles
logs: training logs
output: post-epoch validation sample visualizations
readme.txt: contains the git commit hash
wsi: full-resolution tissue whole slide images

Use the following Python command to run training with the different training configs:

python execute.py train --exp_root "experiment-directory" --data_root "tiles-directory"/train/512/ --config root/configs/train/config_train_pix2pix_light.json

experiment-directory: directory where you want to save the experiments.
tiles-directory: directory where the tiles are saved.
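
To run all six variants back to back, a small driver script can loop over the training configs. This is only a sketch, assuming the configs live in root/configs/train and the two directory placeholders have been filled in; it simply shells out to the same command shown above.

import glob
import subprocess

exp_root = "experiment-directory"          # placeholder
data_root = "tiles-directory/train/512/"   # placeholder

# Launch one training run per config file in root/configs/train.
for config in sorted(glob.glob("root/configs/train/*.json")):
    subprocess.run(
        ["python", "execute.py", "train",
         "--exp_root", exp_root,
         "--data_root", data_root,
         "--config", config],
        check=True,
    )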

Inference:

Similar to training, there are six inference config files in the folder root/configs/inference, two per pix2pix variant. Add the epoch numbers for which you want to run inference to the "epochs" list in the configs. Inference saves epoch- and sample-wise tiles under the inference folder of the experiment directory. Use the following Python command to run inference with the different inference configs:

python execute.py inference --exp_path "experiment-directory"/"experiment-name" --data_root "tiles-directory"/test/2048/ --config root/configs/inference/config_inference_pix2pix_light.json

experiment-directory: directory where the experiments are saved.
experiment-name: name of the specific experiment, e.g. Exp-Pix2Pix-06032022-201735.
tiles-directory: directory where the tiles are saved.
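
Editing the "epochs" list by hand works fine; the sketch below only shows the same edit programmatically, assuming the inference config is plain JSON with a top-level "epochs" list as described above. The epoch numbers are example values.

import json

config_path = "root/configs/inference/config_inference_pix2pix_light.json"
with open(config_path) as f:
    config = json.load(f)

# Run inference for these epoch checkpoints (example values).
config["epochs"] = [50, 100, 150]

with open(config_path, "w") as f:
    json.dump(config, f, indent=4)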

Merge:

The next step is merging the experiment-specific tiles. Merge also has six config files, in the folder root/configs/merge. Add the epoch numbers for which you want to merge inference results to the "epochs" list in the configs. Merge generates full-resolution virtually stained whole slide images. Use the following Python command to merge tiles with the different merge configs:

python execute.py merge_tiles --exp_path "experiment-directory"/"experiment-name" --inference_root "experiment-directory"/"experiment-name"/inference --multiprocess --config ./configs/merge/config_merge_pix2pix_light.json

Additionally, you can downsample the whole slide images by a factor of 10 and save them as JPEGs. Use the following command for that:

poetry run python execute.py downsample --exp_path "experiment-directory"/"experiment-name"

experiment-directory: directory where the experiments are saved.
experiment-name: name of the specific experiment, e.g. Exp-Pix2Pix-06032022-201735.
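
The downsample command above does this for you; the snippet below only illustrates the idea (10x downsampling of a merged whole slide image saved as JPEG) using Pillow and is not the repository's implementation. The file names are placeholders.

from PIL import Image

# Whole slide images can exceed Pillow's default decompression-bomb limit.
Image.MAX_IMAGE_PIXELS = None

wsi = Image.open("merged_wsi.tiff")                       # placeholder path
small = wsi.resize((wsi.width // 10, wsi.height // 10))   # downsample by 10
small.convert("RGB").save("merged_wsi_ds.jpg", quality=90)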

Evaluation:

The final step is tile-based evaluation. Evaluation produces an Excel file containing tile-wise SSIM, PCC, and PSNR scores. Use the following Python command:

python execute.py evaluate --exp_path "experiment-directory"/"experiment-name" --data_root "tiles-directory"/test/2048  --inference_root "experiment-directory"/"experiment-name"/inference --multiprocess --config ./configs/evaluation/config_evaluate_pix2pix_light.json

experiment-directory: directory where the experiments are saved.
experiment-name: name of the specific experiment, e.g. Exp-Pix2Pix-06032022-201735.
tiles-directory: directory where the tiles are saved.
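
For reference, the three metrics can be computed per tile pair along the following lines. This is a sketch using scikit-image and NumPy, not the repository's evaluation code; depending on your scikit-image version, the SSIM keyword may be multichannel=True instead of channel_axis=-1. The tile file names are placeholders.

import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Load a real H&E tile and the corresponding virtually stained tile.
real = imread("real_tile.png")      # placeholder
fake = imread("virtual_tile.png")   # placeholder

ssim = structural_similarity(real, fake, channel_axis=-1, data_range=255)
psnr = peak_signal_noise_ratio(real, fake, data_range=255)
pcc = np.corrcoef(real.ravel(), fake.ravel())[0, 1]  # Pearson correlation

print(f"SSIM={ssim:.4f}  PSNR={psnr:.2f} dB  PCC={pcc:.4f}")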

Project DOI

10.5281/zenodo.7589356

How to cite this work

Khan, U., Koivukoski, S., Valkonen, M., Latonen, L., & Ruusuvuori, P. (2023). The effect of neural network architecture on virtual H&E staining: Systematic assessment of histological feasibility. Patterns
