Semantic Segmentation, by Zhang Yue

# PaddleSeg Semantic Segmentation Baseline

A hands-on guide to semantic segmentation with PaddleSeg


1. Assignment Tasks

This assignment covers learning and practicing semantic segmentation with PaddleSeg. The baseline demonstrates basic usage of the PaddleSeg toolkit; for any details not covered here, see [Get Started with PaddleSeg in 10 Minutes](https://aistudio.baidu.com/aistudio/projectdetail/1672610?channelType=0&channel=0).

  1. Pick one of the five provided datasets as your training data and run the project end to end following the baseline.

  2. Visualize 1-3 prediction images with their results (the provided datasets have no validation or test split, so you may run prediction directly on the training data and present those results).

Bonus points:

  1. Try splitting out a validation set before training.

  2. Try other networks and tune training parameters for better results.
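For the first bonus item, a simple way to carve out a validation set is to shuffle the lines of the training index file and split off a fraction (a minimal sketch under assumed file naming; the toolkit only needs two index files in the same "image label" format):

```python
import random

def split_index_lines(lines, val_ratio=0.1, seed=2021):
    """Shuffle index lines ('img_path label_path') and split into train/val."""
    lines = list(lines)
    random.Random(seed).shuffle(lines)  # deterministic shuffle for reproducibility
    n_val = max(1, int(len(lines) * val_ratio))
    return lines[n_val:], lines[:n_val]

# Example (paths are assumptions):
# with open("segDataset/horse/train_list.txt") as f:
#     train_lines, val_lines = split_index_lines(f.readlines(), val_ratio=0.1)
```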



2. Dataset Description

This project uses a collection of semantic segmentation datasets covering horse segmentation, fundus vessel segmentation, lane-line segmentation, scene (facade) segmentation, and portrait segmentation.

The dataset is already mounted in this environment at:

data/data103787/segDataset.zip

# unzip: decompression command
# -o: overwrite existing files without prompting
# -q: quiet mode, i.e. suppress per-file extraction logs
# -d: extract into the given directory, creating it if it does not exist
!unzip -oq data/data103787/segDataset.zip -d segDataset

After extraction, a segDataset folder appears in the file tree on the left, containing five subfolders:

  • horse -- horse segmentation data <binary classification task>

  • fundusVessels -- fundus vessel segmentation data: grayscale labels in which each pixel value is a class id, so the labels are hard to inspect by eye but are ready for training with the toolkit

  • laneline -- lane-line segmentation data

  • facade -- scene (facade) segmentation data

  • cocome -- portrait segmentation data: the labels are not images but JSON annotation files; if you need them, see PaddleSeg's "PaddleSeg in Action: Portrait Segmentation"
# tree: show the directory structure as a tree
# -L: traversal depth
!tree segDataset -L 2

segDataset
├── cocome
│   ├── Annotations
│   └── Images
├── facade
│   ├── Annotations
│   └── Images
├── FundusVessels
│   ├── Annotations
│   └── Images
├── horse
│   ├── Annotations
│   └── Images
└── laneline
    ├── Annotations
    └── Images

15 directories, 0 files

Inspect the pixel-value distribution of the labels to determine each task's number of classes. The script is at: show_segDataset_label_cls_id.py
No hints are given here for analyzing the portrait segmentation data; see "PaddleSeg in Action: Portrait Segmentation".
# Inspect which class ids appear in the labels
!python show_segDataset_label_cls_id.py

100%|████████████████████████████████████████| 328/328 [00:00<00:00, 962.87it/s]

horse-cls_id: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 211, 212, 213, 214, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255]

horse shows 90 distinct pixel values

horse should actually be converted to 2 classes (map all non-zero pixels to value 1)

100%|████████████████████████████████████████| 845/845 [00:05<00:00, 166.60it/s]

facade-cls_id: [0, 1, 2, 3, 4, 5, 6, 7, 8]

facade has 9 classes

100%|████████████████████████████████████████| 200/200 [00:01<00:00, 169.99it/s]

fundusvessels-cls_id: [0, 1]

fundusvessels has 2 classes

100%|███████████████████████████████████████| 4878/4878 [01:37<00:00, 50.27it/s]

laneline-cls_id: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]

laneline has 20 classes
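The scan performed by show_segDataset_label_cls_id.py can be sketched roughly like this (an illustration only; the actual script's paths and output format may differ):

```python
import os

import numpy as np
from PIL import Image

def collect_label_ids(label_dir: str) -> list:
    """Return the sorted union of pixel values across all label images in a folder."""
    ids = set()
    for name in sorted(os.listdir(label_dir)):
        # Read each label as single-channel grayscale and record its unique values.
        label = np.array(Image.open(os.path.join(label_dir, name)).convert("L"))
        ids.update(int(v) for v in np.unique(label))
    return sorted(ids)

# Example (path is an assumption):
# collect_label_ids("segDataset/horse/Annotations")
```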

3. Data Preprocessing

We use the horse data as the example here.

  • From the pixel-value analysis above and from the horse labels themselves, we determine that the horse dataset is a binary classification task.

  • The raw labels, however, contain many pixel values, so every horse label needs a preprocessing pass.

  • The preprocessing rule: leave 0 unchanged, map every non-zero value to 1, then save the label.

  • Save the labels as single-channel PNG files; other (lossy) formats can introduce stray pixel values.

The corresponding preprocessing script for horse is at:

parse_horse_label.py
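The core of that preprocessing can be sketched as follows (a minimal sketch with assumed paths; parse_horse_label.py itself may differ in details):

```python
import numpy as np
from PIL import Image

def binarize_label(src_path: str, dst_path: str) -> None:
    """Keep 0 as 0, map every non-zero pixel to 1, save as single-channel PNG."""
    label = np.array(Image.open(src_path).convert("L"))
    label[label != 0] = 1
    Image.fromarray(label.astype(np.uint8), mode="L").save(dst_path)
```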

!python parse_horse_label.py

100%|████████████████████████████████████████| 328/328 [00:00<00:00, 342.19it/s]

[0, 1]

100%|████████████████████████████████████████| 328/328 [00:00<00:00, 846.04it/s]

horse-cls_id: [0, 1]

horse now has 2 classes

  • After preprocessing, build the training index txt so the toolkit can read the data later.

The txt creation script is at: horse_create_train_list.py
The generated txt is at: segDataset/horse/train_list.txt
# Build the training data index txt
# Format:
# line1: train_img1.jpg train_label1.png
# line2: train_img2.jpg train_label2.png
!python horse_create_train_list.py

100%|██████████████████████████████████████| 328/328 [00:00<00:00, 16299.37it/s]
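Such an index file can be built with a few lines like these (a sketch under assumed folder names; horse_create_train_list.py may organize things differently):

```python
import os

def create_train_list(image_dir: str, label_dir: str, out_path: str) -> int:
    """Write 'image_path label_path' lines for PaddleSeg's Dataset; return the count."""
    written = 0
    with open(out_path, "w") as f:
        for name in sorted(os.listdir(image_dir)):
            stem = os.path.splitext(name)[0]
            label = os.path.join(label_dir, stem + ".png")
            if os.path.exists(label):  # only pair images that have a label
                f.write(f"{os.path.join(image_dir, name)} {label}\n")
                written += 1
    return written

# Example (paths are assumptions):
# create_train_list("segDataset/horse/Images",
#                   "segDataset/horse/Annotations",
#                   "segDataset/horse/train_list.txt")
```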

4. Training with the Toolkit

  • 1. Extract the toolkit: it is mounted in this project at data/data102250/PaddleSeg-release-2.1.zip

# Extract the toolkit
!unzip -oq data/data102250/PaddleSeg-release-2.1.zip
# Rename the folder with the mv command
!mv PaddleSeg-release-2.1 PaddleSeg

  • 2. Pick a model. The baseline uses BiSeNet, configured at: PaddleSeg/configs/quick_start/bisenet_optic_disc_512x512_1k.yml

  • 3. Configure the model file

First change the dataset type used for loading the training data,
then configure the training dataset,
and configure the validation dataset the same way -- note that train_path becomes val_path.

Other models may require editing the dataset files under PaddleSeg/configs/_base_, but the dataset parameters to change are the same as in the bisenet config.

  • 4. Start training

Pass the config file to PaddleSeg's train.py to start training.

!python PaddleSeg/train.py\
--config PaddleSeg/configs/quick_start/bisenet_optic_disc_512x512_1k.yml\
--batch_size 4\
--iters 2000\
--learning_rate 0.01\
--save_interval 200\
--save_dir PaddleSeg/output\
--seed 2021\
--log_iters 20\
--do_eval\
--use_vdl

# --batch_size 4\       # batch size
# --iters 2000\         # number of iterations -- estimate it from the dataset size and batch size
# --learning_rate 0.01\ # learning rate
# --save_interval 200\  # checkpoint interval, counted in iterations
# --save_dir PaddleSeg/output\ # output directory
# --seed 2021\          # random seed used during training
# --log_iters 20\       # logging frequency, counted in iterations
# --do_eval\            # evaluate while training
# --use_vdl             # record VisualDL logs for visualization
# To resume training after an interruption, add:
# --resume_model model_dir

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:26: DeprecationWarning: np.int is a deprecated alias for the builtin int. To silence this warning, use int by itself. Doing this will not modify any behavior and is safe. When replacing np.int, you may wish to use e.g. np.int64 or np.int32 to specify the precision. If you wish to review your current use, check the release note link for additional information.

Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

def convert_to_list(value, n, name, dtype=np.int):

/home/aistudio/PaddleSeg/paddleseg/cvlibs/param_init.py:89: DeprecationWarning: invalid escape sequence \s

"""

/home/aistudio/PaddleSeg/paddleseg/models/losses/binary_cross_entropy_loss.py:82: DeprecationWarning: invalid escape sequence |

"""

/home/aistudio/PaddleSeg/paddleseg/models/losses/lovasz_loss.py:50: DeprecationWarning: invalid escape sequence \i

"""

/home/aistudio/PaddleSeg/paddleseg/models/losses/lovasz_loss.py:77: DeprecationWarning: invalid escape sequence \i

"""

/home/aistudio/PaddleSeg/paddleseg/models/losses/lovasz_loss.py:120: DeprecationWarning: invalid escape sequence \i

"""

2023-02-12 15:36:46 [INFO]

------------Environment Information-------------

platform: Linux-4.15.0-140-generic-x86_64-with-debian-stretch-sid

Python: 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]

Paddle compiled with cuda: True

NVCC: Cuda compilation tools, release 10.1, V10.1.243

cudnn: 7.6

GPUs used: 1

CUDA_VISIBLE_DEVICES: None

GPU: ['GPU 0: Tesla V100-SXM2-16GB']

GCC: gcc (Ubuntu 7.5.0-3ubuntu1~16.04) 7.5.0

PaddlePaddle: 2.0.2

OpenCV: 4.6.0

------------------------------------------------

2023-02-12 15:36:46 [INFO]

---------------Config Information---------------

batch_size: 4

iters: 2000

loss:

coef:

- 1

- 1

- 1

- 1

- 1

types:

- ignore_index: 255

type: CrossEntropyLoss

lr_scheduler:

end_lr: 0

learning_rate: 0.01

power: 0.9

type: PolynomialDecay

model:

pretrained: null

type: BiSeNetV2

optimizer:

momentum: 0.9

type: sgd

weight_decay: 4.0e-05

train_dataset:

dataset_root: segDataset/horse

mode: train

num_classes: 2

train_path: segDataset/horse/train_list.txt

transforms:

- target_size:

- 512

- 512

type: Resize

- type: RandomHorizontalFlip

- type: Normalize

type: Dataset

val_dataset:

dataset_root: segDataset/horse

mode: val

num_classes: 2

transforms:

- target_size:

- 512

- 512

type: Resize

- type: Normalize

type: Dataset

val_path: segDataset/horse/train_list.txt

------------------------------------------------

W0212 15:36:47.001350 833 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1

W0212 15:36:47.001399 833 device_context.cc:372] device: 0, cuDNN Version: 7.6.

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dataloader/dataloader_iter.py:89: DeprecationWarning: np.bool is a deprecated alias for the builtin bool. To silence this warning, use bool by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.bool_ here.

Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

if isinstance(slot[0], (np.ndarray, np.bool, numbers.Number)):

2023-02-12 15:36:52 [INFO] [TRAIN] epoch: 1, iter: 20/2000, loss: 3.0839, lr: 0.009914, batch_cost: 0.1106, reader_cost: 0.01121, ips: 36.1672 samples/sec | ETA 00:03:38

2023-02-12 15:36:54 [INFO] [TRAIN] epoch: 1, iter: 40/2000, loss: 2.5048, lr: 0.009824, batch_cost: 0.0954, reader_cost: 0.00009, ips: 41.9385 samples/sec | ETA 00:03:06

2023-02-12 15:36:56 [INFO] [TRAIN] epoch: 1, iter: 60/2000, loss: 2.1568, lr: 0.009734, batch_cost: 0.1020, reader_cost: 0.00009, ips: 39.2282 samples/sec | ETA 00:03:17

2023-02-12 15:36:58 [INFO] [TRAIN] epoch: 1, iter: 80/2000, loss: 2.1561, lr: 0.009644, batch_cost: 0.0917, reader_cost: 0.00008, ips: 43.6023 samples/sec | ETA 00:02:56

2023-02-12 15:37:00 [INFO] [TRAIN] epoch: 2, iter: 100/2000, loss: 1.9445, lr: 0.009553, batch_cost: 0.1021, reader_cost: 0.00376, ips: 39.1681 samples/sec | ETA 00:03:14

2023-02-12 15:37:02 [INFO] [TRAIN] epoch: 2, iter: 120/2000, loss: 1.9663, lr: 0.009463, batch_cost: 0.0939, reader_cost: 0.00007, ips: 42.6185 samples/sec | ETA 00:02:56

2023-02-12 15:37:04 [INFO] [TRAIN] epoch: 2, iter: 140/2000, loss: 1.7642, lr: 0.009372, batch_cost: 0.0910, reader_cost: 0.00007, ips: 43.9580 samples/sec | ETA 00:02:49

2023-02-12 15:37:05 [INFO] [TRAIN] epoch: 2, iter: 160/2000, loss: 2.0309, lr: 0.009282, batch_cost: 0.0906, reader_cost: 0.00008, ips: 44.1610 samples/sec | ETA 00:02:46

2023-02-12 15:37:07 [INFO] [TRAIN] epoch: 3, iter: 180/2000, loss: 1.7682, lr: 0.009191, batch_cost: 0.1000, reader_cost: 0.00358, ips: 40.0115 samples/sec | ETA 00:03:01

2023-02-12 15:37:09 [INFO] [TRAIN] epoch: 3, iter: 200/2000, loss: 1.8372, lr: 0.009100, batch_cost: 0.0910, reader_cost: 0.00008, ips: 43.9602 samples/sec | ETA 00:02:43

2023-02-12 15:37:09 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.INT32, but right dtype is VarType.BOOL, the right dtype will convert to VarType.INT32

format(lhs_dtype, rhs_dtype, lhs_dtype))

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.INT64, but right dtype is VarType.BOOL, the right dtype will convert to VarType.INT64

format(lhs_dtype, rhs_dtype, lhs_dtype))

328/328 [==============================] - 8s 25ms/step - batch_cost: 0.0247 - reader cost: 1.1265e-

2023-02-12 15:37:17 [INFO] [EVAL] #Images: 328 mIoU: 0.7197 Acc: 0.8703 Kappa: 0.6628

2023-02-12 15:37:17 [INFO] [EVAL] Class IoU:

[0.839 0.6004]

2023-02-12 15:37:17 [INFO] [EVAL] Class Acc:

[0.9108 0.7541]

2023-02-12 15:37:18 [INFO] [EVAL] The model with the best validation mIoU (0.7197) was saved at iter 200.

2023-02-12 15:37:20 [INFO] [TRAIN] epoch: 3, iter: 220/2000, loss: 1.8859, lr: 0.009009, batch_cost: 0.0923, reader_cost: 0.00008, ips: 43.3261 samples/sec | ETA 00:02:44

2023-02-12 15:37:21 [INFO] [TRAIN] epoch: 3, iter: 240/2000, loss: 1.7648, lr: 0.008918, batch_cost: 0.0906, reader_cost: 0.00009, ips: 44.1446 samples/sec | ETA 00:02:39

2023-02-12 15:37:23 [INFO] [TRAIN] epoch: 4, iter: 260/2000, loss: 1.7697, lr: 0.008827, batch_cost: 0.0983, reader_cost: 0.00440, ips: 40.7023 samples/sec | ETA 00:02:50

2023-02-12 15:37:25 [INFO] [TRAIN] epoch: 4, iter: 280/2000, loss: 1.7950, lr: 0.008735, batch_cost: 0.0926, reader_cost: 0.00008, ips: 43.2007 samples/sec | ETA 00:02:39

2023-02-12 15:37:27 [INFO] [TRAIN] epoch: 4, iter: 300/2000, loss: 1.7281, lr: 0.008644, batch_cost: 0.0917, reader_cost: 0.00009, ips: 43.6295 samples/sec | ETA 00:02:35

2023-02-12 15:37:29 [INFO] [TRAIN] epoch: 4, iter: 320/2000, loss: 1.5692, lr: 0.008552, batch_cost: 0.0916, reader_cost: 0.00008, ips: 43.6913 samples/sec | ETA 00:02:33

2023-02-12 15:37:31 [INFO] [TRAIN] epoch: 5, iter: 340/2000, loss: 1.6022, lr: 0.008461, batch_cost: 0.0950, reader_cost: 0.00434, ips: 42.1123 samples/sec | ETA 00:02:37

2023-02-12 15:37:33 [INFO] [TRAIN] epoch: 5, iter: 360/2000, loss: 1.5971, lr: 0.008369, batch_cost: 0.0930, reader_cost: 0.00008, ips: 42.9961 samples/sec | ETA 00:02:32

2023-02-12 15:37:35 [INFO] [TRAIN] epoch: 5, iter: 380/2000, loss: 1.6144, lr: 0.008277, batch_cost: 0.0917, reader_cost: 0.00008, ips: 43.6379 samples/sec | ETA 00:02:28

2023-02-12 15:37:36 [INFO] [TRAIN] epoch: 5, iter: 400/2000, loss: 1.6662, lr: 0.008185, batch_cost: 0.0949, reader_cost: 0.00008, ips: 42.1340 samples/sec | ETA 00:02:31

2023-02-12 15:37:36 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 9s 26ms/step - batch_cost: 0.0260 - reader cost: 1.0770e-

2023-02-12 15:37:45 [INFO] [EVAL] #Images: 328 mIoU: 0.7923 Acc: 0.9069 Kappa: 0.7629

2023-02-12 15:37:45 [INFO] [EVAL] Class IoU:

[0.8803 0.7043]

2023-02-12 15:37:45 [INFO] [EVAL] Class Acc:

[0.9459 0.8044]

2023-02-12 15:37:45 [INFO] [EVAL] The model with the best validation mIoU (0.7923) was saved at iter 400.

2023-02-12 15:37:47 [INFO] [TRAIN] epoch: 6, iter: 420/2000, loss: 1.6809, lr: 0.008093, batch_cost: 0.0951, reader_cost: 0.00342, ips: 42.0401 samples/sec | ETA 00:02:30

2023-02-12 15:37:49 [INFO] [TRAIN] epoch: 6, iter: 440/2000, loss: 1.5039, lr: 0.008001, batch_cost: 0.1029, reader_cost: 0.00041, ips: 38.8867 samples/sec | ETA 00:02:40

2023-02-12 15:37:52 [INFO] [TRAIN] epoch: 6, iter: 460/2000, loss: 1.7104, lr: 0.007909, batch_cost: 0.1188, reader_cost: 0.00009, ips: 33.6626 samples/sec | ETA 00:03:02

2023-02-12 15:37:54 [INFO] [TRAIN] epoch: 6, iter: 480/2000, loss: 1.5400, lr: 0.007816, batch_cost: 0.1070, reader_cost: 0.00008, ips: 37.3963 samples/sec | ETA 00:02:42

2023-02-12 15:37:56 [INFO] [TRAIN] epoch: 7, iter: 500/2000, loss: 1.5530, lr: 0.007724, batch_cost: 0.1231, reader_cost: 0.00380, ips: 32.5009 samples/sec | ETA 00:03:04

2023-02-12 15:37:58 [INFO] [TRAIN] epoch: 7, iter: 520/2000, loss: 1.6878, lr: 0.007631, batch_cost: 0.0998, reader_cost: 0.00010, ips: 40.0984 samples/sec | ETA 00:02:27

2023-02-12 15:38:00 [INFO] [TRAIN] epoch: 7, iter: 540/2000, loss: 1.4482, lr: 0.007538, batch_cost: 0.0915, reader_cost: 0.00008, ips: 43.6968 samples/sec | ETA 00:02:13

2023-02-12 15:38:02 [INFO] [TRAIN] epoch: 7, iter: 560/2000, loss: 1.6344, lr: 0.007445, batch_cost: 0.0916, reader_cost: 0.00007, ips: 43.6546 samples/sec | ETA 00:02:11

2023-02-12 15:38:04 [INFO] [TRAIN] epoch: 8, iter: 580/2000, loss: 1.4450, lr: 0.007352, batch_cost: 0.0940, reader_cost: 0.00384, ips: 42.5640 samples/sec | ETA 00:02:13

2023-02-12 15:38:06 [INFO] [TRAIN] epoch: 8, iter: 600/2000, loss: 1.5000, lr: 0.007259, batch_cost: 0.0919, reader_cost: 0.00008, ips: 43.5249 samples/sec | ETA 00:02:08

2023-02-12 15:38:06 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 8s 24ms/step - batch_cost: 0.0242 - reader cost: 1.0834e-

2023-02-12 15:38:14 [INFO] [EVAL] #Images: 328 mIoU: 0.8085 Acc: 0.9153 Kappa: 0.7837

2023-02-12 15:38:14 [INFO] [EVAL] Class IoU:

[0.8907 0.7263]

2023-02-12 15:38:14 [INFO] [EVAL] Class Acc:

[0.9503 0.8222]

2023-02-12 15:38:14 [INFO] [EVAL] The model with the best validation mIoU (0.8085) was saved at iter 600.

2023-02-12 15:38:16 [INFO] [TRAIN] epoch: 8, iter: 620/2000, loss: 1.5132, lr: 0.007166, batch_cost: 0.0937, reader_cost: 0.00009, ips: 42.6996 samples/sec | ETA 00:02:09

2023-02-12 15:38:18 [INFO] [TRAIN] epoch: 8, iter: 640/2000, loss: 1.4764, lr: 0.007072, batch_cost: 0.0917, reader_cost: 0.00008, ips: 43.5979 samples/sec | ETA 00:02:04

2023-02-12 15:38:20 [INFO] [TRAIN] epoch: 9, iter: 660/2000, loss: 1.5203, lr: 0.006978, batch_cost: 0.0946, reader_cost: 0.00350, ips: 42.2742 samples/sec | ETA 00:02:06

2023-02-12 15:38:22 [INFO] [TRAIN] epoch: 9, iter: 680/2000, loss: 1.4643, lr: 0.006885, batch_cost: 0.0987, reader_cost: 0.00008, ips: 40.5222 samples/sec | ETA 00:02:10

2023-02-12 15:38:24 [INFO] [TRAIN] epoch: 9, iter: 700/2000, loss: 1.4382, lr: 0.006791, batch_cost: 0.1184, reader_cost: 0.00008, ips: 33.7907 samples/sec | ETA 00:02:33

2023-02-12 15:38:26 [INFO] [TRAIN] epoch: 9, iter: 720/2000, loss: 1.4574, lr: 0.006697, batch_cost: 0.0920, reader_cost: 0.00008, ips: 43.4778 samples/sec | ETA 00:01:57

2023-02-12 15:38:28 [INFO] [TRAIN] epoch: 10, iter: 740/2000, loss: 1.4501, lr: 0.006603, batch_cost: 0.0961, reader_cost: 0.00379, ips: 41.6446 samples/sec | ETA 00:02:01

2023-02-12 15:38:30 [INFO] [TRAIN] epoch: 10, iter: 760/2000, loss: 1.4092, lr: 0.006508, batch_cost: 0.0927, reader_cost: 0.00008, ips: 43.1403 samples/sec | ETA 00:01:54

2023-02-12 15:38:32 [INFO] [TRAIN] epoch: 10, iter: 780/2000, loss: 1.4574, lr: 0.006414, batch_cost: 0.1044, reader_cost: 0.00007, ips: 38.3167 samples/sec | ETA 00:02:07

2023-02-12 15:38:34 [INFO] [TRAIN] epoch: 10, iter: 800/2000, loss: 1.4963, lr: 0.006319, batch_cost: 0.0904, reader_cost: 0.00007, ips: 44.2594 samples/sec | ETA 00:01:48

2023-02-12 15:38:34 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 8s 25ms/step - batch_cost: 0.0248 - reader cost: 1.0473e-

2023-02-12 15:38:42 [INFO] [EVAL] #Images: 328 mIoU: 0.8292 Acc: 0.9278 Kappa: 0.8093

2023-02-12 15:38:42 [INFO] [EVAL] Class IoU:

[0.9078 0.7506]

2023-02-12 15:38:42 [INFO] [EVAL] Class Acc:

[0.9421 0.8839]

2023-02-12 15:38:42 [INFO] [EVAL] The model with the best validation mIoU (0.8292) was saved at iter 800.

2023-02-12 15:38:44 [INFO] [TRAIN] epoch: 10, iter: 820/2000, loss: 1.4318, lr: 0.006224, batch_cost: 0.0997, reader_cost: 0.00007, ips: 40.1288 samples/sec | ETA 00:01:57

2023-02-12 15:38:46 [INFO] [TRAIN] epoch: 11, iter: 840/2000, loss: 1.3246, lr: 0.006129, batch_cost: 0.0975, reader_cost: 0.00400, ips: 41.0200 samples/sec | ETA 00:01:53

2023-02-12 15:38:48 [INFO] [TRAIN] epoch: 11, iter: 860/2000, loss: 1.4408, lr: 0.006034, batch_cost: 0.0914, reader_cost: 0.00008, ips: 43.7534 samples/sec | ETA 00:01:44

2023-02-12 15:38:50 [INFO] [TRAIN] epoch: 11, iter: 880/2000, loss: 1.4807, lr: 0.005939, batch_cost: 0.0935, reader_cost: 0.00007, ips: 42.7694 samples/sec | ETA 00:01:44

2023-02-12 15:38:52 [INFO] [TRAIN] epoch: 11, iter: 900/2000, loss: 1.4006, lr: 0.005844, batch_cost: 0.0918, reader_cost: 0.00007, ips: 43.5540 samples/sec | ETA 00:01:41

2023-02-12 15:38:54 [INFO] [TRAIN] epoch: 12, iter: 920/2000, loss: 1.3515, lr: 0.005748, batch_cost: 0.0995, reader_cost: 0.00392, ips: 40.1967 samples/sec | ETA 00:01:47

2023-02-12 15:38:55 [INFO] [TRAIN] epoch: 12, iter: 940/2000, loss: 1.4768, lr: 0.005652, batch_cost: 0.0954, reader_cost: 0.00012, ips: 41.9193 samples/sec | ETA 00:01:41

2023-02-12 15:38:57 [INFO] [TRAIN] epoch: 12, iter: 960/2000, loss: 1.4356, lr: 0.005556, batch_cost: 0.0909, reader_cost: 0.00008, ips: 43.9877 samples/sec | ETA 00:01:34

2023-02-12 15:38:59 [INFO] [TRAIN] epoch: 12, iter: 980/2000, loss: 1.3687, lr: 0.005460, batch_cost: 0.1009, reader_cost: 0.00025, ips: 39.6492 samples/sec | ETA 00:01:42

2023-02-12 15:39:01 [INFO] [TRAIN] epoch: 13, iter: 1000/2000, loss: 1.4323, lr: 0.005364, batch_cost: 0.1000, reader_cost: 0.00779, ips: 39.9885 samples/sec | ETA 00:01:40

2023-02-12 15:39:01 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 8s 25ms/step - batch_cost: 0.0248 - reader cost: 1.0690e-

2023-02-12 15:39:10 [INFO] [EVAL] #Images: 328 mIoU: 0.8391 Acc: 0.9309 Kappa: 0.8217

2023-02-12 15:39:10 [INFO] [EVAL] Class IoU:

[0.9104 0.7678]

2023-02-12 15:39:10 [INFO] [EVAL] Class Acc:

[0.9559 0.8616]

2023-02-12 15:39:10 [INFO] [EVAL] The model with the best validation mIoU (0.8391) was saved at iter 1000.

2023-02-12 15:39:12 [INFO] [TRAIN] epoch: 13, iter: 1020/2000, loss: 1.3648, lr: 0.005267, batch_cost: 0.0939, reader_cost: 0.00009, ips: 42.6211 samples/sec | ETA 00:01:31

2023-02-12 15:39:14 [INFO] [TRAIN] epoch: 13, iter: 1040/2000, loss: 1.3887, lr: 0.005170, batch_cost: 0.0920, reader_cost: 0.00008, ips: 43.4610 samples/sec | ETA 00:01:28

2023-02-12 15:39:16 [INFO] [TRAIN] epoch: 13, iter: 1060/2000, loss: 1.3997, lr: 0.005073, batch_cost: 0.0936, reader_cost: 0.00028, ips: 42.7269 samples/sec | ETA 00:01:28

2023-02-12 15:39:18 [INFO] [TRAIN] epoch: 14, iter: 1080/2000, loss: 1.3455, lr: 0.004976, batch_cost: 0.1142, reader_cost: 0.00351, ips: 35.0174 samples/sec | ETA 00:01:45

2023-02-12 15:39:20 [INFO] [TRAIN] epoch: 14, iter: 1100/2000, loss: 1.3160, lr: 0.004879, batch_cost: 0.0923, reader_cost: 0.00008, ips: 43.3207 samples/sec | ETA 00:01:23

2023-02-12 15:39:22 [INFO] [TRAIN] epoch: 14, iter: 1120/2000, loss: 1.4844, lr: 0.004781, batch_cost: 0.1273, reader_cost: 0.03600, ips: 31.4196 samples/sec | ETA 00:01:52

2023-02-12 15:39:24 [INFO] [TRAIN] epoch: 14, iter: 1140/2000, loss: 1.3197, lr: 0.004684, batch_cost: 0.1033, reader_cost: 0.00010, ips: 38.7075 samples/sec | ETA 00:01:28

2023-02-12 15:39:27 [INFO] [TRAIN] epoch: 15, iter: 1160/2000, loss: 1.3779, lr: 0.004586, batch_cost: 0.1312, reader_cost: 0.00473, ips: 30.4925 samples/sec | ETA 00:01:50

2023-02-12 15:39:29 [INFO] [TRAIN] epoch: 15, iter: 1180/2000, loss: 1.3469, lr: 0.004487, batch_cost: 0.0922, reader_cost: 0.00008, ips: 43.3729 samples/sec | ETA 00:01:15

2023-02-12 15:39:31 [INFO] [TRAIN] epoch: 15, iter: 1200/2000, loss: 1.2963, lr: 0.004389, batch_cost: 0.0921, reader_cost: 0.00008, ips: 43.4542 samples/sec | ETA 00:01:13

2023-02-12 15:39:31 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 8s 25ms/step - batch_cost: 0.0250 - reader cost: 1.0326e-

2023-02-12 15:39:39 [INFO] [EVAL] #Images: 328 mIoU: 0.8470 Acc: 0.9346 Kappa: 0.8314

2023-02-12 15:39:39 [INFO] [EVAL] Class IoU:

[0.9151 0.779 ]

2023-02-12 15:39:39 [INFO] [EVAL] Class Acc:

[0.9584 0.8687]

2023-02-12 15:39:39 [INFO] [EVAL] The model with the best validation mIoU (0.8470) was saved at iter 1200.

2023-02-12 15:39:41 [INFO] [TRAIN] epoch: 15, iter: 1220/2000, loss: 1.3381, lr: 0.004290, batch_cost: 0.0927, reader_cost: 0.00009, ips: 43.1343 samples/sec | ETA 00:01:12

2023-02-12 15:39:43 [INFO] [TRAIN] epoch: 16, iter: 1240/2000, loss: 1.2681, lr: 0.004191, batch_cost: 0.0989, reader_cost: 0.00476, ips: 40.4601 samples/sec | ETA 00:01:15

2023-02-12 15:39:45 [INFO] [TRAIN] epoch: 16, iter: 1260/2000, loss: 1.3759, lr: 0.004092, batch_cost: 0.1041, reader_cost: 0.00010, ips: 38.4265 samples/sec | ETA 00:01:17

2023-02-12 15:39:47 [INFO] [TRAIN] epoch: 16, iter: 1280/2000, loss: 1.3259, lr: 0.003992, batch_cost: 0.1061, reader_cost: 0.00009, ips: 37.6880 samples/sec | ETA 00:01:16

2023-02-12 15:39:49 [INFO] [TRAIN] epoch: 16, iter: 1300/2000, loss: 1.4755, lr: 0.003892, batch_cost: 0.1011, reader_cost: 0.00011, ips: 39.5510 samples/sec | ETA 00:01:10

2023-02-12 15:39:51 [INFO] [TRAIN] epoch: 17, iter: 1320/2000, loss: 1.3906, lr: 0.003792, batch_cost: 0.1000, reader_cost: 0.00363, ips: 40.0013 samples/sec | ETA 00:01:07

2023-02-12 15:39:53 [INFO] [TRAIN] epoch: 17, iter: 1340/2000, loss: 1.2680, lr: 0.003692, batch_cost: 0.0950, reader_cost: 0.00009, ips: 42.1000 samples/sec | ETA 00:01:02

2023-02-12 15:39:55 [INFO] [TRAIN] epoch: 17, iter: 1360/2000, loss: 1.3732, lr: 0.003591, batch_cost: 0.0912, reader_cost: 0.00009, ips: 43.8383 samples/sec | ETA 00:00:58

2023-02-12 15:39:57 [INFO] [TRAIN] epoch: 17, iter: 1380/2000, loss: 1.3466, lr: 0.003490, batch_cost: 0.0916, reader_cost: 0.00008, ips: 43.6793 samples/sec | ETA 00:00:56

2023-02-12 15:39:59 [INFO] [TRAIN] epoch: 18, iter: 1400/2000, loss: 1.2885, lr: 0.003389, batch_cost: 0.0933, reader_cost: 0.00382, ips: 42.8952 samples/sec | ETA 00:00:55

2023-02-12 15:39:59 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 9s 26ms/step - batch_cost: 0.0259 - reader cost: 1.0569e-0

2023-02-12 15:40:07 [INFO] [EVAL] #Images: 328 mIoU: 0.8537 Acc: 0.9391 Kappa: 0.8393

2023-02-12 15:40:07 [INFO] [EVAL] Class IoU:

[0.9215 0.7859]

2023-02-12 15:40:07 [INFO] [EVAL] Class Acc:

[0.9504 0.9046]

2023-02-12 15:40:08 [INFO] [EVAL] The model with the best validation mIoU (0.8537) was saved at iter 1400.

2023-02-12 15:40:10 [INFO] [TRAIN] epoch: 18, iter: 1420/2000, loss: 1.4176, lr: 0.003287, batch_cost: 0.0930, reader_cost: 0.00007, ips: 43.0090 samples/sec | ETA 00:00:53

2023-02-12 15:40:11 [INFO] [TRAIN] epoch: 18, iter: 1440/2000, loss: 1.3012, lr: 0.003185, batch_cost: 0.0978, reader_cost: 0.00007, ips: 40.9043 samples/sec | ETA 00:00:54

2023-02-12 15:40:14 [INFO] [TRAIN] epoch: 18, iter: 1460/2000, loss: 1.3627, lr: 0.003083, batch_cost: 0.1175, reader_cost: 0.00008, ips: 34.0365 samples/sec | ETA 00:01:03

2023-02-12 15:40:16 [INFO] [TRAIN] epoch: 19, iter: 1480/2000, loss: 1.2010, lr: 0.002980, batch_cost: 0.1231, reader_cost: 0.00332, ips: 32.4884 samples/sec | ETA 00:01:04

2023-02-12 15:40:19 [INFO] [TRAIN] epoch: 19, iter: 1500/2000, loss: 1.3420, lr: 0.002877, batch_cost: 0.1195, reader_cost: 0.00009, ips: 33.4605 samples/sec | ETA 00:00:59

2023-02-12 15:40:21 [INFO] [TRAIN] epoch: 19, iter: 1520/2000, loss: 1.3371, lr: 0.002773, batch_cost: 0.1055, reader_cost: 0.00009, ips: 37.9326 samples/sec | ETA 00:00:50

2023-02-12 15:40:23 [INFO] [TRAIN] epoch: 19, iter: 1540/2000, loss: 1.2492, lr: 0.002669, batch_cost: 0.0916, reader_cost: 0.00008, ips: 43.6735 samples/sec | ETA 00:00:42

2023-02-12 15:40:24 [INFO] [TRAIN] epoch: 20, iter: 1560/2000, loss: 1.3581, lr: 0.002565, batch_cost: 0.0938, reader_cost: 0.00374, ips: 42.6621 samples/sec | ETA 00:00:41

2023-02-12 15:40:26 [INFO] [TRAIN] epoch: 20, iter: 1580/2000, loss: 1.2089, lr: 0.002460, batch_cost: 0.0951, reader_cost: 0.00029, ips: 42.0732 samples/sec | ETA 00:00:39

2023-02-12 15:40:28 [INFO] [TRAIN] epoch: 20, iter: 1600/2000, loss: 1.3466, lr: 0.002355, batch_cost: 0.0903, reader_cost: 0.00008, ips: 44.2772 samples/sec | ETA 00:00:36

2023-02-12 15:40:28 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 8s 25ms/step - batch_cost: 0.0244 - reader cost: 1.0305e-0

2023-02-12 15:40:36 [INFO] [EVAL] #Images: 328 mIoU: 0.8563 Acc: 0.9403 Kappa: 0.8424

2023-02-12 15:40:36 [INFO] [EVAL] Class IoU:

[0.923 0.7895]

2023-02-12 15:40:36 [INFO] [EVAL] Class Acc:

[0.9509 0.9077]

2023-02-12 15:40:37 [INFO] [EVAL] The model with the best validation mIoU (0.8563) was saved at iter 1600.

2023-02-12 15:40:39 [INFO] [TRAIN] epoch: 20, iter: 1620/2000, loss: 1.3142, lr: 0.002249, batch_cost: 0.1076, reader_cost: 0.00007, ips: 37.1770 samples/sec | ETA 00:00:40

2023-02-12 15:40:41 [INFO] [TRAIN] epoch: 20, iter: 1640/2000, loss: 1.3157, lr: 0.002142, batch_cost: 0.0911, reader_cost: 0.00025, ips: 43.9270 samples/sec | ETA 00:00:32

2023-02-12 15:40:43 [INFO] [TRAIN] epoch: 21, iter: 1660/2000, loss: 1.3624, lr: 0.002035, batch_cost: 0.1004, reader_cost: 0.00455, ips: 39.8283 samples/sec | ETA 00:00:34

2023-02-12 15:40:45 [INFO] [TRAIN] epoch: 21, iter: 1680/2000, loss: 1.2859, lr: 0.001927, batch_cost: 0.0928, reader_cost: 0.00008, ips: 43.0993 samples/sec | ETA 00:00:29

2023-02-12 15:40:47 [INFO] [TRAIN] epoch: 21, iter: 1700/2000, loss: 1.2057, lr: 0.001819, batch_cost: 0.1010, reader_cost: 0.00010, ips: 39.6040 samples/sec | ETA 00:00:30

2023-02-12 15:40:48 [INFO] [TRAIN] epoch: 21, iter: 1720/2000, loss: 1.2932, lr: 0.001710, batch_cost: 0.0940, reader_cost: 0.00008, ips: 42.5660 samples/sec | ETA 00:00:26

2023-02-12 15:40:50 [INFO] [TRAIN] epoch: 22, iter: 1740/2000, loss: 1.2963, lr: 0.001600, batch_cost: 0.0985, reader_cost: 0.00425, ips: 40.6139 samples/sec | ETA 00:00:25

2023-02-12 15:40:52 [INFO] [TRAIN] epoch: 22, iter: 1760/2000, loss: 1.3161, lr: 0.001489, batch_cost: 0.0992, reader_cost: 0.00008, ips: 40.3095 samples/sec | ETA 00:00:23

2023-02-12 15:40:54 [INFO] [TRAIN] epoch: 22, iter: 1780/2000, loss: 1.2861, lr: 0.001377, batch_cost: 0.0948, reader_cost: 0.00055, ips: 42.1946 samples/sec | ETA 00:00:20

2023-02-12 15:40:56 [INFO] [TRAIN] epoch: 22, iter: 1800/2000, loss: 1.3267, lr: 0.001265, batch_cost: 0.0902, reader_cost: 0.00008, ips: 44.3566 samples/sec | ETA 00:00:18

2023-02-12 15:40:56 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 8s 25ms/step - batch_cost: 0.0249 - reader cost: 1.1226e-

2023-02-12 15:41:04 [INFO] [EVAL] #Images: 328 mIoU: 0.8582 Acc: 0.9413 Kappa: 0.8447

2023-02-12 15:41:04 [INFO] [EVAL] Class IoU:

[0.9244 0.792 ]

2023-02-12 15:41:04 [INFO] [EVAL] Class Acc:

[0.9506 0.9126]

2023-02-12 15:41:05 [INFO] [EVAL] The model with the best validation mIoU (0.8582) was saved at iter 1800.

2023-02-12 15:41:07 [INFO] [TRAIN] epoch: 23, iter: 1820/2000, loss: 1.2467, lr: 0.001151, batch_cost: 0.0967, reader_cost: 0.00349, ips: 41.3716 samples/sec | ETA 00:00:17

2023-02-12 15:41:08 [INFO] [TRAIN] epoch: 23, iter: 1840/2000, loss: 1.2936, lr: 0.001036, batch_cost: 0.0925, reader_cost: 0.00008, ips: 43.2324 samples/sec | ETA 00:00:14

2023-02-12 15:41:10 [INFO] [TRAIN] epoch: 23, iter: 1860/2000, loss: 1.3253, lr: 0.000919, batch_cost: 0.0911, reader_cost: 0.00008, ips: 43.8919 samples/sec | ETA 00:00:12

2023-02-12 15:41:12 [INFO] [TRAIN] epoch: 23, iter: 1880/2000, loss: 1.1602, lr: 0.000801, batch_cost: 0.0932, reader_cost: 0.00008, ips: 42.9291 samples/sec | ETA 00:00:11

2023-02-12 15:41:14 [INFO] [TRAIN] epoch: 24, iter: 1900/2000, loss: 1.2222, lr: 0.000681, batch_cost: 0.0979, reader_cost: 0.00368, ips: 40.8422 samples/sec | ETA 00:00:09

2023-02-12 15:41:16 [INFO] [TRAIN] epoch: 24, iter: 1920/2000, loss: 1.2447, lr: 0.000558, batch_cost: 0.0932, reader_cost: 0.00008, ips: 42.8980 samples/sec | ETA 00:00:07

2023-02-12 15:41:18 [INFO] [TRAIN] epoch: 24, iter: 1940/2000, loss: 1.3214, lr: 0.000432, batch_cost: 0.0921, reader_cost: 0.00009, ips: 43.4075 samples/sec | ETA 00:00:05

2023-02-12 15:41:20 [INFO] [TRAIN] epoch: 24, iter: 1960/2000, loss: 1.3017, lr: 0.000302, batch_cost: 0.0951, reader_cost: 0.00008, ips: 42.0653 samples/sec | ETA 00:00:03

2023-02-12 15:41:22 [INFO] [TRAIN] epoch: 25, iter: 1980/2000, loss: 1.2300, lr: 0.000166, batch_cost: 0.0964, reader_cost: 0.00418, ips: 41.4881 samples/sec | ETA 00:00:01

2023-02-12 15:41:24 [INFO] [TRAIN] epoch: 25, iter: 2000/2000, loss: 1.3523, lr: 0.000011, batch_cost: 0.0931, reader_cost: 0.00010, ips: 42.9703 samples/sec | ETA 00:00:00

2023-02-12 15:41:24 [INFO] Start evaluating (total_samples: 328, total_iters: 328)...

328/328 [==============================] - 8s 26ms/step - batch_cost: 0.0255 - reader cost: 1.0482e-

2023-02-12 15:41:32 [INFO] [EVAL] #Images: 328 mIoU: 0.8616 Acc: 0.9425 Kappa: 0.8488

2023-02-12 15:41:32 [INFO] [EVAL] Class IoU:

[0.9256 0.7976]

2023-02-12 15:41:32 [INFO] [EVAL] Class Acc:

[0.9544 0.9066]

2023-02-12 15:41:32 [INFO] [EVAL] The model with the best validation mIoU (0.8616) was saved at iter 2000.

<class 'paddle.nn.layer.conv.Conv2D'>'s flops has been counted

Customize Function has been applied to <class 'paddle.nn.layer.norm.SyncBatchNorm'>

Cannot find suitable count function for <class 'paddle.nn.layer.pooling.MaxPool2D'>. Treat it as zero FLOPs.

<class 'paddle.nn.layer.pooling.AdaptiveAvgPool2D'>'s flops has been counted

<class 'paddle.nn.layer.pooling.AvgPool2D'>'s flops has been counted

Cannot find suitable count function for <class 'paddle.nn.layer.activation.Sigmoid'>. Treat it as zero FLOPs.

<class 'paddle.nn.layer.common.Dropout'>'s flops has been counted

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/tensor/creation.py:143: DeprecationWarning: np.object is a deprecated alias for the builtin object. To silence this warning, use object by itself. Doing this will not modify any behavior and is safe.

Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

if data.dtype == np.object:

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.FP32, but right dtype is VarType.INT32, the right dtype will convert to VarType.FP32

format(lhs_dtype, rhs_dtype, lhs_dtype))

Total Flops: 8061050880 Total Params: 2328346

# Run evaluation on its own -- this is what --do_eval did during training
!python PaddleSeg/val.py\
--config PaddleSeg/configs/quick_start/bisenet_optic_disc_512x512_1k.yml\
--model_path PaddleSeg/output/best_model/model.pdparams
# model_path: path to the model weights
  • 5. Run prediction

# Run prediction
!python PaddleSeg/predict.py\
--config PaddleSeg/configs/quick_start/bisenet_optic_disc_512x512_1k.yml\
--model_path PaddleSeg/output/best_model/model.pdparams\
--image_path segDataset/horse/Images\
--save_dir PaddleSeg/output/horse
# image_path: image or folder to predict -- here we predict directly on the training data
# save_dir: directory for the prediction results -- the results are saved as images

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:26: DeprecationWarning: np.int is a deprecated alias for the builtin int. To silence this warning, use int by itself. Doing this will not modify any behavior and is safe. When replacing np.int, you may wish to use e.g. np.int64 or np.int32 to specify the precision. If you wish to review your current use, check the release note link for additional information.

Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

def convert_to_list(value, n, name, dtype=np.int):

2023-02-12 15:42:07 [INFO]

---------------Config Information---------------

batch_size: 4

iters: 1000

loss:

coef:

- 1

- 1

- 1

- 1

- 1

types:

- type: CrossEntropyLoss

lr_scheduler:

end_lr: 0

learning_rate: 0.01

power: 0.9

type: PolynomialDecay

model:

pretrained: null

type: BiSeNetV2

optimizer:

momentum: 0.9

type: sgd

weight_decay: 4.0e-05

train_dataset:

dataset_root: segDataset/horse

mode: train

num_classes: 2

train_path: segDataset/horse/train_list.txt

transforms:

- target_size:

- 512

- 512

type: Resize

- type: RandomHorizontalFlip

- type: Normalize

type: Dataset

val_dataset:

dataset_root: segDataset/horse

mode: val

num_classes: 2

transforms:

- target_size:

- 512

- 512

type: Resize

- type: Normalize

type: Dataset

val_path: segDataset/horse/train_list.txt

------------------------------------------------

W0212 15:42:07.459519 1514 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1

W0212 15:42:07.459563 1514 device_context.cc:372] device: 0, cuDNN Version: 7.6.

2023-02-12 15:42:10 [INFO] Number of predict images = 328

2023-02-12 15:42:10 [INFO] Loading pretrained model from PaddleSeg/output/best_model/model.pdparams

2023-02-12 15:42:10 [INFO] There are 356/356 variables loaded into BiSeNetV2.

2023-02-12 15:42:10 [INFO] Start to predict...

328/328 [==============================] - 16s 49ms/st

5. Visualizing Prediction Results

PaddleSeg writes its prediction results as images to: PaddleSeg/output/horse

Two kinds of results are produced:

  • Masked images, i.e. the prediction overlaid on the original image -- in: PaddleSeg/output/horse/added_prediction

  • Pseudo-color images, i.e. the raw prediction rendered in color -- in: PaddleSeg/output/horse/pseudo_color_prediction

# Inspect the layout of the results folder
!tree PaddleSeg/output/horse -L 1

Show predictions from each of the two folders (download one or two images from each result folder to your machine, then upload them to the notebook).
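The added_prediction images are essentially the original image blended with the prediction; the same overlay can be reproduced for a quick inline check like this (a sketch; the file paths in the example are assumptions):

```python
from PIL import Image

def overlay_prediction(image_path: str, mask_path: str, alpha: float = 0.5) -> Image.Image:
    """Blend the original image with its pseudo-color prediction mask."""
    img = Image.open(image_path).convert("RGB")
    # Resize the mask to match the original, then alpha-blend the two.
    mask = Image.open(mask_path).convert("RGB").resize(img.size)
    return Image.blend(img, mask, alpha)

# Example (paths are assumptions):
# overlay_prediction("segDataset/horse/Images/horse001.jpg",
#                    "PaddleSeg/output/horse/pseudo_color_prediction/horse001.png")
```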

Upload instructions:

The results are shown below.

--- Prediction results for the horse dataset ---

Masked images:

Pseudo-color images:

Special note: when submitting with the horse dataset, do not showcase the predictions for horse242.jpg and horse242.png; doing so may be taken as evidence that you did not run the baseline's training and prediction steps yourself.

6. Submission Workflow

  1. Create a project version.

  2. (If you are up for it, tinker a bit more.) Optionally convert the notebook to markdown and publish it to GitHub yourself.

  3. (Be sure to create a project version first!) Make the project public.

7. Closing Words

Finally, I hope everyone completes this assignment and earns a spot in the studio -- it will be the start of your growth on the road of AI!

May your future selves keep chasing your dreams in the direction you love!

Once again, best wishes to everyone for a successful completion!
