Repo: https://github.com/jfzhang95/pytorch-deeplab-xception
Tool: https://github.com/wkentaro/labelme
Paper: https://arxiv.org/pdf/1706.05587.pdf
1. First, use https://github.com/wkentaro/labelme/blob/master/examples/instance_segmentation/labelme2coco.py to process the images and their labelme annotation files (JSON format), which sit in the same folder, with a command along the lines below. The script I used here is one I modified myself; feel free to grab it if you need it.
python labelme2coco.py --type val \
--directory ./data_190304/val2017 \
--output ./data_190304/annotations
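Conceptually, the conversion maps each labelme polygon into a COCO-style annotation entry (flat segmentation list, bounding box, area). A minimal sketch of that mapping follows; the helper name and the shoelace-area computation are mine for illustration, not from the actual script, which also handles images, categories, and IDs across the whole dataset:

```python
# Hypothetical helper: turn one labelme polygon into a COCO-style annotation dict.
def polygon_to_coco_ann(points, image_id, category_id, ann_id):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # COCO bbox format is [x, y, width, height]
    bbox = [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)]
    # Polygon area via the shoelace formula
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    area = abs(area) / 2.0
    # COCO segmentation is a flat [x1, y1, x2, y2, ...] list (one per polygon)
    segmentation = [coord for p in points for coord in p]
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,
        "segmentation": [segmentation],
        "bbox": bbox,
        "area": area,
        "iscrowd": 0,
    }

ann = polygon_to_coco_ann([[10, 10], [50, 10], [50, 30], [10, 30]],
                          image_id=1, category_id=1, ann_id=1)
print(ann["bbox"], ann["area"])  # [10, 10, 40, 20] 800.0
```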
2. Prepare the COCO folder structure, as shown below.
The items in the red box are what you need to prepare; the two folders whose names start with coco are redundant, and the .pth files are data files the code produces for training. Just put this coco folder inside pytorch-deeplab-xception.
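In case the screenshot of the folder layout does not survive copying, here is a sketch of the directories the steps above imply, assuming the standard COCO naming (an annotations folder alongside train2017/val2017 image folders); the exact names are an assumption and must match what the repo's dataloader expects:

```shell
# Assumed layout based on the standard COCO structure; adjust names to
# match the repo's dataloader if they differ.
mkdir -p coco/annotations
mkdir -p coco/train2017
mkdir -p coco/val2017
ls -R coco
```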
3. Set up the environment as described in the repo, then simply run bash train_coco.sh.
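Judging from the Namespace(...) printed at the top of the training log, train_coco.sh presumably wraps a call roughly like the following; the flag spellings are my guess reconstructed from the argparse output, so check train.py in the repo before relying on them:

```shell
# Presumed contents of train_coco.sh, reconstructed from the logged
# Namespace(...); flag names are assumptions.
python train.py --dataset coco \
    --backbone resnet \
    --lr 0.01 \
    --epochs 1000 \
    --batch-size 4 \
    --gpu-ids 0 \
    --workers 4 \
    --checkname deeplab-resnet \
    --eval-interval 1
```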
Training log:
0%| | 0/23 [00:00<?, ?it/s]Namespace(backbone='resnet', base_size=513, batch_size=4, checkname='deeplab-resnet', crop_size=513, cuda=True, dataset='coco', epochs=1000, eval_interval=1, freeze_bn=False, ft=False, gpu_ids=[0], loss_type='ce', lr=0.01, lr_scheduler='poly', momentum=0.9, nesterov=False, no_cuda=False, no_val=False, out_stride=16, resume=None, seed=1, start_epoch=0, sync_bn=False, test_batch_size=4, use_balanced_weights=False, use_sbd=True, weight_decay=0.0005, workers=4)
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Using poly LR Scheduler!
Starting Epoch: 0
Total Epoches: 1000
/usr/lib/python3.5/site-packages/torch/nn/_reduction.py:49: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
warnings.warn(warning.format(ret))
Train loss: 0.448: 0%| | 0/23 [00:35<?, ?it/s]
Train loss: 0.448: 4%|▍ | 1/23 [00:36<13:29, 36.78s/it]
Train loss: 0.376: 4%|▍ | 1/23 [00:38<13:29, 36.78s/it]
Train loss: 0.376: 9%|▊ | 2/23 [00:38<09:12, 26.29s/it]
Train loss: 0.327: 9%|▊ | 2/23 [00:39<09:12, 26.29s/it]
Train loss: 0.327: 13%|█▎ | 3/23 [00:40<06:17, 18.87s/it]
Train loss: 0.290: 13%|█▎ | 3/23 [00:40<06:17, 18.87s/it]
Train loss: 0.290: 17%|█▋ | 4/23 [00:40<04:13, 13.33s/it]
Train loss: 0.260: 17%|█▋ | 4/23 [01:14<04:13, 13.33s/it]
Train loss: 0.260: 22%|██▏ | 5/23 [01:15<05:56, 19.82s/it]
Train loss: 0.238: 22%|██▏ | 5/23 [01:15<05:56, 19.82s/it]
Train loss: 0.238: 26%|██▌ | 6/23 [01:15<03:57, 14.00s/it]
Train loss: 0.221: 26%|██▌ | 6/23 [01:16<03:57, 14.00s/it]
Train loss: 0.221: 30%|███ | 7/23 [01:17<02:43, 10.20s/it]
Train loss: 0.207: 30%|███ | 7/23 [01:17<02:43, 10.20s/it]
Train loss: 0.207: 35%|███▍ | 8/23 [01:17<01:48, 7.26s/it]
Train loss: 0.189: 35%|███▍ | 8/23 [01:47<01:48, 7.26s/it]
Train loss: 0.189: 39%|███▉ | 9/23 [01:48<03:19, 14.25s/it]
Train loss: 0.176: 39%|███▉ | 9/23 [01:48<03:19, 14.25s/it]
......
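The "Using poly LR Scheduler!" line in the log refers to the polynomial learning-rate decay commonly used with DeepLab: the learning rate shrinks from its base value to zero as lr = base_lr * (1 - iter/max_iter)^power. A minimal sketch, using base lr 0.01 from the Namespace above; power=0.9 is the usual DeepLab choice but is an assumption here:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Polynomial LR decay: lr falls from base_lr to 0 over max_iter steps."""
    return base_lr * (1 - cur_iter / max_iter) ** power

# epochs * iterations-per-epoch, matching epochs=1000 and 23 batches above
base_lr, max_iter = 0.01, 1000 * 23
print(poly_lr(base_lr, 0, max_iter))  # 0.01 at the start
print(poly_lr(base_lr, max_iter // 2, max_iter))
print(poly_lr(base_lr, max_iter, max_iter))  # 0.0 at the end
```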