Experiment notes: DeepLab v3

Repo: https://github.com/jfzhang95/pytorch-deeplab-xception

Annotation tool: https://github.com/wkentaro/labelme

Paper: https://arxiv.org/pdf/1706.05587.pdf


1. First, use https://github.com/wkentaro/labelme/blob/master/examples/instance_segmentation/labelme2coco.py to convert the images and their label files (JSON format), which sit in the same folder, into COCO format, along the lines of the command below. The script I used here is a version I modified myself; feel free to grab it if you need it.

python labelme2coco.py --type val \
--directory ./data_190304/val2017 \
--output ./data_190304/annotations 
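
To sanity-check the conversion before training, a few lines of Python are enough. This is a minimal sketch: it assumes pycocotools is installed and that the converted file ends up as annotations/instances_val2017.json, which I believe is the filename the repo's COCO dataloader expects; adjust the path to whatever your labelme2coco version actually writes.

from pycocotools.coco import COCO

# Load the converted annotations and report some basic counts.
coco = COCO('./data_190304/annotations/instances_val2017.json')  # assumed output path
print('images:', len(coco.getImgIds()))
print('annotations:', len(coco.getAnnIds()))
print('categories:', [c['name'] for c in coco.loadCats(coco.getCatIds())])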

2. Prepare the coco folder structure, as shown below.

In the original screenshot, the items marked by the red box are what you need to prepare; the two folders whose names start with coco are redundant, and the .pth file is a data file the code generates for training. Just place this coco folder inside the pytorch-deeplab-xception directory.
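
Since the screenshot is not reproduced here, the coco folder I ended up with looks roughly like this. This is a sketch following the standard COCO layout; the exact annotation filenames depend on how your labelme2coco names its output.

coco/
├── annotations/
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017/    (training images)
└── val2017/      (validation images)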

3. Set up the environment as described in the repo, then just run bash train_coco.sh; a sketch of what that script amounts to is shown right after this step.
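
For reference, train_coco.sh is essentially a one-line wrapper around the repo's train.py. The sketch below uses argument values taken from the Namespace dump in the training log further down; treat the exact flag spellings (e.g. --batch-size vs --batch_size) as assumptions that should be checked against your copy of train.py.

# Train DeepLab (ResNet backbone) on the COCO-format data prepared above;
# values mirror the Namespace dump in the training log below.
CUDA_VISIBLE_DEVICES=0 python train.py \
    --backbone resnet \
    --dataset coco \
    --epochs 1000 \
    --batch-size 4 \
    --lr 0.01 \
    --lr-scheduler poly \
    --workers 4 \
    --eval-interval 1 \
    --checkname deeplab-resnet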


Training log (excerpt)

  0%|          | 0/23 [00:00<?, ?it/s]Namespace(backbone='resnet', base_size=513, batch_size=4, checkname='deeplab-resnet', crop_size=513, cuda=True, dataset='coco', epochs=1000, eval_interval=1, freeze_bn=False, ft=False, gpu_ids=[0], loss_type='ce', lr=0.01, lr_scheduler='poly', momentum=0.9, nesterov=False, no_cuda=False, no_val=False, out_stride=16, resume=None, seed=1, start_epoch=0, sync_bn=False, test_batch_size=4, use_balanced_weights=False, use_sbd=True, weight_decay=0.0005, workers=4)
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Using poly LR Scheduler!
Starting Epoch: 0
Total Epoches: 1000
/usr/lib/python3.5/site-packages/torch/nn/_reduction.py:49: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
  warnings.warn(warning.format(ret))

Train loss: 0.448:   0%|          | 0/23 [00:35<?, ?it/s]
Train loss: 0.448:   4%|▍         | 1/23 [00:36<13:29, 36.78s/it]
Train loss: 0.376:   4%|▍         | 1/23 [00:38<13:29, 36.78s/it]
Train loss: 0.376:   9%|▊         | 2/23 [00:38<09:12, 26.29s/it]
Train loss: 0.327:   9%|▊         | 2/23 [00:39<09:12, 26.29s/it]
Train loss: 0.327:  13%|█▎        | 3/23 [00:40<06:17, 18.87s/it]
Train loss: 0.290:  13%|█▎        | 3/23 [00:40<06:17, 18.87s/it]
Train loss: 0.290:  17%|█▋        | 4/23 [00:40<04:13, 13.33s/it]
Train loss: 0.260:  17%|█▋        | 4/23 [01:14<04:13, 13.33s/it]
Train loss: 0.260:  22%|██▏       | 5/23 [01:15<05:56, 19.82s/it]
Train loss: 0.238:  22%|██▏       | 5/23 [01:15<05:56, 19.82s/it]
Train loss: 0.238:  26%|██▌       | 6/23 [01:15<03:57, 14.00s/it]
Train loss: 0.221:  26%|██▌       | 6/23 [01:16<03:57, 14.00s/it]
Train loss: 0.221:  30%|███       | 7/23 [01:17<02:43, 10.20s/it]
Train loss: 0.207:  30%|███       | 7/23 [01:17<02:43, 10.20s/it]
Train loss: 0.207:  35%|███▍      | 8/23 [01:17<01:48,  7.26s/it]
Train loss: 0.189:  35%|███▍      | 8/23 [01:47<01:48,  7.26s/it]
Train loss: 0.189:  39%|███▉      | 9/23 [01:48<03:19, 14.25s/it]
Train loss: 0.176:  39%|███▉      | 9/23 [01:48<03:19, 14.25s/it]

......
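
If training is interrupted, the resume argument visible in the Namespace dump can continue from a saved checkpoint. This is a sketch under my assumptions: the checkpoint path below (run/coco/deeplab-resnet/experiment_0/) is only where I believe the repo's saver writes by default, so substitute the path you actually see.

# Resume training from a previously saved checkpoint (assumed path).
python train.py \
    --backbone resnet \
    --dataset coco \
    --epochs 1000 \
    --batch-size 4 \
    --checkname deeplab-resnet \
    --resume run/coco/deeplab-resnet/experiment_0/checkpoint.pth.tar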

 
