Summary of FCN Experiments: Training on a Custom Dataset (person-segmentation), SIFT Flow, SBD, and VOC

Over the past week or so, I worked through training and testing FCN on three datasets, starting from the official source code and drawing on various blog posts. I am writing this summary both as a record for myself and in the hope that it helps others who are just getting started with FCN.


FCN source code: https://github.com/shelhamer/fcn.berkeleyvision.org

  

Training on a custom dataset:

1. Prepare your dataset

My dataset is for segmenting person silhouettes from the background.

My initial labels looked like this (the screenshot is not reproduced here):

However, these labels turned out to be problematic, as explained later.
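One common pitfall with such labels, and a likely candidate for the problem mentioned above, is format: FCN's data layers expect a single-channel label image whose per-pixel values are class indices (0 = background, 1 = person, with 255 often reserved as an "ignore" value), whereas exported masks are frequently 0/255 grayscale or RGB images. A minimal conversion sketch, assuming Pillow; the paths and threshold are placeholders:

```python
# Sketch: convert a 0/255 grayscale mask into a single-channel PNG of
# class indices (0 = background, 1 = person), the format FCN's Python
# data layers expect. Paths below are placeholders for illustration.
import numpy as np
from PIL import Image

def convert_mask(src_path, dst_path, threshold=128):
    """Binarize a grayscale mask into class indices 0/1 and save as 8-bit PNG."""
    mask = np.array(Image.open(src_path).convert('L'))
    labels = (mask >= threshold).astype(np.uint8)  # person pixels -> 1
    Image.fromarray(labels, mode='L').save(dst_path)

# Example usage (hypothetical paths):
# convert_mask('labels_raw/0001.png', 'SegmentationClass/0001.png')
```

PNG is used rather than JPEG so that the class indices survive without lossy compression.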

2. Modify the source code

Put all of the project's .py files in a single directory; otherwise you will get import errors.

First, the solve.py file:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import caffe
import surgery, score

import numpy as np
import os
import sys
sys.path.append('/home/panyun/sourcecode/caffe/python')	# make the Caffe Python bindings importable

try:
    import setproctitle
    setproctitle.setproctitle(os.path.basename(os.getcwd()))
except ImportError:
    pass

# init
# caffe.set_device(int(sys.argv[1]))	# GPU id from the command line; hard-coded below instead
caffe.set_device(int(0))
caffe.set_mode_gpu()

solver = caffe.SGDSolver('/home/zhaoys/myf/fcn/people-fcn8s/solver.prototxt')	# path to solver.prototxt
weights = '/home/zhaoys/myf/fcn/people-fcn8s/caffemodel/fcn8s-heavy-pascal.caffemodel'	# pretrained FCN-8s weights to fine-tune from
deploy_proto = '/home/zhaoys/myf/fcn/people-fcn8s/deploy.prototxt'
# solver.net.copy_from(weights)
fcn8snet = caffe.Net(deploy_proto, weights, caffe.TRAIN)
surgery.transplant(solver.net, fcn8snet)
del fcn8snet

# surgeries
interp_layers = [k for k in solver.net.params.keys() if 'up' in k]	# when renaming layers, keep "up" in the deconvolution layer names, or they will be skipped here
surgery.interp(solver.net, interp_layers)

# scoring
val = np.loadtxt('/home/zhaoys/myf/dataset/people_segmentation/test.txt', dtype=str)	# list of image IDs used for scoring (test/val split)

for _ in range(25):	# 25 rounds of 4000 SGD steps each, scoring after every round
    solver.step(4000)
    score.seg_tests(solver, False, val, layer='score')

First, I updated the paths in the script. Second, training fine-tunes from the pretrained FCN-8s model; because my input images differ in size from the network the model was trained on, the weights are transplanted (surgery.transplant) rather than copied directly with copy_from. Finally, the outer loop sets how many rounds to run and how many iterations each round takes. Below is my solver.prototxt:
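surgery.transplant is what makes the fine-tune work despite the different input geometry: it walks the pretrained net and copies each parameter blob into the identically named layer of the new net, transferring only the overlapping region when shapes differ. A rough pure-NumPy sketch of that matching rule (the authoritative code is surgery.py in the FCN repo; a dict of arrays stands in for a Caffe net here):

```python
# Sketch of the idea behind surgery.transplant: copy parameters between
# two networks layer-by-layer wherever the layer names match; where the
# shapes differ, copy only the overlapping slice and leave the rest at
# its fresh initialization. Plain dicts of NumPy arrays stand in for
# Caffe nets purely for illustration.
import numpy as np

def transplant_sketch(new_params, old_params):
    for name, old_w in old_params.items():
        if name not in new_params:
            continue  # layer dropped in the new net: nothing to copy into
        new_w = new_params[name]
        if new_w.shape == old_w.shape:
            new_w[...] = old_w  # exact match: copy wholesale
        else:
            # copy the overlapping region along every axis
            sl = tuple(slice(0, min(a, b)) for a, b in zip(new_w.shape, old_w.shape))
            new_w[sl] = old_w[sl]
    return new_params
```

For example, a 21-class score layer transplanted into a 2-class net would keep only the first two rows of weights, which is exactly why renamed layers (the "up" naming note above) must still match for anything to be copied at all.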

train_net: "/home/zhaoys/myf/fcn/people-fcn8s/train.prototxt"
test_net: "/home/zhaoys/myf/fcn/people-fcn8s/val.prototxt"

test_iter: 736
# make test net, but don't invoke it from the solver itself
test_interval: 999999999
display: 40
average_loss: 20
lr_policy: "fixed"
# lr for unnormalized softmax
base_lr: 1e-12
# high momentum
momentum: 0.99
# no gradient accumulation
iter_size: 1
max_iter: 100000
weight_decay: 0.0005
snapshot: 4000
snapshot_prefix: "/home/zhaoys/myf/fcn/people-fcn8s/snapshot/train"
test_initialization: false
debug_info: false
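The seemingly tiny base_lr of 1e-12 is deliberate: as the comments in the file note, the softmax loss is unnormalized, i.e. summed rather than averaged over pixels, so gradient magnitudes scale with the pixel count, and the high momentum of 0.99 further multiplies the asymptotic step size by roughly 1/(1 - momentum). A back-of-envelope check (the image size is an assumption for illustration):

```python
# Why base_lr can be 1e-12: the unnormalized softmax loss sums gradients
# over every pixel, and heavy momentum amplifies the effective step.
h, w = 500, 500                            # assumed PASCAL-scale input
pixels = h * w                             # loss (and gradient) sums over all of these
base_lr = 1e-12
momentum = 0.99
per_pixel_lr = base_lr * pixels            # pixel-count scale absorbed into the lr
asymptotic_boost = 1.0 / (1.0 - momentum)  # momentum multiplies the step ~100x
print(per_pixel_lr, per_pixel_lr * asymptotic_boost)
```

The product lands back in a far more ordinary learning-rate range, which is why a "normal-looking" base_lr here would blow the training up.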

The above is the solver.prototxt configuration. Next is the train.prototxt network definition:

layer {
  name: "data"
  type: "Python"
  top: "data"
  top: "label"
  python_param {
    module: "voc_layers"
    layer: "SBDDSegDataLayer"
    param_str: "{\'sbdd_dir\': \'/home/zhaoys/myf/dataset/people_segmentation\', \'seed\': 1337, \'split\': \'train\', \'mean\': (184.17504, 193.82224, 204.57951)}"
  }
}
layer {
  name: "conv1_1"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 64
    pad: 100
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "conv1_1"
  top: "conv1_1"
}
layer {
  name: "conv1_2"
  type: "Convolution"
  bottom: "conv1_1"
  top: "conv1_2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu1_2"
  type: "ReLU"
  bottom: "conv1_2"
  top: "conv1_2"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1_2"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2_1"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2_1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu2_1"
  type: "ReLU"
  bottom: "conv2_1"
  top: "conv2_1"
}
layer {
  name: "conv2_2"
  type: "Convolution"
  bottom: "conv2_1"
  top: "conv2_2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu2_2"
  type: "ReLU"
  bottom: "conv2_2"
  top: "conv2_2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2_2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv3_1"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3_1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu3_1"
  type: "ReLU"
  bottom: "conv3_1"
  top: "conv3_1"
}
layer {
  name: "conv3_2"
  type: "Convolution"
  bottom: "conv3_1"
  top: "conv3_2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu3_2"
  type: "ReLU"
  bottom: "conv3_2"
  top: "conv3_2"
}
layer {
  name: "conv3_3"
  type: "Convolution"
  bottom: "conv3_2"
  top: "conv3_3"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu3_3"
  type: "ReLU"
  bottom: "conv3_3"
  top: "conv3_3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3_3"
  top: "pool3"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv4_1"
  type: "Convolution"
  bottom: "pool3"
  top: "conv4_1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0.0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "relu4_1"
  type: "ReLU"
  bottom: "conv4_1"
  top: "conv4_1"
}
(The listing is truncated here; the remaining layers follow the standard FCN-8s structure of the VGG-16 backbone plus skip connections, as in the official repository.)
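One more thing about the data layer at the top of train.prototxt: the `mean` tuple in param_str should be recomputed for your own dataset (the values shown are specific to mine). Since the FCN Python data layers flip images from RGB to BGR before subtracting the mean, the channels should be given in BGR order. A minimal sketch, assuming Pillow and a glob pattern over the training images (the example path is a placeholder):

```python
# Sketch: compute the per-channel mean of a training set for the `mean`
# field in the data layer's param_str. The FCN data layers subtract the
# mean after flipping RGB to BGR, so the result is returned in BGR order.
import glob
import numpy as np
from PIL import Image

def dataset_mean_bgr(pattern):
    """Per-channel mean over all images matching `pattern`, in BGR order."""
    totals = np.zeros(3)
    count = 0
    for path in glob.glob(pattern):
        rgb = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64)
        totals += rgb.reshape(-1, 3).sum(axis=0)
        count += rgb.shape[0] * rgb.shape[1]
    mean_rgb = totals / count
    return tuple(mean_rgb[::-1])  # flip RGB -> BGR to match the subtraction order

# Example usage (hypothetical path):
# print(dataset_mean_bgr('/home/zhaoys/myf/dataset/people_segmentation/img/*.jpg'))
```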