
Copyright notice: this is an original post by the blogger; please cite the original address when reposting. https://blog.csdn.net/xiamentingtao/article/details/78807146
The following is only a summary drawn from many online resources, kept mainly as a personal memo.
Main links
- DeepLab homepage: http://liangchiehchen.com/projects/DeepLab.html
- Official code: https://bitbucket.org/aquariusjay/deeplab-public-ver2
- Python (Caffe) implementation: https://github.com/TheLegendAli/DeepLab-Context2
- Model downloads: http://liangchiehchen.com/projects/DeepLab_Models.html
  - DeepLabv2_VGG16 pretrained model
  - DeepLabv2_ResNet101 pretrained model
- PyTorch implementation of DeepLab: https://github.com/isht7/pytorch-deeplab-resnet
- Open-source training code: martinkersner/train-DeepLab
Main steps
The walkthrough below is based mainly on an open-source version of the code: https://github.com/xmojiao/deeplab_v2 .
The main steps can be found in:
1. Image semantic segmentation: training DeepLab v2 from scratch, part 1 (source code walkthrough)
2. Image semantic segmentation: training DeepLab v2 from scratch, part 2 (the VOC2012 dataset)
3. Debugging DeepLab v2 end to end (Ubuntu 16.04 + CUDA 8.0)
Below are some issues I ran into that those write-ups do not cover.
1. Installing matio
All of the write-ups above use matio-1.5.2.tar.gz, but I could not get it to install (probably an incompatibility with my libraries), so I downloaded the latest matio-1.5.11 and installed it with the following commands:
cd matio-1.5.11
./configure --prefix=/data1/...   # fill in your own install prefix
make
make check   # optional
make install
Finally, add the directory containing libmatio.so.2 (the lib directory under your install prefix) to LD_LIBRARY_PATH in ~/.bashrc, as in the sketch below.
Reference: http://blog.csdn.net/houqiqi/article/details/46469981
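A minimal sketch of the ~/.bashrc line, assuming matio was installed with --prefix=/data1/tools/matio (a hypothetical path; substitute your own prefix):
# libmatio.so.2 ends up under <prefix>/lib; point the dynamic loader at that directory.
export LD_LIBRARY_PATH=/data1/tools/matio/lib:$LD_LIBRARY_PATH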
2. The Caffe fork used here is quite old, so it clashes with newer environments in many places. I use cuDNN 6.0 and CUDA 8.0.
The build fails with:
./include/caffe/util/cudnn.hpp: In function ‘void caffe::cudnn::createPoolingDesc(cudnnPoolingStruct**, caffe::PoolingParameter_PoolMethod, cudnnPoolingMode_t*, int, int, int, int, int, int)’:
./include/caffe/util/cudnn.hpp:127:41: error: too few arguments to function ‘cudnnStatus_t cudnnSetPooling2dDescriptor(cudnnPoolingDescriptor_t, cudnnPoolingMode_t, cudnnNanPropagation_t, int, int, int, int, int, int)’
pad_h, pad_w, stride_h, stride_w));
This is caused by a cuDNN version mismatch: the author's environment used cuDNN 4.0, but the cuDNN API changed from version 5.0 onwards.
Fix: replace the following files with their counterparts from the latest BVLC Caffe and recompile (a copy-and-rebuild sketch follows the file list).
./include/caffe/util/cudnn.hpp
./include/caffe/layers/cudnn_conv_layer.hpp
./include/caffe/layers/cudnn_relu_layer.hpp
./include/caffe/layers/cudnn_sigmoid_layer.hpp
./include/caffe/layers/cudnn_tanh_layer.hpp
./src/caffe/layers/cudnn_conv_layer.cpp
./src/caffe/layers/cudnn_conv_layer.cu
./src/caffe/layers/cudnn_relu_layer.cpp
./src/caffe/layers/cudnn_relu_layer.cu
./src/caffe/layers/cudnn_sigmoid_layer.cpp
./src/caffe/layers/cudnn_sigmoid_layer.cu
./src/caffe/layers/cudnn_tanh_layer.cpp
./src/caffe/layers/cudnn_tanh_layer.cu
Reference: http://blog.csdn.net/tianrolin/article/details/71246472
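A minimal copy-and-rebuild sketch, assuming a BVLC Caffe checkout at ../caffe (a hypothetical path) and that it is run from the root of deeplab-public-ver2:
# Overwrite the cuDNN wrappers with the BVLC versions listed above, then rebuild.
for f in include/caffe/util/cudnn.hpp \
         include/caffe/layers/cudnn_conv_layer.hpp \
         include/caffe/layers/cudnn_relu_layer.hpp \
         include/caffe/layers/cudnn_sigmoid_layer.hpp \
         include/caffe/layers/cudnn_tanh_layer.hpp \
         src/caffe/layers/cudnn_conv_layer.cpp \
         src/caffe/layers/cudnn_conv_layer.cu \
         src/caffe/layers/cudnn_relu_layer.cpp \
         src/caffe/layers/cudnn_relu_layer.cu \
         src/caffe/layers/cudnn_sigmoid_layer.cpp \
         src/caffe/layers/cudnn_sigmoid_layer.cu \
         src/caffe/layers/cudnn_tanh_layer.cpp \
         src/caffe/layers/cudnn_tanh_layer.cu; do
  cp ../caffe/$f $f
done
make clean && make -j8   # rebuild after replacing the files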
3. How to fix DeepLab v2 producing all-black prediction images
As the author explains in the FAQ (http://liangchiehchen.com/projects/DeepLab_FAQ.html):
Q: When evaluating the DeepLab outputs (without CRF), I got all-background results (i.e., all black results). Is there anything wrong?
A: Please double check if the name of your fc8 is fc8_voc12 in the generated test_val.prototxt or test_test.prototxt (after running run_pascal.sh). The name should be matched for initialization.
The root cause is a mismatch between the pretrained model and the prototxt you actually test with, specifically around the fc8 layers.
Take https://github.com/xmojiao/deeplab_v2 as an example: if you use that code as-is, run_pascal.sh sits inside the voc12 directory and sets EXP2=. , which differs from the official layout (the official code assumes run_pascal.sh lives one level above voc12). As a result, the layers fc8_voc12_1, fc8_voc12_2, fc8_voc12_3 and fc8_voc12_4 are not matched against the pretrained weights at test time and are silently ignored.
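A quick hypothetical check (paths assume EXP2=. and NET_ID=deeplab_largeFOV, run from the directory containing run_pascal.sh) is to confirm those layer names appear in the generated prototxt:
# The generated file should contain fc8_voc12_1 ... fc8_voc12_4; if the names differ,
# the pretrained fc8 weights are skipped and the output collapses to background.
grep -n "fc8_voc12" config/deeplab_largeFOV/test_val.prototxt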
Also, since the test step copies test.prototxt to test_val.prototxt, the change should be made in test.prototxt.
test.prototxt is changed as follows:
layer {
name: "data"
type: "ImageSegData"
top: "data"
top: "label"
top: "data_dim"
include {
phase: TEST
}
transform_param {
mirror: false
crop_size: 513
mean_value: 104.008
mean_value: 116.669
mean_value: 122.675
}
image_data_param {
root_folder: "${DATA_ROOT}"
source: "../${EXP}/list/${TEST_SET}.txt" # changed; see the check after this block
batch_size: 1
label_type: NONE
}
}
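The ${EXP}, ${TEST_SET} and ${DATA_ROOT} placeholders are filled in by the sed call in run_pascal.sh, so with EXP=voc12 and TEST_SET=val the generated test_val.prototxt should point at ../voc12/list/val.txt. A quick sanity check (hypothetical paths, run from the voc12 directory):
# Confirm the placeholder was substituted and the list file actually exists.
grep -n "source:" config/deeplab_largeFOV/test_val.prototxt
ls ../voc12/list/val.txt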
run_pascal.sh is changed as follows (a sample invocation follows the script):
#!/bin/sh
## MODIFY PATH for YOUR SETTING
ROOT_DIR=/data1/caiyong.wang/data/deeplab_data
CAFFE_DIR=../deeplab-public-ver2
CAFFE_BIN=${CAFFE_DIR}/.build_release/tools/caffe.bin
EXP=voc12 # matches the directory name expected by the pretrained model (changed)
EXP2=. # current directory (changed)
if [ "${EXP2}" = "." ]; then
NUM_LABELS=21
DATA_ROOT=${ROOT_DIR}/VOC_aug/dataset/
else
NUM_LABELS=0
echo "Wrong EXP name"
fi
## Specify which model to train
########### voc12 ################
NET_ID=deeplab_largeFOV
## Variables used for weakly or semi-supervisedly training
#TRAIN_SET_SUFFIX=
TRAIN_SET_SUFFIX=_aug
#TRAIN_SET_STRONG=train
#TRAIN_SET_STRONG=train200
#TRAIN_SET_STRONG=train500
#TRAIN_SET_STRONG=train1000
#TRAIN_SET_STRONG=train750
#TRAIN_SET_WEAK_LEN=5000
DEV_ID=0
#####
## Create dirs
CONFIG_DIR=${EXP2}/config/${NET_ID}
MODEL_DIR=${EXP2}/model/${NET_ID}
mkdir -p ${MODEL_DIR}
LOG_DIR=${EXP2}/log/${NET_ID}
mkdir -p ${LOG_DIR}
export GLOG_log_dir=${LOG_DIR}
## Run
RUN_TRAIN=0
RUN_TEST=1
RUN_TRAIN2=0
RUN_TEST2=0
## Training #1 (on train_aug)
if [ ${RUN_TRAIN} -eq 1 ]; then
#
LIST_DIR=${EXP2}/list
TRAIN_SET=train${TRAIN_SET_SUFFIX}
if [ -z ${TRAIN_SET_WEAK_LEN} ]; then
TRAIN_SET_WEAK=${TRAIN_SET}_diff_${TRAIN_SET_STRONG}
comm -3 ${LIST_DIR}/${TRAIN_SET}.txt ${LIST_DIR}/${TRAIN_SET_STRONG}.txt > ${LIST_DIR}/${TRAIN_SET_WEAK}.txt
else
TRAIN_SET_WEAK=${TRAIN_SET}_diff_${TRAIN_SET_STRONG}_head${TRAIN_SET_WEAK_LEN}
comm -3 ${LIST_DIR}/${TRAIN_SET}.txt ${LIST_DIR}/${TRAIN_SET_STRONG}.txt | head -n ${TRAIN_SET_WEAK_LEN} > ${LIST_DIR}/${TRAIN_SET_WEAK}.txt
fi
#
MODEL=${EXP2}/model/${NET_ID}/init.caffemodel
#
echo Training net ${EXP2}/${NET_ID}
for pname in train solver; do
sed "$(eval echo $(cat sub.sed))" \
${CONFIG_DIR}/${pname}.prototxt > ${CONFIG_DIR}/${pname}_${TRAIN_SET}.prototxt
done
CMD="${CAFFE_BIN} train \
--solver=${CONFIG_DIR}/solver_${TRAIN_SET}.prototxt \
--gpu=${DEV_ID}"
if [ -f ${MODEL} ]; then
CMD="${CMD} --weights=${MODEL}"
fi
echo Running ${CMD} && ${CMD}
fi
## Test #1 specification (on val or test)
if [ ${RUN_TEST} -eq 1 ]; then
#
for TEST_SET in val; do
TEST_ITER=`cat ${EXP2}/list/${TEST_SET}.txt | wc -l`
MODEL=${EXP2}/model/${NET_ID}/test.caffemodel
if [ ! -f ${MODEL} ]; then
MODEL=`ls -t ${EXP2}/model/${NET_ID}/train_iter_*.caffemodel | head -n 1`
fi
#
echo Testing net ${EXP2}/${NET_ID}
FEATURE_DIR=${EXP2}/features/${NET_ID}
mkdir -p ${FEATURE_DIR}/${TEST_SET}/fc8
mkdir -p ${FEATURE_DIR}/${TEST_SET}/fc9
mkdir -p ${FEATURE_DIR}/${TEST_SET}/seg_score
sed "$(eval echo $(cat sub.sed))" \
${CONFIG_DIR}/test.prototxt > ${CONFIG_DIR}/test_${TEST_SET}.prototxt
CMD="${CAFFE_BIN} test \
--model=${CONFIG_DIR}/test_${TEST_SET}.prototxt \
--weights=${MODEL} \
--gpu=${DEV_ID} \
--iterations=${TEST_ITER}"
echo Running ${CMD} && ${CMD}
done
fi
## Training #2 (finetune on trainval_aug)
if [ ${RUN_TRAIN2} -eq 1 ]; then
#
LIST_DIR=${EXP2}/list
TRAIN_SET=trainval${TRAIN_SET_SUFFIX}
if [ -z ${TRAIN_SET_WEAK_LEN} ]; then
TRAIN_SET_WEAK=${TRAIN_SET}_diff_${TRAIN_SET_STRONG}
comm -3 ${LIST_DIR}/${TRAIN_SET}.txt ${LIST_DIR}/${TRAIN_SET_STRONG}.txt > ${LIST_DIR}/${TRAIN_SET_WEAK}.txt
else
TRAIN_SET_WEAK=${TRAIN_SET}_diff_${TRAIN_SET_STRONG}_head${TRAIN_SET_WEAK_LEN}
comm -3 ${LIST_DIR}/${TRAIN_SET}.txt ${LIST_DIR}/${TRAIN_SET_STRONG}.txt | head -n ${TRAIN_SET_WEAK_LEN} > ${LIST_DIR}/${TRAIN_SET_WEAK}.txt
fi
#
MODEL=${EXP2}/model/${NET_ID}/init2.caffemodel
if [ ! -f ${MODEL} ]; then
MODEL=`ls -t ${EXP2}/model/${NET_ID}/train_iter_*.caffemodel | head -n 1`
fi
#
echo Training2 net ${EXP2}/${NET_ID}
for pname in train solver2; do
sed "$(eval echo $(cat sub.sed))" \
${CONFIG_DIR}/${pname}.prototxt > ${CONFIG_DIR}/${pname}_${TRAIN_SET}.prototxt
done
CMD="${CAFFE_BIN} train \
--solver=${CONFIG_DIR}/solver2_${TRAIN_SET}.prototxt \
--weights=${MODEL} \
--gpu=${DEV_ID}"
echo Running ${CMD} && ${CMD}
fi
## Test #2 on official test set
if [ ${RUN_TEST2} -eq 1 ]; then
#
for TEST_SET in val test; do
TEST_ITER=`cat ${EXP2}/list/${TEST_SET}.txt | wc -l`
MODEL=${EXP2}/model/${NET_ID}/test2.caffemodel
if [ ! -f ${MODEL} ]; then
MODEL=`ls -t ${EXP2}/model/${NET_ID}/train2_iter_*.caffemodel | head -n 1`
fi
#
echo Testing2 net ${EXP2}/${NET_ID}
FEATURE_DIR=${EXP2}/features2/${NET_ID}
mkdir -p ${FEATURE_DIR}/${TEST_SET}/fc8
mkdir -p ${FEATURE_DIR}/${TEST_SET}/crf
sed "$(eval echo $(cat sub.sed))" \
${CONFIG_DIR}/test.prototxt > ${CONFIG_DIR}/test_${TEST_SET}.prototxt
CMD="${CAFFE_BIN} test \
--model=${CONFIG_DIR}/test_${TEST_SET}.prototxt \
--weights=${MODEL} \
--gpu=${DEV_ID} \
--iterations=${TEST_ITER}"
echo Running ${CMD} && ${CMD}
done
fi
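To run the test pass with this configuration, a minimal invocation might look like the following (hypothetical directory names, assuming deeplab-public-ver2 has already been built and the pretrained weights sit under model/deeplab_largeFOV):
cd deeplab_v2/voc12                           # the directory containing run_pascal.sh
sh run_pascal.sh 2>&1 | tee run_pascal.log    # console output saved; GLOG also writes to ${LOG_DIR}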