MobileNet-SSD
A caffe implementation of MobileNet-SSD detection network, with pretrained weights on VOC0712 and mAP=0.727.
Run
Download SSD source code and compile (follow the SSD README).
Download the pretrained deploy weights from the link above.
Put all the files in SSD_HOME/examples/
Run demo.py to show the detection result.
Train your own dataset
Convert your own dataset to lmdb database (follow the SSD README), and create symlinks to current directory.
ln -s PATH_TO_YOUR_TRAIN_LMDB trainval_lmdb
ln -s PATH_TO_YOUR_TEST_LMDB test_lmdb
Create the labelmap.prototxt file and put it into current directory.
Use gen_model.sh to generate your own training prototxt.
Download the training weights from the link above and run train.sh. After about 30000 iterations, the loss should settle between 1.5 and 2.5.
Run test.sh to evaluate the result.
Run merge_bn.py to generate your own deploy caffemodel.
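The symlink step above can be rehearsed in a throwaway directory before touching your real data. `PATH_TO_YOUR_TRAIN_LMDB` and `PATH_TO_YOUR_TEST_LMDB` below are the same placeholders as in the instructions, not real paths:

```shell
# Rehearse the symlink step in a scratch directory; the PATH_TO_YOUR_*
# names are placeholders standing in for your real lmdb directories.
mkdir -p /tmp/mnssd_links
cd /tmp/mnssd_links
mkdir -p PATH_TO_YOUR_TRAIN_LMDB PATH_TO_YOUR_TEST_LMDB
ln -sfn PATH_TO_YOUR_TRAIN_LMDB trainval_lmdb
ln -sfn PATH_TO_YOUR_TEST_LMDB test_lmdb
ls -l trainval_lmdb test_lmdb
```

The `-n` flag keeps `ln` from descending into an existing link target if you re-run the command.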
#################### The text above comes from the GitHub README ####################
I. Data preparation
Create a dataset folder under the caffe-ssd/data directory; its layout should look like this:
./
├── Annotations
├── create_data.sh
├── create_ImageSets.py
├── create_list.sh
├── JPEGImages
├── labelmap_voc.prototxt
├── lmdb
├── test_name_size.txt
├── test.txt
└── trainval.txt
3 directories, 7 files
1. create_ImageSets.py: creates the ImageSets folder and splits the annotations into training and test sets
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import os

# xml annotations in, train/test split lists out
_IMAGE_SETS_PATH = 'ImageSets'
_MAIN_PATH = 'ImageSets/Main'
_XML_FILE_PATH = 'Annotations'

if __name__ == '__main__':
    # create the ImageSets/Main directory if it does not exist yet
    if not os.path.exists(_MAIN_PATH):
        os.makedirs(_MAIN_PATH)
    # open the test and trainval list files
    f_test = open(os.path.join(_MAIN_PATH, 'test.txt'), 'w')
    f_train = open(os.path.join(_MAIN_PATH, 'trainval.txt'), 'w')
    # walk the xml directory; every 60th file goes to the test set
    for root, dirs, files in os.walk(_XML_FILE_PATH):
        print(len(files))
        i = 0
        for f in files:
            if (i % 60) == 0:
                f_test.write(f.split('.')[0] + '\n')
            else:
                f_train.write(f.split('.')[0] + '\n')
            i += 1
    f_test.close()
    f_train.close()
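The `i % 60` test above sends one file in sixty to the test set. A quick shell check of that ratio, using 600 synthetic xml file names in /tmp (the count and names are made up for the check):

```shell
# Sanity-check the i % 60 split: out of 600 file names, exactly 10
# (indices 0, 60, 120, ..., 540) should land in test.txt.
mkdir -p /tmp/split_check
cd /tmp/split_check
: > test.txt
: > trainval.txt
i=0
for f in $(seq -f "img_%04g.xml" 0 599); do
    if [ $((i % 60)) -eq 0 ]; then
        echo "${f%.*}" >> test.txt
    else
        echo "${f%.*}" >> trainval.txt
    fi
    i=$((i + 1))
done
wc -l test.txt trainval.txt
```

Adjust the modulus if you want a larger test set; `i % 10`, for example, gives a 10% split.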
2. create_list.sh: generates the list files needed to build the lmdb databases
#!/bin/bash
root_dir=$HOME/caffe-ssd/data
sub_dir=ImageSets/Main
bash_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
for dataset in trainval test
do
  dst_file=$bash_dir/$dataset.txt
  if [ -f $dst_file ]
  then
    rm -f $dst_file
  fi
  for name in actdetect
  do
    echo "Create list for $name $dataset..."
    dataset_file=$root_dir/$name/$sub_dir/$dataset.txt
    # image paths: prefix each id with JPEGImages/ and append .jpg
    img_file=$bash_dir/$dataset"_img.txt"
    cp $dataset_file $img_file
    sed -i "s/^/$name\/JPEGImages\//g" $img_file
    sed -i "s/$/.jpg/g" $img_file
    # label paths: prefix each id with Annotations/ and append .xml
    label_file=$bash_dir/$dataset"_label.txt"
    cp $dataset_file $label_file
    sed -i "s/^/$name\/Annotations\//g" $label_file
    sed -i "s/$/.xml/g" $label_file
    # one "image label" pair per line
    paste -d' ' $img_file $label_file >> $dst_file
    rm -f $label_file
    rm -f $img_file
  done
  # Generate image name and size information.
  if [ $dataset == "test" ]
  then
    $bash_dir/../../build/tools/get_image_size $root_dir $dst_file $bash_dir/$dataset"_name_size.txt"
  fi
  # Shuffle trainval file.
  if [ $dataset == "trainval" ]
  then
    rand_file=$dst_file.random
    cat $dst_file | perl -MList::Util=shuffle -e 'print shuffle(<STDIN>);' > $rand_file
    mv $rand_file $dst_file
  fi
done
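The four sed commands are the heart of this script: they turn each bare image id into an image path and an annotation path, which `paste` then joins into one line. The transformation can be seen in isolation on a toy id list ("actdetect" is the dataset name used throughout this guide; the ids are made up):

```shell
# Show what the sed pipeline in create_list.sh does to bare image ids.
mkdir -p /tmp/list_check
cd /tmp/list_check
printf '000001\n000002\n' > ids.txt
# image side: prefix with JPEGImages/, append .jpg
cp ids.txt img.txt
sed -i "s/^/actdetect\/JPEGImages\//g" img.txt
sed -i "s/$/.jpg/g" img.txt
# label side: prefix with Annotations/, append .xml
cp ids.txt label.txt
sed -i "s/^/actdetect\/Annotations\//g" label.txt
sed -i "s/$/.xml/g" label.txt
# each output line: actdetect/JPEGImages/000001.jpg actdetect/Annotations/000001.xml
paste -d' ' img.txt label.txt
```

This "image path, space, annotation path" format is what the SSD lmdb conversion tools expect as input.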
3. create_data.sh: builds the lmdb database files
#!/bin/bash
cur_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
root_dir=$cur_dir/../..

cd $root_dir

redo=1
data_root_dir="$HOME/caffe-ssd/data"
dataset_name="actdetect"
mapfile="$root_dir/data/$dataset_name/labelmap_voc.prototxt"
anno_type="detection"
db="lmdb"
min_dim=0
max_dim=0
width=0
height=0

extra_cmd="--encode-type=jpg --encoded"
if [ $redo ]
then
  extra_cmd="$extra_cmd --redo"
fi
for subset in test trainval
do
  python $root_dir/scripts/create_annoset.py \
    --anno-type=$anno_type \
    --label-map-file=$mapfile \
    --min-dim=$min_dim --max-dim=$max_dim \
    --resize-width=$width --resize-height=$height \
    --check-label $extra_cmd \
    $data_root_dir \
    $root_dir/data/$dataset_name/$subset.txt \
    $data_root_dir/$dataset_name/$db/$dataset_name"_"$subset"_"$db \
    examples/$dataset_name
done
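The third positional argument to create_annoset.py is where each lmdb database ends up; these are the paths your trainval_lmdb and test_lmdb symlinks should point at. The path construction can be previewed without running the conversion (same variable values as in the script above):

```shell
# Reconstruct the lmdb output paths that create_data.sh passes to
# create_annoset.py, using the same variables as the script.
data_root_dir="$HOME/caffe-ssd/data"
dataset_name="actdetect"
db="lmdb"
for subset in test trainval
do
  echo "$data_root_dir/$dataset_name/$db/${dataset_name}_${subset}_${db}"
done
```

With the values above, the databases land in caffe-ssd/data/actdetect/lmdb/actdetect_trainval_lmdb and .../actdetect_test_lmdb.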