For installing Ubuntu, see "How to Quickly Install Ubuntu 18.04".
For configuring the Samba service, see "How to Configure Samba on Ubuntu".
Prerequisite: setting up the darknet2caffe environment requires downloading and installing torch.
- When running pip3 install torch, the process keeps getting killed, so you first need to adjust the amount of memory assigned to the Ubuntu virtual machine.
Make sure the Ubuntu VM is powered off, open its memory settings, raise the memory to at least 4 GB, and then boot Ubuntu again; it will now run with 4 GB of RAM.
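If you want to confirm the new memory size from inside Ubuntu before attempting the torch install, a minimal sketch like the one below will do. It only reads the standard /proc/meminfo file; the 4 GB threshold is simply the figure assumed in this guide.
# check_mem.py - confirm the VM has roughly 4 GB of RAM before running "pip3 install torch"
def total_mem_gb(path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kb = int(line.split()[1])   # e.g. "MemTotal:  4039276 kB"
                return kb / (1024 * 1024)
    return 0.0

if __name__ == "__main__":
    gb = total_mem_gb()
    print("Total memory: %.1f GB" % gb)
    if gb < 3.5:   # allow some slack below the 4 GB target
        print("Warning: less than ~4 GB of RAM; pip3 install torch may be killed.")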
I. Installing the caffe environment on Ubuntu 18.04
1. Run the following commands one by one to update the software on Ubuntu.
sudo apt-get update
sudo apt-get upgrade
2. Run the following commands one by one to install the required dependencies.
sudo apt-get install -y libopencv-dev
sudo apt-get install -y build-essential cmake git pkg-config
sudo apt-get install -y libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install -y liblapack-dev
sudo apt-get install -y libatlas-base-dev
sudo apt-get install -y --no-install-recommends libboost-all-dev
sudo apt-get install -y libgflags-dev libgoogle-glog-dev liblmdb-dev
sudo apt-get install -y python-numpy python-scipy
sudo apt-get install -y python3-pip
sudo apt-get install -y python3-numpy python3-scipy
3. Run the following command to download the caffe source code.
git clone https://github.com/BVLC/caffe.git
4. Enter the caffe/python/ directory and run the following command to install the Python dependencies.
cd caffe/python/
for req in $(cat requirements.txt); do pip3 install $req; done
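Optionally, you can verify that the packages from requirements.txt were installed by importing a few of them. The module list below is just an illustrative subset of what requirements.txt pulls in, not the full set.
# quick import check for a few of the Python dependencies used by pycaffe
import importlib

for name in ("numpy", "scipy", "skimage", "google.protobuf"):   # illustrative subset only
    try:
        importlib.import_module(name)
        print("OK      %s" % name)
    except ImportError as err:
        print("MISSING %s (%s)" % (name, err))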
5. In the caffe directory, run the following command to copy Makefile.config.example and rename the copy to Makefile.config.
cp Makefile.config.example Makefile.config
6. Next, modify the settings in Makefile.config. Open it with vim:
vim Makefile.config
- ① Uncomment CPU_ONLY.
Change
# CPU_ONLY := 1
to
CPU_ONLY := 1
- ② Uncomment OPENCV_VERSION.
Change
# OPENCV_VERSION := 3
to
OPENCV_VERSION := 3
- ③ Since our Ubuntu environment uses Python 3.6, comment out the python2.7 PYTHON_INCLUDE lines, uncomment the Python 3 block (PYTHON_LIBRARIES and PYTHON_INCLUDE for python3.5), and change every 3.5 to 3.6. The exact result is shown in the full modified file below.
- ④ Uncomment WITH_PYTHON_LAYER := 1.
Change
# WITH_PYTHON_LAYER := 1
to
WITH_PYTHON_LAYER := 1
- ⑤ Modify INCLUDE_DIRS and LIBRARY_DIRS.
Change
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
to
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
The modified file looks like this:
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!
# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1
# CPU-only switch (uncomment to build without GPU support).
CPU_ONLY := 1
# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# This code is taken from https://github.com/sh1r0/caffe-android-lib
# USE_HDF5 := 0
# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1
# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3
# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
-gencode arch=compute_20,code=sm_21 \
-gencode arch=compute_30,code=sm_30 \
-gencode arch=compute_35,code=sm_35 \
-gencode arch=compute_50,code=sm_50 \
-gencode arch=compute_52,code=sm_52 \
-gencode arch=compute_60,code=sm_60 \
-gencode arch=compute_61,code=sm_61 \
-gencode arch=compute_61,code=compute_61
# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
# PYTHON_INCLUDE := /usr/include/python2.7 \
# /usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
# $(ANACONDA_HOME)/include/python2.7 \
# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include
# Uncomment to use Python 3 (default is Python 2)
PYTHON_LIBRARIES := boost_python3 python3.6m
PYTHON_INCLUDE := /usr/include/python3.6m \
/usr/lib/python3.6/dist-packages/numpy/core/include
# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib
# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib
# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib
# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1
# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0
# enable pretty build (comment to see full commands)
Q ?= @
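Note that the numpy include path used in PYTHON_INCLUDE above (/usr/lib/python3.6/dist-packages/numpy/core/include) depends on how numpy was installed; a pip3 install may place it under /usr/local/lib instead. A quick way to print the actual path on your system, so you can adjust Makefile.config if needed, is:
# print the include directory of the numpy that python3 actually uses;
# if it differs from the path in Makefile.config, update PYTHON_INCLUDE accordingly
import numpy
print(numpy.get_include())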
7. Modify a few settings in the Makefile. Open it with vim and make the following changes:
vim Makefile
- ① Change the value of DYNAMIC_VERSION_REVISION.
Change
DYNAMIC_VERSION_REVISION := 0
to
DYNAMIC_VERSION_REVISION := 0-rc3
- ② Change the value of LIBRARIES.
Change
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m
to
LIBRARIES += glog gflags protobuf boost_system boost_filesystem boost_regex m hdf5_hl hdf5
Change
LIBRARIES += opencv_imgcodecs
to
LIBRARIES += opencv_imgcodecs opencv_videoio
- ③ Comment out the four lines under # NCCL acceleration configuration.
Change
# NCCL acceleration configuration
ifeq ($(USE_NCCL), 1)
LIBRARIES += nccl
COMMON_FLAGS += -DUSE_NCCL
endif
to
# NCCL acceleration configuration
# ifeq ($(USE_NCCL), 1)
# LIBRARIES += nccl
# COMMON_FLAGS += -DUSE_NCCL
# endif
8. In the caffe directory, run the following commands one by one to build caffe.
make -j4
make pycaffe
9. Add caffe's python directory to the PYTHONPATH environment variable and reload the environment.
Run the following command to open .bashrc:
sudo vim ~/.bashrc
Add the following line at the end of the file (adjust the path to your own caffe/python directory):
export PYTHONPATH=/home/hispark/code/caffe/python:$PYTHONPATH
Then run the following command to reload the environment variables:
source ~/.bashrc
10. Test whether the caffe environment works. In any directory on Ubuntu, run python3; when the ">>>" prompt appears, type import caffe. If no error is reported, the caffe environment has been set up successfully.
python3
import caffe
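If you prefer a non-interactive check, the small sketch below also prints where the caffe module was loaded from, which confirms that PYTHONPATH points at the directory set in step 9:
# verify that pycaffe imports and comes from the caffe/python directory added to PYTHONPATH
import caffe
print("caffe imported from:", caffe.__file__)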
II. Follow the steps below to install the darknet2caffe environment
1. Run the following commands one by one to install the torch environment needed to run darknet2caffe.
pip3 install torch==1.4.0 --no-cache-dir
pip3 install torchvision==0.5.0 --no-cache-dir
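To confirm the installation completed (and was not killed partway through), you can check the versions that Python reports:
# verify that torch and torchvision match the versions installed above
import torch
import torchvision
print("torch:", torch.__version__)              # expected: 1.4.0
print("torchvision:", torchvision.__version__)  # expected: 0.5.0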
2. Run the following command to download the darknet2caffe code to Ubuntu.
git clone https://github.com/ChenYingpeng/darknet2caffe
3. The local Python version is 3.6 while the open-source code is written for Python 2.x, so the code needs a few syntax adjustments.
- ① In darknet2caffe.py, replace every occurrence of
if block.has_key('name'):
with if 'name' in block: (the Python 2 vs. Python 3 difference is illustrated in the short sketch after the prototxt.py listing below).
- ② Then change caffe_root to the actual absolute path of your caffe directory, e.g. /home/hispark/code/caffe/.
Change
caffe_root='/home/chen/caffe/'
to
caffe_root='/home/hispark/code/caffe/'  # use the absolute path of your own caffe directory
- ③ Modify prototxt.py in the same way (replace the has_key() calls and use Python 3 print syntax).
The modified prototxt.py is shown below:
from collections import OrderedDict
try:
import caffe.proto.caffe_pb2 as caffe_pb2
except:
try:
import caffe_pb2
except:
print('caffe_pb2.py not found. Try:')
print(' protoc caffe.proto --python_out=.')
exit()
def parse_caffemodel(caffemodel):
model = caffe_pb2.NetParameter()
print('Loading caffemodel: ', caffemodel)
with open(caffemodel, 'rb') as fp:
model.ParseFromString(fp.read())
return model
def parse_prototxt(protofile):
def line_type(line):
if line.find(':') >= 0:
return 0
elif line.find('{') >= 0:
return 1
return -1
def parse_block(fp):
block = OrderedDict()
line = fp.readline().strip()
while line != '}':
ltype = line_type(line)
if ltype == 0: # key: value
#print line
line = line.split('#')[0]
key, value = line.split(':')
key = key.strip()
value = value.strip().strip('"')
                if key in block:
if type(block[key]) == list:
block[key].append(value)
else:
block[key] = [block[key], value]
else:
block[key] = value
elif ltype == 1: # blockname {
key = line.split('{')[0].strip()
sub_block = parse_block(fp)
block[key] = sub_block
line = fp.readline().strip()
line = line.split('#')[0]
return block
fp = open(protofile, 'r')
props = OrderedDict()
layers = []
line = fp.readline()
while line != '':
line = line.strip().split('#')[0]
if line == '':
line = fp.readline()
continue
ltype = line_type(line)
if ltype == 0: # key: value
key, value = line.split(':')
key = key.strip()
value = value.strip().strip('"')
            if key in props:
if type(props[key]) == list:
props[key].append(value)
else:
props[key] = [props[key], value]
else:
props[key] = value
elif ltype == 1: # blockname {
key = line.split('{')[0].strip()
if key == 'layer':
layer = parse_block(fp)
layers.append(layer)
else:
props[key] = parse_block(fp)
line = fp.readline()
if len(layers) > 0:
net_info = OrderedDict()
net_info['props'] = props
net_info['layers'] = layers
return net_info
else:
return props
def is_number(s):
try:
float(s)
return True
except ValueError:
return False
def print_prototxt(net_info):
# whether add double quote
def format_value(value):
#str = u'%s' % value
#if str.isnumeric():
if is_number(value):
return value
elif value == 'true' or value == 'false' or value == 'MAX' or value == 'SUM' or value == 'AVE':
return value
else:
return '\"%s\"' % value
def print_block(block_info, prefix, indent):
blanks = ''.join([' ']*indent)
print('%s%s {' % (blanks, prefix))
for key,value in block_info.items():
if type(value) == OrderedDict:
print_block(value, key, indent+4)
elif type(value) == list:
for v in value:
print('%s %s: %s' % (blanks, key, format_value(v)))
else:
print('%s %s: %s' % (blanks, key, format_value(value)))
print('%s}' % blanks)
props = net_info['props']
layers = net_info['layers']
print('name: \"%s\"' % props['name'])
print('input: \"%s\"' % props['input'])
print('input_dim: %s' % props['input_dim'][0])
print('input_dim: %s' % props['input_dim'][1])
print('input_dim: %s' % props['input_dim'][2])
print('input_dim: %s' % props['input_dim'][3])
print('')
for layer in layers:
print_block(layer, 'layer', 0)
def save_prototxt(net_info, protofile, region=True):
fp = open(protofile, 'w')
# whether add double quote
def format_value(value):
#str = u'%s' % value
#if str.isnumeric():
if is_number(value):
return value
elif value == 'true' or value == 'false' or value == 'MAX' or value == 'SUM' or value == 'AVE':
return value
else:
return '\"%s\"' % value
def print_block(block_info, prefix, indent):
blanks = ''.join([' ']*indent)
print('%s%s {' % (blanks, prefix), end="\n", file=fp)
for key,value in block_info.items():
if type(value) == OrderedDict:
print_block(value, key, indent+4)
elif type(value) == list:
for v in value:
print('%s %s: %s' % (blanks, key, format_value(v)), end="\n", file=fp)
else:
if key[0:6] == 'biases':
key = 'biases'
print('%s %s: %s' % (blanks, key, format_value(value)), end="\n", file=fp)
print('%s}' % blanks, end="\n", file=fp)
props = net_info['props']
layers = net_info['layers']
print('name: \"%s\"' % props['name'], end="\n", file=fp)
print('input: \"%s\"' % props['input'], end="\n", file=fp)
print('input_dim: %s' % props['input_dim'][0], end="\n", file=fp)
print('input_dim: %s' % props['input_dim'][1], end="\n", file=fp)
print('input_dim: %s' % props['input_dim'][2], end="\n", file=fp)
print('input_dim: %s' % props['input_dim'][3], end="\n", file=fp)
print('', end="\n", file=fp)
for layer in layers:
if layer['type'] != 'Region' or region == True:
print_block(layer, 'layer', 0)
fp.close()
if __name__ == '__main__':
import sys
if len(sys.argv) != 2:
print('Usage: python prototxt.py model.prototxt')
exit()
net_info = parse_prototxt(sys.argv[1])
print_prototxt(net_info)
save_prototxt(net_info, 'tmp.prototxt')
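As a short illustration of the change made in steps ① and ③: dict.has_key() was removed in Python 3, so membership tests must use the in operator instead. A minimal sketch (the sample dict here is only for illustration):
# Python 2 wrote:  if block.has_key('name'):
# Python 3 removed dict.has_key(), so the equivalent test is:
block = {'name': 'conv1', 'type': 'Convolution'}   # illustrative dict only
if 'name' in block:
    print('layer name:', block['name'])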
4. Enter the darknet2caffe directory and run the following commands to copy the three upsample layer files into the caffe directory.
cp caffe_layers/upsample_layer/upsample_layer.hpp ../caffe/include/caffe/layers/
cp caffe_layers/upsample_layer/upsample_layer.c* ../caffe/src/caffe/layers/
5. Go to caffe's src/caffe/proto/ directory and modify the caffe.proto file.
cd ../caffe/src/caffe/proto/
Inside message LayerParameter { ... }, add:
optional UpsampleParameter upsample_param = 150;
Then add the UpsampleParameter message definition at the end of caffe.proto, as shown below:
message UpsampleParameter {
optional int32 scale = 1 [default = 1];
}
6. In the caffe directory, run the following commands to rebuild caffe.
make clean
make -j4
make pycaffe
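After the rebuild, you can optionally confirm from Python that the regenerated caffe_pb2 module knows about the field added in step 5. This check only uses the standard protobuf descriptor API and assumes the rebuild above succeeded:
# confirm that the rebuilt pycaffe proto contains the UpsampleParameter additions
from caffe.proto import caffe_pb2

layer_fields = caffe_pb2.LayerParameter.DESCRIPTOR.fields_by_name
print("upsample_param field present:", "upsample_param" in layer_fields)
print("UpsampleParameter message present:", hasattr(caffe_pb2, "UpsampleParameter"))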
At this point, the caffe and darknet2caffe environments have been fully set up on Ubuntu.