Configuring and Running Caffe + SSD on Ubuntu 14.04 (CPU-only)

Step 1:

 git clone https://github.com/weiliu89/caffe.git
 cd caffe
 git checkout ssd

Cloning from GitHub can be slow; you can look up a workaround on your own. For example:

1. Add the following entries to the /etc/hosts file (a command-line sketch follows after this list):

151.101.113.194 global-ssl.fastly.net
192.30.253.112 github.com

2. Flush the DNS cache:

sudo /etc/init.d/networking restart
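
If you prefer to do this entirely from the command line, here is a minimal sketch (the IP addresses are simply the ones listed above and may change over time):

sudo tee -a /etc/hosts > /dev/null <<'EOF'
151.101.113.194 global-ssl.fastly.net
192.30.253.112 github.com
EOF
sudo /etc/init.d/networking restart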

Step 2:

Go into the caffe directory:

cd  caffe/
cp  Makefile.config.example  Makefile.config

Modify Makefile.config as follows:

# 
# All modification made by Intel Corporation: © 2016 Intel Corporation
# 
# All contributions by the University of California:
# Copyright (c) 2014, 2015, The Regents of the University of California (Regents)
# All rights reserved.
# 
# All other contributions:
# Copyright (c) 2014, 2015, the respective contributors
# All rights reserved.
# For the list of contributors go to https://github.com/BVLC/caffe/blob/master/CONTRIBUTORS.md
# 
# 
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# 
#     * Redistributions of source code must retain the above copyright notice,
#       this list of conditions and the following disclaimer.
#     * Redistributions in binary form must reproduce the above copyright
#       notice, this list of conditions and the following disclaimer in the
#       documentation and/or other materials provided with the distribution.
#     * Neither the name of Intel Corporation nor the names of its contributors
#       may be used to endorse or promote products derived from this software
#       without specific prior written permission.
# 
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
# USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
CPU_ONLY := 1

# USE_MKL2017_AS_DEFAULT_ENGINE flag is OBSOLETE
# Put this at the top your train_val.protoxt or solver.prototxt file:
# engine: "MKL2017" 
# or use this option with caffe tool:
# -engine "MKL2017"

# USE_MKLDNN_AS_DEFAULT_ENGINE flag is OBSOLETE
# Put this at the top your train_val.protoxt or solver.prototxt file:
# engine: "MKLDNN" 
# or use this option with caffe tool:
# -engine "MKLDNN"

# uncomment to disable IO dependencies and corresponding data layers
USE_OPENCV := 1
USE_LEVELDB := 1
USE_LMDB := 1

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# If you use Intel compiler define a path to newer boost if not used
# already. 
# BOOST_ROOT := 

# Intel(r) Machine Learning Scaling Library (uncomment to build
# with MLSL for multi-node training)
# USE_MLSL :=1

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
#CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
#	     -gencode arch=compute_20,code=sm_21 \
#	     -gencode arch=compute_30,code=sm_30 \
#	     -gencode arch=compute_35,code=sm_35 \
#	     -gencode arch=compute_50,code=sm_50 \
#	     -gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := mkl
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!

BLAS_DIR := $(MKLROOT)/include
BLAS_LIB := $(MKLROOT)/lib/intel64

# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		# $(ANACONDA_HOME)/include/python2.7 \
		# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#                 /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/i386-linux-gnu/hdf5/serial /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment to enable training performance monitoring
# PERFORMANCE_MONITORING := 1

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# Uncomment to disable OpenMP support.
# USE_OPENMP := 0

# The ID of the GPU that 'make runtest' will use to run unit tests.
#TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @
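
Before moving on to compilation, caffe's usual build dependencies need to be present. This is a minimal sketch for Ubuntu 14.04; the package list follows the standard BVLC install instructions and is an assumption here, so adjust it to your system:

sudo apt-get update
# core build dependencies for caffe, plus python-dev/python-numpy for pycaffe
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev \
    libhdf5-serial-dev protobuf-compiler libgflags-dev libgoogle-glog-dev \
    liblmdb-dev libboost-all-dev python-dev python-numpy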

Step 3:

Compile caffe (run make clean first if you are recompiling):

make all -j6       # adjust -jN to your machine; e.g. -j16 for a 16-core processor
                   # check the core count with: cat /proc/cpuinfo |grep "cores"|uniq
make pycaffe -j6
make test -j6
make runtest -j6   # this step is not strictly required

If you see:

[  FAILED  ] AdamSolverTest/0.TestSnapshotShare, where TypeParam = caffe::CPUDevice<float>

If Intel MKL is used as the BLAS library, this may be because MKL's floating-point computation mode is not set correctly. Run the following command:

export MKL_CBWR=AUTO

Then run make runtest again.
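
To keep this setting for future shells, you can append it to your shell startup file (a minimal sketch, assuming bash):

echo 'export MKL_CBWR=AUTO' >> ~/.bashrc
source ~/.bashrc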

Step 4:

Convert the images to LMDB files. For the VOC-format dataset generated earlier, create a new folder named widerface under the data directory (i.e. /home/xxx/caffe-ssd/data).

Copy the files from VOC0712 into it and modify them.
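
A minimal sketch of this step (the caffe root /home/xxx/caffe-ssd and the exact set of copied files are assumptions; the SSD repo keeps create_list.sh, create_data.sh and labelmap_voc.prototxt under data/VOC0712):

cd /home/xxx/caffe-ssd
mkdir -p data/widerface
# copy the VOC0712 scripts and label map as a starting point, then edit them
cp data/VOC0712/create_list.sh data/VOC0712/create_data.sh data/VOC0712/labelmap_voc.prototxt data/widerface/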

create_list.sh:

#!/bin/bash

root_dir=/home/caffe/caffe/data/widerface/   # modified
sub_dir=ImageSets/Main
bash_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
for dataset in trainval test
do
  dst_file=$bash_dir/$dataset.txt
  if [ -f $dst_file ]
  then
    rm -f $dst_file
  fi
  for name in wider_face   # modified
  do
    if [[ $dataset == "test" && $name == "VOC2012" ]]
    then
      ...
(The remainder of the script is omitted here.)

Run the script:
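
For example (assuming the script was copied to data/widerface as above and is run from the caffe root):

cd /home/xxx/caffe-ssd
./data/widerface/create_list.sh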

At this point you may hit: error while loading shared libraries: libmkl_rt.so: cannot open shared object file: No such file or directory.

The fix is as follows:

Use locate libmkl_rt.so to find the directory that contains the file, then:

# cat /etc/ld.so.conf
include ld.so.conf.d/*.conf
# echo "/opt/intel/mkl/lib/intel64/" >> /etc/ld.so.conf
# ldconfig
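
You can then verify that the dynamic loader can see the library (a quick check, assuming the path you appended matches what locate reported):

ldconfig -p | grep libmkl_rt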

When the script finishes successfully, it generates three .txt files.

Modifications to create_data.sh:

data_root_dir="/home/caffe/caffe/data/widerface/"
dataset_name="widerface"

Run the script:
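
For example (same assumptions about paths as above):

cd /home/xxx/caffe-ssd
./data/widerface/create_data.sh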

If at this point you see "no module named caffe" or "no module named caffe.proto", the cause is that:

the create_annoset.py file under scripts/ does not add the pycaffe path to sys.path.

Add the following to it:

import sys
sys.path.append('your caffe path/python')  # "your caffe path" is the directory where caffe-ssd is installed
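
Alternatively, instead of editing the script, you can export the pycaffe path in the shell before running it (a sketch, assuming the same caffe root as above):

export PYTHONPATH=/home/xxx/caffe-ssd/python:$PYTHONPATH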

 
