1. Compiling Caffe fails with: src/caffe/net.cpp:9:18: fatal error: hdf5.h: No such file or directory (compilation terminated)
Change these two lines in Makefile.config so they read:
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
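If you are unsure where the HDF5 headers and libraries actually live on your system, a quick check before editing Makefile.config can help (a minimal sketch; the package name assumes Ubuntu's serial HDF5 build):
# locate the hdf5 header and the libraries the linker can see
find /usr/include -name hdf5.h 2>/dev/null
ldconfig -p | grep hdf5
# if nothing shows up, install the serial HDF5 development package first
sudo apt-get install libhdf5-serial-dev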
2. Caffe: cannot find -lhdf5_hl and -lhdf5
From: laiyuanyu, http://blog.csdn.net/autocyz/article/details/51783857
// The important change:
Below the line "# Whatever else you find you need goes here.", change
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
to:
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
// This is needed because Ubuntu 16.04 changed where these files live, in particular the HDF5 headers and libraries, so the paths must be updated.
cd /usr/lib/x86_64-linux-gnu
// Then, depending on which versions are installed, run the following two commands:
sudo ln -s libhdf5_serial.so.10.1.0 libhdf5.so
sudo ln -s libhdf5_serial_hl.so.10.0.2 libhdf5_hl.so
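The exact version suffixes (10.1.0, 10.0.2) differ between Ubuntu releases, so check what is actually installed before creating the links; a minimal sketch:
# list the installed serial HDF5 libraries and their version suffixes
ls -l /usr/lib/x86_64-linux-gnu/libhdf5_serial*
# adjust the version numbers in the ln -s commands above to match this output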
3. When using Python 3, the build may fail because boost_python3 cannot be found. Following the same approach as in issue 2, replace the python2.7 links with python3.5 links (see the sketch below).
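On Ubuntu 16.04 the Python 3 flavour of the Boost library is usually named libboost_python-py35, so the corresponding link might look like the sketch below (the exact file names are an assumption; check with ls first):
cd /usr/lib/x86_64-linux-gnu
ls libboost_python*   # see which variants are actually installed
sudo ln -s libboost_python-py35.so libboost_python3.so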
4. Compiling pycaffe fails with: fatal error: numpy/arrayobject.h: No such file or directory
numpy is in fact already installed (it ships with Anaconda2, and import numpy works fine in Python), yet the build fails here anyway. Fix: sudo apt-get install python-numpy
Then run
sudo make pycaffe -j16
and pycaffe compiles successfully.
If it still fails, try the following in Python:
import numpy as np
np.get_include()
which prints:
/usr/local/lib/python2.7/dist-packages/numpy/core/include
In Makefile.config, find PYTHON_INCLUDE and note that it is slightly different:
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/lib/python2.7/dist-packages/numpy/core/include
Add the missing "local" so it becomes:
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/local/lib/python2.7/dist-packages/numpy/core/include
then run make pycaffe again and it builds fine.
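Instead of guessing the path, you can ask numpy directly for its include directory and paste the result into PYTHON_INCLUDE; a minimal sketch:
python -c "import numpy; print(numpy.get_include())"
# copy the printed path into the PYTHON_INCLUDE entry of Makefile.config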
5. ImportError: No module named _caffe (Ubuntu)
Reference: https://github.com/BVLC/caffe/issues/263
First make sure you have already run:
make pycaffe
Next, add caffe/python to PYTHONPATH. The following applies to the current user only: there is a hidden .bashrc file in the user's home directory where the path can be set.
$ gedit ~/.bashrc
Add:
export PYTHONPATH=/home/xxx/caffe/python:$PYTHONPATH
Separate multiple paths with colons. After saving, run:
$ source ~/.bashrc
Check whether the variable took effect: echo $PYTHONPATH
If the added path appears, the setting is active; otherwise log back in (or reboot) so it takes effect.
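To verify the whole chain at once (the /home/xxx prefix is the same placeholder as above), check that Python can now locate the compiled module; a sketch:
export PYTHONPATH=/home/xxx/caffe/python:$PYTHONPATH
python -c "import caffe; print(caffe.__file__)"
# if this still reports "No module named _caffe", confirm that make pycaffe succeeded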
6. Problems that came up while compiling cpm_caffe
2019/2/26
[----------] 5 tests from EmbedLayerTest/1, where TypeParam = caffe::CPUDevice<double>
[ RUN ] EmbedLayerTest/1.TestGradientWithBias
[ OK ] EmbedLayerTest/1.TestGradientWithBias (21 ms)
[ RUN ] EmbedLayerTest/1.TestForward
[ OK ] EmbedLayerTest/1.TestForward (0 ms)
[ RUN ] EmbedLayerTest/1.TestForwardWithBias
[ OK ] EmbedLayerTest/1.TestForwardWithBias (0 ms)
[ RUN ] EmbedLayerTest/1.TestGradient
[ OK ] EmbedLayerTest/1.TestGradient (15 ms)
[ RUN ] EmbedLayerTest/1.TestSetUp
[ OK ] EmbedLayerTest/1.TestSetUp (0 ms)
[----------] 5 tests from EmbedLayerTest/1 (36 ms total)
[----------] 5 tests from BlobSimpleTest/1, where TypeParam = double
[ RUN ] BlobSimpleTest/1.TestInitialization
[ OK ] BlobSimpleTest/1.TestInitialization (0 ms)
[ RUN ] BlobSimpleTest/1.TestReshapeZero
[ OK ] BlobSimpleTest/1.TestReshapeZero (0 ms)
[ RUN ] BlobSimpleTest/1.TestReshape
[ OK ] BlobSimpleTest/1.TestReshape (0 ms)
[ RUN ] BlobSimpleTest/1.TestLegacyBlobProtoShapeEquals
[ OK ] BlobSimpleTest/1.TestLegacyBlobProtoShapeEquals (0 ms)
[ RUN ] BlobSimpleTest/1.TestPointersCPUGPU
[ OK ] BlobSimpleTest/1.TestPointersCPUGPU (0 ms)
[----------] 5 tests from BlobSimpleTest/1 (0 ms total)
[----------] 12 tests from DataLayerTest/3, where TypeParam = caffe::GPUDevice<double>
[ RUN ] DataLayerTest/3.TestReadCropTrainLevelDB
[ OK ] DataLayerTest/3.TestReadCropTrainLevelDB (38 ms)
[ RUN ] DataLayerTest/3.TestReadCropTrainSequenceUnseededLevelDB
[ OK ] DataLayerTest/3.TestReadCropTrainSequenceUnseededLevelDB (73 ms)
[ RUN ] DataLayerTest/3.TestReadCropTrainLMDB
[ OK ] DataLayerTest/3.TestReadCropTrainLMDB (21 ms)
[ RUN ] DataLayerTest/3.TestReshapeLMDB
[ OK ] DataLayerTest/3.TestReshapeLMDB (53 ms)
[ RUN ] DataLayerTest/3.TestReadCropTrainSequenceSeededLMDB
[ OK ] DataLayerTest/3.TestReadCropTrainSequenceSeededLMDB (61 ms)
[ RUN ] DataLayerTest/3.TestReshapeLevelDB
[ OK ] DataLayerTest/3.TestReshapeLevelDB (53 ms)
[ RUN ] DataLayerTest/3.TestReadLevelDB
[ OK ] DataLayerTest/3.TestReadLevelDB (186 ms)
[ RUN ] DataLayerTest/3.TestReadCropTestLMDB
[ OK ] DataLayerTest/3.TestReadCropTestLMDB (10 ms)
[ RUN ] DataLayerTest/3.TestReadLMDB
[ OK ] DataLayerTest/3.TestReadLMDB (122 ms)
[ RUN ] DataLayerTest/3.TestReadCropTrainSequenceSeededLevelDB
[ OK ] DataLayerTest/3.TestReadCropTrainSequenceSeededLevelDB (15 ms)
[ RUN ] DataLayerTest/3.TestReadCropTrainSequenceUnseededLMDB
[ OK ] DataLayerTest/3.TestReadCropTrainSequenceUnseededLMDB (29 ms)
[ RUN ] DataLayerTest/3.TestReadCropTestLevelDB
[ OK ] DataLayerTest/3.TestReadCropTestLevelDB (35 ms)
[----------] 12 tests from DataLayerTest/3 (696 ms total)
[----------] 1 test from LayerFactoryTest/1, where TypeParam = caffe::CPUDevice<double>
[ RUN ] LayerFactoryTest/1.TestCreateLayer
F0226 15:29:57.704740 23919 db_leveldb.cpp:16] Check failed: status.ok() Failed to open leveldb
IO error: /LOCK: Permission denied
*** Check failure stack trace: ***
@ 0x7f2aa7fa55cd google::LogMessage::Fail()
@ 0x7f2aa7fa7433 google::LogMessage::SendToLog()
@ 0x7f2aa7fa515b google::LogMessage::Flush()
@ 0x7f2aa7fa7e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f2aa58d2ec3 caffe::db::LevelDB::Open()
@ 0x7f2aa58d39d0 caffe::DataReader::Body::InternalThreadEntry()
@ 0x7f2aa572bc25 caffe::InternalThread::entry()
@ 0x7f2aa62695d5 (unknown)
@ 0x7f2aa4e696ba start_thread
@ 0x7f2aa4b9f41d clone
@ (nil) (unknown)
Makefile:525: recipe for target 'runtest' failed
make: *** [runtest] Aborted (core dumped)
amax@amax:/data0/Masters/XiaoWenFu/convolutional-pose-machines-release/caffe$
# This is a permission problem; the fix is to run the tests with sudo:
sudo make runtest -j6
[ OK ] LSTMLayerTest/3.TestLSTMUnitGradient (4224 ms)
[ RUN ] LSTMLayerTest/3.TestLSTMUnitGradientNonZeroCont
[ OK ] LSTMLayerTest/3.TestLSTMUnitGradientNonZeroCont (3511 ms)
[ RUN ] LSTMLayerTest/3.TestForward
[ OK ] LSTMLayerTest/3.TestForward (12 ms)
[ RUN ] LSTMLayerTest/3.TestGradient
[ OK ] LSTMLayerTest/3.TestGradient (25159 ms)
[----------] 9 tests from LSTMLayerTest/3 (406298 ms total)
[----------] 1 test from LayerFactoryTest/3, where TypeParam = caffe::GPUDevice<double>
[ RUN ] LayerFactoryTest/3.TestCreateLayer
F0226 16:04:42.030730 33862 db_leveldb.cpp:16] Check failed: status.ok() Failed to open leveldb
Invalid argument: : does not exist (create_if_missing is false)
*** Check failure stack trace: ***
@ 0x7fa2632eb5cd google::LogMessage::Fail()
@ 0x7fa2632ed433 google::LogMessage::SendToLog()
@ 0x7fa2632eb15b google::LogMessage::Flush()
@ 0x7fa2632ede1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fa260c18ec3 caffe::db::LevelDB::Open()
@ 0x7fa260c199d0 caffe::DataReader::Body::InternalThreadEntry()
@ 0x7fa260a71c25 caffe::InternalThread::entry()
@ 0x7fa2615af5d5 (unknown)
@ 0x7fa2601af6ba start_thread
@ 0x7fa25fee541d clone
@ (nil) (unknown)
Makefile:525: recipe for target 'runtest' failed
make: *** [runtest] Aborted (core dumped)
amax@amax:/data0/Masters/XiaoWenFu/convolutional-pose-machines-release/caffe$
# My guess is that the test program itself is at fault rather than the library; this is unrelated to my research direction, so it can be ignored.
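If you want to confirm that the rest of the suite passes without using sudo, one option is to run the gtest binary directly and skip the failing LayerFactory tests; a sketch, assuming the default .build_release layout of the Caffe Makefile:
# --gtest_filter with a leading '-' excludes matching tests
.build_release/test/test_all.testbin --gtest_filter='-*LayerFactoryTest*'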
7. error while loading shared libraries: libcudart.so.9.1: cannot open shared object file: No such file or directory
sudo cp /usr/local/cuda-9.1/lib64/libcudart.so.9.1 /usr/local/lib/libcudart.so.9.1 && sudo ldconfig
sudo cp /usr/local/cuda-9.1/lib64/libcublas.so.9.1 /usr/local/lib/libcublas.so.9.1 && sudo ldconfig
sudo cp /usr/local/cuda-9.1/lib64/libcurand.so.9.1 /usr/local/lib/libcurand.so.9.1 && sudo ldconfig
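Copying the libraries works, but an alternative that survives CUDA updates is to register the CUDA library directory with the dynamic linker; a sketch, assuming CUDA is installed under /usr/local/cuda-9.1:
echo "/usr/local/cuda-9.1/lib64" | sudo tee /etc/ld.so.conf.d/cuda-9.1.conf
sudo ldconfig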
8. src/caffe/net.cpp:8:18: fatal error: hdf5.h: No such file or directory
This build follows the guide for configuring the GPU version of Caffe on Ubuntu.
1.src/caffe/net.cpp:8:18: fatal error: hdf5.h: No such file or directory
compilation terminated.
Makefile:581: recipe for target '.build_release/src/caffe/net.o' failed
make: *** [.build_release/src/caffe/net.o] Error 1
Fix:
Around line 85 of Makefile.config, add /usr/include/hdf5/serial/ to INCLUDE_DIRS, i.e. change the first line below into the second:
INCLUDE_DIRS:= $(PYTHON_INCLUDE) /usr/local/include
INCLUDE_DIRS:= $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
Around line 173 of the Makefile, change hdf5_hl and hdf5 to hdf5_serial_hl and hdf5_serial, i.e. change the first line below into the second:
LIBRARIES +=glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5
LIBRARIES +=glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
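After the edits, you can confirm that the linker actually sees the serial HDF5 libraries; a minimal check:
ldconfig -p | grep hdf5_serial
# the libhdf5_serial and libhdf5_serial_hl entries should appear in the output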
9. There are three ways to set environment variables on Linux: one for the current terminal, one for the current user, and one for all users
While working on Linux (OS: CentOS 7.2) I came across this line:
export PYTHONPATH=$PYTHONPATH:/home/usrname/models:/home/usrname/models/one
It appends the models directory and its one subdirectory to PYTHONPATH.
A quick search shows there are three ways to change environment variables; the line above uses the first one described below.
To recap, the three methods are: for the current terminal, for the current user, and for all users.
Method 1: for the current terminal
In the current terminal, enter: export PATH=$PATH:<path to add>
This only works in the current terminal; once it is closed, or in any other terminal, the setting is gone.
export NDK_ROOT=/home/jiang/soft/Android-ndk-r8e # only valid in the current terminal
Method 2: for the current user
There is a hidden .bashrc file in the user's home directory; add the PATH setting to it as follows:
$ gedit ~/.bashrc
Add:
export PATH=<path to add>:$PATH
To add multiple paths, simply write:
export PATH=<path 1>:<path 2>: ... :$PATH
with each path separated by a colon.
This takes effect on every login.
PYTHONPATH is added the same way; in .bashrc add
export PYTHONPATH=/home/zhao/setup/caffe-master/python:/home/zhao/setup/mypy:$PYTHONPATH
After saving, run $ source ~/.bashrc in the terminal so the environment variable takes effect immediately.
Method 3: for all users
$ sudo gedit /etc/profile
Add:
export PATH=<path to add>:$PATH
and that is all.
Run echo $PATH in a terminal to view the environment variable.
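As a concrete per-user example (method 2), the edit and the check can all be done from the terminal; a sketch reusing the paths from above:
echo 'export PYTHONPATH=/home/zhao/setup/caffe-master/python:$PYTHONPATH' >> ~/.bashrc
source ~/.bashrc
echo $PYTHONPATH   # the new path should appear at the front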