Installing warp-CTC

https://github.com/SeanNaren/warp-ctc

Download it and upload it to the server.

Extract it.

Rename the extracted directory (to warp-ctc, so it matches the commands below).

cd warp-ctc
mkdir build; cd build
cmake ..
make

If any of the steps above fail because of missing dependencies, install whatever is missing and repeat the following until the build succeeds:

cd build
make clean
cmake ..
make

The next step is to install the PyTorch binding:

cd pytorch_binding
python setup.py install

This fails with the following error:

generating build/warpctc_pytorch/_warp_ctc/__warp_ctc.c
(already up-to-date)
not modified: 'build/warpctc_pytorch/_warp_ctc/__warp_ctc.c'
running install
running bdist_egg
running egg_info
writing warpctc_pytorch.egg-info/PKG-INFO
writing dependency_links to warpctc_pytorch.egg-info/dependency_links.txt
writing top-level names to warpctc_pytorch.egg-info/top_level.txt
reading manifest file 'warpctc_pytorch.egg-info/SOURCES.txt'
writing manifest file 'warpctc_pytorch.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
copying warpctc_pytorch/_warp_ctc/__init__.py -> build/lib.linux-x86_64-3.6/warpctc_pytorch/_warp_ctc
running build_ext
building 'warpctc_pytorch._warp_ctc.__warp_ctc' extension
gcc -pthread -B /home1/fzp/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-pib/include -I/home1/fzp/anaconda3/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home1/fzp/anaconda3/li-I/home1/fzp/attention-OCR/warp-ctc-pytorch_bindings/include -I/home1/fzp/anaconda3/include/python3.6m -c build/warpctc_pyto__warp_ctc.o -std=c++11 -fPIC -std=c99 -DWARPCTC_ENABLE_GPU
cc1: warning: command line option ‘-std=c++11’ is valid for C++/ObjC++ but not for C [enabled by default]
gcc -pthread -B /home1/fzp/anaconda3/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-pib/include -I/home1/fzp/anaconda3/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/home1/fzp/anaconda3/li-I/home1/fzp/attention-OCR/warp-ctc-pytorch_bindings/include -I/home1/fzp/anaconda3/include/python3.6m -c /home1/fzp/attenti4-3.6/home1/fzp/attention-OCR/warp-ctc-pytorch_bindings/pytorch_binding/src/binding.o -std=c++11 -fPIC -std=c99 -DWARPCTC_EN
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default]
cc1plus: warning: command line option ‘-std=c99’ is valid for C/ObjC but not for C++ [enabled by default]
/home1/fzp/attention-OCR/warp-ctc-pytorch_bindings/pytorch_binding/src/binding.cpp: In function ‘int gpu_ctc(THCudaTensor*, 
/home1/fzp/attention-OCR/warp-ctc-pytorch_bindings/pytorch_binding/src/binding.cpp:92:49: error: cannot convert ‘THCudaTensotTensor*, int)’
     int probs_size = THFloatTensor_size(probs, 2);
                                                 ^
/home1/fzp/attention-OCR/warp-ctc-pytorch_bindings/pytorch_binding/src/binding.cpp:105:61: error: invalid conversion from ‘s
     void* gpu_workspace = THCudaMalloc(state, gpu_size_bytes);
                                                             ^
/home1/fzp/attention-OCR/warp-ctc-pytorch_bindings/pytorch_binding/src/binding.cpp:105:61: error: too few arguments to funct
In file included from /home1/fzp/anaconda3/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC/THC.h:4:0,
                 from /home1/fzp/attention-OCR/warp-ctc-pytorch_bindings/pytorch_binding/src/binding.cpp:9:
/home1/fzp/anaconda3/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/THC/THCGeneral.h:209:21: note: declared h
 THC_API cudaError_t THCudaMalloc(THCState *state, void **ptr, size_t size);
                     ^
error: command 'gcc' failed with exit status 1
Solution:

The note at the end of the error output points at the problem: binding.cpp is still written against the old TH/THC API. Judging from the declared signature THCudaMalloc(THCState *state, void **ptr, size_t size), the failing calls need updating along these lines: pass the workspace pointer as a separate argument, i.e. THCudaMalloc(state, &gpu_workspace, gpu_size_bytes), and query the CUDA tensor's size with THCudaTensor_size(state, probs, 2) instead of THFloatTensor_size. After making the change, the build succeeds.

Now test whether the installation succeeded:

  1. cd /home/xxx/warp-ctc/pytorch_binding/tests

  2. python test_gpu.py

Error 1:

Traceback (most recent call last):
  File "test_gpu.py", line 1, in <module>
    import torch
ImportError: No module named torch

  Although PyTorch had already reached 0.5.0+ by the time I wrote this, the SeanNaren/warp-ctc project is currently only compatible with PyTorch 0.4.0. So during installation, do not install PyTorch from source with git clone; install it with conda instead. I tried a 0.5+ version and it errored, so 0.4 is the safe choice.

Install PyTorch 0.4.0 with conda (the conda install pytorch==0.4 step is shown further below).

Error 2:

Traceback (most recent call last):
  File "test_gpu.py", line 3, in <module>
    import pytest
ImportError: No module named pytest

conda install pytest

This installed the following two packages:

   py:     1.4.34-py27_0 
   pytest: 3.2.1-py27_0  

(crnn) [XXXXX@JXQ-240-26-65 tests]$ python test_cpu.py 
===================================================================================================================== test session starts ======================================================================================================================
platform linux2 -- Python 2.7.13, pytest-3.2.1, py-1.4.34, pluggy-0.4.0
rootdir: /export/gpudata/fujingling/projects/warp-ctc/pytorch_binding, inifile: setup.cfg
collected 4 items                                                                                                                                                                                                                                               

test_cpu.py ....

=================================================================================================================== 4 passed in 0.09 seconds ===================================================================================================================

I also looked at other people's blog posts:

Create a test.py under build with the following code; running it raises no errors:

import torch
from warpctc_pytorch import CTCLoss
ctc_loss = CTCLoss()
# expected shape of seqLength x batchSize x alphabet_size
probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous()
labels = torch.IntTensor([1, 2])
label_sizes = torch.IntTensor([2])
probs_sizes = torch.IntTensor([2])
probs.requires_grad_(True)  # tells autograd to compute gradients for probs
cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
cost.backward()
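
Since the whole point of this build is the GPU path, it is also worth running a GPU variant of the same test (a minimal sketch, assuming a CUDA device is visible and the binding was compiled with GPU support; following the warp-ctc binding's convention, only probs goes to the GPU while the label and size tensors stay on the CPU):

import torch
from warpctc_pytorch import CTCLoss

ctc_loss = CTCLoss()
# expected shape of seqLength x batchSize x alphabet_size, this time on the GPU
probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1],
                            [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous().cuda()
probs.requires_grad_(True)
labels = torch.IntTensor([1, 2])        # labels and sizes remain CPU IntTensors
label_sizes = torch.IntTensor([2])
probs_sizes = torch.IntTensor([2])
cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
cost.backward()
print(cost)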

 

So I figured I could finally run my CRNN:

cd  crnn.pytorch-master

 cp -r /xxx/xxx/XXX/projects/warp-ctc/pytorch_binding/warpctc_pytorch .

But, to my surprise, it errored out again; apparently the installation had not actually taken effect.

 python train.py 
Traceback (most recent call last):
  File "train.py", line 12, in <module>
    from warpctc_pytorch import CTCLoss
  File "/export/gpudata/fujingling/projects/crnn.pytorch-master/warpctc_pytorch/__init__.py", line 6, in <module>
    from ._warp_ctc import *
  File "/export/gpudata/fujingling/projects/crnn.pytorch-master/warpctc_pytorch/_warp_ctc/__init__.py", line 3, in <module>
    from .__warp_ctc import lib as _lib, ffi as _ffi
ImportError: No module named __warp_ctc

 

Solution:

>>> import torch
>>> print torch.__version__
0.1.12

This is probably where the problem lies: the environment still has the old PyTorch 0.1.12.

conda uninstall pytorch
Fetching package metadata .........................
Solving package specifications: .

Package plan for package removal in environment /export/gpudata/fujingling/conda/envs/crnn:

The following packages will be REMOVED:

    pytorch:     0.1.12-py27cuda7.5cudnn5.1_1 http://conda.jdfin.local/conda/free
    torchvision: 0.1.8-py27_0                 http://conda.jdfin.local/conda/free

Proceed ([y]/n)? y

Another error: the installed cudnn version is too old.

 conda install pytorch==0.4
Fetching package metadata .........................
Solving package specifications: .

UnsatisfiableError: The following specifications were found to be in conflict:
  - pytorch ==0.4 -> cudnn 7.*
  - pytorch-gpu -> cudnn ==5.1
Use "conda info <package>" to see the dependencies for each package.
 

Uninstall it and keep going:

conda uninstall cudnn
Fetching package metadata .........................
Solving package specifications: .

Package plan for package removal in environment /export/gpudata/fujingling/conda/envs/crnn:

The following packages will be REMOVED:

    cudnn:       5.1-0         http://conda.jdfin.local/conda/free
    pytorch-gpu: 0.1.12-py27_0 http://conda.jdfin.local/conda/free

Proceed ([y]/n)? y

 

Before reinstalling cudnn, first check my CUDA version to see whether the machine can support cudnn 7.*:

cat /usr/local/cuda/version.txt
CUDA Version 8.0.61

Checking the list of cudnn versions each CUDA release supports, cudnn 7.0.5 and 7.1.3 are the best fit for my machine (CUDA 8.0).

(crnn) [xxx@xxxxxx crnn.pytorch-master]$ conda install cudnn==7.1.3
Fetching package metadata .........................
Solving package specifications: .

UnsatisfiableError: The following specifications were found to be in conflict:
  - cudnn ==7.1.3 -> cudatoolkit 8.0*
  - libtorch-gpu -> cudatoolkit ==7.5
Use "conda info <package>" to see the dependencies for each package.

Error again: the versions do not line up (libtorch-gpu still pins cudatoolkit 7.5).

Uninstall again, then reinstall:

conda uninstall cudatoolkit
Fetching package metadata .........................
Solving package specifications: .

Package plan for package removal in environment /export/gpudata/fujingling/conda/envs/crnn:

The following packages will be REMOVED:

    cudatoolkit:  7.5-2           http://conda.jdfin.local/conda/free
    libtorch-gpu: 0.1.12-0        http://conda.jdfin.local/conda/free
    nccl:         1.3.4-cuda7.5_1 http://conda.jdfin.local/conda/free

Now install them back one at a time.

 conda install cudatoolkit==8.0
Fetching package metadata .........................
Solving package specifications: .

Package plan for installation in environment /export/gpudata/fujingling/conda/envs/crnn:

The following NEW packages will be INSTALLED:

    cudatoolkit: 8.0-3 http://conda.jdfin.local/conda/free

Proceed ([y]/n)? y

 

Install cudnn:

conda install cudnn==7.1.3   (log omitted)

Install pytorch:

conda install pytorch==0.4
Fetching package metadata .........................
Solving package specifications: .

Package plan for installation in environment /export/gpudata/fujingling/conda/envs/crnn:

The following NEW packages will be INSTALLED:

    intel-openmp: 2019.0-118           http://repos.jd.com/conda/main     
    libgcc-ng:    8.2.0-hdf63c60_1     http://repos.jd.com/conda/main     
    libgfortran:  3.0.0-1              http://conda.jdfin.local/conda/free
    libstdcxx-ng: 8.2.0-hdf63c60_1     http://repos.jd.com/conda/main     
    nccl:         1.3.5-cuda9.0_0      http://repos.jd.com/conda/main     
    ninja:        1.7.2-0              http://conda.jdfin.local/conda/free
    openblas:     0.2.14-4             http://conda.jdfin.local/conda/free
    pytorch:      0.4.0-py27hdf912b8_0 http://repos.jd.com/conda/main     

The following packages will be UPDATED:

    cudatoolkit:  8.0-3                http://conda.jdfin.local/conda/free --> 9.0-h13b8566_0  http://repos.jd.com/conda/main     
    mkl:          2017.0.3-0           http://conda.jdfin.local/conda/free --> 2019.0-118      http://repos.jd.com/conda/main     

The following packages will be DOWNGRADED:

    cudnn:        7.1.3-cuda8.0_0      http://repos.jd.com/conda/main      --> 7.1.2-cuda9.0_0 http://repos.jd.com/conda/main     
    numpy:        1.13.1-py27_0        http://conda.jdfin.local/conda/free --> 1.10.2-py27_0   http://conda.jdfin.local/conda/free

Proceed ([y]/n)? y
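
With everything reinstalled, a quick sanity check (a sketch, to be run inside the crnn environment) confirms that PyTorch now sees the CUDA/cudnn stack conda settled on:

import torch
print(torch.__version__)              # should now report 0.4.0
print(torch.version.cuda)             # CUDA version this PyTorch build was compiled against
print(torch.backends.cudnn.version()) # cudnn version PyTorch is using
print(torch.cuda.is_available())      # True if the GPU can actually be used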

In the end, after a lot of back and forth, I found that moving the warp-ctc/pytorch_binding/warpctc_pytorch folder into the directory one level above the crnn project made everything work, presumably because the bare copy sitting inside the project (which lacks the compiled __warp_ctc extension) had been shadowing the package that setup.py installed into site-packages.
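
A quick way to confirm which copy of the package Python is actually picking up (a small sketch, run from inside the crnn project directory):

import warpctc_pytorch
# The path should point at the copy installed by setup.py (under site-packages),
# which contains the compiled __warp_ctc extension, rather than at a bare
# source copy sitting inside the project directory.
print(warpctc_pytorch.__file__)
from warpctc_pytorch import CTCLoss
ctc_loss = CTCLoss()
print(ctc_loss)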
