4. Compiling warp-ctc with PyTorch 1.1.0
The build of warp-ctc on PyTorch 0.4.1 described above succeeded, but that PyTorch version is too old for the tasks that follow, so the build was repeated against PyTorch 1.1.0.
Version link
Extract, compile, and install following the earlier steps:
cd warp-ctc
mkdir build
cd build
cmake ..
make
#############################
cd ../pytorch_binding
python setup.py install
#############################
# Configure the environment variable
vi ~/.bashrc
# Add the warp-ctc build path at the end of the file
export WARP_CTC_PATH="/yourpath/warp-ctc/build"
# Save the file and exit
#############################
# Apply the configuration
source ~/.bashrc
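After reloading the shell configuration, it can be useful to confirm that WARP_CTC_PATH really points at a directory containing the built shared library, since the pytorch_binding build reads this variable to locate it. A minimal sketch, assuming the build drops libwarpctc.so (or libwarpctc.dylib on macOS) into the build directory:

```python
import os
from pathlib import Path

def find_warpctc_lib(build_dir=None):
    """Return the path to the warp-ctc shared library, or None.

    Looks in `build_dir`, falling back to the WARP_CTC_PATH
    environment variable exported in ~/.bashrc above.
    """
    build_dir = build_dir or os.environ.get("WARP_CTC_PATH", "")
    if not build_dir:
        return None
    for name in ("libwarpctc.so", "libwarpctc.dylib"):
        candidate = Path(build_dir) / name
        if candidate.is_file():
            return candidate
    return None

if __name__ == "__main__":
    lib = find_warpctc_lib()
    print(lib if lib else "warp-ctc library not found; check WARP_CTC_PATH")
```

If this prints the "not found" message after a successful build, the usual culprit is an unexported or mistyped WARP_CTC_PATH.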
Verification
1. Using the provided test files
(ctc-pytorch1.1.0) jjz@user:~/softwares$ cd warp-ctc/pytorch_binding/tests/
(ctc-pytorch1.1.0) jjz@user:~/softwares/warp-ctc/pytorch_binding/tests$ python test_cpu.py
===================================================================================================== test session starts =====================================================================================================
platform linux -- Python 3.7.9, pytest-6.2.0, py-1.10.0, pluggy-0.13.1
rootdir: /home2/jiangjz/softwares/warp-ctc/pytorch_binding, configfile: setup.cfg
collected 4 items
test_cpu.py .... [100%]
====================================================================================================== warnings summary =======================================================================================================
../../../../anaconda3/envs/ctc-pytorch1.1.0/lib/python3.7/site-packages/_pytest/config/__init__.py:1233
/home2/jiangjz/anaconda3/envs/ctc-pytorch1.1.0/lib/python3.7/site-packages/_pytest/config/__init__.py:1233: PytestConfigWarning: Unknown config option: pep8maxlinelength
self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================================================ 4 passed, 1 warning in 0.13s =================================================================================================
(ctc-pytorch1.1.0) jjz@user:~/softwares/warp-ctc/pytorch_binding/tests$ python test_gpu.py
===================================================================================================== test session starts =====================================================================================================
platform linux -- Python 3.7.9, pytest-6.2.0, py-1.10.0, pluggy-0.13.1
rootdir: /home2/jiangjz/softwares/warp-ctc/pytorch_binding, configfile: setup.cfg
collected 4 items
test_gpu.py .... [100%]
====================================================================================================== warnings summary =======================================================================================================
../../../../anaconda3/envs/ctc-pytorch1.1.0/lib/python3.7/site-packages/_pytest/config/__init__.py:1233
/home2/jiangjz/anaconda3/envs/ctc-pytorch1.1.0/lib/python3.7/site-packages/_pytest/config/__init__.py:1233: PytestConfigWarning: Unknown config option: pep8maxlinelength
self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================================================ 4 passed, 1 warning in 11.02s ================================================================================================
(ctc-pytorch1.1.0) jjz@user:~/softwares/warp-ctc/pytorch_binding/tests$
2. Verifying with code
import torch
from warpctc_pytorch import CTCLoss

ctc_loss = CTCLoss()
# expected shape of seqLength x batchSize x alphabet_size;
# warp-ctc takes raw (pre-softmax) activations and applies softmax internally
probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous()
labels = torch.IntTensor([1, 2])    # target sequence; index 0 is reserved for the blank
label_sizes = torch.IntTensor([2])  # length of each target in the batch
probs_sizes = torch.IntTensor([2])  # number of frames for each batch item
probs.requires_grad_(True)  # tells autograd to compute gradients for probs
cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
cost.backward()
print(cost)
(ctc-pytorch1.1.0) jjz@user:~/softwares$ python
Python 3.7.9 (default, Aug 31 2020, 12:42:55)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> from warpctc_pytorch import CTCLoss
>>> ctc_loss = CTCLoss()
>>> # expected shape of seqLength x batchSize x alphabet_size
>>> probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous()
>>> labels = torch.IntTensor([1, 2])
>>> label_sizes = torch.IntTensor([2])
>>> probs_sizes = torch.IntTensor([2])
>>> probs.requires_grad_(True) # tells autograd to compute gradients for probs
tensor([[[0.1000, 0.6000, 0.1000, 0.1000, 0.1000]],
[[0.1000, 0.1000, 0.6000, 0.1000, 0.1000]]], requires_grad=True)
>>> cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
>>> cost.backward()
>>> print(cost)
tensor([2.4629], grad_fn=<_CTCBackward>)
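The value 2.4629 printed above can be cross-checked without warp-ctc at all, using a plain-Python implementation of the CTC forward (alpha) recursion. The sketch below is an illustrative reimplementation, not warp-ctc's actual code; like warp-ctc, it applies a softmax to the raw activations before running the recursion:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def ctc_forward_nll(activations, labels, blank=0):
    """Negative log-likelihood of `labels` given per-frame activations,
    computed with the standard CTC forward (alpha) recursion."""
    probs = [softmax(frame) for frame in activations]
    # Extend the label sequence with blanks: [1, 2] -> [0, 1, 0, 2, 0]
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S = len(ext)
    # alpha[s] = total probability of all prefixes ending at ext[s]
    alpha = [0.0] * S
    alpha[0] = probs[0][ext[0]]
    alpha[1] = probs[0][ext[1]]
    for t in range(1, len(probs)):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]
            if s > 0:
                a += alpha[s - 1]
            # Skip over a blank only between two distinct labels
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]
            new[s] = a * probs[t][ext[s]]
        alpha = new
    # A valid alignment ends on the last label or the trailing blank
    return -math.log(alpha[S - 1] + alpha[S - 2])

activations = [[0.1, 0.6, 0.1, 0.1, 0.1],   # frame 1
               [0.1, 0.1, 0.6, 0.1, 0.1]]   # frame 2
print(round(ctc_forward_nll(activations, [1, 2]), 4))  # 2.4629, matching tensor([2.4629]) above
```

With only two frames and two labels, the single valid alignment is (1, 2), so the loss reduces to -log(softmax(frame1)[1] * softmax(frame2)[2]) = 2.4629, which agrees with the warp-ctc output.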