https://github.com/SeanNaren/warp-ctc/blob/pytorch_bindings/pytorch_binding/README.md
# PyTorch binding for WarpCTC

## Installation
Install PyTorch.

`WARP_CTC_PATH` should be set to the location of a built WarpCTC (i.e. `libwarpctc.so`). This defaults to `../build`, so from within a new warp-ctc clone you could build WarpCTC like this:
```
git clone https://github.com/baidu-research/warp-ctc.git
cd warp-ctc
mkdir build; cd build
cmake ..
make
```
Otherwise, set `WARP_CTC_PATH` to wherever you have `libwarpctc.so` installed. If you have a GPU, you should also make sure that `CUDA_HOME` is set to the CUDA home directory (i.e. where `include/cuda.h` and `lib/libcudart.so` live). For example:
```
export CUDA_HOME="/usr/local/cuda"
```

Now install the bindings:

```
cd pytorch_binding
python setup.py install
```
If you try the above and get a `dlopen` error on OSX with anaconda3 (as recommended by PyTorch):

```
cd ../pytorch_binding
python setup.py install
cd ../build
cp libwarpctc.dylib /Users/$WHOAMI/anaconda3/lib
```

This resolves the "library not loaded" error. The copy destination can easily be adapted to other Python installs if needed.
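One way to adapt the copy step to other Python installs is to derive the destination from the active interpreter instead of hard-coding the anaconda3 path. This is a sketch, not from the original README; it assumes the interpreter's `sys.prefix/lib` is where the dynamic loader will find the library:

```shell
# Sketch: find the lib/ directory of whichever Python is active, so the
# dylib copy generalizes beyond anaconda3 (this path choice is an assumption)
PYTHON_LIB_DIR="$(python3 -c 'import sys; print(sys.prefix)')/lib"
echo "$PYTHON_LIB_DIR"
# then, from within the build directory:
# cp libwarpctc.dylib "$PYTHON_LIB_DIR"
```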
An example of how to use the bindings:
```python
import torch
from torch.autograd import Variable
from warpctc_pytorch import CTCLoss

ctc_loss = CTCLoss()
# expected probs shape: seqLength x batchSize x alphabet_size
probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous()
labels = Variable(torch.IntTensor([1, 2]))
label_sizes = Variable(torch.IntTensor([2]))
probs_sizes = Variable(torch.IntTensor([2]))
probs = Variable(probs, requires_grad=True)  # tells autograd to compute gradients for probs
cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
cost.backward()
```
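Note that `labels` is a flat 1-D tensor: the label sequences of every utterance in the batch are concatenated together, and `label_sizes` records the length of each one. A minimal pure-Python sketch of how that flat layout maps back to per-utterance sequences (the helper name and sample values here are illustrative, not part of the binding):

```python
# warp-ctc expects all batch labels concatenated into one flat 1-D sequence,
# with a per-utterance length vector saying where each sequence ends.
labels = [1, 2, 3, 1, 1]   # e.g. utterance 0 -> [1, 2], utterance 1 -> [3, 1, 1]
label_sizes = [2, 3]       # length of each utterance's label sequence

def split_labels(flat, sizes):
    """Recover per-utterance label sequences from the flat layout."""
    seqs, start = [], 0
    for n in sizes:
        seqs.append(flat[start:start + n])
        start += n
    return seqs

print(split_labels(labels, label_sizes))  # [[1, 2], [3, 1, 1]]
```

In the two-utterance batch above, the loss would therefore receive `labels = [1, 2, 3, 1, 1]` and `label_sizes = [2, 3]` rather than a padded 2-D tensor.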