Solution: call .cpu() on the lengths tensor before passing it in:
packed = rnn.pack_padded_sequence(x, x_len.cpu(), batch_first=True)
Cause (from the PyTorch maintainers' discussion):
It’s because of this PR #41984, that preserves the device of as_tensor argument if it’s torch tensor. pack_padded_sequence calls as_tensor on the lengths tensor: https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/rnn.py#L234. It caused implicit copy before, but does not now.
Given that implementation does not do anything smart with the lengths on the GPU, and only copies and synchronizes behind users back, @myleott do you think we should restore previous behavior, or can you call .cpu() on the lengths in your script before calling pack_padded_sequence?
In short: if the lengths passed to pack_padded_sequence are given as a tensor, that tensor must reside on the CPU, even when the input sequences themselves live on the GPU.
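A minimal runnable sketch of the fix (the tensor shapes and lengths here are made up for illustration; in real training code x_len often arrives on the GPU, which is where the error occurs):

```python
import torch
import torch.nn.utils.rnn as rnn

# Hypothetical batch: 3 sequences padded to length 5, feature dim 4
x = torch.randn(3, 5, 4)
# Actual (unpadded) lengths, sorted descending as pack_padded_sequence
# expects by default (otherwise pass enforce_sorted=False)
x_len = torch.tensor([5, 3, 2])

# .cpu() is a no-op if the lengths are already on the CPU, and moves
# them off the GPU if they are not -- this is the fix described above
packed = rnn.pack_padded_sequence(x, x_len.cpu(), batch_first=True)

# The packed data holds one row per valid (non-padding) timestep,
# so its first dimension equals the sum of the lengths: 5 + 3 + 2 = 10
print(packed.data.shape)  # torch.Size([10, 4])
```

Calling .cpu() unconditionally is safe and cheap here, since the lengths tensor is tiny; the implicit copy that older PyTorch versions performed did the same thing behind the scenes.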