torch defines seven CPU tensor types and eight GPU tensor types:
Data type | CPU tensor | GPU tensor |
---|---|---|
32-bit floating point | torch.FloatTensor | torch.cuda.FloatTensor |
64-bit floating point | torch.DoubleTensor | torch.cuda.DoubleTensor |
16-bit floating point | N/A | torch.cuda.HalfTensor |
8-bit integer (unsigned) | torch.ByteTensor | torch.cuda.ByteTensor |
8-bit integer (signed) | torch.CharTensor | torch.cuda.CharTensor |
16-bit integer (signed) | torch.ShortTensor | torch.cuda.ShortTensor |
32-bit integer (signed) | torch.IntTensor | torch.cuda.IntTensor |
64-bit integer (signed) | torch.LongTensor | torch.cuda.LongTensor |
When using these constructors, passing plain integers initializes a tensor of that shape (with uninitialized values).
torch.FloatTensor(2,3)
torch.DoubleTensor(2,3)
torch.ByteTensor(2,3)
torch.CharTensor(2,3)
torch.ShortTensor(2,3)
torch.IntTensor(2,3)
torch.LongTensor(2,3)
You can also pass a sequence such as [2,3], which will instead be treated as data (an array).
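The shape-vs-data distinction above can be seen directly; this is a minimal sketch:

```python
import torch

# Plain integers are interpreted as a shape: a 2x3 tensor of uninitialized values
a = torch.FloatTensor(2, 3)
print(a.shape)  # torch.Size([2, 3])

# A sequence is interpreted as data: a 1-D tensor holding the values 2 and 3
b = torch.FloatTensor([2, 3])
print(b)        # tensor([2., 3.])
print(b.shape)  # torch.Size([2])
```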
Correspondence with dtype:
FloatTensor -> 'float32'
DoubleTensor -> 'float64'
ByteTensor -> 'uint8'
CharTensor -> 'int8'
ShortTensor -> 'int16'
IntTensor -> 'int32'
LongTensor -> 'int64'
You can verify this yourself with code:
import torch
a = torch.FloatTensor(1)
print(a.dtype)
# torch.float32
In principle, tensors of different dtypes cannot be mixed in arithmetic (newer PyTorch versions do perform automatic type promotion in many cases, but explicit conversion is still the safe choice). So I often ran into errors when multiplying a 'float64' tensor by a 'float32' one:
RuntimeError: expected type torch.DoubleTensor but got torch.FloatTensor
You can explicitly cast a tensor to another dtype.
import torch
t = torch.tensor(3.14159265)  # float32 by default
t = t.float()   # torch.float32
t = t.double()  # torch.float64
t = t.int()     # torch.int32 (fractional part truncated)
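Explicit casting is how you fix the mixed-dtype error above; a minimal sketch:

```python
import torch

# One float64 tensor and one float32 tensor
a = torch.randn(3, dtype=torch.float64)
b = torch.randn(3, dtype=torch.float32)

# Convert explicitly before multiplying; this works on all PyTorch versions
c = a * b.double()
print(c.dtype)  # torch.float64

# Or go the other way and drop everything to float32 (may lose precision)
d = a.float() * b
print(d.dtype)  # torch.float32
```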
You can also pull this trick on a whole model.
For example, the code below makes the LSTM's parameters (and therefore its outputs) 'float64' instead of the default 'float32', which makes it easy to combine with other 'float64' computations later.
self.lstm = nn.LSTM(input_size=self.emb_dim,
hidden_size=n_lstm_units,
dropout=1-keep_prob).double()
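A self-contained sketch of the same idea, with hypothetical sizes chosen for illustration (the `input_size`/`hidden_size` values are not from the original code):

```python
import torch
import torch.nn as nn

# Calling .double() on a module converts all its parameters to float64
lstm = nn.LSTM(input_size=8, hidden_size=16).double()
print(next(lstm.parameters()).dtype)  # torch.float64

# The input must then also be float64, or you get a dtype mismatch
x = torch.randn(5, 1, 8, dtype=torch.float64)  # (seq_len, batch, input_size)
out, (h, c) = lstm(x)
print(out.dtype)  # torch.float64
```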
One question remains: does changing a Tensor's dtype directly affect the backward pass of gradients?
A PyTorch core developer answered: it does not.
@albanD:
Change of types are considered as any other operations by the autograd engine and so all the gradients will be computed properly. Note that there might be a loss of precision of going from double to float as you would expect.
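This is easy to check yourself; a minimal sketch where a cast sits in the middle of the graph:

```python
import torch

x = torch.tensor([1.0], requires_grad=True)  # float32 leaf
y = (x.double() * 3.0).sum()                 # cast to float64 mid-graph
y.backward()

# The gradient flows through the cast and comes back in the leaf's dtype
print(x.grad)        # tensor([3.])
print(x.grad.dtype)  # torch.float32
```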
# Chinese manual: https://pytorch-cn.readthedocs.io/zh/latest/package_references/Tensor/