I. Tensor
1. Creating an uninitialized tensor
import torch
x=torch.Tensor(5,3)# allocates memory without initializing the values
print(x)
1.00000e-36 *
0.0000 0.0000 0.0000
0.0000 0.0000 0.0000
1.7644 0.0000 0.0000
0.0000 0.0001 0.0000
1.8972 0.0000 0.0000
[torch.FloatTensor of size 5x3]
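torch.Tensor(5,3) returns whatever happened to be in the allocated memory, which is why the values above look arbitrary. When deterministic initial values are needed, the explicitly initialized constructors can be used instead; a minimal sketch:

```python
import torch

# explicitly initialized alternatives to the uninitialized constructor
a = torch.zeros(5, 3)  # every element is 0.0
b = torch.ones(5, 3)   # every element is 1.0

print(a.sum())  # sum is 0.0
print(b.sum())  # sum is 15.0 (5 * 3 ones)
```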
2. Creating a randomly initialized tensor
x=torch.rand(5,3)# uniform random values in [0, 1)
print(x)
0.3957 0.0976 0.3073
0.7409 0.9263 0.4604
0.7930 0.7133 0.7451
0.4581 0.8434 0.8505
0.6871 0.9408 0.8432
[torch.FloatTensor of size 5x3]
3. Several ways to add tensors
# method 1: the + operator
y=torch.rand(5,3)
print(x+y)
# method 2: the functional form
print(torch.add(x,y))
# method 3: in-place addition (modifies y)
y.add_(x)
print(y)
1.3257 0.2825 0.4671
1.3416 1.9150 0.7093
1.4926 0.9520 0.8118
0.6503 1.4023 1.0461
1.0384 1.4045 1.7990
[torch.FloatTensor of size 5x3]
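The addition forms can be sketched side by side; torch.add also accepts an out= argument to write the result into a preallocated tensor, a detail not shown above (this sketch assumes the modern tensor API, e.g. torch.empty):

```python
import torch

x = torch.rand(5, 3)
y = torch.rand(5, 3)

s1 = x + y                 # operator form
s2 = torch.add(x, y)       # functional form
out = torch.empty(5, 3)
torch.add(x, y, out=out)   # write into a preallocated tensor
y.add_(x)                  # in-place: y now holds x + y

print(torch.equal(s1, s2))  # True — all forms agree
```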
4. Slicing is supported
print(x[:,1])
0.0976
0.9263
0.7133
0.8434
0.9408
[torch.FloatTensor of size 5]
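Beyond selecting a column, tensors support the same indexing patterns as NumPy arrays; a few examples on a fresh 5x3 tensor:

```python
import torch

x = torch.rand(5, 3)

print(x[0])       # first row, shape (3,)
print(x[1, 2])    # a single element
print(x[:, :2])   # first two columns, shape (5, 2)
print(x[2:4])     # rows 2 and 3, shape (2, 3)
```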
5. Changing the shape with view
x=torch.randn(4,4)
y=x.view(16)
z=x.view(-1,8)
print(x.size(),y.size(),z.size())
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
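view requires the total number of elements to stay the same; passing -1 for one dimension asks PyTorch to infer it. view also returns a tensor that shares storage with the original, so writing through the view changes the source tensor:

```python
import torch

x = torch.randn(4, 4)   # 16 elements
y = x.view(16)          # flatten
z = x.view(-1, 8)       # -1 is inferred as 16 / 8 = 2
w = x.view(2, 2, 4)     # any shape with 16 elements works

y[0] = 99.0             # views share storage with x...
print(x[0, 0])          # ...so x sees the change: 99.0
```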
6. Converting between tensors and NumPy arrays
#tensor to numpy
a=torch.ones(5)
b=a.numpy()# modifying one changes the other, because they share the same underlying memory
print(b)
[ 1. 1. 1. 1. 1.]
a.add_(1)
print(a)
print(b)
2
2
2
2
2
[torch.FloatTensor of size 5]
[ 2. 2. 2. 2. 2.]
#numpy to tensor
import numpy as np
a=np.ones(5)
b=torch.from_numpy(a)
print(a)
print(b)
[ 1. 1. 1. 1. 1.]
1
1
1
1
1
[torch.DoubleTensor of size 5]
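torch.from_numpy also shares memory with the source array, and the tensor inherits NumPy's dtype — float64 by default, which is why a DoubleTensor appears above. A sketch of the sharing in this direction:

```python
import numpy as np
import torch

a = np.ones(5)            # dtype float64 by default
b = torch.from_numpy(a)   # shares a's memory

np.add(a, 1, out=a)       # modify the array in place
print(b)                  # the tensor sees the change: all 2.0
```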
II. Variable
Wrapping a tensor in a Variable makes autograd record the operations on each node (tensor) in the computation graph, so that gradients can be computed automatically during the backward pass.
Key points:
A Variable wraps a tensor; Variable.data is the underlying Tensor.
Variable.backward computes gradients automatically; Variable.grad holds the gradient.
If any Variable in the graph has requires_grad=True, gradients will be computed; if any Variable has volatile=True, no gradients are computed at all. volatile=True is typically set on test-time inputs to speed up inference and reduce memory usage.
from torch.autograd import Variable
x=Variable(torch.ones(2,2),requires_grad=True)
print(x)
Variable containing:
1 1
1 1
[torch.FloatTensor of size 2x2]
y=x+2
z=y*y*3
out=z.mean()
print(z,out)
Variable containing:
27 27
27 27
[torch.FloatTensor of size 2x2]
Variable containing:
27
[torch.FloatTensor of size 1]
out.backward()
print(x.grad)
Variable containing:
4.5000 4.5000
4.5000 4.5000
[torch.FloatTensor of size 2x2]
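The 4.5 entries follow directly from the chain rule. With y = x + 2 and z = 3y², the output is

```latex
out = \frac{1}{4}\sum_{i} z_i, \qquad z_i = 3(x_i + 2)^2
```

so each partial derivative is

```latex
\frac{\partial\, out}{\partial x_i} = \frac{1}{4}\cdot 6(x_i + 2) = \frac{3}{2}(x_i + 2)
\;\Big|_{x_i = 1} = 4.5
```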
# if the output is not a scalar, backward() must be given a gradient argument of the same shape as the output
x=torch.rand(3)
x=Variable(x,requires_grad=True)
y=x*2
while y.data.norm()<1000:
    y=y*2
print(y)
Variable containing:
1509.3214
454.7229
1031.8900
[torch.FloatTensor of size 3]
gradients=torch.Tensor([0.1,1.0,0.0001])# the output is not a scalar, so backward needs a gradient vector
y.backward(gradients)
print(x.grad)
Variable containing:
204.8000
2048.0000
0.2048
[torch.FloatTensor of size 3]
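What backward(gradients) computes here is a vector-Jacobian product: after k doublings, y = 2^k · x, so each entry of x.grad is gradients[i] · 2^k. The loop above runs a data-dependent number of times, but with a fixed k the result is deterministic. A sketch with k = 11 (2^11 = 2048), using the 0.4+ tensor API instead of Variable — an assumption of this example:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
for _ in range(10):   # 10 more doublings: y = x * 2**11
    y = y * 2

gradients = torch.tensor([0.1, 1.0, 0.0001])
y.backward(gradients)   # vector-Jacobian product: gradients * dy/dx
print(x.grad)           # gradients * 2048 = 204.8, 2048.0, 0.2048
```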
Note: all of the above uses PyTorch 0.3. In 0.4, Variable and Tensor were merged into a single class; see the PyTorch 0.4.0 Migration Guide for the details.