Input code
import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x.sum()
print(x)
print(y)
Resulting warning
UserWarning: torch.autograd.variable(...) is deprecated, use torch.tensor(...) instead
warnings.warn("torch.autograd.variable(...) is deprecated, use torch.tensor(...) instead")
In versions of PyTorch before 0.4, a Variable wrapped a Tensor: to compute gradients through backpropagation, a tensor had to be wrapped in a Variable. Since PyTorch 0.4, Variable and Tensor have been merged, which means a tensor no longer needs to be wrapped in a Variable to compute gradients; a Tensor now has all the capabilities of a Variable.
As the flag that marks whether a tensor participates in autograd, requires_grad is now an attribute of Tensor. As long as any input Tensor of an operation has requires_grad=True, autograd automatically tracks the computation history and supports backpropagation.
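Under PyTorch 0.4 and later, the deprecated snippet above can therefore be written without Variable at all. A minimal sketch of the modern equivalent (the shape and names mirror the input code above):

import torch

# create the tensor directly with requires_grad=True instead of wrapping it in Variable
x = torch.ones(2, 2, requires_grad=True)
y = x.sum()
print(x)  # tensor([[1., 1.], [1., 1.]], requires_grad=True)
print(y)  # tensor(4., grad_fn=<SumBackward0>)

y.backward()   # autograd tracked the sum, so backward() works
print(x.grad)  # tensor([[1., 1.], [1., 1.]]): each element contributes 1 to the sum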
Official example code
import torch

# By default, a Tensor is created with requires_grad=False
x = torch.ones(1)  # create a tensor with requires_grad=False (default)
x.requires_grad
# out: False

# create another Tensor, also with requires_grad=False
y = torch.ones(1)  # another tensor with requires_grad=False

# both inputs have requires_grad=False, so does the output
z = x + y
# Since both inputs x and y have requires_grad=False and cannot be
# differentiated, the result z of the operation z = x + y cannot be
# differentiated either: its requires_grad is also False
z.requires_grad
# out: False

# then autograd won't track this computation. let's verify!
# Because autograd cannot track z, calling backward() raises an error
z.backward()
# out: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

# now create a tensor with requires_grad=True
w = torch.ones(1, requires_grad=True)
w.requires_grad
# out: True

# add to the previous result that has require_grad=False
# Because the input Tensor w has requires_grad=True, the operation
# producing total supports backpropagation and automatic differentiation.
total = w + z
# the total sum now requires grad!
total.requires_grad
# out: True

# autograd can compute the gradients as well
total.backward()
w.grad
# out: tensor([ 1.])

# and no computation is wasted to compute gradients for x, y and z, which don't require grad
# Since x, y and z have requires_grad=False, no gradients were computed for them
z.grad == x.grad == y.grad == None
# out: True
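For completeness, a short sketch of two related ways to control autograd tracking after the Tensor/Variable merge, using the real Tensor.requires_grad_() and torch.no_grad() APIs (the variable names here are illustrative):

import torch

a = torch.ones(2, 2)
a.requires_grad_(True)  # in-place toggle; note the trailing underscore
print(a.requires_grad)  # out: True

# temporarily disable tracking, e.g. for inference
with torch.no_grad():
    b = a * 2           # computed without recording a graph
print(b.requires_grad)  # out: False, since no grad_fn was recorded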