import torch

w = torch.tensor([1.], requires_grad=True)  # leaf tensor, created by the user
x = torch.tensor([2.], requires_grad=True)  # leaf tensor
a = torch.add(w, x)   # non-leaf: produced by an operation
b = torch.add(w, 2)   # non-leaf
y = torch.mul(a, b)
y.backward()
print(w.grad)
print(x.grad)
print(a.grad)  # None: gradients of non-leaf tensors are not retained
print(b.grad)  # None (newer PyTorch versions also emit a UserWarning here)
Output:
tensor([6.])
tensor([3.])
None
None
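These numbers can be checked by hand: with y = (w + x)(w + 2), the product rule gives dy/dw = (w + 2) + (w + x) = b + a = 3 + 3 = 6 and dy/dx = (w + 2) = b = 3. A minimal sketch verifying the derivation:

```python
import torch

# y = (w + x) * (w + 2)
# dy/dw = (w + 2) + (w + x) = 3 + 3 = 6
# dy/dx = (w + 2)           = 3
w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
y = (w + x) * (w + 2)
y.backward()
print(w.grad)  # tensor([6.])
print(x.grad)  # tensor([3.])
```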
PyTorch has the notion of a leaf node (leaf tensor). Tensors created directly by the user are leaf nodes, while tensors produced by computing on them are non-leaf nodes. After backward(), leaf nodes have their gradients populated in .grad, but gradients of non-leaf nodes are not retained in memory by default (they are freed once used during backpropagation), so reading .grad on a non-leaf node returns None.
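If you do need the gradient of an intermediate tensor, PyTorch provides Tensor.retain_grad() to ask autograd to keep it. A sketch applied to the example above (dy/da = b = 3 and dy/db = a = 3):

```python
import torch

w = torch.tensor([1.], requires_grad=True)
x = torch.tensor([2.], requires_grad=True)
a = torch.add(w, x)
b = torch.add(w, 2)
a.retain_grad()  # keep the gradient of this non-leaf tensor
b.retain_grad()
y = torch.mul(a, b)
y.backward()
print(a.grad)  # tensor([3.])  (dy/da = b)
print(b.grad)  # tensor([3.])  (dy/db = a)
```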
Below are a few more examples.
Example 1:
x = torch.ones([2,2],requires_grad=True)
out = x.sum()
out.backward()
print(x.grad)
Output:
tensor([[1., 1.],
        [1., 1.]])
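Every element of the gradient is 1 because out is a plain sum, so ∂out/∂x_ij = 1 for each entry. The same gradient can be obtained without touching x.grad at all, via torch.autograd.grad, a sketch:

```python
import torch

x = torch.ones([2, 2], requires_grad=True)
out = x.sum()
# torch.autograd.grad returns the gradients directly instead of
# accumulating them into x.grad
(g,) = torch.autograd.grad(out, x)
print(g)  # tensor([[1., 1.], [1., 1.]])
```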
Example 2:
import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
print(x.is_leaf)   # True: created by the user
x = x.view(2, 2)   # rebinds the name x to a non-leaf view
print(x.is_leaf)   # False
out = x.sum()
out.backward()
print(x.grad)      # None: x now refers to a non-leaf tensor
Output:
True
False
None
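The None in Example 2 comes from rebinding the name x: after x = x.view(2, 2), the name x points at the non-leaf view, and the original leaf is no longer reachable by name. Giving the view a separate name keeps the leaf accessible, a sketch:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
v = x.view(2, 2)       # v is a non-leaf view; x stays a leaf
out = v.sum()
out.backward()
print(x.is_leaf, v.is_leaf)  # True False
print(x.grad)                # tensor([1., 1., 1., 1.])
```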
Example 3:
import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)
print(x.is_leaf)
# x = x.view(2, 2)    # with the view commented out, x remains a leaf
# print(x.is_leaf)
out = x.sum()
out.backward()
print(x.grad)
Output:
True
tensor([1., 1., 1., 1.])