PyTorch Official Tutorial - Getting Started - A 60 Minute Blitz - Automatic Differentiation

AUTOGRAD: AUTOMATIC DIFFERENTIATION

The autograd package provides automatic differentiation for all operations on Tensors.

import torch

Tensors

  • torch.Tensor

  • Training:

To track a tensor:

.requires_grad = True  # start recording all operations on the tensor
.backward()            # compute all gradients automatically
.grad                  # gradients are accumulated into this attribute

To stop tracking:

.detach()

  • Evaluating:

To prevent tracking history (and using memory), wrap the code block in:

with torch.no_grad():

  • The Function class

Tensor and Function are interconnected and together build an acyclic graph that encodes the history of the computation. Each Tensor's .grad_fn attribute references the Function that created it (for Tensors created directly by the user, .grad_fn is None).

  • Computing derivatives:

.backward(gradient)  # the gradient argument can be omitted when the Tensor is a scalar
x = torch.ones(2, 2, requires_grad=True)
print(x)
tensor([[1., 1.],
        [1., 1.]], requires_grad=True)
y = x + 2
print(y)
print(y.grad_fn)
tensor([[3., 3.],
        [3., 3.]], grad_fn=<AddBackward0>)
<AddBackward0 object at 0x000001EF8A3D7160>
z = y * y * 3
out = z.mean()
print(z, out)
tensor([[27., 27.],
        [27., 27.]], grad_fn=<MulBackward0>) tensor(27., grad_fn=<MeanBackward0>)

.requires_grad_( ... ) changes an existing Tensor's requires_grad flag in place (the flag defaults to False when a Tensor is created).

a = torch.randn(2, 2)
a = (a * 3) / (a - 1)
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a * a).sum()
print(b.grad_fn)
False
True
<SumBackward0 object at 0x000001EF8A3D7F98>

Gradients

# out is a scalar
out.backward()  # equivalent to out.backward(torch.tensor(1.))

This gives the gradient $\frac{\partial\, out}{\partial x}$. Writing $o$ for $out$:

$$
\begin{aligned}
o &= \frac{1}{4}\sum_i z_i \\
z_i &= 3\,(x_i + 2)^2 \\
z_i \big|_{x_i = 1} &= 27 \\
\frac{\partial o}{\partial x_i} &= 1.5\,(x_i + 2) \\
\frac{\partial o}{\partial x_i} \Big|_{x_i = 1} &= 4.5
\end{aligned}
$$

# gradients d(out)/dx
print(x.grad)
tensor([[4.5000, 4.5000],
        [4.5000, 4.5000]])
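
As a quick sanity check (a minimal sketch; the name expected is ours), the analytic formula 1.5 * (x + 2) should reproduce x.grad:

# sanity check (sketch): x.grad should equal the analytic gradient 1.5 * (x + 2)
with torch.no_grad():
    expected = 1.5 * (x + 2)
print(torch.allclose(x.grad, expected))  # True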

torch.autograd is an engine for computing vector-Jacobian products.
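
In more detail (the standard definition, restated here for context): if $\vec{y} = f(\vec{x})$, the Jacobian of $\vec{y}$ with respect to $\vec{x}$ is

$$
J = \begin{pmatrix}
\frac{\partial y_1}{\partial x_1} & \cdots & \frac{\partial y_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial y_m}{\partial x_1} & \cdots & \frac{\partial y_m}{\partial x_n}
\end{pmatrix}
$$

and for a given vector $v$, y.backward(v) computes the vector-Jacobian product $J^T \cdot v$. If $v$ is the gradient of some scalar loss $l$ with respect to $\vec{y}$, then by the chain rule $J^T \cdot v$ is the gradient of $l$ with respect to $\vec{x}$.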

x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2
    
print(y)
tensor([-1494.1448,  -286.1320,  -200.1645], grad_fn=<MulBackward0>)
v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(v)
print(x.grad)
tensor([1.0240e+02, 1.0240e+03, 1.0240e-01])
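
In this example y is just an elementwise scaling of x by a single power-of-two factor, so the Jacobian is that factor times the identity and the vector-Jacobian product reduces to the factor times v. A small check (sketch; the name c is ours):

# sketch: y = c * x elementwise, so the Jacobian is c * I and x.grad = c * v
c = (y / x).detach()  # recover the per-element scale factor (all entries equal)
print(c * v)          # matches x.grad printed above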

Stopping autograd

You can stop autograd from tracking history on Tensors with requires_grad=True by wrapping the code block in with torch.no_grad():

print(x.requires_grad)
print((x ** 2).requires_grad)

with torch.no_grad():
    print((x ** 2).requires_grad)
True
True
False
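
Another option, listed at the top of this post, is .detach(), which returns a new Tensor with the same contents but with gradient tracking disabled. A minimal sketch (the name x_detached is ours):

print(x.requires_grad)             # True
x_detached = x.detach()            # same values, detached from the graph
print(x_detached.requires_grad)    # False
print(x.eq(x_detached).all())      # the values are identical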

