Neural networks can be built with the torch.nn package. nn relies on autograd (see notes part 2) to define models and differentiate them. An nn.Module contains the network layers and a forward(input) method that returns the output.
1. Basic steps for training a network
- Define a neural network with learnable parameters (weights)
- Iterate over a dataset of inputs
- Process each input through the network
- Compute the loss
- Propagate gradients back into the network's parameters
- Update the weights: new_weight = weight - learning_rate * gradient
2. Defining the network
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 3x3 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 6 * 6, 120)  # 6*6 from image dimension
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features
net = Net()
print(net)
Output:
Net(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=576, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
You only have to define the forward function; the backward function is created automatically by autograd. Any tensor operation can be used in the forward pass.
torch.nn only supports mini-batches of samples, not single samples. For example, nn.Conv2d expects a 4D tensor of shape (nSamples x nChannels x Height x Width). If you only have a single sample, use input.unsqueeze(0) to add a fake batch dimension.
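A minimal sketch of adding the fake batch dimension (the variable names single and batched are hypothetical):
# A single 1-channel 32x32 image has shape (C, H, W) = (1, 32, 32)
single = torch.randn(1, 32, 32)
batched = single.unsqueeze(0)  # prepend a batch dimension -> (1, 1, 32, 32)
print(batched.size())          # torch.Size([1, 1, 32, 32])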
3. Inspecting the network parameters
The learnable parameters of a model are returned by net.parameters():
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
Output:
10
torch.Size([6, 1, 3, 3])
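The 10 entries are the weight and bias tensors of the five layers. To see which tensor each entry corresponds to, net.named_parameters() pairs every parameter with its name, as in this short sketch:
# Print every parameter tensor with its name and shape
for name, p in net.named_parameters():
    print(name, p.size())  # e.g. conv1.weight torch.Size([6, 1, 3, 3])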
4. Processing an input and calling backward
# a random 32 x 32 input
input = torch.randn(1, 1, 32, 32)  # 4D tensor: (batch, channels, height, width)
out = net(input)
print(out)
Output:
tensor([[-0.1615,  0.1094,  0.1606, -0.1443,  0.0543, -0.1621,  0.1377,  0.1748,
          0.0129, -0.0754]], grad_fn=<AddmmBackward>)
Zero the gradient buffers of all parameters, then backpropagate with random gradients:
net.zero_grad()
out.backward(torch.randn(1, 10))
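Note that out is a 1x10 tensor rather than a scalar, which is why backward() is given an explicit gradient tensor of the same shape; with a scalar output no argument is needed. A small comparison sketch (out2 is a hypothetical fresh forward pass, since the previous graph has already been freed):
net.zero_grad()
out2 = net(input)      # fresh forward pass
out2.sum().backward()  # scalar output: backward() needs no gradient argument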
5. Loss function
output = net(input)               # forward pass
target = torch.randn(10)          # a dummy target, for example
target = target.view(1, -1)       # make it the same shape as output
criterion = nn.MSELoss()          # mean squared error loss
loss = criterion(output, target)  # compute the loss
print(loss)
Output:
tensor(0.9596, grad_fn=<MseLossBackward>)
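As a sanity check (not part of the original notes), nn.MSELoss with its default reduction is just the mean of the squared element-wise differences:
# Equivalent manual computation of the MSE loss (manual_mse is a hypothetical name)
manual_mse = ((output - target) ** 2).mean()
print(manual_mse)  # matches the criterion value above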
6. Backprop
Now, if you follow loss backwards using its .grad_fn attribute, you will see a graph of computations that looks like this:
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
-> view -> linear -> relu -> linear -> relu -> linear
-> MSELoss
-> loss
When loss.backward() is called, the whole graph is differentiated with respect to the loss, and every tensor in the graph with requires_grad=True has the gradient accumulated into its .grad tensor.
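Accumulation means repeated backward calls add up rather than overwrite. A tiny standalone sketch, unrelated to the network above (w is a hypothetical tensor):
w = torch.ones(3, requires_grad=True)
(2 * w).sum().backward()
(3 * w).sum().backward()
print(w.grad)  # tensor([5., 5., 5.]): 2 + 3 accumulated into .grad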
To follow a few steps of the backward graph:
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
Output:
<MseLossBackward object at 0x7f437c57bcf8>
<AddmmBackward object at 0x7f437c57bda0>
<AccumulateGrad object at 0x7f437c57bda0>
To backpropagate the error:
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
Output:
conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([-0.0205, -0.0230, 0.0066, -0.0229, -0.0113, -0.0123])
7. Updating the weights (optimizers)
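The simplest update rule is plain SGD: weight = weight - learning_rate * gradient, the rule listed as the last step in section 1. A minimal hand-written sketch:
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)  # in-place: weight -= lr * grad
In practice you usually want other update rules (SGD with momentum, Adam, RMSProp, ...), which is what torch.optim provides: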
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
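Note that the gradient buffers are zeroed manually with optimizer.zero_grad() because, as described in section 6, gradients accumulate. A hedged sketch of how these lines sit inside a full training loop, assuming a hypothetical trainloader that yields (inputs, targets) mini-batches:
for epoch in range(2):                      # loop over the dataset twice
    for inputs, targets in trainloader:     # trainloader is assumed here
        optimizer.zero_grad()               # clear accumulated gradients
        outputs = net(inputs)               # forward pass
        loss = criterion(outputs, targets)  # compute the loss
        loss.backward()                     # backward pass
        optimizer.step()                    # update the weights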