PyTorch Learning (2): Building a Simple Neural Network, LeNet-5
Building a neural network in PyTorch generally involves the following steps: define a network with trainable parameters, iterate over the input dataset, compute the loss, backpropagate the gradients, and update the parameters.
Define a network with trainable parameters
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # convolution layers: their kernel weights are learnable parameters
        self.conv1 = nn.Conv2d(1, 6, 3)  # 1 input channel, 6 output channels, 3x3 kernel
        self.conv2 = nn.Conv2d(6, 16, 3)
        # fully connected layers
        self.fc1 = nn.Linear(16 * 6 * 6, 120)  # input: 16 channels of 6x6 feature maps
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # max pooling over a 2x2 window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # if the pooling window is square, a single number works too
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # flatten the 3-D feature maps into a 1-D vector per sample
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
output:
Net(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=576, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
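The learnable parameters are all registered on the module and can be listed with net.parameters(). A quick sketch for inspecting them (the sizes follow from the layer definitions above):

params = list(net.parameters())
print(len(params))       # 10: a weight and a bias tensor for each of the 5 layers
print(params[0].size())  # conv1's kernel weights: torch.Size([6, 1, 3, 3])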
Iterate over the input dataset
First, try feeding in a single tensor:
input = torch.ones(1, 1, 32, 32)  # shape: (batch, channels, height, width)
out = net(input)
print(out)
output:
tensor([[ 0.0736, 0.0561, 0.0124, -0.0494, -0.0058, -0.1239, -0.1036, 0.0572,
-0.0019, -0.0115]], grad_fn=<AddmmBackward>)
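Note that torch.nn only supports mini-batches: nn.Conv2d expects a 4-D tensor of shape (batch, channels, height, width), which is why the dummy input above carries a leading batch dimension of 1. If you only have a single image, a sketch of adding the fake batch dimension:

single = torch.ones(1, 32, 32)  # one image: channels x height x width
batched = single.unsqueeze(0)   # insert a batch dimension -> shape (1, 1, 32, 32)
out = net(batched)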
Compute the loss
First make up a simple random target, then compute the loss against it. nn.MSELoss() is a mean-squared-error loss function.
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
output:
tensor(0.7285, grad_fn=<MseLossBackward>)
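MSELoss against a random target is only a stand-in. For an actual 10-class classification task like LeNet-5's, nn.CrossEntropyLoss is the usual choice; it takes the raw logits and an integer class index per sample. A minimal sketch with a made-up label:

criterion_ce = nn.CrossEntropyLoss()
target_ce = torch.tensor([3])              # hypothetical label: the sample is class 3
loss_ce = criterion_ce(output, target_ce)  # output has shape (1, 10), target shape (1,)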
Backpropagate the gradients
Once we have the loss, we can obtain the gradient of the loss with respect to every parameter.
print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear (fc3)
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # AccumulateGrad (a leaf: fc3's bias)
output:
<MseLossBackward object at 0x000001AAA626C208>
<AddmmBackward object at 0x000001AAA7E88438>
<AccumulateGrad object at 0x000001AAA626C208>
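Each grad_fn's next_functions points at the nodes that produced its inputs, so following them walks the autograd graph backwards. A small sketch that traces the first branch down to an AccumulateGrad leaf (the node that deposits a parameter's gradient):

fn = loss.grad_fn
while fn is not None:
    print(type(fn).__name__)  # e.g. MseLossBackward, AddmmBackward, AccumulateGrad
    fn = fn.next_functions[0][0] if fn.next_functions else None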
To backpropagate the loss we only need loss.backward(), but we must clear the existing gradients first, otherwise the new gradients will be accumulated on top of them.
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
output:
conv1.bias.grad before backward
None
conv1.bias.grad after backward
tensor([-0.0010, 0.0000, 0.0000, 0.0000, -0.0117, -0.0164])
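The accumulation is easy to see: calling backward() twice without zeroing in between doubles the gradients. A sketch (the loss is recomputed first, since the graph above was freed by the earlier backward; retain_graph=True keeps it alive for the second call):

output = net(input)
loss = criterion(output, target)
net.zero_grad()
loss.backward(retain_graph=True)   # keep the graph for a second backward pass
g1 = net.conv1.bias.grad.clone()
loss.backward()                    # gradients are added to the existing ones
print(torch.allclose(net.conv1.bias.grad, 2 * g1))  # True: the two passes added up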
Update the trainable parameters
There are many rules for updating a network's parameters: SGD, Nesterov-SGD, Adam, RMSProp, and so on. In PyTorch, torch.optim wraps these update (optimization) methods.
Take SGD as an example: weight = weight - learning_rate * gradient
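This rule can be written out by hand in plain Python; a minimal sketch, assuming the gradients from loss.backward() are still in place:

learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)  # in-place: weight -= lr * gradient

In practice, torch.optim does this bookkeeping for us: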
import torch.optim as optim

# create the optimizer
optimizer = optim.SGD(net.parameters(), lr=1)  # lr=1 so the change is clearly visible
optimizer.zero_grad()  # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
print(net.conv1.bias)
optimizer.step()  # perform the update
print(net.conv1.bias.grad)
print(net.conv1.bias)
output:
Parameter containing:
tensor([ 0.2609, 0.3270, 0.2890, 0.1068, -0.3012, 0.1862],
requires_grad=True)
tensor([ 0.0037, -0.0128, 0.0154, 0.0000, 0.0000, -0.0043])
Parameter containing:
tensor([ 0.2573, 0.3398, 0.2735, 0.1068, -0.3012, 0.1905],
requires_grad=True)
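Putting the pieces together, training just repeats this cycle: forward pass, loss, backward pass, parameter update. A minimal sketch of a training loop, assuming a hypothetical trainloader that yields (inputs, labels) batches (data loading is not covered in this post):

for epoch in range(2):                      # loop over the dataset twice
    for inputs, labels in trainloader:      # hypothetical DataLoader
        optimizer.zero_grad()               # clear the gradients from the last step
        outputs = net(inputs)               # forward pass
        loss = criterion(outputs, labels)   # compute the loss
        loss.backward()                     # backpropagate
        optimizer.step()                    # update the parameters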