Creating a Neural Network with PyTorch

A typical procedure for training a neural network is:
1. Define the network, which has learnable weights
2. Iterate over the input data
3. Process the data through the network
4. Compute the loss
5. Backpropagate the gradients
6. Update the weights

Defining the network

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 3)        # 1 input channel, 6 output channels, 3x3 kernel
        self.conv2 = nn.Conv2d(6, 16, 3)       # 6 input channels, 16 output channels, 3x3 kernel
        self.fc1 = nn.Linear(16 * 6 * 6, 120)  # 16*6*6 input features (16 channels of 6x6 feature maps), 120 output features
        self.fc2 = nn.Linear(120, 84)          # 120 input features, 84 output features
        self.fc3 = nn.Linear(84, 10)           # 84 input features, 10 output features

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))  # conv + pool: (b,1,w,h) -> (b,6,(w-3+1)/2,(h-3+1)/2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)       # conv + pool: (b,6,w,h) -> (b,16,(w-3+1)/2,(h-3+1)/2)
        x = x.view(-1, x.size()[1:].numel())             # flatten: (b,16,w,h) -> (b,16*w*h)
        x = F.relu(self.fc1(x))                          # (b,16*6*6) -> (b,120)
        x = F.relu(self.fc2(x))                          # (b,120) -> (b,84)
        x = self.fc3(x)                                  # (b,84) -> (b,10)
        return x
net = Net()
print(net)

# https://stackoverflow.com/questions/53784998/how-are-the-pytorch-dimensions-for-linear-layers-calculated/53787076#53787076
----------------------------------------------------------------
Net(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=576, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
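
Where does in_features=576 come from? With a 32*32 input, each 3x3 convolution (no padding) shrinks the spatial size by 2, and each 2x2 max pool halves it (flooring), so the feature map that reaches fc1 is 16 channels of 6*6. A quick sanity check, assuming the 32*32 input used below:

x = torch.randn(1, 1, 32, 32)
x = F.max_pool2d(F.relu(net.conv1(x)), 2)  # 32 -> 30 -> 15
print(x.size())                            # torch.Size([1, 6, 15, 15])
x = F.max_pool2d(F.relu(net.conv2(x)), 2)  # 15 -> 13 -> 6 (floored)
print(x.size())                            # torch.Size([1, 16, 6, 6]), i.e. 16*6*6 = 576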

We only need to define the forward method; the backward method is defined for us automatically by autograd.
The learnable parameters of the network can be inspected with net.parameters().

params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight
---------------------
10
torch.Size([6, 1, 3, 3])
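
The 10 entries correspond to the weight and bias of each of the five layers. If you want to see which is which, net.named_parameters() makes the mapping explicit (an extra check, not part of the original tutorial):

for name, p in net.named_parameters():
    print(name, p.size())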

Let's feed in a random 32*32 input.

input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
----------------------------------------------
tensor([[ 0.0158,  0.0992, -0.1584, -0.0231,  0.0408, -0.0601, -0.0561,  0.0461,
          0.0854,  0.0818]], grad_fn=<AddmmBackward>)

Zero the gradient buffers of all parameters and backprop with random gradients:

net.zero_grad()
out.backward(torch.randn(1, 10))

Note: torch.nn only supports mini-batches, not single samples. For example, nn.Conv2d takes a 4D tensor of shape (nSamples, nChannels, Height, Width). If you have a single sample, use input.unsqueeze(0) to add a fake batch dimension.
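
For example (a hypothetical single grayscale image, not part of the tutorial):

single = torch.randn(1, 32, 32)   # (nChannels, Height, Width) -- one sample, no batch dimension
batched = single.unsqueeze(0)     # (1, nChannels, Height, Width)
print(batched.size())             # torch.Size([1, 1, 32, 32])
print(net(batched).size())        # torch.Size([1, 10])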

Loss function

A loss function takes the (output, target) pair as arguments and computes a value that estimates how far the output is from the target.
There are several different loss functions in the nn package; the simplest is nn.MSELoss, which computes the mean-squared error between the output and the target.

output = net(input)
target = torch.randn(10)  # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
-------------------------------------------------------------
tensor(0.6443, grad_fn=<MseLossBackward>)

If you follow loss backwards through its .grad_fn attribute, you will see a graph of computations that looks like this:

input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> view -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss

When we call loss.backward(), the whole graph is differentiated with respect to the loss, and every tensor in the graph with requires_grad=True will have its gradient accumulated into its .grad attribute.

print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU
----------------------------------------------------------
<MseLossBackward object at 0x7f16dfd30ba8>
<AddmmBackward object at 0x7f16dfd550b8>
<AccumulateGrad object at 0x7f16dfd30ba8>
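
Accumulated really does mean accumulated: calling backward() twice adds the gradients together rather than overwriting them. A small illustrative check (loss2 and g1 are throwaway names introduced here, not part of the tutorial):

net.zero_grad()
loss2 = criterion(net(input), target)
loss2.backward(retain_graph=True)                   # keep the graph so we can backward again
g1 = net.conv1.bias.grad.clone()
loss2.backward()                                    # second backward on the same graph
print(torch.allclose(net.conv1.bias.grad, 2 * g1))  # True: gradients were added, not replaced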

Backprop

Before backpropagating we need to clear the existing gradients, otherwise the new gradients will be accumulated onto the old ones.

net.zero_grad()     # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
------------------------------------------------
conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([ 0.0083,  0.0066,  0.0212, -0.0175, -0.0130,  0.0090])

Updating the weights

The simplest update rule used in practice is Stochastic Gradient Descent (SGD):

weight = weight - learning_rate * gradient

A manual implementation:

learning_rate = 0.01
for f in net.parameters():  # iterate over every learnable parameter in the network
    f.data.sub_(f.grad.data * learning_rate)  # subtract learning_rate * gradient in place (the trailing underscore means in-place)

PyTorch already implements SGD and a range of other update rules in torch.optim:

import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update
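
Putting it all together, the training procedure from the beginning of this post looks roughly like this (a minimal sketch; dataloader is a placeholder for whatever iterable of (inputs, targets) batches you use, e.g. a torch.utils.data.DataLoader):

optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()
for inputs, targets in dataloader:        # 2. iterate over the input data
    optimizer.zero_grad()                 # clear old gradients
    outputs = net(inputs)                 # 3. process the data through the network
    loss = criterion(outputs, targets)    # 4. compute the loss
    loss.backward()                       # 5. backpropagate the gradients
    optimizer.step()                      # 6. update the weights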

References:
https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py
https://stackoverflow.com/questions/53784998/how-are-the-pytorch-dimensions-for-linear-layers-calculated/53787076#53787076
