PyTorch Official Tutorial Study Notes (3)

Neural Networks

Neural networks can be constructed using the torch.nn package.

Now that you have had a glimpse of autograd, nn depends on
autograd to define models and differentiate them.
An nn.Module contains layers, and a method forward(input) that
returns the output.

For example, look at this network that classifies digit images:

[Figure: a LeNet-style convolutional neural network for digit classification]

It is a simple feed-forward network. It takes the input, feeds it
through several layers one after the other, and then finally gives the
output.

A typical training procedure for a neural network is as follows:

  • Define the neural network that has some learnable parameters (or
    weights)
  • Iterate over a dataset of inputs
  • Process input through the network
  • Compute the loss (how far is the output from being correct)
  • Propagate gradients back into the network’s parameters
  • Update the weights of the network, typically using a simple update rule:
    weight = weight - learning_rate * gradient
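
Before walking through each of these steps in detail, here is a minimal, self-contained sketch of the whole procedure (the toy nn.Linear model and the random data are placeholders of my own, not part of the tutorial; the real network is defined next):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                      # a tiny model with learnable weights
criterion = nn.MSELoss()
learning_rate = 0.01

for _ in range(3):                           # iterate over a (toy) dataset of inputs
    x, target = torch.randn(8, 4), torch.randn(8, 2)
    output = model(x)                        # process input through the network
    loss = criterion(output, target)         # compute the loss
    model.zero_grad()                        # clear gradients left over from the previous step
    loss.backward()                          # propagate gradients back into the parameters
    with torch.no_grad():                    # simple update rule: weight = weight - lr * gradient
        for p in model.parameters():
            p -= learning_rate * p.grad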

Define the network

Let’s define this network:

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()  # call the parent class constructor
        # 1 input image channel, 6 output channels, 5x5 convolution kernel.
        # Note: this only defines the kernel's parameters; no convolution is performed here.
        # This differs from graph-style TensorFlow, where defining a conv layer requires the input features.
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # max pooling over a 2x2 window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # if the pooling window is square, a single number is enough
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))  # flatten the feature maps into a 1-D vector per sample
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

# instantiate the network
net = Net()
print(net)
Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)

You just have to define the forward function, and the backward
function (where gradients are computed) is automatically defined for you
using autograd.
You can use any of the Tensor operations in the forward function.
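
As a quick sanity check (my own addition, not part of the tutorial), we can confirm that no backward method had to be written: calling backward() on a scalar derived from the output fills in the parameter gradients.

x = torch.randn(1, 1, 32, 32)        # dummy input: (batch, channel, height, width)
net(x).sum().backward()              # backward() is generated by autograd
print(net.conv1.weight.grad.size())  # torch.Size([6, 1, 5, 5])
net.zero_grad()                      # clear these gradients again before continuing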

net.parameters() returns only the learnable parameters of the model; the max pooling layers have no learnable parameters, so they contribute nothing to this list.

params = list(net.parameters())
print(len(params))  # 10 entries: a weight and a bias tensor for each of the 5 layers
print(params[0].size())  # conv1's .weight
print(params[2].size())  # conv2's .weight
10
torch.Size([6, 1, 5, 5])
torch.Size([16, 6, 5, 5])
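
To see which layers these tensors belong to, we can also list them by name (a small addition of mine using net.named_parameters(); note that no pooling layer appears, because max pooling has nothing to learn):

for name, p in net.named_parameters():
    print(name, tuple(p.size()))
# conv1.weight (6, 1, 5, 5)
# conv1.bias (6,)
# conv2.weight (16, 6, 5, 5)
# ...
# fc3.bias (10,)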

Let's try a random 32x32 input.
Note: the expected input size of this net (LeNet) is 32x32. To use this net on the
MNIST dataset, please resize the images from the dataset to 32x32.

input = torch.randn(1, 1, 32, 32)  # (batch, channel, height, width)
out = net(input)
print(out)
tensor([[ 0.0347,  0.0828, -0.0881, -0.0035, -0.0551,  0.0315, -0.0579,  0.0850,
          0.0235, -0.0406]], grad_fn=<ThAddmmBackward>)
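
Regarding the MNIST note above: MNIST digits are 28x28, so they have to be resized to 32x32 first. A hedged sketch using torchvision (assuming torchvision is installed; the data directory is a placeholder):

import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize((32, 32)),   # 28x28 MNIST digits -> the 32x32 input LeNet expects
    transforms.ToTensor(),         # PIL image -> (1, 32, 32) float tensor
])
mnist = torchvision.datasets.MNIST(root='./data', train=True,
                                   download=True, transform=transform)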

Zero the gradient buffers of all parameters and backprops with random
gradients:

net.zero_grad()
out.backward(torch.randn(1, 10))
Note

``torch.nn`` only supports mini-batches. The entire ``torch.nn`` package only supports inputs that are a mini-batch of samples, and not a single sample.

For example, ``nn.Conv2d`` will take in a 4D Tensor of
``nSamples x nChannels x Height x Width``.

If you have a single sample, just use ``input.unsqueeze(0)`` to add
a fake batch dimension.
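
For instance (a small illustration of my own, reusing the net defined above):

single = torch.randn(1, 32, 32)   # one sample: channels x height x width
batched = single.unsqueeze(0)     # add a fake batch dimension -> (1, 1, 32, 32)
out = net(batched)                # now shaped as nSamples x nChannels x Height x Width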

Before proceeding further, let’s recap all the classes you’ve seen so far.

Recap:

  • torch.Tensor - A multi-dimensional array with support for autograd
    operations like backward(). Also holds the gradient w.r.t. the
    tensor.
  • nn.Module - Neural network module. Convenient way of encapsulating
    parameters, with helpers for moving them to GPU, exporting, loading, etc.
  • nn.Parameter - A kind of Tensor that is automatically registered as a
    parameter when assigned as an attribute to a Module (see the sketch after
    this list).
  • autograd.Function - Implements forward and backward definitions of an
    autograd operation. Every Tensor operation creates at least a single
    Function node that connects to the functions that created a Tensor and
    encodes its history.
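
Here is the small sketch on nn.Parameter referred to above (a hypothetical module of my own): assigning an nn.Parameter as an attribute registers it with the Module, while a plain tensor attribute is not tracked.

class Scale(nn.Module):
    def __init__(self):
        super(Scale, self).__init__()
        self.scale = nn.Parameter(torch.ones(1))  # registered as a learnable parameter
        self.offset = torch.zeros(1)              # plain tensor: NOT registered

    def forward(self, x):
        return x * self.scale + self.offset

print(list(Scale().parameters()))  # only `scale` shows up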

At this point, we covered:

  • Defining a neural network
  • Processing inputs and calling backward

Still Left:

  • Computing the loss
  • Updating the weights of the network

Loss Function

A loss function takes the (output, target) pair of inputs, and computes a
value that estimates how far away the output is from the target.

There are several different loss functions
(http://pytorch.org/docs/nn.html#loss-functions) under the nn package.
A simple loss is nn.MSELoss, which computes the mean-squared error
between the input and the target.

For example:

output = net(input)
target = torch.randn(10)  # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
tensor(1.3829, grad_fn=<MseLossBackward>)

Now, if you follow loss in the backward direction, using its
.grad_fn attribute, you will see a graph of computations that looks
like this:

input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> view -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss

So, when we call loss.backward(), the whole graph is differentiated
w.r.t. the loss, and all Tensors in the graph that have requires_grad=True
will have their .grad Tensor accumulated with the gradient.

For illustration, let us follow a few steps backward:

print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU
<MseLossBackward object at 0x00000239A3D4B978>
<ThAddmmBackward object at 0x00000239A3D4BE10>
<ExpandBackward object at 0x00000239A3D4B978>

Backprop

To backpropagate the error all we have to do is to loss.backward().
You need to clear the existing gradients though, else gradients will be
accumulated to existing gradients.
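
A tiny demonstration of that accumulation (my own example, separate from the network above):

w = torch.ones(3, requires_grad=True)
(w * 2).sum().backward()
print(w.grad)    # tensor([2., 2., 2.])
(w * 2).sum().backward()
print(w.grad)    # tensor([4., 4., 4.])  -- added to the existing gradient
w.grad.zero_()   # this is what net.zero_grad() / optimizer.zero_grad() does for every parameter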

Now we shall call loss.backward(), and have a look at conv1’s bias
gradients before and after the backward.

net.zero_grad()     # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([ 0.0073, -0.0007, -0.0071, -0.0049, -0.0106, -0.0027])

Now, we have seen how to use loss functions.

Read Later:

The neural network package contains various modules and loss functions
that form the building blocks of deep neural networks. A full list with
documentation is here: http://pytorch.org/docs/nn.

The only thing left to learn is:

  • Updating the weights of the network

Update the weights

The simplest update rule used in practice is the Stochastic Gradient
Descent (SGD):

 ``weight = weight - learning_rate * gradient``

We can implement this using simple python code:

learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)  # in-place: weight = weight - learning_rate * gradient

However, as you use neural networks, you want to use various different
update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc.
To enable this, we built a small package: torch.optim that
implements all these methods. Using it is very simple:

import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # Does the update