Dive into Deep Learning: Basic Concepts and Techniques

This article takes a close look at the basics of linear regression, including the model definition, the training process, and the analytical solution. It introduces mini-batch stochastic gradient descent for model optimization and shows how to implement linear regression from scratch. It also covers the neural-network-diagram representation, vectorized computation expressions, and how to use softmax regression to solve classification problems. It then discusses the implementation of multilayer perceptrons, as well as underfitting and overfitting. Finally, it describes the role of weight decay and dropout in preventing overfitting, together with their concise implementations.

Linear Regression

Linear regression outputs a continuous value and is therefore suited to regression problems (predicting continuous quantities such as house prices, temperatures, or sales). It amounts to finding, from a pile of data, a curve that captures the underlying pattern of the data well enough to be used for prediction.

Basic Elements of Linear Regression

Model Definition

In this step, the independent variables in the data are multiplied by weights and a bias is added, so as to express the dependent variable:

$\hat{y} = x_1 \omega_1 + x_2 \omega_2 + b$

where the house area is $x_1$, the house age is $x_2$, $\omega_1$ and $\omega_2$ are the weights, $b$ is the bias, and $\hat{y}$ is the prediction of the true price $y$.

Model Training

A house is called a sample, its true price is called the label, and the two factors used to predict the label are called features.
The loss function is the squared error of a single sample; the constant 1/2 makes the coefficient of the squared term equal to 1 after differentiation.
Usually, the average of the errors over all samples in the training dataset is used to measure the quality of the model's predictions.
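Written out explicitly (the standard squared-loss form), the per-sample loss and the average loss over the training set are:

$\ell^{(i)}(\omega_1, \omega_2, b) = \frac{1}{2}\left(\hat{y}^{(i)} - y^{(i)}\right)^2, \qquad \ell(\omega_1, \omega_2, b) = \frac{1}{n}\sum_{i=1}^{n} \ell^{(i)}(\omega_1, \omega_2, b)$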

Our final goal is to find a set of model parameters $\omega_1^*, \omega_2^*, b^* = \underset{\omega_1, \omega_2, b}{\arg\min}\ \ell(\omega_1, \omega_2, b)$ that minimizes the loss function.

Analytical solution:

When the error-minimization problem can be solved directly by a closed-form formula, that formula is called an analytical solution.
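For linear regression, for example, the closed-form least-squares solution (assuming the design matrix $X$ satisfies that $X^\top X$ is invertible, with label vector $\boldsymbol{y}$) is:

$\omega^* = (X^\top X)^{-1} X^\top \boldsymbol{y}$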

Numerical solution:

When there is no analytical solution, the model parameters can only be iterated a finite number of times by an optimization algorithm, reducing the value of the loss function as much as possible.

Mini-batch stochastic gradient descent:

First select a set of initial values for the model parameters (chosen at random), then iterate the parameters multiple times, so that each iteration lowers the value of the loss function as far as possible.
Strategy: in each iteration, first uniformly sample at random a mini-batch $\mathbb{B}$ consisting of a fixed number of training samples, then compute the derivative of the average loss on that batch with respect to the model parameters, and finally use the product of this result and a preset positive number (the learning rate) as the amount by which the model parameters are decreased in this iteration. (Parameters that are set by hand, such as the batch size and the learning rate, are called hyperparameters.)
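Written as a formula, with $\eta$ denoting the learning rate, this is the standard update rule:

$(\omega_1, \omega_2, b) \leftarrow (\omega_1, \omega_2, b) - \frac{\eta}{|\mathbb{B}|} \sum_{i \in \mathbb{B}} \partial_{(\omega_1, \omega_2, b)}\, \ell^{(i)}(\omega_1, \omega_2, b)$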


Model Prediction

When the optimization algorithm stops, the model parameters at that point are generally not the exact minimizer of the loss function, but an approximation of it.

Representations of Linear Regression

Neural Network Diagram

(Figure: the neural network diagram of linear regression.)
Fully connected: the neurons in the output layer are connected to every input in the input layer.

Vectorized Computation Expressions

Adding vectors directly, rather than adding their elements one by one in a loop, saves time.
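As a minimal sketch of this point (added here for illustration, not from the original post), the two ways of adding can be timed directly with PyTorch:

import time
import torch

a = torch.ones(1000)
b = torch.ones(1000)

# add the vectors element by element in a Python loop
start = time.time()
c = torch.zeros(1000)
for i in range(1000):
    c[i] = a[i] + b[i]
print('loop: %.5f sec' % (time.time() - start))

# add the two vectors directly (vectorized)
start = time.time()
d = a + b
print('vectorized: %.5f sec' % (time.time() - start))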
The loss function and the mini-batch stochastic gradient descent update can also be written in vector form.
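In the standard notation (with $\theta$ collecting all model parameters), the vector-form loss and the mini-batch SGD update read:

$\ell(\theta) = \frac{1}{2n}\,\lVert \hat{\boldsymbol{y}} - \boldsymbol{y} \rVert^2, \qquad \theta \leftarrow \theta - \frac{\eta}{|\mathbb{B}|} \sum_{i \in \mathbb{B}} \nabla_{\theta}\, \ell^{(i)}(\theta)$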

Implementing Linear Regression from Scratch

The purpose of this section is to understand the low-level implementation details of linear regression.
Import the required packages:

import torch
from IPython import display
from matplotlib import pyplot as plt
import numpy as np
import random

Generating the Dataset

Let the number of samples in the training dataset be 1000 and the number of inputs (features) be 2. Given randomly generated sample features $X \in \mathbb{R}^{1000 \times 2}$, we use the true weights $\omega = [2, -3.4]^\top$ and bias $b = 4.2$, together with a random noise term $\epsilon$, to generate the labels: $y = X\omega + b + \epsilon$.

Generate the dataset:

num_input = 2
num_example = 1000
true_w = [2, -3.4]
true_b = 4.2
# randomly generated batch of sample features
features = torch.from_numpy(np.random.normal(0, 1, (num_example, num_input)))
# labels obtained from the formula, without the noise term yet
labels = features[:, 0] * true_w[0] + features[:, 1] * true_w[1] + true_b
# add the random noise term
labels += torch.from_numpy(np.random.normal(0, 0.01, size=labels.size()))
print(features[0:4], labels[0:4])

A scatter plot of features[:, 1] against labels shows a clear linear relationship.

def use_svg_display():
    # display plots as vector graphics (SVG)
    display.set_matplotlib_formats('svg')

def set_figsize(figsize=(3.5, 2.5)):
    use_svg_display()
    # set the figure size
    plt.rcParams['figure.figsize'] = figsize
#import sys
#sys.path.append('..')
#from d2lzh_pytorch import *

set_figsize()
plt.scatter(features[:, 1].numpy(), labels.numpy(), 1)  # the 1 here sets the marker (point) size


Reading the Data

def data_iter(batch_size, features, labels):
    # return the features and labels of batch_size random samples at a time
    num_examples = len(features)
    indices = list(range(num_examples))
    random.shuffle(indices)  # shuffle the indices so that samples are read in random order
    for i in range(0, num_examples, batch_size):
        j = torch.LongTensor(indices[i: min(i + batch_size, num_examples)])  # the last batch may contain fewer than batch_size samples
        yield features.index_select(0, j), labels.index_select(0, j)
      
batch_size = 10

import os
from ipynb.fs.full.d2lzh_pytorch import data_iter

for X, y in data_iter(batch_size, features, labels):
    print(X, y)
    break

Result:

tensor([[ 2.2406,  0.9488],
        [ 0.4478, -0.1821],
        [ 0.6336, -0.3973],
        [ 0.0037,  0.5691],
        [ 0.1857,  0.4156],
        [-0.5352,  1.2744],
        [-0.8015,  0.9952],
        [ 2.0209,  0.5435],
        [ 0.0560,  0.2799],
        [-1.4092,  0.0765]], dtype=torch.float64) tensor([ 5.8315,  5.6493,  6.6423,  2.4947,  3.3550, -0.7162, -0.3853,  6.6122,
         3.4580,  1.1384], dtype=torch.float64)

Initializing Model Parameters

The weights are initialized as normal random numbers with mean 0 and standard deviation 0.01, and the bias is initialized to 0.

w = torch.tensor(np.random.normal(0, 0.01, (num_input, 1)), dtype=torch.float32)
b = torch.zeros(1, dtype=torch.float32)

Result:

tensor([[-0.0118],
        [ 0.0032]])
tensor([0.])

Gradients of these parameters will be needed in the subsequent model training, so requires_grad must be enabled for them.
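A minimal sketch of enabling gradients on the two parameters defined above:

w.requires_grad_(requires_grad=True)
b.requires_grad_(requires_grad=True)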

Softmax Regression

Dataset (Fashion-MNIST)

Obtain the dataset:

# obtain the Fashion-MNIST dataset
import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import time
import d2lzh_pytorch as d2l

mnist_train = torchvision.datasets.FashionMNIST(root='/home/niannian/code/pytorch/Fashion_Mnist', train=True, download=True, transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(root='/home/niannian/code/pytorch/Fashion_Mnist', train=False, download=True, transform=transforms.ToTensor())

print(type(mnist_train))
print(len(mnist_train), len(mnist_test))

# any individual sample can be accessed by its index
feature, label = mnist_train[0]
print(feature.shape, label)

# look at the images and labels of the first 10 samples in the training set
X, y = [], []
for i in range(10):
    X.append(mnist_train[i][0])
    y.append(mnist_train[i][1])
d2l.show_fashion_mnist(X, d2l.get_fashion_mnist_labels(y))

Reading Mini-batches
PyTorch's DataLoader allows multiple worker processes to be used to speed up data reading.

import sys

batch_size = 256
if sys.platform.startswith('win'):
    num_workers = 0  # 0 means no extra worker processes are used to speed up data loading
else:
    num_workers = 4
train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=num_workers)
test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=num_workers)

# check how long it takes to read through the data once
start = time.time()
for X, y in train_iter:
    continue
print('The cost of reading the data once is %.2f sec' % (time.time()-start))

Result:

The cost of reading the data once is 1.24 sec

Implementing Softmax Regression from Scratch

import torch
import torchvision
import numpy as np
import d2lzh_pytorch as d2l
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)

Initializing model parameters
The images are 28×28, so the input vector has length 784; since there are 10 image classes, the weight and bias parameters are matrices of size 784×10 and 1×10 respectively.

num_inputs = 28*28
num_outputs = 10

W = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_outputs)), dtype=torch.float)
b = torch.zeros(num_outputs, dtype=torch.float)
# the model parameters need gradients
W.requires_grad_(requires_grad = True)
b.requires_grad_(requires_grad = True)

Implementing the softmax operation
A multi-dimensional Tensor can be operated on along a given dimension: dim=0 sums the elements within each column, while dim=1 sums the elements within each row;
keepdim=True keeps both the row and column dimensions in the result.

tmp = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(tmp.sum(dim=0, keepdim=True))
print(tmp.sum(dim=1, keepdim=True))
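# For reference, the two sums above produce (columns summed, then rows summed):
#   tensor([[5, 7, 9]])
#   tensor([[ 6],
#           [15]])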
def softmax(X):
    X_exp = X.exp()
    partition = X_exp.sum(dim = 1, keepdim = True)
    return X_exp/partition  # broadcasting is applied here
X = torch.rand((2, 5))
X_prob = softmax(X)
print(X_prob, X_prob.sum(dim=1))

# we use torch.mm for matrix multiplication
def net(X):
    return softmax(torch.mm(X.view((-1, num_inputs)), W) + b)
def cross_entropy(y_hat, y):
    return - torch.log(y_hat.gather(1, y.view(-1, 1))) 
y_hat = torch.tensor([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = torch.LongTensor([0, 2])
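# y_hat.gather(1, y.view(-1, 1)) picks, for each row of y_hat, the predicted probability
# of the true class given by y; here it returns tensor([[0.1], [0.5]]).
# accuracy(y_hat, y) below is therefore 0.5: only the second prediction (argmax = 2) matches its label.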
def accuracy(y_hat, y):
    return (y_hat.argmax(dim=1) == y).float().mean().item()
print(accuracy(y_hat, y))
print(d2l.evaluate_accuracy(test_iter, net))  
def train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size,params=None, lr=None, optimizer=None):
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n = 0.0, 0.0, 0
        for X,y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y).sum()

            # zero the gradients
            if optimizer is not None:
                optimizer.zero_grad()
            elif params is not None and params[0].grad is not None:
                for param in params:
                    param.grad.data.zero_()

            l.backward()
            if optimizer is None:
                d2l.sgd(params, lr, batch_size)
            else:
                optimizer.step()
            
            train_l_sum += l.item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().item()
            n += y.shape[0]
        test_acc = d2l.evaluate_accuracy(test_iter, net)  # test_acc is evaluated on the test set after each epoch of training finishes
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f' % (epoch+1, train_l_sum / n, train_acc_sum / n, test_acc))
num_epochs, lr = 5, 0.1
train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, batch_size, [W, b], lr)

Result:

epoch 1, loss 0.5699, train acc 0.814, test acc 0.813
epoch 2, loss 0.5251, train acc 0.826, test acc 0.818
epoch 3, loss 0.5004, train acc 0.833, test acc 0.824
epoch 4, loss 0.4855, train acc 0.837, test acc 0.824
epoch 5, loss 0.4733, train acc 0.840, test acc 0.825

Prediction:

X, y = next(iter(test_iter))
true_labels = d2l.get_fashion_mnist_labels(y.numpy())
pred_labels = d2l.get_fashion_mnist_labels(net(X).argmax(dim=1).numpy())
titles = [true + '\n' + pred for true, pred in zip(true_labels, pred_labels)]
d2l.show_fashion_mnist(X[0:9], titles[0:9])

Concise Implementation of Softmax Regression

import torch
from torch import nn
from torch.nn import init
import numpy as np
import d2lzh_pytorch as d2l
batch_size =256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
from collections import OrderedDict
num_inputs = 28*28
num_outputs = 10

net = nn.Sequential(
    OrderedDict([
        ('flatten', d2l.FlattenLayer()),
        ('linear', nn.Linear(num_inputs, num_outputs))
    ])
)
print(net)
init.normal_(net.linear.weight, mean = 0, std = 0.01)
init.constant_(net.linear.bias, val = 0)

loss = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

num_epochs = 5
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)

Result:

epoch 1, loss 0.0031, train acc 0.749, test acc 0.776
epoch 2, loss 0.0022, train acc 0.814, test acc 0.806
epoch 3, loss 0.0021, train acc 0.826, test acc 0.817
epoch 4, loss 0.0020, train acc 0.831, test acc 0.818
epoch 5, loss 0.0019, train acc 0.836, test acc 0.828

evaluate_accuracy computes acc_sum / n:

def evaluate_accuracy(data_iter, net):
    acc_sum, n = 0.0, 0
    for X, y in data_iter:
        acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
        n += y.shape[0]
    return acc_sum / n

Multilayer Perceptron

Implementing a Multilayer Perceptron from Scratch

import torch
import numpy as np
import d2lzh_pytorch as d2l

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)

num_inputs, num_hiddens, num_outputs = 28*28, 256, 10
W1 = torch.tensor(np.random.normal(0, 0.01, (num_inputs, num_hiddens)), dtype=torch.float)
b1 = torch.zeros(num_hiddens, dtype=torch.float)
W2 = torch.tensor(np.random.normal(0, 0.01, (num_hiddens, num_outputs)), dtype=torch.float)
b2 = torch.zeros(num_outputs, dtype=torch.float)

params = [W1, b1, W2, b2]
for param in params:
    param.requires_grad_(requires_grad=True)

def relu(X):
    return torch.max(input=X, other=torch.tensor(0.0))

def net(X):
    X = X.view((-1, num_inputs))
    H = relu(torch.matmul(X, W1) + b1)
    return torch.matmul(H, W2) + b2

loss = torch.nn.CrossEntropyLoss()

num_epochs, lr = 5, 100.0  # the learning rate looks very large because the sgd from d2lzh_pytorch is used
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, params, lr)

Result:

epoch 1, loss 0.0031, train acc 0.749, test acc 0.776
epoch 2, loss 0.0022, train acc 0.814, test acc 0.806
epoch 3, loss 0.0021, train acc 0.826, test acc 0.817
epoch 4, loss 0.0020, train acc 0.831, test acc 0.818
epoch 5, loss 0.0019, train acc 0.836, test acc 0.828
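For reference, the sgd used above from d2lzh_pytorch is, to my understanding, essentially the book's mini-batch update, which divides the accumulated gradient by the batch size; the sketch below reflects that assumption rather than the exact library source, and it explains why the learning rate above looks so large:

def sgd(params, lr, batch_size):
    # mini-batch stochastic gradient descent: update each parameter in place
    for param in params:
        param.data -= lr * param.grad / batch_size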

Concise Implementation of a Multilayer Perceptron

import torch
from torch import nn 
from torch.nn import init
import numpy as np
import d2lzh_pytorch as d2l

num_inputs = 28*28
num_outputs = 10
num_hiddens = 256

net = nn.Sequential(
    d2l.FlattenLayer(),  # reshape X into (batch_size, num_inputs)
    nn.Linear(num_inputs, num_hiddens),
    nn.ReLU(),
    nn.Linear(num_hiddens, num_outputs),
)

for params in net.parameters():
    init.normal_(params, mean=0, std=0.01)


batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
loss = torch.nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(net.parameters(), lr=0.5)

num_epochs = 5
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer)

Result:

epoch 1, loss 0.0033, train acc 0.682, test acc 0.804
epoch 2, loss 0.0019, train acc 0.817, test acc 0.800
epoch 3, loss 0.0017, train acc 0.839, test acc 0.785
epoch 4, loss 0.0015, train acc 0.855, test acc 0.825
epoch 5, loss 0.0014, train acc 0.862, test acc 0.850

Underfitting and Overfitting

Underfitting: the training is insufficient and the training error is large;
overfitting: the model fits the training data too strongly without generalizing well, and the training error is far smaller than the test error.

Influence of model complexity on underfitting and overfitting (figure: training error and generalization error versus model complexity).

Polynomial Fitting Experiment

%matplotlib inline
import torch
import numpy as np
import d2lzh_pytorch as d2l

n_train, n_test, true_w, true_b = 100, 100, [1.2, -3.4, 5.6], 5
features = torch.randn((n_train+n_test, 1))
poly_features = torch.cat((features, torch.pow(features, 2), torch.pow(features, 3)), 1)  # the final 1 means concatenation along dim=1 (columns)
labels = (true_w[0] * poly_features[:, 0] + true_w[1] * poly_features[:, 1] + true_w[2] * poly_features[:, 2] + true_b)
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)

num_epochs, loss = 100, torch.nn.MSELoss()
def fit_and_plot(train_features, test_features, train_labels, test_labels):
    net = torch.nn.Linear(train_features.shape[-1], 1)
    batch_size = min(10, train_labels.shape[0])
    # wrap the features and labels into a dataset
    dataset = torch.utils.data.TensorDataset(train_features, train_labels)
    train_iter = torch.utils.data.DataLoader(dataset, batch_size, shuffle=True)
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
    train_ls, test_ls = [], []
    for _ in range(num_epochs):
        for X,y in train_iter:
            l = loss(net(X), y.view(-1, 1))
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
        train_labels = train_labels.view(-1,1)
        test_labels = test_labels.view(-1,1)
        train_ls.append(loss(net(train_features), train_labels).item())
        test_ls.append(loss(net(test_features), test_labels).item())
    print('final epoch: train loss', train_ls[-1], 'test loss', test_ls[-1])
    d2l.semilogy(range(1, num_epochs+1), train_ls, 'epochs', 'loss', range(1, num_epochs+1), test_ls, ['train', 'test'])
    print('weight:', net.weight.data, '\nbias:', net.bias.data)
fit_and_plot(poly_features[:n_train, :], poly_features[n_train:, :], labels[:n_train], labels[n_train:])


Linear Function Fitting (Underfitting)

fit_and_plot(features[:n_train, :], features[n_train:, :], labels[:n_train], labels[n_train:])


Overfitting (too few training samples)

fit_and_plot(poly_features[:3, :], poly_features[3:, :], labels[:3], labels[3:])


Weight Decay

High-Dimensional Linear Regression Experiment

%matplotlib inline
import torch
import torch.nn as nn 
import numpy as np
import d2lzh_pytorch as d2l

# the number of input dimensions is 200
n_train, n_test, num_inputs = 20, 100, 200
true_w , true_b = torch.ones(num_inputs, 1) * 0.01, 0.05

features = torch.randn((n_train + n_test, num_inputs))
labels = torch.matmul(features, true_w) + true_b
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)
train_features, test_features = features[:n_train, :], features[n_train:, :]
train_labels, test_labels = labels[:n_train], labels[n_train:]

# initialize the parameters, attaching gradients to each of them
def init_params():
    w = torch.randn((num_inputs, 1), requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    return [w, b]

def l2_penalty(w):
    return (w**2).sum() / 2

batch_size, num_epochs, lr = 1, 100, 0.003
net, loss = d2l.linreg, d2l.squared_loss

# wrap the training data into a dataset
dataset = torch.utils.data.TensorDataset(train_features, train_labels)
train_iter = torch.utils.data.DataLoader(dataset, batch_size, shuffle=True)

def fit_and_plot(lambd):
    w, b = init_params()
    train_ls, test_ls = [], []
    for _ in range(num_epochs):
        for X,y in train_iter:
            l = loss(net(X, w, b), y) + lambd * l2_penalty(w)
            l = l.sum()
            # zero the gradients
            if w.grad is not None:
                w.grad.data.zero_()
                b.grad.data.zero_()
            l.backward()
            # the optimizer: d2l.sgd updates w and b
            d2l.sgd([w, b], lr, batch_size)
        train_ls.append(loss(net(train_features, w, b), train_labels).mean().item())
        test_ls.append(loss(net(test_features, w, b), test_labels).mean().item())
    d2l.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
                 range(1, num_epochs + 1), test_ls, ['train', 'test'])
    print('L2 norm of w:', w.norm().item())

To observe overfitting, set lambd=0, i.e. no weight decay is used:

fit_and_plot(lambd=0)

Using weight decay
Although the training error increases somewhat, the test error drops considerably, so the overfitting is alleviated to some extent.

fit_and_plot(lambd=3)


Concise Implementation

The weight decay hyperparameter is specified via the weight_decay argument when constructing the optimizer instance.

def fit_and_plot_pytorch(wd):
    net = nn.Linear(num_inputs, 1)
    nn.init.normal_(net.weight, mean=0, std=1)
    nn.init.normal_(net.bias, mean=0, std=1)
    optimizer_w = torch.optim.SGD(params=[net.weight], lr=lr, weight_decay=wd)  # apply the weight decay hyperparameter to the weight
    optimizer_b = torch.optim.SGD(params=[net.bias], lr=lr)  # do not apply weight decay to the bias

    train_ls, test_ls = [], []
    for _ in range(num_epochs):
        for X,y in train_iter:
            l = loss(net(X), y).mean()
            optimizer_w.zero_grad()
            optimizer_b.zero_grad()
            l.backward()
            # call step on both optimizer instances to update the weight and the bias separately
            optimizer_w.step()
            optimizer_b.step()
        train_ls.append(loss(net(train_features), train_labels).mean().item())
        test_ls.append(loss(net(test_features), test_labels).mean().item())
    d2l.semilogy(range(1, num_epochs + 1), train_ls, 'epochs', 'loss',
                 range(1, num_epochs + 1), test_ls, ['train', 'test'])
    print('L2 norm of w:', net.weight.data.norm().item())

fit_and_plot_pytorch(0)  # without weight decay


fit_and_plot_pytorch(3)  # with weight decay


Dropout

Implementation from scratch:

%matplotlib inline
import torch
import torch.nn as nn 
import numpy as np
import d2lzh_pytorch as d2l

def dropout(X, drop_prob):
    X = X.float()
    assert 0 <= drop_prob <= 1  # assert tests an expression: it raises an exception when the expression is False and continues normally when it is True
    keep_prob = 1 - drop_prob
    if keep_prob == 0:
        return torch.zeros_like(X)
    # keep each element with probability keep_prob: torch.rand draws uniform values in [0, 1)
    mask = (torch.rand(X.shape) < keep_prob).float()

    return mask * X / keep_prob

num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256

W1 = torch.tensor(np.random.normal(0, 0.01, size=(num_inputs, 
num_hiddens1)), dtype=torch.float, requires_grad=True)
b1 = torch.zeros(num_hiddens1, requires_grad=True)
W2 = torch.tensor(np.random.normal(0, 0.01, size=(num_hiddens1, 
num_hiddens2)), dtype=torch.float, requires_grad=True)
b2 = torch.zeros(num_hiddens2, requires_grad=True)
W3 = torch.tensor(np.random.normal(0, 0.01, size=(num_hiddens2, 
num_outputs)), dtype=torch.float, requires_grad=True)
b3 = torch.zeros(num_outputs, requires_grad=True)
params = [W1, b1, W2, b2, W3, b3]

drop_prob1, drop_prob2 = 0.2, 0.5
def net(X, is_training=True):
    X = X.view(-1, num_inputs)
    H1 = (torch.matmul(X, W1) + b1).relu()
    if is_training:
        H1 = dropout(H1, drop_prob1)
    H2 = (torch.matmul(H1, W2) + b2).relu()
    if is_training:
        H2 = dropout(H2, drop_prob2)
    return torch.matmul(H2, W3) + b3

num_epoches, lr, batch_size = 5, 100.0, 256
loss = torch.nn.CrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epoches, batch_size, params, lr)

Result:

epoch 1, loss 0.0046, train acc 0.546, test acc 0.653
epoch 2, loss 0.0023, train acc 0.780, test acc 0.758
epoch 3, loss 0.0020, train acc 0.818, test acc 0.828
epoch 4, loss 0.0018, train acc 0.833, test acc 0.811
epoch 5, loss 0.0017, train acc 0.841, test acc 0.851

In mask = (torch.rand(X.shape) < keep_prob).float(), torch.rand(X.shape) draws values from the uniform distribution on [0, 1); a mask element is True when its random value is smaller than keep_prob and False otherwise, and float() turns the booleans into floating-point numbers. Each hidden unit is therefore kept with probability keep_prob and dropped with probability drop_prob.
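Dividing by keep_prob rescales the kept units so that the expected value of each hidden unit is unchanged (the usual inverted-dropout argument; writing $p$ for drop_prob and $\xi_i \sim \mathrm{Bernoulli}(1-p)$ for a mask element):

$h_i' = \frac{\xi_i}{1-p}\, h_i, \qquad E[h_i'] = \frac{E[\xi_i]}{1-p}\, h_i = h_i$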
The following are several commonly used torch distributions (a short usage sketch follows the list):

  • Uniform distribution
    torch.rand(*sizes, out=None) → Tensor
    Returns a tensor filled with random numbers drawn from the uniform distribution on [0, 1). The shape of the tensor is defined by sizes.

  • Standard normal distribution
    torch.randn(*sizes, out=None) → Tensor
    Returns a tensor filled with random numbers drawn from the standard normal distribution (mean 0, variance 1, i.e. white Gaussian noise). The shape of the tensor is defined by sizes.

  • Normal distribution with given means and standard deviations
    torch.normal(means, std, out=None) → Tensor
    Returns a tensor of random numbers drawn from normal distributions with the given means and standard deviations; std is a tensor containing the standard deviation of the normal distribution for each output element.

  • Linearly spaced vector
    torch.linspace(start, end, steps=100, out=None) → Tensor
    Returns a 1-D tensor of steps points evenly spaced between start and end. The length of the output tensor is determined by steps.
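For example (a minimal usage sketch added here, not from the original post):

import torch

print(torch.rand(2, 3))                              # uniform samples on [0, 1)
print(torch.randn(2, 3))                             # standard normal samples
print(torch.normal(torch.zeros(3), torch.ones(3)))   # normal samples with the given means and stds
print(torch.linspace(0, 1, steps=5))                 # 5 evenly spaced points from 0 to 1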

Concise implementation:

net = nn.Sequential(
    d2l.FlattenLayer(),
    nn.Linear(num_inputs, num_hiddens1),
    nn.ReLU(),
    nn.Dropout(drop_prob1),
    nn.Linear(num_hiddens1, num_hiddens2),
    nn.ReLU(),
    nn.Dropout(drop_prob2),
    nn.Linear(num_hiddens2, 10)
)
for param in net.parameters():
    nn.init.normal_(param, mean=0, std=0.01)

optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epoches, batch_size, None, None, optimizer)

Result:

epoch 1, loss 0.0044, train acc 0.563, test acc 0.698
epoch 2, loss 0.0023, train acc 0.783, test acc 0.798
epoch 3, loss 0.0019, train acc 0.824, test acc 0.810
epoch 4, loss 0.0017, train acc 0.838, test acc 0.787
epoch 5, loss 0.0016, train acc 0.849, test acc 0.855

Forward Propagation and Backpropagation

Forward propagation depends on the current values of the model parameters, and those parameters are updated by the optimizer after the gradients are computed in backpropagation.

The gradient computation in backpropagation depends on the current values of the intermediate variables, and those values are computed and stored during forward propagation.

Therefore, after initializing the model parameters, we alternate between forward propagation and backpropagation, updating the model parameters according to the gradients computed in backpropagation. Since backpropagation reuses the intermediate variables computed in forward propagation to avoid redundant computation, this reuse also means the intermediate variables cannot be released immediately after forward propagation finishes, which takes up memory.
