Learning PyTorch from Scratch (7)

As models become more complex, the amount of code grows with them, so we need to reduce code redundancy in order to avoid mistakes.

How do we reduce code redundancy? Encapsulate the repeated structure in functions or classes.

The 1x1 convolution kernel

  • Increases or decreases the number of channels (dimensionality) — see the sketch after this list

  • Fuses information across channels

  • Reduces the amount of computation

  • If the input is a single-channel image, a 1x1 convolution is meaningless, because a 1x1 kernel does not consider the relationship between a pixel and its neighbors. Only when the input has multiple channels can a 1x1 kernel fuse the information from the individual channels and perform the channel-increasing or channel-reducing operation.

  • Information fusion means combining the information carried by different channels. Example 1: in a mock exam, the scores from different subjects are each multiplied by a weight and summed into a final score, which is then used to compare students' overall performance. Example 2: a teacher wants to know which knowledge points the class has failed to master in a course, but time is limited and going over every question on the exam is impossible; instead, the teacher aggregates all students' mistakes across the exam into a single error summary that reveals which knowledge points are most error-prone.
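A minimal sketch of these points (toy shapes assumed, not taken from any particular network): a 1x1 convolution applies the same small linear map to every pixel's channel vector, so it fuses channels and changes the channel count without touching the spatial layout, and a 1x1 "bottleneck" before a large kernel cuts the parameter count dramatically.

import torch
import torch.nn as nn

x = torch.randn(1, 192, 28, 28)              # 1 image, 192 channels, 28x28 pixels

reduce = nn.Conv2d(192, 16, kernel_size=1)   # channel reduction: 192 -> 16
print(reduce(x).shape)                       # torch.Size([1, 16, 28, 28]), spatial size unchanged

# Parameter saving: a direct 5x5 conv 192->32 vs a 1x1 bottleneck then a 5x5 conv
direct = nn.Conv2d(192, 32, kernel_size=5, padding=2)
bottleneck = nn.Sequential(nn.Conv2d(192, 16, kernel_size=1),
                           nn.Conv2d(16, 32, kernel_size=5, padding=2))
params = lambda m: sum(p.numel() for p in m.parameters())
print(params(direct), params(bottleneck))    # ~153k vs ~16k parameters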

A simple implementation of an Inception network

Which kernel size should we choose? Inception starts from exactly this question: it runs several kernels of different sizes in parallel, so the network itself can learn which convolution size works best.

"""
卷积神经网络高级篇
"""

# ------------------------------------------------------Imports------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader


# --------------------------------------------------------------Prepare the data--------------------------------------------------
batch_size = 64
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])

train_dataset = datasets.MNIST(root='./data/mnist/', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)

test_dataset = datasets.MNIST(root='./data/mnist/', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)


# ---------------------------------------------------Define the Inception module------------------------------------------------
class Inception(nn.Module):
    def __init__(self, in_channels):
        super(Inception, self).__init__()
        self.branch1x1 = nn.Conv2d(in_channels=in_channels, out_channels=16, kernel_size=1)

        self.branch5x5_1 = nn.Conv2d(in_channels=in_channels, out_channels=16, kernel_size=1)
        self.branch5x5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)

        self.branch3x3_1 = nn.Conv2d(in_channels=in_channels, out_channels=16, kernel_size=1)
        self.branch3x3_2 = nn.Conv2d(in_channels=16, out_channels=24, kernel_size=3, padding=1)
        self.branch3x3_3 = nn.Conv2d(in_channels=24, out_channels=24, kernel_size=3, padding=1)

        self.branch_pool = nn.Conv2d(in_channels=in_channels, out_channels=24, kernel_size=1)

    def forward(self, x):
        # branch 1: a single 1x1 convolution
        branch1x1 = self.branch1x1(x)

        # branch 2: 1x1 reduction followed by a 5x5 convolution
        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        # branch 3: 1x1 reduction followed by two 3x3 convolutions
        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)

        # branch 4: 3x3 average pooling followed by a 1x1 convolution
        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        # concatenate the branches along the channel dimension
        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
        return torch.cat(outputs, dim=1)
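
# Note (added for clarity): along dim=1 the four branches concatenate to
# 16 + 24 + 24 + 24 = 88 output channels, which is why conv2 in Net below
# takes in_channels=88.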


# -----------------------------------------------Define the full model---------------------------------------------------------
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(88, 20, kernel_size=5)

        self.incep1 = Inception(in_channels=10)
        self.incep2 = Inception(in_channels=20)

        self.mp = nn.MaxPool2d(2)
        self.fc = nn.Linear(1408, 10)
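        # 1408 = 88 channels x 4 x 4 spatial: 28x28 input -> conv1(5x5) -> 24x24
        # -> maxpool -> 12x12 -> incep1 (88 ch) -> conv2(5x5) -> 8x8 -> maxpool
        # -> 4x4 -> incep2 (88 ch)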

    def forward(self, x):
        in_size = x.size(0)
        x = F.relu(self.mp(self.conv1(x)))
        x = self.incep1(x)
        x = F.relu(self.mp(self.conv2(x)))
        x = self.incep2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)

        return x


# -------------------------------------------------------Instantiate the model-------------------------------------------------------
model = Net()

# ----------------------------------------------Define the loss function and optimizer--------------------------------------------------------
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


# --------------------------------------------------Train for one epoch----------------------------------------------------------
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


# ----------------------------------------------------------Define the test loop------------------------------------------------------
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %% ' % (100 * correct / total))


if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()
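
A quick shape sanity check (a hypothetical snippet, not part of the original script; run it in the same session so `model` and `F` are already defined) confirms the numbers above: Inception emits 16 + 24 + 24 + 24 = 88 channels, and the flattened feature entering the fully connected layer is 88 * 4 * 4 = 1408-dimensional.

dummy = torch.randn(1, 1, 28, 28)            # one fake MNIST image
feat = F.relu(model.mp(model.conv1(dummy)))  # -> torch.Size([1, 10, 12, 12])
print(model.incep1(feat).shape)              # -> torch.Size([1, 88, 12, 12])
print(model(dummy).shape)                    # -> torch.Size([1, 10]) logits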

ResNet:

The network degradation problem

As the number of network layers grows, several problems appear:

  • Consumption of computing resources
  • Overfitting
  • Vanishing and exploding gradients (backpropagation multiplies gradients layer by layer through the computation graph; when each factor is smaller than 1, the product approaches 0 as the number of layers grows, producing vanishing gradients; likewise, factors larger than 1 make the gradients explode)

A few concepts need to be distinguished here. Overfitting shows up as a training loss that keeps decreasing (while generalization worsens). Network degradation is a different phenomenon: as the number of layers grows, the training loss first decreases, then saturates, and then starts to rise.

The network degradation problem: suppose a shallow network already achieves the best attainable result. If we then add more layers, the deeper model should in principle be no worse, because the extra layers could simply learn to do nothing, i.e. learn the identity mapping that passes their input through unchanged. But an identity mapping has to be learned just like any other set of weights, and in practice plain stacked layers find it very hard to learn, so the deeper model performs worse: it degrades. A residual block sidesteps this with a skip connection: the stacked layers only need to learn the residual F(x) = H(x) - x, and "doing nothing" becomes the much easier task of driving F(x) toward 0 (see the sketch below).
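
A minimal sketch of why the skip connection also helps gradients flow (toy fully connected layers and an assumed depth of 50, not the convolutional model below): in a plain stack the input gradient is a product of many per-layer factors and shrinks toward 0, while the identity path of a residual stack contributes a constant factor of 1 at every layer, so the gradient survives.

import torch
import torch.nn as nn

depth, width = 50, 16
x = torch.randn(1, width, requires_grad=True)

# Plain stack: the gradient is a product of `depth` Jacobians, each typically < 1
h = x
for _ in range(depth):
    h = torch.tanh(nn.Linear(width, width)(h))
h.sum().backward()
print('plain grad norm:   ', x.grad.norm().item())    # near 0

x.grad = None
# Residual stack: each layer computes h + F(h), so the identity path keeps a
# factor of 1 in every layer's Jacobian and the gradient does not vanish
h = x
for _ in range(depth):
    h = h + torch.tanh(nn.Linear(width, width)(h))
h.sum().backward()
print('residual grad norm:', x.grad.norm().item())    # order 1 or larger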

A simple implementation of a residual network

"""
卷积神经网络高级篇-resnet
"""
# ------------------------------------------------------Imports------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader


# --------------------------------------------------------------Prepare the data--------------------------------------------------
batch_size = 64
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])

train_dataset = datasets.MNIST(root='./data/mnist/', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)

test_dataset = datasets.MNIST(root='./data/mnist/', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)


# ----------------------------------------------------Define the residual block--------------------------------------------------
class ResidualBlock(nn.Module):
    def __init__(self, in_channels):
        super(ResidualBlock, self).__init__()
        self.in_channels = in_channels
        self.conv1 = nn.Conv2d(self.in_channels, self.in_channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(self.in_channels, self.in_channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x+y)


# ------------------------------------------------------Define the full model--------------------------------------------------------
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5)

        self.rblock1 = ResidualBlock(16)
        self.rblock2 = ResidualBlock(32)

        self.mp = nn.MaxPool2d(2)
        self.fc = nn.Linear(512, 10)
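        # 512 = 32 channels x 4 x 4 spatial: 28x28 input -> conv1(5x5) -> 24x24
        # -> maxpool -> 12x12 -> conv2(5x5) -> 8x8 -> maxpool -> 4x4, with the
        # residual blocks preserving both channel count and spatial size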

    def forward(self, x):
        in_size = x.size(0)

        x = self.mp(F.relu(self.conv1(x)))
        x = self.rblock1(x)
        x = self.mp(F.relu(self.conv2(x)))
        x = self.rblock2(x)

        x = x.view(in_size, -1)
        x = self.fc(x)

        return x


# -------------------------------------------------------Instantiate the model-------------------------------------------------------
model = Net()

# ----------------------------------------------Define the loss function and optimizer--------------------------------------------------------
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


# --------------------------------------------------Train for one epoch----------------------------------------------------------
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0


# ----------------------------------------------------------Define the test loop------------------------------------------------------
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    print('accuracy on test set: %d %% ' % (100 * correct / total))


if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()
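
A small check (hypothetical, run in the same session as the script above) that ResidualBlock preserves its input shape, which is what makes the elementwise sum x + y in its forward legal: kernel_size=3 with padding=1 keeps the spatial size, and out_channels equals in_channels.

block = ResidualBlock(16)
x = torch.randn(1, 16, 12, 12)
print(block(x).shape)        # torch.Size([1, 16, 12, 12]), same as the input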

