PyTorch Learning Notes, Lecture 11: Advanced CNN (GoogLeNet/ResNet)

Notes based on the Bilibili course by @刘二大人.

Video link: 11.卷积神经网络(高级篇)_哔哩哔哩_bilibili

Thanks to all the community creators for their help.

The GoogLeNet network diagram (shown in the lecture) contains one very important repeating block: the Inception Module.

The structure of the Inception Module (figure in the lecture) runs several kernel-size choices in parallel, essentially enumerating candidate convolutions. During training, the branch whose kernels work well receives larger weights while the other branches' weights shrink, so the network automatically learns the best combination among the paths.

PS: Concatenate denotes the tensor obtained by concatenating the branch outputs along the channel dimension.

For the concatenation to work, all branches must produce feature maps with identical W and H.
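
For instance, a minimal torch.cat sketch (the two channel counts here are made up for illustration):

import torch

a = torch.randn(1, 16, 28, 28)
b = torch.randn(1, 24, 28, 28)
out = torch.cat([a, b], dim=1)  # concatenate along channels: 16 + 24 = 40
print(out.shape)                # torch.Size([1, 40, 28, 28]); W and H must already match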

The input data has shape (batch_size, C, W, H). Inside the module, a Conv2d layer is only allowed to change C; any change to W and H is compensated with padding.
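
A quick shape check (a minimal sketch; the 192 and 32 channel counts are only illustrative):

import torch
import torch.nn as nn

x = torch.randn(1, 192, 28, 28)                      # (batch_size, C, W, H)
conv = nn.Conv2d(192, 32, kernel_size=5, padding=2)  # padding=2 offsets the 5x5 kernel
print(conv(x).shape)                                 # torch.Size([1, 32, 28, 28]): only C changed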

One special kind of kernel deserves attention: the 1x1 convolution. Its job is to change the number of output channels, which can cut the amount of computation dramatically.
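
For example (again a sketch with arbitrary channel counts):

import torch
import torch.nn as nn

x = torch.randn(1, 192, 28, 28)
squeeze = nn.Conv2d(192, 16, kernel_size=1)  # a 1x1 conv only mixes channels; W and H stay 28x28
print(squeeze(x).shape)                      # torch.Size([1, 16, 28, 28])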

An example:

Take a 192-channel 28*28 feature map, apply padding=2 and a 5*5 convolution with 32 output channels; this costs about 120 million multiplications:

each 5*5 kernel covers a 5*5 pixel block, i.e. 5^2 multiplications per output pixel;

this is repeated over all 28*28 output pixels, contributing a factor of 28^2;

the operation runs over all 192 input channels, contributing a factor of 192;

the output has 32 channels, so 32 kernels are needed to complete the operation, contributing a factor of 32.

Multiplying everything: 5^2 * 28^2 * 192 * 32 = 120,422,400.

If a 1x1 convolution first squeezes the 192 channels down to 16 and a 5*5 convolution then expands them back to 32, the output has the same shape, but the cost drops to 1^2 * 28^2 * 192 * 16 + 5^2 * 28^2 * 16 * 32 = 12,443,648, roughly an order of magnitude less.
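
The same arithmetic in Python (counting only multiplications, as in the lecture):

# direct 5x5 conv: kernel_area * output_pixels * in_channels * out_channels
direct = 5 ** 2 * 28 ** 2 * 192 * 32
# 1x1 bottleneck down to 16 channels, then a 5x5 conv up to 32 channels
bottleneck = 1 ** 2 * 28 ** 2 * 192 * 16 + 5 ** 2 * 28 ** 2 * 16 * 32
print(direct)               # 120422400
print(bottleneck)           # 12443648
print(direct / bottleneck)  # ~9.7, roughly an order of magnitude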

 

Below, the Inception Module is built in code following this structure. Note that the channel counts of consecutive layers must match up.

 

Code:

The Net below applies the Inception block twice:

import torch
import torch.nn as nn
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt

# prepare dataset
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
batch_size = 64
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))  # normalize with the MNIST mean and std
])

train_dataset = datasets.MNIST(root='../data/', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
test_dataset = datasets.MNIST(root='../data/', train=False, download=True, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)


# Wrap the Inception module in a class; the number of input channels is configurable, the output is always 88 channels
class InceptionA(nn.Module):
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        self.branch1x1 = nn.Conv2d(in_channels, 16, kernel_size=1)

        self.branch5x5_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5x5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)

        self.branch3x3_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3x3_2 = nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3x3_3 = nn.Conv2d(24, 24, kernel_size=3, padding=1)

        self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        branch1x1 = self.branch1x1(x)

        branch5x5 = self.branch5x5_1(x)
        branch5x5 = self.branch5x5_2(branch5x5)

        branch3x3 = self.branch3x3_1(x)
        branch3x3 = self.branch3x3_2(branch3x3)
        branch3x3 = self.branch3x3_3(branch3x3)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        # collect the four branch outputs in a list, then concatenate them with torch.cat
        # along the channel dimension (dim=1, since the tensors are (B, C, W, H))
        outputs = [branch1x1, branch5x5, branch3x3, branch_pool]
        return torch.cat(outputs, dim=1)


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(88, 20, kernel_size=5)  # the 4 branches output 16+24+24+24 = 88 channels in total

        self.incep1 = InceptionA(in_channels=10)  # matches the 10 output channels of conv1
        self.incep2 = InceptionA(in_channels=20)  # matches the 20 output channels of conv2

        self.mp = nn.MaxPool2d(2)
        self.flatten = nn.Flatten()
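        # input 28x28: conv1(5x5) -> 24x24, pool -> 12x12; conv2(5x5) -> 8x8, pool -> 4x4; 88 * 4 * 4 = 1408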
        self.fc = nn.Linear(1408, 10)

    def forward(self, x):
        x = F.relu(self.mp(self.conv1(x)))
        x = self.incep1(x)
        x = F.relu(self.mp(self.conv2(x)))
        x = self.incep2(x)
        x = self.flatten(x)  # equivalent to x.view(x.size(0), -1)
        x = self.fc(x)

        return x


model = Net()
model.to(device)

# construct loss and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


# training cycle forward, backward, update


def train(epoch):
    train_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        inputs, target = inputs.to(device), target.to(device)

        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        train_loss += loss.item()
        if batch_idx % 300 == 299:
            print('epoch:{} batch_idx:{}  loss:{}'.format(epoch + 1, batch_idx + 1, train_loss / 300))
            train_loss = 0.0


def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    acc = correct / total
    print('accuracy on test_dataset:{}'.format(acc))
    return acc

if __name__ == '__main__':
    epoch_list = []
    acc_list = []
    for epoch in range(10):
        train(epoch)
        acc = test()
        epoch_list.append(epoch)
        acc_list.append(acc)

    plt.plot(epoch_list, acc_list)
    plt.ylabel('accuracy')
    plt.xlabel('epoch')
    plt.show()
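
Before looking at results, the channel math can be sanity-checked with a couple of forward passes (a sketch that assumes the InceptionA and Net classes above are already defined):

block = InceptionA(in_channels=10)
y = block(torch.randn(1, 10, 12, 12))
print(y.shape)  # torch.Size([1, 88, 12, 12]): 16 + 24 + 24 + 24 channels

net = Net()
print(net(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])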

Result: the script prints the running loss every 300 batches and finally shows the test-accuracy-vs-epoch curve (plot not included here).
