9. Introduction to Sequential and Building a Neural Network in Practice

一、Introduction to Sequential

torch.nn.Sequential(*args)

A sequential container. Modules are added to it in the order they are passed to the constructor; alternatively, an OrderedDict of modules can be passed in. The forward() method of Sequential accepts any input and forwards it to the first module it contains. It then chains each module's output to the next module's input in turn, finally returning the output of the last module.

Example:

from torch import nn
from collections import OrderedDict

# Using Sequential to create a small model. When `model` is run,
# input will first be passed to `Conv2d(1,20,5)`. The output of
# `Conv2d(1,20,5)` will be used as the input to the first
# `ReLU`; the output of the first `ReLU` will become the input
# for `Conv2d(20,64,5)`. Finally, the output of
# `Conv2d(20,64,5)` will be used as input to the second `ReLU`
model = nn.Sequential(
          nn.Conv2d(1,20,5),
          nn.ReLU(),
          nn.Conv2d(20,64,5),
          nn.ReLU()
        )

# Using Sequential with OrderedDict. This is functionally the
# same as the above code
model = nn.Sequential(OrderedDict([
          ('conv1', nn.Conv2d(1,20,5)),
          ('relu1', nn.ReLU()),
          ('conv2', nn.Conv2d(20,64,5)),
          ('relu2', nn.ReLU())
        ]))
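
With the OrderedDict form, each submodule is registered under the given name and is accessible as an attribute; integer indexing still works as well. A quick check, assuming the model defined above:

print(model.conv1)              # Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
print(model[0] is model.conv1)  # True: index and name refer to the same module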

2. Practical examples

The same network written two ways: first with Sequential, then with each layer defined as a separate attribute. Both snippets assume the imports used in Section 二:

from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# The same network without Sequential: each layer is declared and called one by one
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.conv1 = Conv2d(3, 32, 5, padding=2)  # in_channels, out_channels, kernel_size
        self.maxpool1 = MaxPool2d(2)
        self.conv2 = Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = MaxPool2d(2)
        self.conv3 = Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = MaxPool2d(2)
        # flatten before the fully connected layers
        self.flatten = Flatten()
        self.linear1 = Linear(1024, 64)
        self.linear2 = Linear(64, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x
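
Where does Linear(1024, 64) come from? For a 3×32×32 input, every Conv2d here uses kernel size 5 with padding=2, which preserves the spatial size, while each MaxPool2d(2) halves it: 32 → 16 → 8 → 4. Flatten therefore produces 64 channels × 4 × 4 = 1024 features. A minimal sketch to verify just the feature extractor:

import torch
from torch import nn

features = nn.Sequential(
    nn.Conv2d(3, 32, 5, padding=2), nn.MaxPool2d(2),
    nn.Conv2d(32, 32, 5, padding=2), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 5, padding=2), nn.MaxPool2d(2),
    nn.Flatten(),
)
print(features(torch.ones((1, 3, 32, 32))).shape)  # torch.Size([1, 1024])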

二、Building a Neural Network in Practice

(Figure: structure diagram of the model built below — three Conv2d + MaxPool2d stages, Flatten, then two Linear layers)

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter


class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

tudui = Tudui()
print(tudui)
input = torch.ones((64, 3, 32, 32))  # a dummy batch of 64 RGB 32×32 images
output = tudui(input)
print(output.shape)  # torch.Size([64, 10])
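
Since SummaryWriter is already imported, the model graph can also be inspected in TensorBoard. A minimal sketch — the log directory name "logs_seq" is an arbitrary choice:

writer = SummaryWriter("logs_seq")  # arbitrary log directory
writer.add_graph(tudui, input)      # record the computation graph
writer.close()
# view with: tensorboard --logdir=logs_seq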