6-PyTorch-Building Neural Networks

This post covers how to use PyTorch's neural network modules: building a basic model skeleton with nn.Module, filling it with layers such as Conv2d, MaxPool2d, and ReLU, adding non-linear and linear layers, and working through the parameters of convolution and pooling layers. Examples show how to use these components on the CIFAR10 dataset.

Study notes for the Bilibili 小土堆 PyTorch tutorial.

1. Building the Network Skeleton: Containers

Code from the official documentation:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

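As a quick sanity check (my own addition, not from the tutorial), the skeleton above can be instantiated and called on a dummy single-channel input; each 5x5 convolution shrinks the spatial size by 4:

import torch

model = Model()
x = torch.randn(1, 1, 32, 32)  # (batch, channels, H, W)
print(model(x).shape)          # torch.Size([1, 20, 24, 24]): 32 -> 28 -> 24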

import torch
from torch import nn


class A(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self,input):
        output=input+1
        return output

a=A()
x=torch.tensor(1.0)
output=a(x)# calling the module invokes nn.Module.__call__, which runs forward
print(output)

tensor(2.)

2. Filling the Skeleton with Layers:

Convolution Layers
Pooling Layers
Padding Layers
Non-linear Activations (weighted sum, nonlinearity)
Normalization Layers

2.1 Convolution Layers

CLASS torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)

Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int, tuple or str, optional) – Padding added to all four sides of the input. Default: 0
padding_mode (str, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. Default: ‘zeros’
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True


import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader

dataset=torchvision.datasets.CIFAR10('./dataset',train=False,
                                     transform=torchvision.transforms.ToTensor(),
                                     download=True)
dataloader=DataLoader(dataset,batch_size=64)

class Han(nn.Module):
    def __init__(self):
        super(Han, self).__init__()## initialize the parent class first
        self.conv1=Conv2d(in_channels=3,## define the convolution layer; used in forward
                          out_channels=6,kernel_size=3,stride=1,padding=0)

    def forward(self,x):## define the forward pass
        x=self.conv1(x)
        return x

han=Han()
print(han)

Files already downloaded and verified
Han(
(conv1): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1))
)

The initialized network is named Han and contains a single convolution layer.

Next, run each batch of images through the network:

han=Han()
# print(han)
for data in dataloader:
    imgs,targets=data
    print(imgs.shape)# shape of the original images
    output=han(imgs)# after the convolution
    print(output.shape)# shape of the output feature maps

torch.Size([64, 3, 32, 32])
torch.Size([64, 6, 30, 30])
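
The 32x32 input shrinks to 30x30 because Conv2d follows the standard output-size formula from the PyTorch docs: H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1). A minimal sketch (my own addition) that reproduces the number above:

import math

def conv2d_out(h_in, kernel_size, stride=1, padding=0, dilation=1):
    # standard Conv2d output-size formula
    return math.floor((h_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

print(conv2d_out(32, kernel_size=3))  # 30, matching torch.Size([64, 6, 30, 30])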

Next, visualize the results with TensorBoard:

from torch.utils.tensorboard import SummaryWriter

han=Han()
# print(han)
writer=SummaryWriter('logs')

step=0
for data in dataloader:
    imgs,targets=data
    print(imgs.shape)# input shape: torch.Size([64, 3, 32, 32])
    writer.add_images('input',imgs,step)

    output=han(imgs)# after the convolution
    print(output.shape)# output shape: torch.Size([64, 6, 30, 30])
    # 6 channels cannot be displayed as an image, so reshape 6->3
    # (the extra channels are folded into the batch dimension)
    output=torch.reshape(output,(-1,3,30,30))
    writer.add_images('output',output,step)
    step=step+1

writer.close()

> tensorboard --logdir=logs

2.2 Max Pooling

CLASS torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)

Parameters
kernel_size (Union[int, Tuple[int, int]]) – the size of the window to take a max over
stride (Union[int, Tuple[int, int]]) – the stride of the window. Default value is kernel_size
padding (Union[int, Tuple[int, int]]) – Implicit negative infinity padding to be added on both sides
dilation (Union[int, Tuple[int, int]]) – a parameter that controls the stride of elements in the window
return_indices (bool) – if True, will return the max indices along with the outputs. Useful for torch.nn.MaxUnpool2d later
ceil_mode (bool) – when True, will use ceil instead of floor to compute the output shape

The code follows the same pattern as the convolution layer:
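
Since the original screenshot is not reproduced here, the following is a minimal sketch of that pattern (my own reconstruction, not the screenshot's exact code):

import torch
import torchvision
from torch import nn
from torch.nn import MaxPool2d
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10('./dataset', train=False,
                                       transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=64)

class Han(nn.Module):
    def __init__(self):
        super(Han, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=False)

    def forward(self, input):
        output = self.maxpool1(input)
        return output

han = Han()
for data in dataloader:
    imgs, targets = data
    output = han(imgs)
    print(output.shape)  # torch.Size([64, 3, 10, 10]): channels unchanged, 32x32 -> 10x10
    break
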
For max pooling you normally only set kernel_size; the window stride defaults to kernel_size.
ceil_mode controls whether windows that overhang the edge of the input are kept (True, ceil is used for the output size) or discarded (False); see the sketch below.
Max pooling keeps the salient input features while reducing the amount of data and computation.
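
A small demonstration of the ceil_mode difference (my own sketch, using a 5x5 example in the spirit of the tutorial):

import torch
from torch import nn

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)
input = torch.reshape(input, (1, 1, 5, 5))  # (N, C, H, W)

print(nn.MaxPool2d(kernel_size=3, ceil_mode=True)(input))   # 2x2 output: overhanging windows kept
print(nn.MaxPool2d(kernel_size=3, ceil_mode=False)(input))  # 1x1 output: overhanging windows dropped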

2.3 Non-linear Activations

CLASS torch.nn.ReLU(inplace=False)
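
The inplace flag decides whether the input tensor itself is overwritten. A quick illustration (my own addition):

import torch
from torch import nn

x = torch.tensor([-1.0, 2.0])
out = nn.ReLU(inplace=True)(x)
print(x)    # tensor([0., 2.]): the input was modified in place
print(out)  # same values as x

The tutorial's example below applies ReLU (with inplace=False) through a small module: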

import torch
from torch import nn
from torch.nn import ReLU

input=torch.tensor([[1,-0.5],
                    [-1,3]])
output=torch.reshape(input,(-1,1,2,2))# (N, C, H, W); optional here, since ReLU is element-wise
# print(output.shape)

class Han(nn.Module):
    def __init__(self):
        super(Han, self).__init__()
        self.relu1=ReLU(inplace=False)

    def forward(self, input):
        output=self.relu1(input)
        return output

han=Han()
output=han(input)# the raw 2x2 input is passed directly
print(output)

tensor([[1., 0.],
        [0., 3.]])

To see the effect of Sigmoid on images, swap the ReLU in Han for a Sigmoid (self.sigmoid1=Sigmoid()) and run CIFAR10 batches through it:

han=Han()# Han with its activation swapped to Sigmoid
# output=han(input)
# print(output)
writer=SummaryWriter('logs')
step=0
for data in dataloader:# the CIFAR10 dataloader from section 2.1
    imgs,targets=data
    writer.add_images('input_sigmoid',imgs,step)
    output=han(imgs)
    writer.add_images('output_sigmoid',output,step)
    step+=1

writer.close()

Non-linear layers inject non-linearity into the network; the more non-linearity a model has, the better it can fit a wide variety of features.

2.4 Linear Layers and Other Layers

Linear layer:

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Linear
from torch.utils.data import DataLoader

dataset=torchvision.datasets.CIFAR10('dataset',train=False,
                                     transform=torchvision.transforms.ToTensor())
dataloader=DataLoader(dataset,batch_size=64,drop_last=True)# drop the last incomplete batch so the flattened size is always 196608

class Han(nn.Module):
    def __init__(self):
        super(Han, self).__init__()
        self.linear1=Linear(196608,10)

    def forward(self,input):
        output=self.linear1(input)
        return output

han=Han()

for data in dataloader:
    imgs,targets=data
    print(imgs.shape)# original images: torch.Size([64, 3, 32, 32])
    # output=torch.reshape(imgs,(1,1,1,-1))
    output=torch.flatten(imgs)# flatten the whole batch into one vector
    print(output.shape)# after flattening: torch.Size([196608])
    output=han(output)
    print(output.shape)# after the linear layer: torch.Size([10])
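
Flattening the entire batch into a single 196608-dim vector is only for demonstration. In a real classifier you would keep the batch dimension and flatten each image separately; a sketch of that variant (my own addition, not from the tutorial):

from torch.nn import Flatten, Linear

flatten = Flatten()               # flattens dims 1..-1, keeping the batch dimension
linear = Linear(3 * 32 * 32, 10)  # 3072 features per image -> 10 class scores

for data in dataloader:
    imgs, targets = data            # torch.Size([64, 3, 32, 32])
    output = linear(flatten(imgs))  # torch.Size([64, 3072]) -> torch.Size([64, 10])
    print(output.shape)
    break
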
2.5 Pre-built Network Models

On the vision side, torchvision.models provides ready-made architectures:
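
A minimal sketch (my own addition; recent torchvision uses weights=None for random initialization, while older versions use pretrained=False instead) of loading a pre-built VGG16:

import torchvision

vgg16 = torchvision.models.vgg16(weights=None)  # randomly initialized VGG16
print(vgg16)  # prints the full layer structure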
