Neural Networks【小土堆】

All the material in this post comes from the Bilibili videos by 小土堆:

PyTorch Deep Learning Quick-Start Tutorial (absolutely easy to follow!)【小土堆】: https://www.bilibili.com/video/BV1hE411t7RN?p=22&vd_source=cdd93033e19ffe61f167bada04827323


My Python project directory:



一、Convolutional Layer

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
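With no padding, a 3×3 kernel trims one pixel from each border, which is why the 32×32 CIFAR-10 images come out 30×30 in the code below. A quick shape check against the standard output-size formula (a sketch; `conv_out_size` is a hypothetical helper, not part of the tutorial):

```python
import torch
from torch import nn

# H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
def conv_out_size(h_in, kernel_size, stride=1, padding=0, dilation=1):
    return (h_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)
x = torch.randn(1, 3, 32, 32)
print(conv(x).shape)         # torch.Size([1, 6, 30, 30])
print(conv_out_size(32, 3))  # 30
```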

1. Code:

# Neural networks: convolutional layer
import torch
import torchvision
from torch import nn
from torch.nn import Conv2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("../data_conv2d", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
# This script sits in the src folder; to create the data folder one level up, use ../name

dataloader = DataLoader(dataset, batch_size=64)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui,self).__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=6, kernel_size=3, stride=1, padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x

tudui = Tudui()
print(tudui)

step = 0
writer = SummaryWriter("../logs_nn_conv2d")
for data in dataloader:
    imgs, targets = data
    output = tudui(imgs)
    print(imgs.shape)       # torch.Size([64, 3, 32, 32])
    print(output.shape)     # torch.Size([64, 6, 30, 30])
    writer.add_images("input", imgs, step)

    output = torch.reshape(output, (-1, 3, 30, 30))  # [64, 6, 30, 30] -> [xxx, 3, 30, 30]
    writer.add_images("output", output, step)    # running this without the reshape errors: 6-channel images cannot be displayed

    step = step + 1

writer.close()
# tensorboard --logdir=logs_nn_conv2d

Because the output has 6 channels, it cannot be logged to TensorBoard for display as-is, so torch.reshape(output, (-1, 3, 30, 30)) converts its format: the channel count is forced to 3, and the -1 lets the leftover values be absorbed into the batch dimension, i.e. each step contains more images. That is why input and output show different numbers of images per step in TensorBoard.
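A minimal standalone check of that reshape (the 6 channels are split into two groups of 3, doubling the batch from 64 to 128):

```python
import torch

out = torch.randn(64, 6, 30, 30)                 # simulated conv output
viewable = torch.reshape(out, (-1, 3, 30, 30))   # -1 is inferred: 64*6/3 = 128
print(viewable.shape)                            # torch.Size([128, 3, 30, 30])
```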

2. Results:

二、Using Max Pooling

Link: a detailed introduction to pooling layers

1. Introduction:

Slide a pooling kernel (e.g., 3×3) over the input image (e.g., 5×5) and take the largest of the 9 covered values.

With ceil_mode=True, windows that run past the edge of the input (because of the stride) are kept, and the maximum of the values they do cover is taken.

With ceil_mode=False, such incomplete windows are simply discarded.
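With the 5×5 input and 3×3 kernel from the code below (the default stride equals the kernel size, so 3), the two modes give different output sizes; a sketch:

```python
import torch
from torch import nn

x = torch.tensor([[1, 2, 0, 3, 1],
                  [0, 1, 2, 3, 1],
                  [1, 2, 1, 0, 0],
                  [5, 2, 3, 1, 1],
                  [2, 1, 0, 1, 1]], dtype=torch.float32).reshape(1, 1, 5, 5)

# ceil_mode=True keeps the partial windows at the right/bottom edges -> 2x2 output
print(nn.MaxPool2d(kernel_size=3, ceil_mode=True)(x))   # [[2., 3.], [5., 1.]]
# ceil_mode=False drops the partial windows -> 1x1 output
print(nn.MaxPool2d(kernel_size=3, ceil_mode=False)(x))  # [[2.]]
```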

2. Code:

# Purpose of max pooling: keep the salient features while shrinking the data (5x5 -> 3x3 or smaller); fewer parameters to train, so training is faster
import torch
import torchvision
from torch import nn
from torch.nn import Conv2d, MaxPool2d
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("../data_maxpool", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)

dataloader = DataLoader(dataset, batch_size=64)

input1 = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32)

input1 = torch.reshape(input1, (-1, 1, 5, 5))   # required input format: (batch_size, channel, h, w)
print(input1.shape)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.maxpool1 = MaxPool2d(kernel_size=3, ceil_mode=False)

    def forward(self, input):
        output = self.maxpool1(input)
        return output

tudui = Tudui()
output1 = tudui(input1)  # pooled tensor
print(output1)

step = 0        # dataset images
writer = SummaryWriter("../logs_maxpool")

for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, step)
    output2 = tudui(imgs)
    writer.add_images("output_dataset", output2, step)   # pooling does not change the channel count
    step = step + 1

writer.close()

# terminal: tensorboard --logdir=logs_maxpool --port=6789

3. Results:

三、Nonlinear Activation

1. The difference between inplace=True and inplace=False:
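In short: with inplace=True, the activation writes its result back into the input tensor, saving memory but destroying the original values; with inplace=False (the default), the input stays unchanged and a new tensor is returned. A minimal demo:

```python
import torch
from torch import nn

x = torch.tensor([1.0, -0.5, -1.0, 3.0])
out = nn.ReLU(inplace=False)(x)   # default: input untouched, result is a new tensor
print(x)    # tensor([ 1.0000, -0.5000, -1.0000,  3.0000])
print(out)  # tensor([1., 0., 0., 3.])

y = torch.tensor([1.0, -0.5, -1.0, 3.0])
nn.ReLU(inplace=True)(y)          # overwrites y itself
print(y)    # tensor([1., 0., 0., 3.])
```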

2. Code:

# Nonlinear transforms: introduce nonlinearity into the network; the more nonlinearity, the better the model can fit all kinds of curves and features
import torch
import torchvision
from torch import nn
from torch.nn import ReLU, Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

input = torch.tensor([[1, -0.5],
                      [-1, 3]])

input = torch.reshape(input, (-1, 1, 2, 2))
print(input.shape)

dataset = torchvision.datasets.CIFAR10("../data_maxpool", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)

dataloader = DataLoader(dataset, batch_size=64)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.relu1 = ReLU()
        self.sigmoid1 = Sigmoid()

    def forward(self,input):
        output = self.sigmoid1(input)
        return output

tudui = Tudui()
output1 = tudui(input)
print(output1)

writer = SummaryWriter("../logs_relu")
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input_imgs", imgs, step)
    output_imgs = tudui(imgs)
    writer.add_images("output_imgs", output_imgs, step)
    step = step + 1

writer.close()
# tensorboard --logdir=logs_relu  --port=6789

3. Results:

四、Linear (I didn't fully understand this part):

1. Introduction:

nn.Linear defines a linear (fully connected) layer of a network; its signature:

        torch.nn.Linear(in_features,   # number of input features
                        out_features,  # number of output features
                        bias=True)     # whether to include a bias term
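A shape sketch of what Linear computes (y = x @ W.T + b), using the same 196608 → 10 sizes as the code below:

```python
import torch
from torch import nn

linear = nn.Linear(in_features=196608, out_features=10)  # 196608 = 64*3*32*32
x = torch.randn(196608)
print(linear(x).shape)       # torch.Size([10])
print(linear.weight.shape)   # torch.Size([10, 196608])
print(linear.bias.shape)     # torch.Size([10])
```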

2. Code:

import torch
import torchvision
from torch import nn
from torch.nn import Linear
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("../data_conv2d", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)

dataloader = DataLoader(dataset, batch_size=64, drop_last=True)  # remember drop_last=True, otherwise the smaller final batch makes the Linear layer error out

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.linear1 = Linear(196608, 10)

    def forward(self, input):
        output = self.linear1(input)
        return output

tudui = Tudui()

for data in dataloader:
    imgs, targets = data
    print("imgs.shape: " + str(imgs.shape))
    # output = torch.reshape(imgs, (1, 1, 1, -1))
    output = torch.flatten(imgs)  # flattens the whole batch into one 1-D vector of 64*3*32*32 = 196608 elements
    print("output.shape: " + str(output.shape))
    output_linear = tudui(output)
    print("output_linear.shape: " + str(output_linear.shape))
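A note on the flatten step above: torch.flatten(imgs) collapses the entire batch into a single 1-D vector, so the Linear layer sees one giant "sample"; flattening from dim 1 instead would keep 64 separate samples:

```python
import torch

imgs = torch.randn(64, 3, 32, 32)

flat_all = torch.flatten(imgs)                   # whole batch -> one 1-D vector
print(flat_all.shape)                            # torch.Size([196608])

flat_per_img = torch.flatten(imgs, start_dim=1)  # keep the batch dim: one row per image
print(flat_per_img.shape)                        # torch.Size([64, 3072])
```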

3. Output:

五、Sequential:

1. Introduction:

CLASS torch.nn.Sequential(arg: OrderedDict[str, Module])

# Using Sequential to create a small model. When `model` is run,
# input will first be passed to `Conv2d(1,20,5)`. The output of
# `Conv2d(1,20,5)` will be used as the input to the first
# `ReLU`; the output of the first `ReLU` will become the input
# for `Conv2d(20,64,5)`. Finally, the output of
# `Conv2d(20,64,5)` will be used as input to the second `ReLU`
model = nn.Sequential(
          nn.Conv2d(1,20,5),
          nn.ReLU(),
          nn.Conv2d(20,64,5),
          nn.ReLU()
        )

# Using Sequential with OrderedDict. This is functionally the
# same as the above code
from collections import OrderedDict
model = nn.Sequential(OrderedDict([
          ('conv1', nn.Conv2d(1,20,5)),
          ('relu1', nn.ReLU()),
          ('conv2', nn.Conv2d(20,64,5)),
          ('relu2', nn.ReLU())
        ]))

2. Code:

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter


class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        """
        self.conv1 = Conv2d(3, 32, 5, padding=2)
        self.maxpool1 = MaxPool2d(2)
        self.conv2 = Conv2d(32, 32, 5, padding=2)
        self.maxpool2 = MaxPool2d(2)
        self.conv3 = Conv2d(32, 64, 5, padding=2)
        self.maxpool3 = MaxPool2d(2)
        self.flatten = Flatten()
        self.linear1 = Linear(1024, 64)     # if you can't work out the 1024: drop linear1/linear2 from forward and print the shape, you'll see (64, 1024)
        self.linear2 = Linear(64, 10)       # 64: batch_size; 1024: features per image after Flatten
        """
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        """
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        """
        x = self.model1(x)
        return x

tudui = Tudui()
print(tudui)
input = torch.ones(64, 3, 32, 32)   # sanity-check the network
output = tudui(input)
print("output.shape: " + str(output.shape))

writer = SummaryWriter("../logs_sequential")
writer.add_graph(tudui, input)
writer.close()
"""     Checking the network's correctness:
Can the data flow through the network and produce the expected output? Create a dummy input:
input = torch.ones(64, 3, 32, 32)
Output: output.shape: torch.Size([64, 10])
tensorboard  --logdir=logs_sequential --port=2222
"""
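Working out the 1024 in Linear(1024, 64): each 5×5 conv with padding=2 preserves the spatial size, and each MaxPool2d(2) halves it, so 32 → 16 → 8 → 4; the flatten then yields 64 channels × 4 × 4 = 1024 features per image. A shape trace of the same architecture:

```python
import torch
from torch import nn

convs = nn.Sequential(
    nn.Conv2d(3, 32, 5, padding=2), nn.MaxPool2d(2),   # 32x32 -> 32x32 -> 16x16
    nn.Conv2d(32, 32, 5, padding=2), nn.MaxPool2d(2),  # 16x16 -> 8x8
    nn.Conv2d(32, 64, 5, padding=2), nn.MaxPool2d(2),  # 8x8 -> 4x4
)
x = torch.ones(64, 3, 32, 32)
feats = convs(x)
print(feats.shape)                 # torch.Size([64, 64, 4, 4])
print(nn.Flatten()(feats).shape)   # torch.Size([64, 1024])
```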

3. Results:
