P16: The basic skeleton of a neural network: using nn.Module
""" https://pytorch.org/docs/stable/nn.html#containers
骨架:Container; 卷积层: Convolution layers; 池化层: Pooling layers
例子:来自于pytorch官网:https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module
import torch.nn as nn
import torch.nn.functional as F
class Model(nn.Module): # Model 继承了nn.Module的类
def __init__(self):
super().__init__() # 必须!
self.conv1 = nn.Conv2d(1, 20, 5)
self.conv2 = nn.Conv2d(20, 20, 5)
def forward(self, x): # 前向传播; 反向传播:backward
x = F.relu(self.conv1(x)) # conv1:卷积; relu:非线性
return F.relu(self.conv2(x))
"""
import torch
from torch import nn


class Tudui(nn.Module):
    def __init__(self):
        super().__init__()  # required!

    def forward(self, input):
        output = input + 1
        return output


tudui = Tudui()
x = torch.tensor(1.0)
output = tudui(x)  # calling the module runs forward
print(output)  # tensor(2.)
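Note that we never call tudui.forward(x) directly: nn.Module.__call__ dispatches to forward (after running any registered hooks), which is why tudui(x) works. A minimal sketch of the same idea (the class name AddOne is made up for illustration):

```python
import torch
from torch import nn


class AddOne(nn.Module):  # hypothetical module, same pattern as Tudui above
    def forward(self, x):
        return x + 1


model = AddOne()
x = torch.tensor([1.0, 2.0])
# nn.Module.__call__ routes to forward, so these two calls give the same result:
assert torch.equal(model(x), model.forward(x))
print(model(x))  # tensor([2., 3.])
```

Prefer model(x) over model.forward(x) in practice, since the latter skips the hook machinery.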
P17: Tudui explains the convolution operation (optional viewing)
PyTorch docs (Convolution Layers): torch.nn — PyTorch 2.2 documentation
CONV2D: torch.nn.functional.conv2d — PyTorch 2.2 documentation
1. Parameters
1) input - input tensor of shape (minibatch, in_channels, iH, iW)
2) weight - filters of shape (out_channels, in_channels/groups, kH, kW)
3) bias - optional bias tensor of shape (out_channels). Default: None
4) stride: the step size of the sliding window; e.g. stride = 1 moves one unit at a time, both horizontally and vertically
Finally, we obtain the convolution output.
(In the video the uploader miscalculated one of the numbers and viewers teased him for it →)
[PyTorch deep learning quick-start tutorial (genuinely beginner-friendly!) [Xiaotudui]] https://www.bilibili.com/video/BV1hE411t7RN?p=17&vd_source=cdd93033e19ffe61f167bada04827323
5) padding: how much the input is padded around its border; default is no padding (the figure in the video shows padding = 1)
6) dilation: spacing between kernel elements (a "dilated"/atrous convolution); default 1
7) groups: splits the input channels into groups that are convolved independently; default 1
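Putting these parameters together, the output size along each spatial dimension follows the formula given in the PyTorch Conv2d docs. A small sketch, evaluated on the 5x5 input / 3x3 kernel cases used in the code below:

```python
import math


def conv2d_out_size(h, kernel, stride=1, padding=0, dilation=1):
    """Output length along one spatial dimension, per the PyTorch Conv2d docs:
    floor((H + 2*padding - dilation*(kernel - 1) - 1) / stride + 1)."""
    return math.floor((h + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)


print(conv2d_out_size(5, 3, stride=1))             # 3  -> 3x3 output
print(conv2d_out_size(5, 3, stride=2))             # 2  -> 2x2 output
print(conv2d_out_size(5, 3, stride=1, padding=1))  # 5  -> 5x5 output, same size as input
```

With padding = (kernel - 1) / 2 and stride = 1, the output keeps the input's spatial size, which is why padding=1 below returns a 5x5 result.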
2. The code for this lesson:
import torch
import torch.nn.functional as F

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]])
kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]])
print(input.shape)  # conv2d expects a 4-D input, but this tensor only has 2 dimensions
print(kernel.shape)
input = torch.reshape(input, (1, 1, 5, 5))  # batch = 1, channels = 1, 5x5
kernel = torch.reshape(kernel, (1, 1, 3, 3))
print(input.shape)  # now 4-D: input tensor of shape (minibatch, in_channels, iH, iW)
print(kernel.shape)
output = F.conv2d(input, kernel, stride=1)  # stride 1
print("stride = 1\n" + str(output))
output2 = F.conv2d(input, kernel, stride=2)  # stride 2
print("stride = 2\n" + str(output2))
output3 = F.conv2d(input, kernel, stride=1, padding=1)  # stride 1, padding 1
print("stride = 1, padding = 1\n" + str(output3))
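F.conv2d is the functional form; the same computation can also be expressed with the nn.Conv2d layer (covered later in the tutorial series) by loading our kernel into its weight. A minimal sketch, reusing the 5x5 input above (note the layer works in float, so the tensors are created as float32):

```python
import torch
from torch import nn

input = torch.tensor([[1, 2, 0, 3, 1],
                      [0, 1, 2, 3, 1],
                      [1, 2, 1, 0, 0],
                      [5, 2, 3, 1, 1],
                      [2, 1, 0, 1, 1]], dtype=torch.float32).reshape(1, 1, 5, 5)
kernel = torch.tensor([[1, 2, 1],
                       [0, 1, 0],
                       [2, 1, 0]], dtype=torch.float32).reshape(1, 1, 3, 3)

conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, stride=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(kernel)  # overwrite the randomly initialized weight

# Same values as F.conv2d(input, kernel, stride=1):
# [[10, 12, 12], [18, 16, 16], [13, 9, 3]]
print(conv(input))
```

The layer form is what you use inside an nn.Module subclass, where the weights are learned rather than set by hand.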