[PyTorch] 小土堆 Self-Study Diary (8)

Table of Contents

I. Non-linear Activations

1. ReLU

① Input/output requirements (specify batch_size)

② Illustration

③ The inplace parameter

④ Code

⑤ Code output

2. Sigmoid

① Input/output requirements (specify batch_size)

② Illustration

3. Effect of Non-linear Activation on Images

① Code

② Output

II. Linear Layers and Other Layers

1. Linear Layer

① Diagram

② The flatten function: used to unroll tensors

③ Code walkthrough

④ Output

III. Building a Small Network and Using Sequential

1. The model to implement

2. Code

3. Results


I. Non-linear Activations

1. ReLU

① Input/output requirements (specify batch_size):

  • Input: (N, *), where * means any number of additional dimensions

  • Output: (N, *), same shape as the input

② Illustration:

ReLU(x) = max(0, x):

If input < 0: the output is 0

If input > 0: the output equals the input

③ The inplace parameter:

Whether the input tensor is overwritten in place by the output (default: False).
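A minimal sketch of the difference (the tensor values are my own illustration):

import torch
from torch.nn import ReLU

x = torch.tensor([-1.0, 2.0])
out = ReLU(inplace=False)(x)  # x keeps its values
print(x)                      # tensor([-1., 2.])
ReLU(inplace=True)(x)         # x itself is overwritten with the result
print(x)                      # tensor([0., 2.])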

④ Code:

import torch
from torch import nn
from torch.nn import ReLU

input = torch.tensor([[1, -0.5],
                      [-1, 3]])
# Reshape to (batch_size, channels, H, W) = (1, 1, 2, 2)
input = torch.reshape(input, (-1, 1, 2, 2))
print(input.shape)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.relu1 = ReLU()

    def forward(self, input):
        output = self.relu1(input)
        return output

tudui = Tudui()
output = tudui(input)
print(output)

⑤ Code output:
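Running the script should print the reshaped shape and the ReLU result (negative entries zeroed, non-negative entries passed through unchanged):

torch.Size([1, 1, 2, 2])
tensor([[[[1., 0.],
          [0., 3.]]]])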

2. Sigmoid

① Input/output requirements (specify batch_size):

  • Input: (N, *), where * means any number of additional dimensions

  • Output: (N, *), same shape as the input

② Illustration:

Sigmoid(x) = 1 / (1 + e^(-x)), which squashes any input into the range (0, 1).
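A quick numeric check (a minimal sketch; values rounded to four decimals):

import torch

x = torch.tensor([-1.0, 0.0, 3.0])
print(torch.sigmoid(x))  # tensor([0.2689, 0.5000, 0.9526])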

3. Effect of Non-linear Activation on Images

① Code:

import torch
import torchvision
from torch import nn
from torch.nn import ReLU, Sigmoid
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("../data", train=False, download=True,
                                       transform=torchvision.transforms.ToTensor())
dataloader = DataLoader(dataset, batch_size=64)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.relu1 = ReLU()          # defined for reference; not used in forward
        self.sigmoid1 = Sigmoid()

    def forward(self, input):
        output = self.sigmoid1(input)
        return output

tudui = Tudui()
writer = SummaryWriter("../logs_relu")
step = 0
for data in dataloader:
    imgs, targets = data
    writer.add_images("input", imgs, global_step=step)
    output = tudui(imgs)
    writer.add_images("output", output, global_step=step)
    step = step + 1

writer.close()



② Output:
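To view the logs, point TensorBoard at the directory used by the SummaryWriter above:

tensorboard --logdir=../logs_relu

Under the "output" tag the Sigmoid images should look washed-out and gray: ToTensor scales pixels to [0, 1], and Sigmoid maps that range into roughly (0.5, 0.73).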

II. Linear Layers and Other Layers

1. Linear Layer

① Diagram:

The inputs are x1, …, xn and the outputs are g1, …, gm. Each output is a weighted sum of all inputs plus a single bias term, e.g.

g1 = k1*x1 + k2*x2 + … + kn*xn + b1

② The flatten function: unrolls a tensor into one dimension (see the sketch below).
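A minimal sketch of flatten versus reshape (shapes assume a CIFAR-10 batch of 64):

import torch

imgs = torch.ones((64, 3, 32, 32))
print(torch.flatten(imgs).shape)                 # torch.Size([196608]) -- everything unrolled
print(torch.reshape(imgs, (1, 1, 1, -1)).shape)  # torch.Size([1, 1, 1, 196608]) -- target shape chosen freely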

③ Code walkthrough:

import torch
import torchvision
from torch import nn
from torch.nn import Linear
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("../data", train=False,
                                       transform=torchvision.transforms.ToTensor(),
                                       download=True)
# drop_last=True: the last batch of the 10000 test images would only have 16 images,
# which would not match the 196608 input features the linear layer expects
dataloader = DataLoader(dataset, batch_size=64, drop_last=True)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.linear1 = Linear(196608, 10)   # 196608 = 64 * 3 * 32 * 32

    def forward(self, input):
        output = self.linear1(input)
        return output

tudui = Tudui()

for data in dataloader:
    imgs, targets = data
    # reshape is more flexible: you can specify the target shape;
    # flatten can only unroll everything into one dimension
    output = torch.flatten(imgs)
    print(output.shape)
    output = tudui(output)
    print(output.shape)

④ Output:
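With drop_last=True every batch contains exactly 64 images, so each loop iteration should print:

torch.Size([196608])
torch.Size([10])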

III. Building a Small Network and Using Sequential

1. The model to implement:

The network applies three convolutions, three max-pooling layers, one flatten, and two linear layers; the code must apply them in this order (the shape trace below follows each step).
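Tracing the shapes step by step (computed from the Conv2d/MaxPool2d output-size formulas, batch size 64):

(64, 3, 32, 32)  -> Conv2d(3, 32, 5, padding=2)  -> (64, 32, 32, 32)
                 -> MaxPool2d(2)                 -> (64, 32, 16, 16)
                 -> Conv2d(32, 32, 5, padding=2) -> (64, 32, 16, 16)
                 -> MaxPool2d(2)                 -> (64, 32, 8, 8)
                 -> Conv2d(32, 64, 5, padding=2) -> (64, 64, 8, 8)
                 -> MaxPool2d(2)                 -> (64, 64, 4, 4)
                 -> Flatten()                    -> (64, 1024)
                 -> Linear(1024, 64)             -> (64, 64)
                 -> Linear(64, 10)               -> (64, 10)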

2. Code:

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter


class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        # self.conv1 = Conv2d(3, 32, 5, padding=2)
        # # To keep Hout and Wout at 32, solve the Conv2d output-size formula from the
        # # torch.nn docs: padding=2 with stride=1, i.e. padding = (kernel_size - 1) / 2
        # self.maxpool1 = MaxPool2d(2)
        # self.conv2 = Conv2d(32, 32, 5, padding=2)
        # self.maxpool2 = MaxPool2d(2)
        # self.conv3 = Conv2d(32, 64, 5, padding=2)
        # self.maxpool3 = MaxPool2d(2)
        # self.flatten = Flatten()
        # # After flattening: 64 * 4 * 4 = 1024 features, followed by two linear layers
        # self.linear1 = Linear(1024, 64)
        # self.linear2 = Linear(64, 10)
        # The final layer outputs 10 classes. Below is the concise Sequential version:
        #############################################
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        # x = self.conv1(x)
        # x = self.maxpool1(x)
        # x = self.conv2(x)
        # x = self.maxpool2(x)
        # x = self.conv3(x)
        # x = self.maxpool3(x)
        # x = self.flatten(x)
        # x = self.linear1(x)
        # x = self.linear2(x)
        # The concise version:
        ##########################################################
        x = self.model1(x)
        return x

# Instantiate the network
tudui = Tudui()
# Inspect the network structure
# print(tudui)
# Check the network's correctness with a dummy input
input = torch.ones((64, 3, 32, 32))
output = tudui(input)
print(output.shape)  # torch.Size([64, 10])
###################################################################
writer = SummaryWriter("../logs_seq")
writer.add_graph(tudui, input)
writer.close()

3. Results:
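The script should print torch.Size([64, 10]): 64 images, each with scores for 10 classes. Running tensorboard --logdir=../logs_seq then shows the model graph recorded by add_graph.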
