【PyTorch】小土堆 Self-Study Diary (9)

一、Loss Functions and Backpropagation

1、loss:

① The smaller, the better.

② Purpose:

a. Measures the gap between the actual output and the target value.

b. Provides a basis for updating the model (backpropagation): grad, i.e. the gradient, is computed and the parameters are then optimized with it (gradient descent); see the short sketch below.
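A minimal sketch (my own addition, not from the course) of what this means in code: calling backward() on a scalar loss fills the .grad field of every tensor created with requires_grad=True, and one gradient-descent step then moves the parameter against that gradient:

import torch

# one trainable parameter w; prediction is w * x, target is 5
w = torch.tensor([2.0], requires_grad=True)
x = torch.tensor([3.0])
loss = (w * x - 5.0).abs().sum()   # L1-style loss: |2*3 - 5| = 1

loss.backward()                    # fills w.grad with dloss/dw
print(w.grad)                      # tensor([3.]) since d|wx-5|/dw = x = 3

with torch.no_grad():              # one plain gradient-descent step
    w -= 0.1 * w.grad              # w moves in the loss-lowering direction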

③ L1Loss official docs:

The unreduced loss is l_n = |x_n - y_n|; reduction='mean' averages the l_n, reduction='sum' adds them. In the shape spec, * means any number of extra dimensions, and N is the batch size.

④ Simple usage code:

import torch
from torch.nn import L1Loss

input = torch.tensor([1, 2, 3], dtype=torch.float32)
target = torch.tensor([1, 2, 5], dtype=torch.float32)

inputs = torch.reshape(input, (1, 1, 1, 3))
targets = torch.reshape(target, (1, 1, 1, 3))
# reduction controls how the per-element losses are combined:
# 'mean' averages them, 'sum' adds them without dividing
loss = L1Loss(reduction='mean')
result = loss(inputs, targets)
print(result)
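Expected output: the element-wise absolute differences are (0, 0, 2), so the mean is 2/3:

tensor(0.6667)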

2、MSELoss (mean squared error):

① Official docs:

torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')

The unreduced loss is l_n = (x_n - y_n)^2; reduction works the same way as in L1Loss (size_average and reduce are deprecated in favor of reduction).

② Code:

import torch
from torch import nn
from torch.nn import L1Loss, MSELoss

# dtype must be float, otherwise an error is raised
input = torch.tensor([1, 2, 3], dtype=torch.float32)
target = torch.tensor([1, 2, 5], dtype=torch.float32)
# reshape to (N, C, H, W)
inputs = torch.reshape(input, (1, 1, 1, 3))
targets = torch.reshape(target, (1, 1, 1, 3))

# reduction: 'mean' averages the per-element losses, 'sum' adds them without dividing
loss = L1Loss(reduction='mean')
result = loss(inputs, targets)
print(result)
# MSELoss
loss_mse = nn.MSELoss()
result_mse = loss_mse(inputs, targets)
print(result_mse)

③ Code output: the squared differences are (0, 0, 4), so the MSE is 4/3:

tensor(0.6667)
tensor(1.3333)

3、CrossEntropyLoss (cross entropy)

① Official docs:

torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')

Formula from the docs (the log here is the natural logarithm, ln):

loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log( Σ_j exp(x[j]) )

The input/output shape requirements differ from before: there is an extra C, the number of classes (since this is a classification problem). The input holds raw scores of shape (N, C), and the target holds class indices of shape (N).

② Code:

import torch
from torch import nn
from torch.nn import L1Loss, MSELoss

# dtype must be float, otherwise an error is raised
input = torch.tensor([1, 2, 3], dtype=torch.float32)
target = torch.tensor([1, 2, 5], dtype=torch.float32)
# reshape to (N, C, H, W)
inputs = torch.reshape(input, (1, 1, 1, 3))
targets = torch.reshape(target, (1, 1, 1, 3))

# reduction: 'mean' averages the per-element losses, 'sum' adds them without dividing
loss = L1Loss(reduction='mean')
result = loss(inputs, targets)
print(result)
# MSELoss
loss_mse = nn.MSELoss()
result_mse = loss_mse(inputs, targets)
print(result_mse)
# x holds the raw prediction scores for 3 classes
x = torch.tensor([0.1, 0.2, 0.3])
# y is the true class index
y = torch.tensor([1])
# CrossEntropyLoss expects input of shape (N, C) and target of shape (N)
x = torch.reshape(x, (1, 3))
loss_cross = nn.CrossEntropyLoss()
result_cross = loss_cross(x, y)
print(result_cross)

③ Code output: plugging into the formula, -0.2 + ln(e^0.1 + e^0.2 + e^0.3) ≈ 1.1019:

tensor(0.6667)
tensor(1.3333)
tensor(1.1019)
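A quick check (my own addition) that the formula above matches what nn.CrossEntropyLoss computed:

import torch

x = torch.tensor([0.1, 0.2, 0.3])
cls = 1
# -x[class] + log(sum_j exp(x[j]))
manual = -x[cls] + torch.log(torch.exp(x).sum())
print(manual)  # tensor(1.1019), the same value as result_cross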

4、How to use the loss function in a neural network:

① Code:

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader

dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=1)

class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        # self.conv1 = Conv2d(3, 32, 5, padding=2)
        # # to keep Hout and Wout at 32, the Conv2d formula in the torch.nn docs gives
        # # padding=2 with stride=1, i.e. padding = (kernel_size - 1) / 2
        # self.maxpool1 = MaxPool2d(2)
        # self.conv2 = Conv2d(32, 32, 5, padding=2)
        # self.maxpool2 = MaxPool2d(2)
        # self.conv3 = Conv2d(32, 64, 5, padding=2)
        # self.maxpool3 = MaxPool2d(2)
        # self.flatten = Flatten()
        # # after flattening: 64*4*4 = 1024, then two linear layers
        # self.linear1 = Linear(1024, 64)
        # self.linear2 = Linear(64, 10)
        # finally 10 output classes; below is the simpler Sequential form
        #############################################
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        # x = self.conv1(x)
        # x = self.maxpool1(x)
        # x = self.conv2(x)
        # x = self.maxpool2(x)
        # x = self.conv3(x)
        # x = self.maxpool3(x)
        # x = self.flatten(x)
        # x = self.linear1(x)
        # x = self.linear2(x)
        # the simpler Sequential form:
        ##########################################################
        x = self.model1(x)
        return x

# instantiation starts here
loss = nn.CrossEntropyLoss()

tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    output = tudui(imgs)
    result_loss = loss(output, targets)
    result_loss.backward()
    print("ok")

Stepping through with the debugger at this point, grad is populated, so the parameters can be optimized to drive the loss down.
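To see the same thing without a debugger, a small sketch (my own addition): print one layer's gradient right after backward(); model1[0] indexes the first Conv2d inside the Sequential defined above:

for data in dataloader:
    imgs, targets = data
    output = tudui(imgs)
    result_loss = loss(output, targets)
    result_loss.backward()
    # the first conv layer's weight gradient is now populated
    print(tudui.model1[0].weight.grad.shape)  # torch.Size([32, 3, 5, 5])
    break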

二、Optimizers

1、Official docs:

① First construct the optimizer

② Then call the optimizer's step method to apply the gradient update; the canonical usage pattern is sketched below
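A minimal sketch of that two-step pattern (the model and data here are placeholders of my own):

import torch

model = torch.nn.Linear(3, 1)                          # placeholder model
optim = torch.optim.SGD(model.parameters(), lr=0.01)   # step 1: construct

x, y = torch.randn(4, 3), torch.randn(4, 1)
result_loss = torch.nn.MSELoss()(model(x), y)

optim.zero_grad()        # clear stale gradients from the previous iteration
result_loss.backward()   # compute fresh gradients
optim.step()             # step 2: update the parameters using the gradients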

2、Parameters (Adadelta as the example):

torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0)

  • params (iterable) – the model parameters to optimize

  • lr (float, optional) – learning rate (default: 1.0)

3、Code:

① Full code

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.data import DataLoader

# load the dataset
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(), download=True)
# wrap it in a dataloader
dataloader = DataLoader(dataset, batch_size=64)

# build the network
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )

    def forward(self, x):
        x = self.model1(x)
        return x

# instantiation starts here
loss = nn.CrossEntropyLoss()
# build the network
tudui = Tudui()
# set up the optimizer: stochastic gradient descent; too large a learning rate makes
# the model unstable, too small makes training slow
optim = torch.optim.SGD(tudui.parameters(), lr=0.01)
for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        output = tudui(imgs)
        result_loss = loss(output, targets)
        # zero the gradients so the previous batch's gradients are not reused
        optim.zero_grad()
        # backpropagation
        result_loss.backward()
        optim.step()
        # total loss over the epoch; .item() avoids keeping the autograd graph alive
        running_loss = running_loss + result_loss.item()
    print(running_loss)



Result: the running loss decreases from one epoch to the next.

② Debug walkthrough (watching grad change):

a. Locate the gradients in the debugger:

b. When execution has just run zero_grad, there is no gradient

c. After backpropagation runs, the gradients have been computed

d. After the optimizer step runs, the parameters have changed
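A minimal sketch (my own addition) that makes steps b to d visible without the debugger: snapshot one weight before optim.step() and compare afterwards:

import torch
from torch import nn

model = nn.Linear(2, 2)                                # placeholder model
optim = torch.optim.SGD(model.parameters(), lr=0.1)
result_loss = (model(torch.randn(1, 2)) ** 2).sum()

before = model.weight.clone()
optim.zero_grad()         # b. no gradients yet
result_loss.backward()    # c. gradients are now computed
optim.step()              # d. parameters have now changed
print(model.weight.grad)                        # nonzero gradient
print((model.weight - before).abs().sum() > 0)  # tensor(True)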
