p24 Optimizers (Part 1)

The official documentation for torch.optim describes how to use the optimizers and gives examples.

Example from the docs: "Taking an optimization step"

for input, target in dataset:
    optimizer.zero_grad()            # clear gradients left over from the previous step
    output = model(input)            # forward pass
    loss = loss_fn(output, target)   # compute the loss
    loss.backward()                  # backpropagate: fill in .grad for every parameter
    optimizer.step()                 # update the parameters using those gradients

The first argument passed in is params: the optimizer has to be told which parameters it is allowed to update, usually via model.parameters().
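As a short sketch based on the examples in the official docs (here model, var1, var2, model.base and model.classifier are placeholders for your own modules and tensors), constructing an optimizer looks like this:

import torch.optim as optim

# pass an iterable of parameters plus optimizer-specific options such as lr
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam([var1, var2], lr=0.0001)

# per-parameter options: each dict forms its own parameter group,
# and options given there override the defaults passed at the end
optimizer = optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3},
], lr=1e-2, momentum=0.9)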

Stepping through in the debugger, you can watch the grad values being computed and the parameters being updated step by step.
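As a minimal sketch of what these three calls amount to for plain SGD (a toy one-weight example of my own, not how torch.optim is implemented internally):

import torch

# one learnable weight, fit y = 2x on a single sample
w = torch.tensor([1.0], requires_grad=True)
x, y = torch.tensor([3.0]), torch.tensor([6.0])
lr = 0.01

for _ in range(5):
    loss = ((w * x - y) ** 2).sum()    # forward pass + loss
    loss.backward()                    # fills w.grad
    with torch.no_grad():
        w -= lr * w.grad               # what optimizer.step() does for vanilla SGD
    w.grad.zero_()                     # what optimizer.zero_grad() does
    print(loss.item())                 # the loss shrinks as w approaches 2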

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Linear, Flatten, Sequential
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# CIFAR-10 test split, converted to tensors
dataset = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=1)

class Lixinyu(nn.Module):
    def __init__(self):
        super(Lixinyu, self).__init__()
        # CIFAR-10 CNN: three conv/pool stages, then two linear layers down to 10 classes
        self.model1 = Sequential(Conv2d(3, 32, 5, stride=1, padding=2),
                                 MaxPool2d(2),
                                 Conv2d(32, 32, 5, padding=2),
                                 MaxPool2d(2),
                                 Conv2d(32, 64, 5, padding=2),
                                 MaxPool2d(2),
                                 Flatten(),
                                 Linear(1024, 64),
                                 Linear(64, 10))

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
lixinyu = Lixinyu()
optim = torch.optim.SGD(lixinyu.parameters(), lr=0.01)  # the optimizer is handed the model's parameters
for data in dataloader:
    imgs, targets = data
    outputs = lixinyu(imgs)
    result_loss = loss(outputs, targets)
    optim.zero_grad()        # clear old gradients
    result_loss.backward()   # compute new gradients
    optim.step()             # update the parameters
    print(result_loss)


tensor(1.8779, grad_fn=<NllLossBackward0>)
tensor(1.7652, grad_fn=<NllLossBackward0>)
tensor(1.2320, grad_fn=<NllLossBackward0>)
tensor(0.9124, grad_fn=<NllLossBackward0>)
tensor(3.4349, grad_fn=<NllLossBackward0>)
tensor(0.6194, grad_fn=<NllLossBackward0>)

Only a single pass over the data was made; wrap another loop around it to train for multiple epochs.

Sum the loss over each epoch so you can watch how the per-epoch loss changes.

import torch
import torchvision.datasets
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Linear, Flatten, Sequential
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

dataset = torchvision.datasets.CIFAR10("./dataset", train=False, transform=torchvision.transforms.ToTensor(), download=True)
dataloader = DataLoader(dataset, batch_size=1)

class Lixinyu(nn.Module):
    def __init__(self):
        super(Lixinyu, self).__init__()
        self.model1 = Sequential(Conv2d(3, 32, 5, stride=1, padding=2),
                                 MaxPool2d(2),
                                 Conv2d(32, 32, 5, padding=2),
                                 MaxPool2d(2),
                                 Conv2d(32, 64, 5, padding=2),
                                 MaxPool2d(2),
                                 Flatten(),
                                 Linear(1024, 64),
                                 Linear(64, 10))

    def forward(self, x):
        x = self.model1(x)
        return x

loss = nn.CrossEntropyLoss()
lixinyu = Lixinyu()
optim = torch.optim.SGD(lixinyu.parameters(), lr=0.01)
for epoch in range(20):                    # outer loop: 20 epochs
    running_loss = 0.0                     # sum of the loss over this epoch
    for data in dataloader:
        imgs, targets = data
        outputs = lixinyu(imgs)
        result_loss = loss(outputs, targets)
        optim.zero_grad()
        result_loss.backward()
        optim.step()
        running_loss = running_loss + result_loss.item()  # .item() takes the scalar so no autograd graph is retained
    print(f"epoch {epoch}: {running_loss}")
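The SummaryWriter imported above is not actually used; as a hedged sketch (the log directory name and tag are my own choices), the per-epoch loss could also be written to TensorBoard instead of printed:

writer = SummaryWriter("./logs_optim")     # log directory is an arbitrary choice
for epoch in range(20):
    running_loss = 0.0
    for data in dataloader:
        imgs, targets = data
        outputs = lixinyu(imgs)
        result_loss = loss(outputs, targets)
        optim.zero_grad()
        result_loss.backward()
        optim.step()
        running_loss += result_loss.item()
    writer.add_scalar("epoch_loss", running_loss, epoch)   # one point per epoch
writer.close()

You can then run tensorboard --logdir=logs_optim to see the loss curve over the epochs.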
