Getting Started with Building and Training Neural Networks in PyTorch

The code below is commented in detail; if you can fully understand this post, you can fairly say you have gotten started with PyTorch.

Before reading this post, you should be familiar with the basics of Dataset, DataLoader, and Module.
You can refer to this post for background: Pytorch入门 (PyTorch Getting Started).

# Complete Model Training

import torch
import torch.nn as nn
from torch.nn import MaxPool2d, ReLU, Sigmoid, Linear, Conv2d, Flatten, Sequential
import torchvision
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter

# Local modules from the author's project (StandardModule defines the network used below)
import StandardModule
from SaveModule import *

# 1. Prepare and inspect the dataset
root = r'C:\Users\15715\Desktop\Programs\Python\深度学习\pythonProject\Pytorchdataset\dataset_CIFAR10'
train_set = torchvision.datasets.CIFAR10(root=root, train=True, transform=torchvision.transforms.ToTensor(),
                                         download=True)
test_set = torchvision.datasets.CIFAR10(root=root, train=False, transform=torchvision.transforms.ToTensor(),
                                        download=True)

train_size = len(train_set)  # 50000
test_size = len(test_set)  # 10000
# print(train_size)
# print(test_size)
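# Each sample is an (image, label) pair; with ToTensor the image is a float
# tensor in [0, 1]. A quick look at one sample (an illustrative sketch):
#   img, target = train_set[0]
#   print(img.shape)          # torch.Size([3, 32, 32])
#   print(train_set.classes)  # ['airplane', 'automobile', ..., 'truck']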


# 2. Load the dataset with DataLoader
train_dataLoader = DataLoader(train_set, batch_size=64)
test_dataLoader = DataLoader(test_set, batch_size=64)
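# Each iteration over a DataLoader yields one batch: a stacked image tensor
# and a target tensor. A quick sanity check (an illustrative sketch):
#   imgs, targets = next(iter(train_dataLoader))
#   print(imgs.shape)     # torch.Size([64, 3, 32, 32])
#   print(targets.shape)  # torch.Size([64])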


# 3. Build the neural network
# CIFAR-10 has 10 classes
# See the StandardModule file
mc = StandardModule.Standard()
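# StandardModule is the author's own file and is not shown in this post.
# A minimal sketch of what Standard might contain (an assumption based on the
# imports above; this is the classic CIFAR-10 "quick" convnet, not necessarily
# the author's exact definition):
#
# class Standard(nn.Module):
#     def __init__(self):
#         super().__init__()
#         self.model = Sequential(
#             Conv2d(3, 32, 5, padding=2),   # 3x32x32  -> 32x32x32
#             MaxPool2d(2),                  # 32x32x32 -> 32x16x16
#             Conv2d(32, 32, 5, padding=2),  # 32x16x16 -> 32x16x16
#             MaxPool2d(2),                  # 32x16x16 -> 32x8x8
#             Conv2d(32, 64, 5, padding=2),  # 32x8x8   -> 64x8x8
#             MaxPool2d(2),                  # 64x8x8   -> 64x4x4
#             Flatten(),                     # 64x4x4   -> 1024
#             Linear(64 * 4 * 4, 64),
#             Linear(64, 10),                # 10 output classes
#         )
#
#     def forward(self, x):
#         return self.model(x)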


# 4. Define the loss function
loss_fn = nn.CrossEntropyLoss()
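# Note: CrossEntropyLoss expects raw logits of shape [N, 10] and integer class
# targets of shape [N]; it applies log-softmax internally, so the network
# should not end with a softmax layer. For example, this runs without error:
#   loss_fn(torch.randn(64, 10), torch.randint(0, 10, (64,)))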

# 5. Define the optimizer
optimizer = torch.optim.SGD(params=mc.parameters(), lr=1e-2)


# 6. Set up training parameters
total_train = 0  # total number of training steps so far
total_test = 0   # number of completed evaluation rounds
epoch = 10  # number of training epochs


# Optional: visualization with TensorBoard
writer = SummaryWriter('logs')

# 7. Start training
for i in range(epoch):
    print('------------Round {0} of training begins------------'.format(i+1))

    # Training phase: fetch batches of training data and feed them through the network
    mc.train()  # put the network into training mode
    for data in train_dataLoader:
        img, target = data
        output = mc(img)
        # compute the loss
        loss = loss_fn(output, target)
        # optimize: clear old gradients, backpropagate, and update the weights
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_train += 1
        # No need to print every step; it hurts readability and adds little value
        if total_train % 100 == 0:
            print('training step {0}, loss: {1}'.format(total_train, loss.item()))
            writer.add_scalar('train_loss', loss.item(), total_train)

    # After each training epoch, run the whole test set through the network
    # to evaluate the model's overall performance
    mc.eval()  # put the network into evaluation mode
    test_loss_total = 0
    total_accuracy = 0
    # torch.no_grad(): gradients are not tracked below, so no parameters are updated during evaluation
    with torch.no_grad():
        for data in test_dataLoader:
            img, target = data
            output = mc(img)
            loss = loss_fn(output, target)
            test_loss_total += loss.item()
            # argmax(1) gives the predicted class index for each sample in the batch
            ac = (output.argmax(1) == target).sum()
            total_accuracy = total_accuracy + ac.item()

    print('Loss on the overall test set: {}'.format(test_loss_total))
    print('Accuracy on the overall test set: {}'.format(total_accuracy/test_size))

    writer.add_scalar('test_loss', test_loss_total, total_test)
    writer.add_scalar('test_accuracy', total_accuracy/test_size, total_test)

    total_test += 1
    torch.save(mc, r'C:\Users\15715\Desktop\Programs\Python\深度学习\pythonProject\Module\Standardtest_{}.pth'.format(i))
    print('model saved')
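    # Note: torch.save(mc, ...) pickles the entire model object. A common
    # alternative (a sketch, saving only the learned weights) is:
    #   torch.save(mc.state_dict(), 'Standardtest_{}.pth'.format(i))
    # which is later restored with:
    #   mc.load_state_dict(torch.load('Standardtest_{}.pth'.format(i)))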


writer.close()
# To view the training curves, launch TensorBoard (optionally on a custom port):
# tensorboard --logdir=logs
# tensorboard --logdir=logs --port=6007
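# A minimal sketch of loading one of the saved checkpoints for inference.
# Because the whole model object was pickled, the Standard class must be
# importable at load time; on newer PyTorch versions you may also need to
# pass weights_only=False to torch.load:
#   model = torch.load(r'C:\Users\15715\Desktop\Programs\Python\深度学习\pythonProject\Module\Standardtest_9.pth')
#   model.eval()
#   with torch.no_grad():
#       img, target = test_set[0]
#       output = model(img.unsqueeze(0))  # add a batch dimension
#       print(output.argmax(1))           # predicted class index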