A PyTorch neural-network recipe for classifying multi-dimensional input features: MNIST handwritten-digit classification (the "hello world" of PyTorch)


1. Dataset:

MNIST: handwritten digits.

Download: see the code below; pass download=True when constructing the dataset and torchvision fetches it automatically.
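As a sanity check on the transform pipeline used below: ToTensor scales raw pixel values from [0, 255] into [0, 1], and Normalize then standardizes them with MNIST's commonly quoted mean 0.1307 and std 0.3081. A minimal sketch of what happens to a single white pixel (plain arithmetic, no torch needed):

```python
# ToTensor: map a raw pixel in [0, 255] down to [0, 1]
x = 255 / 255.0           # a fully white pixel -> 1.0

# Normalize: standardize with MNIST's dataset-wide mean and std
z = (x - 0.1307) / 0.3081
print(z)                  # ≈ 2.82
```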

2. Model:

In this post, the 64-dimensional linear layer is dropped: the network maps the 128-dimensional features directly down to 10 dimensions, which form the output.
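With bias terms enabled, the four linear layers used here (784→512→256→128→10) carry the following number of trainable parameters; a quick sketch of the count:

```python
# (in_features, out_features) for each Linear layer in the model below
layers = [(784, 512), (512, 256), (256, 128), (128, 10)]

# each layer holds in*out weights plus out biases
total = sum(i * o + o for i, o in layers)
print(total)  # 567434
```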

Loss: CrossEntropyLoss (cross-entropy).
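Note that CrossEntropyLoss combines log-softmax and negative log-likelihood, which is why the model below outputs raw logits with no final softmax. A hand-worked sketch for one sample with three made-up class scores:

```python
import math

logits = [1.0, 2.0, 0.5]   # raw scores from the last linear layer (made up)
target = 1                 # true class index

# softmax probability of the target class
denom = sum(math.exp(v) for v in logits)
p_target = math.exp(logits[target]) / denom

# cross-entropy = negative log-likelihood of the target class
ce = -math.log(p_target)
print(ce)  # ≈ 0.464, matching CrossEntropyLoss on the same inputs
```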

Optimizer: SGD (stochastic gradient descent) with momentum.
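PyTorch's SGD with momentum keeps a velocity buffer per parameter: buf = momentum * buf + grad, then p -= lr * buf. A scalar sketch with the hyperparameters used below (lr=0.01, momentum=0.1); the gradients are made up for illustration:

```python
lr, momentum = 0.01, 0.1
p = 1.0        # a single scalar "parameter"
buf = 0.0      # velocity buffer, initially zero

for grad in [0.5, 0.5]:          # two identical dummy gradients
    buf = momentum * buf + grad  # accumulate velocity
    p -= lr * buf                # step along the velocity
print(p)  # ≈ 0.9895
```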

3. Python code:

import torch
import torchvision
from torch import nn
from torch.nn import Linear, ReLU, CrossEntropyLoss
from torch.optim import SGD
from torch.utils.data import DataLoader
from torchvision import transforms
import matplotlib.pyplot as plt

# Pick the compute device: use CUDA if a GPU is available, otherwise the CPU
device = ("cuda:0" if torch.cuda.is_available() else "cpu")

# 准备测试和验证集
train_dataset = torchvision.datasets.MNIST("./data_set/train_dataset", train=True, transform=transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize(0.1307, 0.3081)]), download=True)

val_dataset = torchvision.datasets.MNIST("./data_set/val_dataset", train=True, transform=transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize(0.1307, 0.3081)]), download=True)

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False)


# Build the model
class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        self.linear1 = Linear(784, 512, bias=True)
        self.linear2 = Linear(512, 256, bias=True)
        self.linear3 = Linear(256, 128, bias=True)
        self.linear4 = Linear(128, 10, bias=True)
        self.activate = ReLU()

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = self.linear1(x)
        x = self.activate(x)
        x = self.linear2(x)
        x = self.activate(x)
        x = self.linear3(x)
        x = self.activate(x)
        x = self.linear4(x)
        return x


# Instantiate the model
my_model = model()
# move the model to the compute device (GPU if available)
my_model.to(device)

# Cross-entropy loss; the default reduction averages over the batch
# (the old size_average argument is deprecated)
loss_cal = CrossEntropyLoss()
# move to the compute device
loss_cal = loss_cal.to(device)

# Optimizer: SGD with momentum
optimizer = SGD(my_model.parameters(), lr=0.01, momentum=0.1)


# Training function: one pass over the training set
def train():
    loss_sum = 0
    for i, data in enumerate(train_loader):
        imgs, labels = data
        # move the batch to the compute device
        imgs = imgs.to(device)
        labels = labels.to(device)
        # forward pass
        outs = my_model(imgs)
        loss = loss_cal(outs, labels)
        loss_sum = loss_sum + loss.item()
        # zero the accumulated gradients
        optimizer.zero_grad()
        # backward pass
        loss.backward()
        # update the parameters
        optimizer.step()
        if i % 50 == 49:
            loss_list.append(loss_sum / 50)
            print("loss:" + str(loss_sum / 50))
            loss_sum = 0


# Validation function
def val(epoch):
    correct = 0
    total = 0
    with torch.no_grad():
        for i, data in enumerate(val_loader):
            imgs, labels = data
            imgs = imgs.to(device)
            labels = labels.to(device)
            outs = my_model(imgs)
            value, index = torch.max(outs, dim=1)
            total += labels.size(0)
            correct += (index == labels).sum().item()

    print("epoch:" + str(epoch) + " accuracy:" + str(correct / float(total)))


if __name__ == "__main__":

    loss_list = []
    for epoch in range(10):
        train()
        val(epoch)
    plt.figure()
    plt.plot(loss_list)
    plt.show()
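The accuracy bookkeeping in val() is just a per-row argmax compared against the labels. The same logic in plain Python, on made-up logits for three classes:

```python
# two samples' output logits (made-up numbers) and their true labels
outs = [[0.1, 2.0, 0.3],
        [1.5, 0.2, 0.1]]
labels = [1, 2]

# mirror torch.max(outs, dim=1): take the index of the max in each row
preds = [row.index(max(row)) for row in outs]

correct = sum(p == l for p, l in zip(preds, labels))
accuracy = correct / len(labels)
print(preds, accuracy)  # [1, 0] 0.5
```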


# Partial output
# loss:0.08146183107048273
# loss:0.09409760734066368
# loss:0.08619683284312486
# loss:0.0925342983007431
# loss:0.10231521874666213
# loss:0.09232219552621246
# loss:0.08993134327232838
# loss:0.08221664376556874
# loss:0.08144009610638023
# loss:0.08002457775175571
# loss:0.09039016446098685
# loss:0.09695547364652157
# loss:0.08471773650497198
# loss:0.09309148231521248
# loss:0.08477838601917029
# loss:0.08601660847663879
# loss:0.08170276714488864
# loss:0.0860577792301774
# epoch:8 accuracy:0.9759166666666667
# loss:0.08367479223757983
# loss:0.08048359720036387
# loss:0.07391996223479509
# loss:0.08173186900094151
# loss:0.07706649962812662
# loss:0.08126413892954588
# loss:0.07230079263448715
# loss:0.08468491364270449
# loss:0.06976747298613191
# loss:0.07847631502896547
# loss:0.07585323289036751
# loss:0.08995127085596323
# loss:0.0836580672301352
# loss:0.07555182477459312
# loss:0.09279871977865696
# loss:0.06770527206361293
# loss:0.08244827814400196
# loss:0.07539539625868201
# epoch:9 accuracy:0.9793333333333333

4. Visualization:

(Figure: training-loss curve.) As training proceeds, the loss steadily decreases and converges.

5. The above is a brief record from my beginner-level study of PyTorch. If there are any mistakes, corrections are very welcome!

6. Some of the problem descriptions and illustrations follow Prof. Liu's video lectures, and this post is part of the course homework. Video link: 《PyTorch深度学习实践》完结合集_哔哩哔哩_bilibili. I hope everyone keeps making progress!
