Handwritten Digit Recognition [刘二大人, *PyTorch Deep Learning Practice*, Lecture 10: Basic CNN]

1. 5x5 convolution x2
2. ReLU activation x2
3. 2x2 max pooling x2
4. Linear layer x1
Added features:
1. Use the GPU (detect whether a GPU is available, and time the run)
2. Print the accuracy for each epoch and plot it as a line chart
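Why does the final linear layer take 320 inputs? A minimal sketch (assuming exactly the architecture listed above: two 5x5 convolutions, two 2x2 max pools) traces the tensor shapes for a single MNIST-sized image:

```python
import torch
import torch.nn.functional as F

# Same layers as in the network below; we only check shapes here.
conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
pooling = torch.nn.MaxPool2d(2)

x = torch.rand(1, 1, 28, 28)     # one fake 28x28 MNIST image
x = F.relu(pooling(conv1(x)))    # (1, 10, 12, 12): 28 - 5 + 1 = 24, halved by pooling
x = F.relu(pooling(conv2(x)))    # (1, 20, 4, 4):   12 - 5 + 1 = 8,  halved by pooling
print(x.shape)                   # torch.Size([1, 20, 4, 4])
print(20 * 4 * 4)                # 320
```

This is why `view(batch_size, -1)` in the network's `forward` produces vectors of length 320.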

# CNN-based handwritten digit recognition (basics)
import torch
from torchvision import transforms  # dataset-related: tools for preprocessing raw images
from torchvision import datasets  # dataset-related
from torch.utils.data import DataLoader  # dataset-related
import torch.nn.functional as F  # use the more mainstream ReLU() instead of sigmoid() in the fully connected layer
import torch.optim as optim  # optimizer package
import time
import os
import matplotlib.pyplot as plt
#########################################################################################################################################
# Data preparation

batch_size = 64
# PyTorch reads images with PIL/Pillow. For neural-network training the input values should be
# small (ideally in [0, 1]) and roughly normally distributed, which works best for optimization.
# So transform converts the raw data to tensors and applies normalization (standardization).
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))  # normalization: (0.1307,) -> mean, (0.3081,) -> std
])                                              # Normalize: Pixel(norm) = (Pixel(origin) - mean) / std

train_dataset = datasets.MNIST(root='../dataset/mnist/',
                               train=True,
                               download=True,
                               transform=transform)
train_loader = DataLoader(train_dataset,
                          shuffle=True,
                          batch_size=batch_size)

test_dataset = datasets.MNIST(root='../dataset/mnist/',
                              train=False,
                              download=True,
                              transform=transform)

test_loader = DataLoader(test_dataset,
                         shuffle=False,
                         batch_size=batch_size)
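The Normalize formula above can be checked by hand; a small sketch, with an assumed pixel value of 1.0 (a fully white pixel after ToTensor maps pixels to [0, 1]):

```python
# Arithmetic check of Normalize: pixel_norm = (pixel - mean) / std.
mean, std = 0.1307, 0.3081   # the MNIST statistics used above
pixel = 1.0                  # assumed toy value
pixel_norm = (pixel - mean) / std
print(round(pixel_norm, 4))  # ≈ 2.8215
```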

#########################################################################################################################################
# Model design

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(320, 10)



    def forward(self, x):
        batch_size = x.size(0)  # important: read the batch size from the input
        x = F.relu(self.pooling(self.conv1(x)))
        x = F.relu(self.pooling(self.conv2(x)))

        x = x.view(batch_size, -1)  # flatten from (n, 20, 4, 4) to (n, 320) for the fully connected layer
        x = self.fc(x)    # the fully connected layer does the classification
        return x    # no activation on the last layer: CrossEntropyLoss applies softmax itself

model = Net()

# Use the GPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if str(device) == "cuda:0":
    print("Computing on the GPU")
else:
    print("No GPU found; computing on the CPU")

model.to(device)  # moves the model's parameters and buffers to CUDA

#########################################################################################################################################
# Loss and optimizer

criterion = torch.nn.CrossEntropyLoss()   # cross-entropy loss; it combines softmax() and NLLLoss, so the network outputs raw logits
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)  # SGD optimizer with momentum 0.5
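The comment above says CrossEntropyLoss already contains log-softmax plus NLLLoss; a small sketch with made-up logits confirms the equivalence:

```python
import torch
import torch.nn.functional as F

# Toy values (assumed): one sample, three classes, true class 0.
logits = torch.tensor([[2.0, 0.5, -1.0]])
target = torch.tensor([0])

ce = torch.nn.CrossEntropyLoss()(logits, target)
manual = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, manual))   # True
```

This is why the network's last layer can output raw logits with no activation.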

#########################################################################################################################################
#train and test

def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data   # inputs = x, target = y
        # move the batch to the GPU
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()   # clear the gradients

        # forward + backward + update
        outputs = model(inputs)            # forward pass
        loss = criterion(outputs, target)  # compute the loss
        loss.backward()                    # backward pass
        optimizer.step()                   # update the parameters

        running_loss += loss.item()       # accumulate the loss
        if batch_idx % 300 == 299:        # report every 300 batches (note: dividing by 2000, not 300, so this is a scaled sum, not the per-batch average)
            print('[%d, %5d] Loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 2000))
            running_loss = 0.0

def test():
    correct = 0      # number of correct predictions
    total = 0        # total number of samples
    with torch.no_grad():    # no gradients needed for evaluation
        for data in test_loader:          # fetch batches from test_loader
            inputs, target = data
            inputs, target = inputs.to(device), target.to(device)
            outputs = model(inputs)       # predict
            _, predicted = torch.max(outputs.data, dim=1)   # index of the largest logit
            total += target.size(0)
            correct += (predicted == target).sum().item()
    print('Accuracy on test set: %d %% [%d/%d]' % (100 * correct / total, correct, total))
    return 100 * correct / total
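The `torch.max(..., dim=1)` call in `test()` returns a pair (max values, argmax indices), and only the indices are kept as predicted class labels. A tiny sketch with assumed toy outputs:

```python
import torch

# Two samples, three classes (made-up logits).
outputs = torch.tensor([[0.1, 2.3, 0.5],
                        [1.7, 0.2, 0.9]])
values, predicted = torch.max(outputs, dim=1)
print(predicted)   # tensor([1, 0])
```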
#########################################################################################################################################
if __name__ == '__main__':
    epoch_list = []
    acc_list = []
    # create a tensor on the device as a warm-up
    x = torch.rand(1000, 1000).to(device)

    # time the run
    start_time = time.time()

    for epoch in range(10):
        train(epoch)
        #if epoch % 10 == 9:  # run a test every 10 epochs
        acc = test()
        epoch_list.append(epoch)
        acc_list.append(acc)

    elapsed_time = time.time() - start_time
    print(f"GPU runtime: {elapsed_time} s")

    os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'  # works around a duplicate-OpenMP-runtime error on some setups
    plt.plot(epoch_list, acc_list)
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.show()
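One caveat about the timing above: CUDA kernels launch asynchronously, so `time.time()` alone can read the clock before queued GPU work has actually finished. A sketch of a more accurate pattern (the matrix multiply is just a stand-in workload; the synchronize call is skipped automatically on CPU):

```python
import time
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

start_time = time.time()
y = torch.rand(1000, 1000, device=device) @ torch.rand(1000, 1000, device=device)
if device.type == "cuda":
    torch.cuda.synchronize()   # wait for queued GPU kernels before reading the clock
elapsed_time = time.time() - start_time
print(f"runtime: {elapsed_time} s")
```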

Results:
Computing on the GPU
[1, 300] Loss: 0.107
[1, 600] Loss: 0.029
[1, 900] Loss: 0.021
Accuracy on test set: 96 % [9663/10000]
[2, 300] Loss: 0.017
[2, 600] Loss: 0.015
[2, 900] Loss: 0.014
Accuracy on test set: 97 % [9775/10000]
[3, 300] Loss: 0.012
[3, 600] Loss: 0.012
[3, 900] Loss: 0.011
Accuracy on test set: 98 % [9809/10000]
[4, 300] Loss: 0.010
[4, 600] Loss: 0.009
[4, 900] Loss: 0.010
Accuracy on test set: 98 % [9832/10000]
[5, 300] Loss: 0.008
[5, 600] Loss: 0.009
[5, 900] Loss: 0.008
Accuracy on test set: 98 % [9847/10000]
[6, 300] Loss: 0.007
[6, 600] Loss: 0.007
[6, 900] Loss: 0.007
Accuracy on test set: 98 % [9845/10000]
[7, 300] Loss: 0.006
[7, 600] Loss: 0.007
[7, 900] Loss: 0.007
Accuracy on test set: 98 % [9880/10000]
[8, 300] Loss: 0.006
[8, 600] Loss: 0.006
[8, 900] Loss: 0.006
Accuracy on test set: 98 % [9871/10000]
[9, 300] Loss: 0.006
[9, 600] Loss: 0.005
[9, 900] Loss: 0.005
Accuracy on test set: 98 % [9879/10000]
[10, 300] Loss: 0.005
[10, 600] Loss: 0.006
[10, 900] Loss: 0.005
Accuracy on test set: 98 % [9887/10000]
GPU runtime: 86.90266346931458 s

[Figure: test accuracy vs. epoch, GPU run]
Using the CPU:
[1, 300] Loss: 0.097
[1, 600] Loss: 0.029
[1, 900] Loss: 0.021
Accuracy on test set: 96 % [9679/10000]
[2, 300] Loss: 0.016
[2, 600] Loss: 0.014
[2, 900] Loss: 0.013
Accuracy on test set: 98 % [9805/10000]
[3, 300] Loss: 0.011
[3, 600] Loss: 0.010
[3, 900] Loss: 0.011
Accuracy on test set: 98 % [9805/10000]
[4, 300] Loss: 0.010
[4, 600] Loss: 0.009
[4, 900] Loss: 0.008
Accuracy on test set: 98 % [9858/10000]
[5, 300] Loss: 0.008
[5, 600] Loss: 0.008
[5, 900] Loss: 0.008
Accuracy on test set: 98 % [9818/10000]
[6, 300] Loss: 0.008
[6, 600] Loss: 0.007
[6, 900] Loss: 0.007
Accuracy on test set: 98 % [9861/10000]
[7, 300] Loss: 0.006
[7, 600] Loss: 0.007
[7, 900] Loss: 0.006
Accuracy on test set: 98 % [9828/10000]
[8, 300] Loss: 0.006
[8, 600] Loss: 0.005
[8, 900] Loss: 0.006
Accuracy on test set: 98 % [9862/10000]
[9, 300] Loss: 0.005
[9, 600] Loss: 0.006
[9, 900] Loss: 0.005
Accuracy on test set: 98 % [9862/10000]
[10, 300] Loss: 0.005
[10, 600] Loss: 0.005
[10, 900] Loss: 0.005
Accuracy on test set: 98 % [9875/10000]
CPU runtime: 892.3390035629272 s

[Figure: test accuracy vs. epoch, CPU run]
