Handwritten Letter Recognition System Based on a PyTorch Deep Neural Network: Source Code (with GUI and Handwriting Pad)

Step 1: Prepare the data

The training data consists of handwritten samples of the 26 letters (A-Z), stored as 28x28 grayscale images to match the (1, 28, 28) input expected by the network below.
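
The post does not say where the letter images come from. As a minimal sketch, one possible source is torchvision's EMNIST "letters" split; this is an assumption, not necessarily the author's dataset, and the loader names train_loader/test_loader are introduced here for later examples:

# Minimal data-loading sketch. EMNIST "letters" is an assumed data source; the
# post only states that the data covers the 26 handwritten letters.
import torch
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # 28x28 grayscale image -> [1, 28, 28] float tensor


def to_zero_based(y):
    return y - 1  # EMNIST letter labels run 1..26; the 26-way classifier expects 0..25


train_set = datasets.EMNIST("./data", split="letters", train=True, download=True,
                            transform=transform, target_transform=to_zero_based)
test_set = datasets.EMNIST("./data", split="letters", train=False, download=True,
                           transform=transform, target_transform=to_zero_based)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=100, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=100, shuffle=False)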

Step 2: Build the model

Here we build a small CNN: two convolution blocks (convolution, ReLU, max pooling) followed by a fully connected classifier with 26 outputs, one per letter.

The reference code is as follows:

from torch import nn
import torch


class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()

        self.Conv1 = nn.Sequential(
            # Convolution layer 1: 1 input channel, 16 output channels, 5x5 kernel, stride 1, padding 2
            nn.Conv2d(1, 16, 5, 1, 2),
            # Activation
            nn.ReLU(),
            # Max pooling halves the spatial size
            nn.MaxPool2d(kernel_size=2)
        )
        self.Conv2 = nn.Sequential(
            # Convolution layer 2: 16 -> 32 channels, 5x5 kernel, stride 1, padding 2
            nn.Conv2d(16, 32, 5, 1, 2),
            nn.Dropout(p=0.2),
            # Activation
            nn.ReLU(),
            # Max pooling halves the spatial size again
            nn.MaxPool2d(kernel_size=2)
        )
        # Fully connected head (flattens the feature maps into a 1-D vector).
        # Why 32*7*7: (1,28,28) -> (16,28,28) (conv1) -> (16,14,14) (pool1) -> (32,14,14) (conv2) -> (32,7,7) (pool2) -> output
        self.Linear = nn.Sequential(
            nn.Linear(32 * 7 * 7, 400),
            nn.Dropout(p=0.2),
            nn.ReLU(),
            nn.Linear(400, 80),
            nn.ReLU(),
            nn.Linear(80, 26),
        )

    def forward(self, x):
        x = self.Conv1(x)
        x = self.Conv2(x)
        # x.size() = [batch, 32, 7, 7]: batch images, 32 channels, 7x7 feature maps.
        # view works like a reshape; a dimension of -1 is inferred automatically
        # (the total number of elements stays the same):
        x = x.view(x.size(0), -1)  # (batch, 1568): each sample's feature maps flattened into one vector
        # The fully connected head maps 1568 features to 26 class scores: [batch, 1568] -> [batch, 26]
        output = self.Linear(x)
        return output
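
A quick sanity check (not part of the original post) confirms that the network produces 26 class scores per sample:

# Shape check: a random batch of 8 single-channel 28x28 images
# should come out as 8 rows of 26 class scores.
model = CNN()
dummy = torch.randn(8, 1, 28, 28)
print(model(dummy).shape)  # torch.Size([8, 26])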

Step 3: Training statistics
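
The full training script ships with the project and is not reproduced in the post. As a rough sketch, a loop that records this kind of per-epoch summary could look like the following; it reuses the CNN class above and the assumed train_loader/test_loader from Step 1, and the choice of nn.CrossEntropyLoss, Adam, and the learning rate are assumptions, not the author's exact setup:

# Training-loop sketch (loss function, optimizer, and hyperparameters are assumed).
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CNN().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(1, 51):
    # --- training ---
    model.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

    # --- evaluation ---
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)

    print(f"[epoch {epoch}] train_loss: {running_loss / len(train_loader):.3f}  "
          f"test_accuracy: {correct / total:.3f}")

torch.save(model.state_dict(), "cnn_letters.pth")  # hypothetical checkpoint file name

The per-epoch statistics printed during the actual training run are reproduced below.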

[epoch 1] train_loss: 1.444  test_accuracy: 0.793
[epoch 2] train_loss: 0.671  test_accuracy: 0.874
[epoch 3] train_loss: 0.504  test_accuracy: 0.891
[epoch 4] train_loss: 0.424  test_accuracy: 0.903
[epoch 5] train_loss: 0.370  test_accuracy: 0.914
[epoch 6] train_loss: 0.336  test_accuracy: 0.917
[epoch 7] train_loss: 0.308  test_accuracy: 0.924
[epoch 8] train_loss: 0.288  test_accuracy: 0.927
[epoch 9] train_loss: 0.272  test_accuracy: 0.926
[epoch 10] train_loss: 0.260  test_accuracy: 0.931
[epoch 11] train_loss: 0.247  test_accuracy: 0.932
[epoch 12] train_loss: 0.236  test_accuracy: 0.934
[epoch 13] train_loss: 0.227  test_accuracy: 0.933
[epoch 14] train_loss: 0.220  test_accuracy: 0.932
[epoch 15] train_loss: 0.213  test_accuracy: 0.934
[epoch 16] train_loss: 0.205  test_accuracy: 0.936
[epoch 17] train_loss: 0.200  test_accuracy: 0.935
[epoch 18] train_loss: 0.194  test_accuracy: 0.937
[epoch 19] train_loss: 0.188  test_accuracy: 0.938
[epoch 20] train_loss: 0.183  test_accuracy: 0.936
[epoch 21] train_loss: 0.179  test_accuracy: 0.938
[epoch 22] train_loss: 0.175  test_accuracy: 0.938
[epoch 23] train_loss: 0.171  test_accuracy: 0.940
[epoch 24] train_loss: 0.167  test_accuracy: 0.941
[epoch 25] train_loss: 0.164  test_accuracy: 0.940
[epoch 26] train_loss: 0.160  test_accuracy: 0.939
[epoch 27] train_loss: 0.157  test_accuracy: 0.938
[epoch 28] train_loss: 0.154  test_accuracy: 0.939
[epoch 29] train_loss: 0.151  test_accuracy: 0.941
[epoch 30] train_loss: 0.149  test_accuracy: 0.939
[epoch 31] train_loss: 0.145  test_accuracy: 0.940
[epoch 32] train_loss: 0.143  test_accuracy: 0.940
[epoch 33] train_loss: 0.142  test_accuracy: 0.940
[epoch 34] train_loss: 0.138  test_accuracy: 0.940
[epoch 35] train_loss: 0.136  test_accuracy: 0.940
[epoch 36] train_loss: 0.132  test_accuracy: 0.940
[epoch 37] train_loss: 0.131  test_accuracy: 0.941
[epoch 38] train_loss: 0.128  test_accuracy: 0.940
[epoch 39] train_loss: 0.127  test_accuracy: 0.940
[epoch 40] train_loss: 0.125  test_accuracy: 0.940
[epoch 41] train_loss: 0.123  test_accuracy: 0.941
[epoch 42] train_loss: 0.122  test_accuracy: 0.940
[epoch 43] train_loss: 0.119  test_accuracy: 0.942
[epoch 44] train_loss: 0.118  test_accuracy: 0.940
[epoch 45] train_loss: 0.117  test_accuracy: 0.941
[epoch 46] train_loss: 0.114  test_accuracy: 0.941
[epoch 47] train_loss: 0.112  test_accuracy: 0.941
[epoch 48] train_loss: 0.111  test_accuracy: 0.941
[epoch 49] train_loss: 0.109  test_accuracy: 0.942
[epoch 50] train_loss: 0.109  test_accuracy: 0.940
Finished Training

Step 4: Build the GUI
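
The GUI code provided with the project is not reproduced in the post. The sketch below shows one way such an interface could be built with Tkinter and Pillow; the toolkit choice, widget layout, and preprocessing here are all assumptions rather than the author's implementation:

# GUI sketch (assumed design): a 280x280 drawing canvas whose content is mirrored
# into an off-screen Pillow image, downscaled to 28x28, and fed to the trained CNN.
import tkinter as tk

import torch
from PIL import Image, ImageDraw, ImageOps
from torchvision import transforms

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"


class App:
    def __init__(self, model):
        self.model = model.eval()
        self.root = tk.Tk()
        self.root.title("Handwritten letter recognition")

        self.canvas = tk.Canvas(self.root, width=280, height=280, bg="white")
        self.canvas.pack()
        self.canvas.bind("<B1-Motion>", self.draw)

        # Off-screen image that mirrors what is drawn on the canvas
        self.image = Image.new("L", (280, 280), color=255)
        self.drawer = ImageDraw.Draw(self.image)

        tk.Button(self.root, text="Predict", command=self.predict).pack(side=tk.LEFT)
        tk.Button(self.root, text="Clear", command=self.clear).pack(side=tk.LEFT)
        self.label = tk.Label(self.root, text="Draw a letter")
        self.label.pack(side=tk.LEFT)

    def draw(self, event):
        r = 8  # brush radius in pixels
        self.canvas.create_oval(event.x - r, event.y - r, event.x + r, event.y + r, fill="black")
        self.drawer.ellipse([event.x - r, event.y - r, event.x + r, event.y + r], fill=0)

    def clear(self):
        self.canvas.delete("all")
        self.drawer.rectangle([0, 0, 280, 280], fill=255)

    def predict(self):
        # White strokes on a black background, 28x28, matching MNIST/EMNIST-style input
        img = ImageOps.invert(self.image).resize((28, 28))
        x = transforms.ToTensor()(img).unsqueeze(0)  # [1, 1, 28, 28]
        with torch.no_grad():
            pred = self.model(x).argmax(dim=1).item()
        self.label.config(text=f"Prediction: {LETTERS[pred]}")

    def run(self):
        self.root.mainloop()


# Usage (assuming `model` is a CNN instance with trained weights loaded):
#   App(model).run()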

Step 5: Contents of the project

The project includes the training code, the trained model, the training logs, the dataset, and the GUI code.
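
For reference, loading the provided weights for inference would look roughly like this; the checkpoint file name below is hypothetical, so substitute whatever file ships with the project:

# Inference sketch. "cnn_letters.pth" is a hypothetical file name.
import torch

model = CNN()
model.load_state_dict(torch.load("cnn_letters.pth", map_location="cpu"))
model.eval()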

Code download link (opens in a new window): Handwritten Letter Recognition System Based on a PyTorch Deep Neural Network: Source Code (with GUI and Handwriting Pad)

If you have any questions, leave a comment or send me a private message; every question will be answered.
