Introduction to Deep Learning — Computing Digit Image Recognition Accuracy on the MNIST Dataset (neuralnet_mnist)

Prerequisites:

  1. The dataset package contains the MNIST digit image data (loaded via dataset.mnist.load_mnist).
  2. sample_weight.pkl stores the weights W and biases b of a network whose training is assumed to be already finished; the learned parameters are saved in this file (a quick way to inspect it is sketched right after this list).
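
Before running the script, it can help to sanity-check what the pickle file actually contains. The sketch below rests on assumptions: it assumes the standard sample_weight.pkl distributed with the book's source code (a 784-50-100-10 network stored under the keys W1, b1, W2, b2, W3, b3) and the resource/ path used by the script further down.

```python
import pickle

# Inspect the trained parameters (assumes the book's standard sample_weight.pkl
# with keys W1, b1, W2, b2, W3, b3; adjust the path to your project layout).
with open('resource/sample_weight.pkl', 'rb') as f:
    network = pickle.load(f)

for key, value in sorted(network.items()):
    print(key, value.shape)
# Expected for the book's file: W1 (784, 50), W2 (50, 100), W3 (100, 10),
# b1 (50,), b2 (100,), b3 (10,)
```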

neuralnet_mnist.py:

import io
import sys,os
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
sys.path.append(os.pardir)

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import imread
import pickle
from dataset.mnist import load_mnist
from common.functions import sigmoid, softmax

print('_hello,inited!')

## Training is assumed to be already finished and the learned parameters stored
## in sample_weight.pkl; evaluation uses the 10,000 test images.
def get_data():
    (x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, flatten=True, one_hot_label=False)
    return x_test, t_test

## Load the weights W and biases b; pickle restores the saved parameters in one step.
def init_network():
    with open('resource/sample_weight.pkl', 'rb') as f:
        network = pickle.load(f)
    return network
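
The evaluation itself follows the standard neuralnet_mnist.py from the book's ch03 code: a forward pass through the three-layer network, then a count of correct predictions over the test set. The sketch below assumes sample_weight.pkl holds the keys W1, W2, W3, b1, b2, b3 and that sigmoid and softmax come from common.functions as imported above.

```python
## Forward pass through the 3-layer network: input -> hidden -> hidden -> output.
def predict(network, x):
    W1, W2, W3 = network['W1'], network['W2'], network['W3']
    b1, b2, b3 = network['b1'], network['b2'], network['b3']

    a1 = np.dot(x, W1) + b1
    z1 = sigmoid(a1)
    a2 = np.dot(z1, W2) + b2
    z2 = sigmoid(a2)
    a3 = np.dot(z2, W3) + b3
    y = softmax(a3)
    return y

## Count how many of the 10,000 test images are classified correctly.
x, t = get_data()
network = init_network()
accuracy_cnt = 0
for i in range(len(x)):
    y = predict(network, x[i])
    p = np.argmax(y)            # index of the most probable class
    if p == t[i]:
        accuracy_cnt += 1

print("Accuracy:" + str(float(accuracy_cnt) / len(x)))
```

With normalize=True the pixel values are scaled to [0, 1], matching the preprocessing the saved weights were trained with; the book's weights give an accuracy of roughly 0.9352 on this setup.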
The same handwritten digit recognition task can also be implemented in PyTorch using the MNIST dataset:

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Hyperparameters
input_size = 784      # 28x28
hidden_size = 100
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# Load the datasets (ToTensor scales pixel values to [0, 1])
train_dataset = torchvision.datasets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.MNIST(root='./data', train=False, transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

# Define the neural network model
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = NeuralNet(input_size, hidden_size, num_classes)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.reshape(-1, 28*28)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))

# Test the model
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28*28)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

    print('Test Accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total))
```

Running this code prints the training progress and the model's accuracy on the test set.