A detailed RNN starter example (MNIST)

I have recently been following Morvan's videos to learn RNNs. Below is the RNN code I adapted from his course, shared here as study notes.

It uses a simple RNN network on the MNIST dataset.

#  -*- coding: utf-8 -*-

import numpy as np
np.random.seed(1337)

from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import SimpleRNN,Activation,Dense
from keras.optimizers import Adam

#each image has 28 rows; one row is read per time step
TIME_STEPS = 28
#each time step takes in one row of 28 columns
INPUT_SIZE = 28
#number of images fed in per training step
BATCH_SIZE = 50
#index of the current batch within the training set
BATCH_INDEX = 0
#size of the output label vector (digits 0-9)
OUTPUT_SIZE = 10
#number of units in the RNN cell
CELL_SIZE = 50
#learning rate
LR = 0.001

(X_train,y_train),(X_test,y_test) = mnist.load_data()

#data process---------------------------
X_train = X_train.reshape(-1, 28, 28) / 255.      # normalize
X_test = X_test.reshape(-1, 28, 28) / 255.        # normalize

#one-hot encode the labels
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)

#build the RNN model--------------------
model = Sequential()
model.add(SimpleRNN(
    CELL_SIZE,
    batch_input_shape=(None, TIME_STEPS, INPUT_SIZE),
    unroll=True,
))
model.add(Dense(OUTPUT_SIZE))
model.add(Activation('softmax'))

#optimizer
adam = Adam(LR)
model.compile(optimizer=adam,
              loss='categorical_crossentropy',
              metrics=['accuracy'])

#train----------------------------------
for step in range(4001):
    #slice the next batch out of the training set by hand
    X_batch = X_train[BATCH_INDEX: BATCH_INDEX + BATCH_SIZE, :, :]
    Y_batch = y_train[BATCH_INDEX: BATCH_INDEX + BATCH_SIZE, :]
    cost = model.train_on_batch(X_batch, Y_batch)
    #advance the index, wrapping around at the end of the set
    BATCH_INDEX += BATCH_SIZE
    BATCH_INDEX = 0 if BATCH_INDEX >= X_train.shape[0] else BATCH_INDEX

    if step % 500 == 0:
        cost, accuracy = model.evaluate(X_test, y_test,
                                        batch_size=y_test.shape[0],
                                        verbose=False)
        print('test cost: ', cost, 'test accuracy: ', accuracy)
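As a quick sanity check on the preprocessing above: the one-hot encoding done by `to_categorical` is equivalent to indexing a 10×10 identity matrix. A minimal NumPy sketch with a few made-up labels (not the Keras internals):

```python
import numpy as np

# a few hypothetical digit labels
labels = np.array([3, 0, 9])

# each label becomes a 10-dimensional vector with a single 1
one_hot = np.eye(10)[labels]

print(one_hot.shape)  # (3, 10)
print(one_hot[0])     # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```

Taking `argmax` along the last axis recovers the original labels, which is also how predictions are turned back into digits at test time.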
For comparison, here is a detailed implementation of the same task — RNN classification of MNIST — using PyTorch.

```python
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

# hyperparameters
input_size = 28
sequence_length = 28
hidden_size = 128
num_layers = 2
num_classes = 10
batch_size = 100
num_epochs = 5
learning_rate = 0.001

# load and preprocess the MNIST dataset
train_dataset = datasets.MNIST(root='./data', train=True,
                               transform=transforms.ToTensor(), download=True)
test_dataset = datasets.MNIST(root='./data', train=False,
                              transform=transforms.ToTensor())

# wrap the datasets in DataLoaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size, shuffle=False)

# define the RNN model
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        out, _ = self.rnn(x, h0)
        # classify from the hidden state of the last time step
        out = self.fc(out[:, -1, :])
        return out

# move the model onto the GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = RNN(input_size, hidden_size, num_layers, num_classes).to(device)

# loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

# train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.reshape(-1, sequence_length, input_size).to(device)
        labels = labels.to(device)

        # forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # log every 100 batches
        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))

# test the model
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, sequence_length, input_size).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    print('Test Accuracy of the model on the 10000 test images: {} %'
          .format(100 * correct / total))
```

This code first loads and preprocesses the MNIST dataset, then defines an RNN model, moves it onto the GPU, and sets up the loss function and optimizer. It then trains and tests the model and prints the final test accuracy.

Note that in both the training and test phases, each input image is reshaped into a tensor of shape (batch_size, sequence_length, input_size), where sequence_length is the sequence length (the image height) and input_size is the input size per time step (the image width). This matches the input layout the RNN expects.
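The model above classifies from the hidden state at the final time step only; the effect of the `out[:, -1, :]` slice can be illustrated with NumPy on hypothetical shapes (random stand-in data, no GPU needed):

```python
import numpy as np

# stand-in for the RNN output: (batch, sequence_length, hidden_size)
out = np.random.rand(100, 28, 128)

# keep only the hidden state after the last row of each image
last = out[:, -1, :]

print(last.shape)  # (100, 128)
```

The fully connected layer then maps this (batch, hidden_size) slice to (batch, num_classes) logits.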