Handwritten Digit Recognition with a Simple Neural Network and a Convolutional Neural Network

Contents

1 Simple Neural Network

2 Improved Neural Network

3 Convolutional Neural Network

4 Reference Blogs and Videos

5 Related Functions: Keras Official Documentation


1 Simple Neural Network

This implements a two-layer neural network: a hidden layer with 15 neurons and an output layer with 10 neurons.

from tensorflow.keras.utils import to_categorical
from tensorflow.keras import models, layers, regularizers
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt


# Load the data
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# print(train_labels.shape, test_images.shape)
# print(train_images[0])
# print(train_labels[0])
# plt.imshow(train_images[0])
# plt.show()

train_images = train_images.reshape((60000, 28*28)).astype("float32") / 255  # scale pixels to [0, 1]
test_images = test_images.reshape((10000, 28*28)).astype("float32") / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

# print(train_images[0])
# print(train_labels[0])

network = models.Sequential()
network.add(layers.Dense(units=15, activation='relu', input_shape=(28*28,),))
network.add(layers.Dense(units=10, activation='softmax'))
# softmax outputs class probabilities for multi-class (or binary) classification; sigmoid is the binary alternative

# Compile step
network.compile(optimizer=RMSprop(learning_rate=0.001), loss='categorical_crossentropy', metrics=["accuracy"])
network.fit(train_images, train_labels, epochs=20, batch_size=128, verbose=2)

# print(network.summary())
# Test
y_pre = network.predict(test_images[:5])
print(y_pre, test_labels[:5])
test_loss, test_accuracy = network.evaluate(test_images, test_labels)
print("test_loss:", test_loss, "test_accuracy:", test_accuracy)

Overfitting occurs: accuracy is high on the training set but lower on the test set.

(Results may vary between runs.)

Network structure:

print(network.summary())

Linear classifier

Softmax classifier

First apply exp to the raw scores, which makes every value positive; then normalize them into a probability distribution, for example 13% cat and 87% car.

Cross-entropy loss

The model performs well when the loss is low.
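The steps above (exponentiate, then normalize) and the cross-entropy loss can be checked numerically. A small numpy sketch with made-up raw scores for the two classes [cat, car]:

```python
import numpy as np

scores = np.array([1.0, 2.9])          # made-up raw scores for [cat, car]
exp_scores = np.exp(scores)            # exp makes every value positive
probs = exp_scores / exp_scores.sum()  # normalize to a probability distribution
print(probs.round(2))                  # roughly [0.13, 0.87]: 13% cat, 87% car

# Cross-entropy loss when the true class is "car" (index 1):
loss = -np.log(probs[1])               # low loss = high probability on the true class
print(round(float(loss), 3))
```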

2 Improved Neural Network

from tensorflow.keras.utils import to_categorical
from tensorflow.keras import models, layers, regularizers
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt


# Load the data
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# print(train_labels.shape, test_images.shape)
# print(train_images[0])
# print(train_labels[0])
# plt.imshow(train_images[0])
# plt.show()

train_images = train_images.reshape((60000, 28*28)).astype("float32") / 255  # scale pixels to [0, 1]
test_images = test_images.reshape((10000, 28*28)).astype("float32") / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

# print(train_images[0])
# print(train_labels[0])

network = models.Sequential()
network.add(layers.Dense(units=128, activation='relu', input_shape=(28*28,),
                         kernel_regularizer=regularizers.l1(0.0001)))
network.add(layers.Dropout(0.01))
network.add(layers.Dense(units=32, activation='relu',
                         kernel_regularizer=regularizers.l1(0.0001)))
network.add(layers.Dropout(0.01))
network.add(layers.Dense(units=10, activation='softmax'))
# softmax outputs class probabilities for multi-class (or binary) classification; sigmoid is the binary alternative

# Compile step
network.compile(optimizer=RMSprop(learning_rate=0.001), loss='categorical_crossentropy', metrics=["accuracy"])
network.fit(train_images, train_labels, epochs=20, batch_size=128, verbose=2)

# print(network.summary())
# Test
# y_pre = network.predict(test_images[:5])
# print(y_pre, test_labels[:5])
test_loss, test_accuracy = network.evaluate(test_images, test_labels)
print("test_loss:", test_loss, "    test_accuracy:", test_accuracy)

Overfitting still occurs here.

Network structure:

3 Convolutional Neural Network


from tensorflow.keras.utils import to_categorical
from tensorflow.keras import models, layers
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.datasets import mnist
# Load the dataset
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Build the LeNet network
def LeNet():
    network = models.Sequential()
    network.add(layers.Conv2D(filters=6, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
    network.add(layers.AveragePooling2D((2, 2)))
    network.add(layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu'))
    network.add(layers.AveragePooling2D((2, 2)))
    network.add(layers.Conv2D(filters=120, kernel_size=(3, 3), activation='relu'))
    network.add(layers.Flatten())
    network.add(layers.Dense(84, activation='relu'))
    network.add(layers.Dense(10, activation='softmax'))
    return network
network = LeNet()
network.compile(optimizer=RMSprop(learning_rate=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

# Train the network with fit: epochs is the number of passes over the data, batch_size the number of samples per update
network.fit(train_images, train_labels, epochs=10, batch_size=128, verbose=2)
test_loss, test_accuracy = network.evaluate(test_images, test_labels)
print("test_loss:", test_loss, "    test_accuracy:", test_accuracy)

Network structure:

4 Reference Blogs and Videos

Step-by-step MNIST handwritten digit recognition video

CSDN blog

5 Related Functions: Keras Official Documentation

Getting started with the Sequential model
