Deep Learning with Keras (Part 1): A First Simple Example, Handwritten Digit Recognition

 

Let's build a simple neural network with Keras and, above all, get it running.

We use the MNIST dataset, which the code can download automatically. I ran into problems with the automatic download, though; if you do too, just copy the URL, download the file in your browser, and load it from disk instead.
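For reference, recent versions of tf.keras fetch the file from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz (older Keras releases used a different mirror, so check the mnist.load_data source for your version if that URL fails). A minimal sketch of the manual route, producing the mnist.npz file the script below reads:

# Hedged sketch: fetch mnist.npz by hand instead of relying on mnist.load_data().
# The URL is the one used by recent tf.keras; verify it against your version.
import urllib.request

URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
urllib.request.urlretrieve(URL, 'mnist.npz')  # saves next to the script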

 

from __future__ import print_function
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop, Adam
from keras.utils import np_utils

from keras import regularizers
import matplotlib.pyplot as plt

np.random.seed(1671)  # for reproducibility

# network and training
NB_EPOCH = 20
BATCH_SIZE = 128
VERBOSE = 1
NB_CLASSES = 10   # number of outputs = number of digits
# OPTIMIZER = RMSprop()  # optimizer, explained in this chapter
OPTIMIZER = Adam()  # optimizer, explained in this chapter
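# Both optimizers accept an explicit learning rate: Keras 2.x spells the
# keyword lr (e.g. Adam(lr=0.001)), newer releases spell it learning_rate.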

N_HIDDEN = 128
VALIDATION_SPLIT = 0.2  # how much TRAIN is reserved for VALIDATION
DROPOUT = 0.3

# data: shuffled and split between train and test sets
# (X_train, y_train), (X_test, y_test) = mnist.load_data()
with np.load('mnist.npz', allow_pickle=True) as f:
    X_train, y_train = f['x_train'], f['y_train']
    X_test, y_test = f['x_test'], f['y_test']
# X_train is 60000 rows of 28x28 values --> reshaped to 60000 x 784
RESHAPED = 784
#
X_train = X_train.reshape(60000, RESHAPED)
X_test = X_test.reshape(10000, RESHAPED)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')

# normalize pixel values to the range [0, 1]
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices (one-hot encoding)
Y_train = np_utils.to_categorical(y_train, NB_CLASSES)
Y_test = np_utils.to_categorical(y_test, NB_CLASSES)
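# Sanity check of the encoding: a label such as 3 becomes a length-10 vector
# with a single 1 at index 3, i.e. [0, 0, 0, 1, 0, 0, 0, 0, 0, 0].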

# N_HIDDEN neurons per hidden layer
# 10 outputs
# final stage is softmax
# hidden activation: relu (sigmoid is another common choice)
model = Sequential()

## Regularization, to keep the model from overfitting.
# Three kinds of regularization are commonly used in machine learning:
# 1. L1 (lasso): model complexity measured as the sum of the absolute weight values
# 2. L2 (ridge): model complexity measured as the sum of the squared weights
# 3. Elastic net: model complexity captured by combining the two techniques above
# The same schemes can be applied independently to a layer's weights, biases and
# activations, e.g. kernel_regularizer=regularizers.l2(0.01)
model.add(Dense(N_HIDDEN, input_shape=(RESHAPED,)))
# model.add(Dense(N_HIDDEN, input_shape=(RESHAPED,), kernel_regularizer=regularizers.l1_l2(l1=0.01, l2=0.01)))
model.add(Activation('relu'))
model.add(Dropout(DROPOUT))  # randomly drop units during training; a form of regularization that keeps units from co-adapting with their neighbors, so the network generalizes better

model.add(Dense(N_HIDDEN))  # second hidden layer
model.add(Activation('relu'))
model.add(Dropout(DROPOUT))  # dropout again, the same form of regularization

model.add(Dense(NB_CLASSES))
model.add(Activation('softmax'))
model.summary()
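# Expected parameter counts from summary(), weights + biases per Dense layer:
#   layer 1: 784*128 + 128 = 100,480
#   layer 2: 128*128 + 128 =  16,512
#   layer 3: 128*10  + 10  =   1,290
# for a total of 118,282 trainable parameters.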

# Pick an optimizer, then the objective (loss) function it will minimize
# over the weight space; training is simply the process of minimizing this loss.
model.compile(loss='categorical_crossentropy',
              optimizer=OPTIMIZER,
              metrics=['accuracy'])


history = model.fit(X_train, Y_train,
                    batch_size=BATCH_SIZE, epochs=NB_EPOCH,
                    verbose=VERBOSE, validation_split=VALIDATION_SPLIT)

score = model.evaluate(X_test, Y_test, verbose=VERBOSE)
# classes=model.predict_classes(X_test, 1, verbose=VERBOSE)
# proba=model.predict_proba(X_test, 1, verbose=VERBOSE)
print("\nTest score:", score[0])
print('Test accuracy:', score[1])

# list all data in history
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.savefig('Adam_acc.png')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.savefig('Adam_loss.png')
plt.show()
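One note in passing: predict_classes and predict_proba (commented out above) were removed from later Keras releases. A minimal sketch of the equivalent with the plain predict API, reusing the model and X_test from the script above:

# Class predictions without the deprecated predict_classes helper.
probs = model.predict(X_test, verbose=0)  # (10000, 10) array of softmax outputs
classes = np.argmax(probs, axis=1)        # most probable digit for each sample
print(classes[:10], y_test[:10])          # predicted vs. true labels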

With that, a simple neural network is built and running.

It makes light use of Dense layers, Dropout, activation functions, optimizers, objective (loss) functions, and regularization, which gives us a first working picture of how Keras is used.

 
