Foreword
This post walks through a Keras CNN example; the code is explained line by line below. The source code comes from the Bilibili uploader 莫烦python (MorvanZhou).
1. Code
The code is as follows (example):
import numpy as np
np.random.seed(1377)
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Activation, Convolution2D, MaxPooling2D, Flatten
from keras.optimizers import Adam
# download the mnist data to the path
(X_train, y_train), (X_test, y_test) = mnist.load_data()
#data pre-processing
X_train = X_train.reshape(-1, 1, 28, 28)  # -1 lets the sample count be inferred automatically; 1 channel because the images are grayscale (RGB would be 3)
X_test = X_test.reshape(-1, 1, 28, 28)
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)
#Another way to build your CNN
model = Sequential()
#Conv layer 1 output shape(32, 28, 28)
model.add(Convolution2D(
nb_filter=32,  # 32 filters; each filter slides over the whole image and produces its own feature map, so the output depth is 32
nb_row=5,  # filter height (rows)
nb_col=5,  # filter width (columns)
border_mode='same',  # padding method; 'same' keeps the height and width unchanged
input_shape=(1,#channels
28,28)#height&width
))
model.add(Activation('relu'))
#Pooling layer 1 (max pooling) output shape(32, 14, 14)
model.add(MaxPooling2D(
pool_size=(2,2),
strides=(2,2),
border_mode='same' #padding method
))
#Conv layer 2 output shape(64,14,14)
model.add(Convolution2D(64, 5, 5, border_mode='same'))
model.add(Activation('relu'))
#Pooling layer 2 (max pooling) output shape(64, 7, 7)
model.add(MaxPooling2D(pool_size=(2,2), border_mode='same'))
#Fully connected layer 1 input shape (64 * 7 * 7)=(3136), output shape(1024)
# Flattening turns the feature maps into a single vector so that every unit can be fully connected to the next layer.
model.add(Flatten())  # Flatten collapses the multi-dimensional input into one dimension; it is commonly used for the transition from conv layers to fully connected layers, and does not affect the batch size.
model.add(Dense(1024))  # 1024 is a user-chosen number of units; other values work too
model.add(Activation('relu'))
#Fully connected layer 2 to shape (10) for 10 classes
model.add(Dense(10))  # output 10 units, with softmax for classification
model.add(Activation('softmax'))
#Another way to define your optimizer
adam = Adam(lr=1e-4)
#We add metrics to get more results you want to see
model.compile(optimizer=adam,
loss='categorical_crossentropy',  # categorical cross-entropy loss
metrics=['accuracy'])
print('Training--------')
#Another way to train the model
model.fit(X_train, y_train, nb_epoch=1, batch_size=32,)
print('\nTesting--------')
#Evaluate the model with the metrics we defined earlier
loss, accuracy = model.evaluate(X_test, y_test)
print('\ntest loss:', loss)
print('\ntest accuracy:', accuracy)
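(Aside: this listing uses the Keras 1.x argument names `nb_filter`, `nb_row`/`nb_col`, `border_mode`, and `nb_epoch`; Keras 2 renamed them to `filters`, `kernel_size`, `padding`, and `epochs`.) The output shapes noted in the comments, (32, 28, 28) after conv 1, (32, 14, 14) after pool 1, down to the flattened 64 * 7 * 7 = 3136, can be checked with a small pure-numpy helper. This is only a sketch of the standard 'same'/'valid' output-size formulas; the helper name `conv2d_out` is my own, not a Keras function.

```python
import numpy as np

def conv2d_out(size, kernel, stride=1, padding="same"):
    # 'same' padding keeps the spatial size (divided by the stride);
    # 'valid' padding shrinks it by kernel - 1 before striding
    if padding == "same":
        return int(np.ceil(size / stride))
    return (size - kernel) // stride + 1

h = conv2d_out(28, 5, padding="same")  # Conv layer 1: 28 -> 28
h = conv2d_out(h, 2, stride=2)         # Pool 1 (2x2, stride 2): 28 -> 14
h = conv2d_out(h, 5, padding="same")   # Conv layer 2: 14 -> 14
h = conv2d_out(h, 2, stride=2)         # Pool 2 (2x2, stride 2): 14 -> 7
flat = 64 * h * h                      # Flatten: 64 channels * 7 * 7
print(flat)                            # 3136
```

This matches the input shape of the first Dense layer.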
2. Run results
Training--------
Epoch 1/1
60000/60000 [==============================] - 53s 889us/step - loss: 0.2842 - accuracy: 0.9179
Testing--------
10000/10000 [==============================] - 2s 212us/step
test loss: 0.10877728501232341
test accuracy: 0.9679999947547913
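The test loss and accuracy above come from the `categorical_crossentropy` loss and the `accuracy` metric passed to `model.compile`. A minimal numpy sketch (not Keras's actual implementation) of how both are computed on a toy batch, with made-up softmax outputs:

```python
import numpy as np

# Toy batch: 3 samples, 3 classes, one-hot labels
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]])
# Hypothetical softmax outputs from a model (each row sums to 1)
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.3, 0.3, 0.4]])

# categorical_crossentropy: mean over the batch of -log(probability
# the model assigned to the true class)
loss = -np.mean(np.log(np.sum(y_true * y_pred, axis=1)))

# accuracy: fraction of samples whose argmax matches the true label
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))
print(loss, accuracy)  # accuracy is 1.0 here: every argmax is correct
```

Lower loss means the model put more probability mass on the correct classes, which is why loss keeps falling even after accuracy is already high.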
Summary
I'm impressed: Keras really is quite easy to use, and getting started was painless. Two more lessons and then it's on to my bone-age prediction project...