24. More Keras usage examples

1. The Sequential model

  • Defining the model

  • A Sequential model is a linear stack of layers: data flows along a single path from input to output.

  • You can build one by passing a list of layers to the Sequential constructor,

  • or by adding layers one at a time with the .add() method.

  • The model needs to know the shape of its input data, so the first layer of a Sequential model must receive an argument describing the input shape.

  • Pass an input_shape keyword argument to the first layer.

  • Some 2D layers, such as Dense, can instead specify the input shape implicitly through the input_dim argument.

  • If the input needs a fixed batch size (common for stateful RNNs), pass a batch_size argument to the layer. For example, to declare an input tensor with batch size 32 and per-sample shape (6, 8), pass batch_size=32 and input_shape=(6, 8).
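As an illustration of why only the first layer needs shape information: once the input dimension is known, a Dense layer's weight shapes are fully determined, and the batch dimension is carried through automatically. A hypothetical numpy sketch of this (not Keras itself):

```python
import numpy as np

# Hypothetical illustration: a Dense layer's weights are fully
# determined by (input_dim, units); the batch size flows through.
rng = np.random.default_rng(0)

def dense_forward(x, units):
    in_dim = x.shape[-1]
    W = rng.normal(size=(in_dim, units))  # kernel: (input_dim, units)
    b = np.zeros(units)                   # bias: (units,)
    return np.maximum(x @ W + b, 0)       # ReLU activation

x = rng.random((32, 20))   # batch of 32 samples, input_shape=(20,)
h = dense_forward(x, 64)
print(h.shape)             # (32, 64): the batch dimension is preserved
```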
Compilation

  • Before training, the learning process must be configured via compile, which takes three arguments:

  • Optimizer optimizer: either the name of a predefined optimizer, such as rmsprop or adagrad, or an instance of an Optimizer class; see optimizers for details.

  • Loss function loss: the objective the model tries to minimize. It can be the name of a predefined loss, such as categorical_crossentropy or mse, or a custom loss function; see losses for details.

  • Metrics list metrics: for classification problems this is usually metrics=['accuracy']. A metric can be the name of a predefined metric or a user-defined function; a metric function should return a single tensor, or a dict mapping metric_name -> metric_value.
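To make the compile arguments concrete, here is a numpy sketch (an assumption about the underlying math, not Keras source) of the binary cross-entropy loss and the accuracy metric used in the binary-classification example below:

```python
import numpy as np

# Sketch of the two quantities compile() wires up for a binary
# classifier: the loss being minimized and the metric being reported.
def binary_crossentropy(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)).mean()

def accuracy(y_true, y_pred):
    return ((y_pred > 0.5).astype(int) == y_true).mean()

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.2, 0.6, 0.4])
print(binary_crossentropy(y_true, y_pred))  # small when predictions match labels
print(accuracy(y_true, y_pred))             # 0.75: three of four are correct
```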

Basic Keras usage:

Binary classification with a multilayer perceptron

DNN

import numpy as np
from keras.models import Sequential  # the Sequential model
from keras.layers import Dense, Dropout  # Dense: affine (matrix) layer; Dropout: randomly masks neurons to reduce overfitting
import keras
# Generate dummy data
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_test = np.random.random((100, 20))
y_test = np.random.randint(2, size=(100, 1))
# Build the model
model = Sequential()  # Keras Sequential model
model.add(Dense(64, input_dim=20, activation='relu'))  # first hidden layer
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))  # second hidden layer
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))  # output layer: sigmoid for binary classification (softmax for multi-class)
# Configure the model: which loss, which optimizer, which evaluation metrics
model.compile(loss='binary_crossentropy',  # binary cross-entropy for a two-class problem
              optimizer='rmsprop',  # RMSprop, a gradient-descent variant that improves on plain SGD
              metrics=['accuracy'])  # report accuracy
# Train
model.fit(x_train, y_train,
          epochs=20,
          batch_size=128)
Epoch 1/20
1000/1000 [==============================] - 0s 355us/step - loss: 0.7169 - accuracy: 0.4810
Epoch 2/20
1000/1000 [==============================] - 0s 18us/step - loss: 0.7092 - accuracy: 0.4940
Epoch 3/20
1000/1000 [==============================] - 0s 29us/step - loss: 0.7031 - accuracy: 0.4840
Epoch 4/20
1000/1000 [==============================] - 0s 25us/step - loss: 0.7024 - accuracy: 0.4990
Epoch 5/20
1000/1000 [==============================] - 0s 23us/step - loss: 0.6981 - accuracy: 0.5250
Epoch 6/20
1000/1000 [==============================] - 0s 30us/step - loss: 0.6983 - accuracy: 0.4980
Epoch 7/20
1000/1000 [==============================] - 0s 28us/step - loss: 0.6920 - accuracy: 0.5410
Epoch 8/20
1000/1000 [==============================] - 0s 24us/step - loss: 0.6892 - accuracy: 0.5410
Epoch 9/20
1000/1000 [==============================] - 0s 29us/step - loss: 0.6987 - accuracy: 0.5040
Epoch 10/20
1000/1000 [==============================] - 0s 25us/step - loss: 0.6961 - accuracy: 0.5120
Epoch 11/20
1000/1000 [==============================] - 0s 30us/step - loss: 0.6947 - accuracy: 0.5150
Epoch 12/20
1000/1000 [==============================] - 0s 30us/step - loss: 0.6923 - accuracy: 0.5210
Epoch 13/20
1000/1000 [==============================] - 0s 24us/step - loss: 0.6922 - accuracy: 0.5290
Epoch 14/20
1000/1000 [==============================] - 0s 29us/step - loss: 0.6923 - accuracy: 0.5530
Epoch 15/20
1000/1000 [==============================] - 0s 35us/step - loss: 0.6961 - accuracy: 0.5070
Epoch 16/20
1000/1000 [==============================] - 0s 20us/step - loss: 0.6946 - accuracy: 0.5100
Epoch 17/20
1000/1000 [==============================] - 0s 30us/step - loss: 0.6867 - accuracy: 0.5540
Epoch 18/20
1000/1000 [==============================] - 0s 24us/step - loss: 0.6903 - accuracy: 0.5260
Epoch 19/20
1000/1000 [==============================] - 0s 27us/step - loss: 0.6927 - accuracy: 0.5230
Epoch 20/20
1000/1000 [==============================] - 0s 33us/step - loss: 0.6875 - accuracy: 0.5560





<keras.callbacks.callbacks.History at 0x1a2c9302c88>
# Evaluate on the test set
# evaluate returns [loss (cross-entropy), accuracy]
score = model.evaluate(x_test, y_test, batch_size=128)
score
100/100 [==============================] - 0s 379us/step





[0.6818742156028748, 0.550000011920929]
# Reproduce evaluate()'s cross-entropy by hand
y_prob = keras.utils.to_categorical(y_test,num_classes=2)
proba_ = model.predict(x_test)
proba_ = np.concatenate([1-proba_,proba_],axis = -1)
(y_prob*np.log(1/proba_)).sum(axis = -1).mean()
0.68187433
# Compute the accuracy by hand
y_ = model.predict_classes(x_test)
(y_ == y_test).mean()
0.55

Getting the parameters

for v in model.trainable_weights:
    print(v.shape)
(20, 64)
(64,)
(64, 64)
(64,)
(64, 1)
(1,)
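The shapes printed above pin down the model's parameter count: each Dense layer owns input_dim × units kernel weights plus units biases. A quick check:

```python
# Parameter counts implied by the printed shapes:
# kernel (input_dim, units) plus bias (units,) per Dense layer
layers = [(20, 64), (64, 64), (64, 1)]
params = sum(i * u + u for i, u in layers)
print(params)  # 1344 + 4160 + 65 = 5569
```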

Keras usage (2):

Build a multilayer perceptron network (a DNN: a stack of matrix operations)

from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Input
import numpy as np
from keras.datasets import mnist
import keras

Load the data

(X_train,y_train),(X_test,y_test) = mnist.load_data()

X_train = X_train / 255.0  # scale pixel values to [0, 1]
X_test = X_test / 255.0
X_train = X_train.astype(np.float32).reshape(60000, -1)  # flatten 28x28 images to 784-vectors
X_test = X_test.astype(np.float32).reshape(10000, 784)
y_train = keras.utils.to_categorical(y_train)  # one-hot encode the labels
y_test = keras.utils.to_categorical(y_test)
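keras.utils.to_categorical turns integer class labels into one-hot rows. A numpy equivalent (a sketch of the behavior, not the Keras implementation):

```python
import numpy as np

# One-hot encoding: row i gets a 1 in column y[i], zeros elsewhere
def to_one_hot(y, num_classes):
    out = np.zeros((len(y), num_classes), dtype=np.float32)
    out[np.arange(len(y)), y] = 1.0
    return out

print(to_one_hot(np.array([0, 2, 1]), 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```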

Build the model

input_layer = Input(shape=(784,))
x = Dense(256, activation='relu')(input_layer)  # first hidden layer
x = Dense(256, activation='relu')(x)  # second hidden layer, fed the first layer's output
x = Dense(512, activation=keras.activations.relu)(x)  # third hidden layer
output_layer = Dense(10, activation='softmax')(x)
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer=keras.optimizers.sgd(),
              loss=keras.losses.categorical_crossentropy,
              metrics=[keras.metrics.categorical_accuracy])
model.fit(x=X_train,
          y=y_train,
          batch_size=256,
          epochs=10,
          verbose=1,  # verbose controls logging output
          validation_data=(X_test, y_test))
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 8s 131us/step - loss: 1.9025 - categorical_accuracy: 0.5257 - val_loss: 1.2492 - val_categorical_accuracy: 0.7527
Epoch 2/10
60000/60000 [==============================] - 4s 73us/step - loss: 0.8528 - categorical_accuracy: 0.8065 - val_loss: 0.5924 - val_categorical_accuracy: 0.8538
Epoch 3/10
60000/60000 [==============================] - 4s 74us/step - loss: 0.5208 - categorical_accuracy: 0.8645 - val_loss: 0.4406 - val_categorical_accuracy: 0.8785
Epoch 4/10
60000/60000 [==============================] - 4s 73us/step - loss: 0.4187 - categorical_accuracy: 0.8849 - val_loss: 0.3730 - val_categorical_accuracy: 0.8945
Epoch 5/10
60000/60000 [==============================] - 4s 75us/step - loss: 0.3684 - categorical_accuracy: 0.8962 - val_loss: 0.3383 - val_categorical_accuracy: 0.9057
Epoch 6/10
60000/60000 [==============================] - 4s 74us/step - loss: 0.3372 - categorical_accuracy: 0.9051 - val_loss: 0.3135 - val_categorical_accuracy: 0.9097
Epoch 7/10
60000/60000 [==============================] - 4s 73us/step - loss: 0.3148 - categorical_accuracy: 0.9102 - val_loss: 0.2949 - val_categorical_accuracy: 0.9155
Epoch 8/10
60000/60000 [==============================] - 4s 75us/step - loss: 0.2976 - categorical_accuracy: 0.9154 - val_loss: 0.2797 - val_categorical_accuracy: 0.9186
Epoch 9/10
60000/60000 [==============================] - 4s 75us/step - loss: 0.2827 - categorical_accuracy: 0.9191 - val_loss: 0.2664 - val_categorical_accuracy: 0.9231
Epoch 10/10
60000/60000 [==============================] - 5s 78us/step - loss: 0.2704 - categorical_accuracy: 0.9221 - val_loss: 0.2582 - val_categorical_accuracy: 0.9260





<keras.callbacks.callbacks.History at 0x57cedc8>
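The output layer above pairs a softmax activation with a categorical cross-entropy loss. A minimal numpy sketch of that math (an illustration of the formulas, not Keras code):

```python
import numpy as np

# Softmax turns raw scores into class probabilities; categorical
# cross-entropy then compares them against the one-hot label.
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

z = np.array([[2.0, 1.0, 0.1]])        # raw scores for 3 classes
p = softmax(z)
print(p.sum())                          # sums to 1 (up to float error)

y = np.array([[1.0, 0.0, 0.0]])         # one-hot label: class 0
loss = -(y * np.log(p)).sum(axis=-1).mean()
print(loss)                             # equals -log(p[0, 0])
```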

Training, saving, and using a model:


Predicting on images

import numpy as np
import keras
Using TensorFlow backend.
model = keras.models.load_model('./keras-cifar-10-trained-model3.h5')

The data

(X_train,y_train),(X_test,y_test) = keras.datasets.cifar10.load_data()
y_train = keras.utils.to_categorical(y_train,num_classes=10)
y_test = keras.utils.to_categorical(y_test,num_classes=10)

X_train = (X_train/255).astype(np.float32)
X_test = (X_test/255).astype(np.float32)
X_train.shape
(50000, 32, 32, 3)
model.fit(X_train,y_train,batch_size = 32,validation_data=(X_test,y_test),workers=4)
Train on 50000 samples, validate on 10000 samples
Epoch 1/1
50000/50000 [==============================] - 172s 3ms/step - loss: 0.5749 - accuracy: 0.7975 - val_loss: 0.6547 - val_accuracy: 0.7784





<keras.callbacks.callbacks.History at 0x1cedde55888>

Fetching an image from the web and classifying it with the model

# The model was trained on 32x32x3 images.
# An image from the web can be any size, so it must be resized to match.
# OpenCV handles loading and resizing.
import cv2
import matplotlib.pyplot as plt
img = cv2.imread('./ship.jpg')
# OpenCV loads channels as blue-green-red; reverse them to red-green-blue
img = cv2.resize(img, dsize=(32, 32))[:, :, ::-1]
plt.figure(figsize=(1, 1))
plt.imshow(img)
img = (img / 255).astype(np.float32).reshape(1, 32, 32, 3)
model.predict_classes(img)
array([8], dtype=int64)
(figure: the resized 32x32 input image)
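The [:, :, ::-1] slice in the snippet above is what converts OpenCV's BGR channel order to RGB: it reverses the last (channel) axis. A one-pixel example makes the reversal visible:

```python
import numpy as np

# Reversing the channel axis turns BGR into RGB
bgr = np.array([[[10, 20, 30]]])   # one pixel: B=10, G=20, R=30
rgb = bgr[:, :, ::-1]
print(rgb[0, 0])                   # [30 20 10]: now R, G, B
```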
(X_train, y_train), (X_test, y_test) = keras.datasets.cifar10.load_data()  # reload the raw integer labels
y_test
array([[3],
       [8],
       [8],
       ...,
       [5],
       [1],
       [7]])
plt.figure(figsize=(10, 15))
for i in range(100):  # show the first 100 training images with their integer labels
    ax = plt.subplot(10, 10, i + 1)
    ax.imshow(X_train[i])
    ax.axis('off')
    ax.set_title(y_train[i, 0])
(figure: a 10x10 grid of training images, each titled with its integer label)