Chapter 10: Keras Artificial Neural Networks

Date and version information:
time: 2021-02-21 03:21:00
python version: 3.8.6
sklearn version: 0.24.1
tensorflow version: 2.4.1
keras version: 2.4.0

10.2.8 Visualizing with TensorBoard

Configure the TensorBoard log path
# Build a timestamped TensorBoard log directory under the current path
import os
import time

def get_log_path():
    path = os.path.join(os.curdir, "tsb_log")
    run_path = time.strftime("run_%Y%m%d_%H%M%S")
    return os.path.join(path, run_path)

run_path = get_log_path()
print(run_path)
./tsb_log/run_20210221_032100
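Each call to get_log_path() returns a new timestamped subdirectory, so every training run keeps its logs in its own folder and TensorBoard can display the runs side by side. A minimal sketch for listing the runs accumulated so far (assuming ./tsb_log already exists):

import os
log_root = os.path.join(os.curdir, "tsb_log")
# Each subdirectory corresponds to one training run; TensorBoard shows them as separate runs
if os.path.isdir(log_root):
    for name in sorted(os.listdir(log_root)):
        print(name)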
Train a model
# Load the handwritten digits dataset to train a neural network
from sklearn.datasets import load_digits
data = load_digits()
X, y = data.data, data.target
X.shape, X.dtype, y.dtype
((1797, 64), dtype('float64'), dtype('int64'))
print(data.DESCR)
.. _digits_dataset:

Optical recognition of handwritten digits dataset
--------------------------------------------------

**Data Set Characteristics:**

    :Number of Instances: 1797
    :Number of Attributes: 64
    :Attribute Information: 8x8 image of integer pixels in the range 0..16.
    :Missing Attribute Values: None
    :Creator: E. Alpaydin (alpaydin '@' boun.edu.tr)
    :Date: July; 1998

This is a copy of the test set of the UCI ML hand-written digits datasets
https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits

The data set contains images of hand-written digits: 10 classes where
each class refers to a digit.

Preprocessing programs made available by NIST were used to extract
normalized bitmaps of handwritten digits from a preprinted form. From a
total of 43 people, 30 contributed to the training set and different 13
to the test set. 32x32 bitmaps are divided into nonoverlapping blocks of
4x4 and the number of on pixels are counted in each block. This generates
an input matrix of 8x8 where each element is an integer in the range
0..16. This reduces dimensionality and gives invariance to small
distortions.

For info on NIST preprocessing routines, see M. D. Garris, J. L. Blue, G.
T. Candela, D. L. Dimmick, J. Geist, P. J. Grother, S. A. Janet, and C.
L. Wilson, NIST Form-Based Handprint Recognition System, NISTIR 5469,
1994.

.. topic:: References

  - C. Kaynak (1995) Methods of Combining Multiple Classifiers and Their
    Applications to Handwritten Digit Recognition, MSc Thesis, Institute of
    Graduate Studies in Science and Engineering, Bogazici University.
  - E. Alpaydin, C. Kaynak (1998) Cascading Classifiers, Kybernetika.
  - Ken Tang and Ponnuthurai N. Suganthan and Xi Yao and A. Kai Qin.
    Linear dimensionalityreduction using relevance weighted LDA. School of
    Electrical and Electronic Engineering Nanyang Technological University.
    2005.
  - Claudio Gentile. A New Approximate Maximal Margin Classification
    Algorithm. NIPS. 2000.
data.target_names
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
# Visualize one sample
x_one_image = X[0].reshape(8,8)
y_one = y[0]
print(x_one_image)
import matplotlib.pyplot as plt
# plt.figure(figsize=(8,8), dpi=20)
plt.title(y_one)
plt.imshow(x_one_image)
plt.show()
[[ 0.  0.  5. 13.  9.  1.  0.  0.]
 [ 0.  0. 13. 15. 10. 15.  5.  0.]
 [ 0.  3. 15.  2.  0. 11.  8.  0.]
 [ 0.  4. 12.  0.  0.  8.  8.  0.]
 [ 0.  5.  8.  0.  0.  9.  8.  0.]
 [ 0.  4. 11.  0.  1. 12.  7.  0.]
 [ 0.  2. 14.  5. 10. 12.  0.  0.]
 [ 0.  0.  6. 13. 10.  0.  0.  0.]]

[Figure: the first sample rendered as an 8x8 image with plt.imshow, titled with its label 0]
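To get a broader view of the data, several samples can be drawn in one figure the same way. A minimal sketch (the 4x4 grid size is an arbitrary choice):

import matplotlib.pyplot as plt
# Plot the first 16 digits in a 4x4 grid, each titled with its label
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for ax, image, label in zip(axes.ravel(), X[:16], y[:16]):
    ax.imshow(image.reshape(8, 8), cmap="gray")
    ax.set_title(label)
    ax.axis("off")
plt.tight_layout()
plt.show()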

# Split the data into training and test sets, then split off a validation set
from sklearn.model_selection import train_test_split
X_train_full, X_test, y_train_full, y_test = train_test_split(X, y)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full)
# According to the dataset description, pixel values range from 0 to 16, so scale them into [0, 1]
X_train = X_train / 16
X_valid = X_valid / 16
X_test = X_test / 16
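A quick sanity check (a minimal sketch) confirms that the scaled pixel values now lie in [0, 1]:

# After dividing by 16 the pixel values should fall inside [0, 1]
print(X_train.min(), X_train.max())  # expected: 0.0 1.0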
# Build the Keras model with the functional API
from tensorflow import keras

input_ = keras.layers.Input(shape=X_train.shape[1:])
hidden1 = keras.layers.Dense(30, activation="relu")(input_)
output = keras.layers.Dense(10, activation="softmax")(hidden1)
model = keras.Model(inputs=[input_], outputs=[output])
# Inspect each layer of the model
model.summary()
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 64)]              0         
_________________________________________________________________
dense (Dense)                (None, 30)                1950      
_________________________________________________________________
dense_1 (Dense)              (None, 10)                310       
=================================================================
Total params: 2,260
Trainable params: 2,260
Non-trainable params: 0
_________________________________________________________________
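The parameter counts follow directly from the layer sizes: a Dense layer with n inputs and m units has n*m weights plus m bias terms. A quick check of the numbers above:

# Dense(30) fed by 64 inputs: 64*30 weights + 30 biases
print(64 * 30 + 30)   # 1950
# Dense(10) fed by 30 hidden units: 30*10 weights + 10 biases
print(30 * 10 + 10)   # 310
# Total trainable parameters
print(1950 + 310)     # 2260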
# Compile the model
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
# Train the model, passing in the TensorBoard callback
ts_board = keras.callbacks.TensorBoard(run_path)
# Note: early_stop is defined here but not passed to fit below, so all 100 epochs run
early_stop = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
         validation_data=(X_valid, y_valid), callbacks=[ts_board]
         )
Epoch 1/100
32/32 [==============================] - 1s 18ms/step - loss: 2.4146 - accuracy: 0.1065 - val_loss: 2.3392 - val_accuracy: 0.1039
Epoch 2/100
32/32 [==============================] - 0s 4ms/step - loss: 2.3136 - accuracy: 0.1095 - val_loss: 2.2624 - val_accuracy: 0.1602
Epoch 3/100
32/32 [==============================] - 0s 3ms/step - loss: 2.2619 - accuracy: 0.1378 - val_loss: 2.2005 - val_accuracy: 0.2196
Epoch 4/100
32/32 [==============================] - 0s 3ms/step - loss: 2.1986 - accuracy: 0.2093 - val_loss: 2.1458 - val_accuracy: 0.2938
Epoch 5/100
32/32 [==============================] - 0s 3ms/step - loss: 2.1421 - accuracy: 0.2728 - val_loss: 2.0934 - val_accuracy: 0.3531
Epoch 6/100
32/32 [==============================] - 0s 4ms/step - loss: 2.0914 - accuracy: 0.3527 - val_loss: 2.0409 - val_accuracy: 0.4125
Epoch 7/100
32/32 [==============================] - 0s 3ms/step - loss: 2.0284 - accuracy: 0.4386 - val_loss: 1.9872 - val_accuracy: 0.4570
Epoch 8/100
32/32 [==============================] - 0s 4ms/step - loss: 1.9840 - accuracy: 0.4660 - val_loss: 1.9336 - val_accuracy: 0.5163
......
Epoch 94/100
32/32 [==============================] - 0s 3ms/step - loss: 0.2487 - accuracy: 0.9400 - val_loss: 0.2426 - val_accuracy: 0.9496
Epoch 95/100
32/32 [==============================] - 0s 3ms/step - loss: 0.2458 - accuracy: 0.9476 - val_loss: 0.2403 - val_accuracy: 0.9525
Epoch 96/100
32/32 [==============================] - 0s 3ms/step - loss: 0.2331 - accuracy: 0.9548 - val_loss: 0.2381 - val_accuracy: 0.9525
Epoch 97/100
32/32 [==============================] - 0s 3ms/step - loss: 0.2415 - accuracy: 0.9527 - val_loss: 0.2365 - val_accuracy: 0.9525
Epoch 98/100
32/32 [==============================] - 0s 3ms/step - loss: 0.2470 - accuracy: 0.9457 - val_loss: 0.2344 - val_accuracy: 0.9525
Epoch 99/100
32/32 [==============================] - 0s 3ms/step - loss: 0.2471 - accuracy: 0.9487 - val_loss: 0.2322 - val_accuracy: 0.9496
Epoch 100/100
32/32 [==============================] - 0s 3ms/step - loss: 0.2011 - accuracy: 0.9651 - val_loss: 0.2302 - val_accuracy: 0.9525
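To actually stop training when the validation loss stops improving, early_stop can be appended to the callbacks list; setting histogram_freq=1 additionally makes the TensorBoard callback log weight histograms every epoch. A sketch of that variant (not the run recorded above):

ts_board = keras.callbacks.TensorBoard(run_path, histogram_freq=1)
early_stop = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
                    validation_data=(X_valid, y_valid),
                    callbacks=[ts_board, early_stop])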
# Plot the learning curves
import pandas as pd
pd.DataFrame(history.history).plot()
plt.grid(True)
plt.show()

[Figure: learning curves of loss, accuracy, val_loss and val_accuracy over the training epochs]
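Because loss and accuracy live on different scales, the curves can be easier to read when plotted on separate axes. A minimal sketch:

import pandas as pd
import matplotlib.pyplot as plt
hist = pd.DataFrame(history.history)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
hist[["loss", "val_loss"]].plot(ax=ax1, grid=True, title="loss")
hist[["accuracy", "val_accuracy"]].plot(ax=ax2, grid=True, title="accuracy")
plt.show()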

# Predict on new samples
import numpy as np
X_new, y_new = X_test[:3], y_test[:3]
y_pred_all = model.predict(X_new)
y_pred = np.argmax(y_pred_all, axis=1)
y_pred_all = np.round(y_pred_all, 2)
print("y_new: ", y_new)
print("y_pred_: ", y_pred)
print("y_pred_all: ",y_pred_all)
y_new:  [3 9 3]
y_pred_:  [3 9 3]
y_pred_all:  [[0.   0.   0.06 0.85 0.   0.02 0.   0.   0.04 0.03]
 [0.01 0.   0.   0.01 0.   0.02 0.   0.   0.01 0.96]
 [0.   0.   0.02 0.68 0.   0.01 0.   0.01 0.02 0.26]]
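Each row of y_pred_all is a probability distribution over the ten classes, and np.argmax picks the most likely class for each sample. A minimal sketch that also prints the probability assigned to each prediction:

import numpy as np
# Probability the model assigns to its predicted class for each sample
confidence = np.max(y_pred_all, axis=1)
for true, pred, conf in zip(y_new, y_pred, confidence):
    print(f"true={true}  pred={pred}  probability={conf:.2f}")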
# Evaluate the model on the test set
loss, accuracy = model.evaluate(X_test, y_test)
15/15 [==============================] - 0s 1ms/step - loss: 0.2820 - accuracy: 0.9333
TensorBoard visualization

From a terminal (on Linux), in the project directory, run the following command:

$ tensorboard --logdir="./tsb_log" --port=6006 --bind_all

Note: ./tsb_log is the log directory defined earlier. The tensorboard command line tool comes bundled with the tensorflow installation.

Open http://localhost:6006 in a browser. The page looks like this:
[Figure: TensorBoard dashboard showing the scalar curves (loss and accuracy) for this run]
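If you are working in a Jupyter notebook instead of a terminal, TensorBoard can also be loaded inline through its notebook extension. A minimal sketch (run these as notebook cells from the project directory):

# Load the TensorBoard notebook extension, then start it inline
%load_ext tensorboard
%tensorboard --logdir ./tsb_log --port 6006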

