T11: Optimizer Comparison Experiment

Z. Reflections and Supplementary Notes

1. What an optimizer is:

  • An optimizer is an algorithm that, during training, dynamically adjusts the magnitude and direction of the gradient updates so that the model converges to a better solution, or converges faster. (This example mainly uses Adam and SGD.)
model1 = create_model(optimizer=tf.keras.optimizers.Adam())
model2 = create_model(optimizer=tf.keras.optimizers.SGD())

2. Categories of optimizers:

(1) Gradient Descent

  • In a nutshell: "within the range you can see, keep searching for the steepest, fastest path down the mountain"
  • Drawbacks: 1) slow training; 2) prone to getting stuck in local optima
  • Refinements: 1) Batch Gradient Descent (BGD); 2) Stochastic Gradient Descent (SGD); 3) Mini-batch Gradient Descent. The main difference between them is how many samples are used to compute each parameter update.
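
The three variants share the same update rule and differ only in how many samples feed each gradient computation. A minimal NumPy sketch on a hypothetical noiseless linear least-squares problem (the data, learning rate, and epoch count below are illustrative assumptions, not part of the experiment in this post):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # toy design matrix
w_true = np.array([1.0, -2.0, 0.5])     # ground-truth weights
y = X @ w_true                          # noiseless linear targets

def gradient(w, Xb, yb):
    """Gradient of the mean squared error over the batch (Xb, yb)."""
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

def train(batch_size, lr=0.02, epochs=500):
    """batch_size == len(X): BGD; == 1: SGD; anything in between: mini-batch."""
    w = np.zeros(3)
    for _ in range(epochs):
        idx = rng.permutation(len(X))   # reshuffle each epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            w -= lr * gradient(w, X[b], y[b])
    return w

w_bgd  = train(batch_size=len(X))   # batch gradient descent: all samples per update
w_sgd  = train(batch_size=1)        # stochastic gradient descent: one sample per update
w_mini = train(batch_size=16)       # mini-batch gradient descent
```

On this convex toy problem all three recover `w_true`; the trade-off they illustrate is updates-per-pass versus gradient noise, which is exactly what distinguishes BGD, SGD, and mini-batch in practice.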

(2) Momentum

  • Each parameter update partially retains the direction of the previous updates while using the current batch's gradient to fine-tune the final direction; in short, it accelerates the current gradient by accumulating past momentum.
  • Pros: when successive gradients point the same way, learning is accelerated; when they disagree, oscillation is suppressed and local minima can be overshot (faster convergence, less oscillation).
  • Cons: one extra hyperparameter and additional computation.
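
The accumulation idea above reduces to two lines; the 1-D quadratic loss and the hyperparameter values here are toy assumptions for illustration only:

```python
def momentum_step(w, v, grad, lr=0.01, beta=0.9):
    """v keeps an exponentially decaying accumulation of past gradients:
    consistent gradients make it grow (acceleration), while alternating
    signs cancel out (damped oscillation)."""
    v = beta * v - lr * grad
    return w + v, v

# toy 1-D quadratic loss L(w) = w**2, whose gradient is 2*w
w, v = 5.0, 0.0
for _ in range(300):
    w, v = momentum_step(w, v, grad=2 * w)
# w has been driven close to the minimum at 0
```

`beta` is the extra hyperparameter mentioned in the cons: setting it to 0 recovers plain gradient descent.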

(3) Adaptive learning-rate methods

  • These methods give every trainable parameter its own learning rate and adapt those rates automatically over the course of training. If the partial derivative of the loss with respect to a parameter keeps the same sign, that parameter's learning rate should increase; if the sign keeps flipping, the learning rate should decrease.
  • Adam (Adaptive Moment Estimation): combines a first-order moment and a second-order moment.
    Pros: 1) after bias correction, every iteration's effective step size stays within a bounded range, so parameter updates are fairly smooth; 2) it combines Adagrad's strength on sparse gradients with RMSprop's strength on non-stationary objectives; 3) it computes a separate adaptive learning rate for each parameter; 4) it tends to reduce training time and cost.
    Cons: because Adam relies on moving averages of the moments, its updates can fluctuate sharply as the training data shifts; in online settings this can cause instability, and in advertising workloads it often underperforms AdaGrad.
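
The first/second-moment combination with bias correction can be written out in a few lines of NumPy. The toy quadratic below is an illustrative assumption; note the standard Adam default learning rate is 0.001, a larger value is used here only so the toy problem converges quickly:

```python
import numpy as np

def adam_step(w, m, v, grad, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: m is the first moment (momentum), v the second
    moment (per-parameter step scaling); dividing by (1 - b**t) corrects
    the bias from initialising m and v at zero."""
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# toy problem: minimise (w - 3)**2, gradient 2*(w - 3)
w, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, m, v, grad=2 * (w - 3), t=t)
```

Early in training `m_hat / sqrt(v_hat)` is close to the sign of the gradient, which is why the effective step size stays within a bounded range, as noted in the pros above.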

I. Set Up the GPU

import tensorflow as tf
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0],"GPU")

from tensorflow          import keras
import matplotlib.pyplot as plt
import pandas            as pd
import numpy             as np
import warnings,os,PIL,pathlib

warnings.filterwarnings("ignore")             # suppress warning messages
plt.rcParams['font.sans-serif']    = ['SimHei']  # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly

II. Import the Data

1. Load the data

data_dir    = "./11-data"
data_dir    = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Total number of images: 1800

batch_size = 16
img_height = 336
img_width  = 336
"""
For a detailed introduction to image_dataset_from_directory(), see: https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Found 1800 files belonging to 17 classes.
Using 1440 files for training.

"""
For a detailed introduction to image_dataset_from_directory(), see: https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Found 1800 files belonging to 17 classes.
Using 360 files for validation.

class_names = train_ds.class_names
print(class_names)

['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']

2. Inspect the data

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

(16, 336, 336, 3)
(16,)

3. Configure the dataset

AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image,label):
    return (image/255.0,label)

train_ds = (
    train_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # preprocessing can be applied here
#     .batch(batch_size)           # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # preprocessing can be applied here
#     .batch(batch_size)         # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

4. Visualize the data

plt.figure(figsize=(10, 8))  # figure is 10 in wide, 8 in tall
plt.suptitle("Data preview")

for images, labels in train_ds.take(1):
    for i in range(15):
        plt.subplot(4, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)

        # show the image
        plt.imshow(images[i])
        # show the label (labels are already 0-based indices into class_names)
        plt.xlabel(class_names[labels[i]])

plt.show()

[Figure: sample images from the training set with their class labels]

III. Build the Model

from tensorflow.keras.layers import Dropout,Dense,BatchNormalization
from tensorflow.keras.models import Model

def create_model(optimizer='adam'):
    # load the ImageNet-pretrained VGG16 backbone (without the top classifier)
    vgg16_base_model = tf.keras.applications.vgg16.VGG16(weights='imagenet',
                                                                include_top=False,
                                                                input_shape=(img_width, img_height, 3),
                                                                pooling='avg')
    # freeze the convolutional base; only the new classification head is trained
    for layer in vgg16_base_model.layers:
        layer.trainable = False

    X = vgg16_base_model.output
    
    X = Dense(170, activation='relu')(X)
    X = BatchNormalization()(X)
    X = Dropout(0.5)(X)

    output = Dense(len(class_names), activation='softmax')(X)
    vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

    vgg16_model.compile(optimizer=optimizer,
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])
    return vgg16_model

model1 = create_model(optimizer=tf.keras.optimizers.Adam())
model2 = create_model(optimizer=tf.keras.optimizers.SGD())
model2.summary()

Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 10s 0us/step
Model: "model_1"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 input_2 (InputLayer)            [(None, 336, 336, 3)]     0
 block1_conv1 (Conv2D)           (None, 336, 336, 64)      1792
 block1_conv2 (Conv2D)           (None, 336, 336, 64)      36928
 block1_pool (MaxPooling2D)      (None, 168, 168, 64)      0
 block2_conv1 (Conv2D)           (None, 168, 168, 128)     73856
 block2_conv2 (Conv2D)           (None, 168, 168, 128)     147584
 block2_pool (MaxPooling2D)      (None, 84, 84, 128)       0
 block3_conv1 (Conv2D)           (None, 84, 84, 256)       295168
 block3_conv2 (Conv2D)           (None, 84, 84, 256)       590080
 block3_conv3 (Conv2D)           (None, 84, 84, 256)       590080
 block3_pool (MaxPooling2D)      (None, 42, 42, 256)       0
 block4_conv1 (Conv2D)           (None, 42, 42, 512)       1180160
 block4_conv2 (Conv2D)           (None, 42, 42, 512)       2359808
 block4_conv3 (Conv2D)           (None, 42, 42, 512)       2359808
 block4_pool (MaxPooling2D)      (None, 21, 21, 512)       0
 block5_conv1 (Conv2D)           (None, 21, 21, 512)       2359808
 block5_conv2 (Conv2D)           (None, 21, 21, 512)       2359808
 block5_conv3 (Conv2D)           (None, 21, 21, 512)       2359808
 block5_pool (MaxPooling2D)      (None, 10, 10, 512)       0
 global_average_pooling2d_1      (None, 512)               0
 (GlobalAveragePooling2D)
 dense_2 (Dense)                 (None, 170)               87210
 batch_normalization_1           (None, 170)               680
 (BatchNormalization)
 dropout_1 (Dropout)             (None, 170)               0
 dense_3 (Dense)                 (None, 17)                2907
=================================================================
Total params: 14,805,485
Trainable params: 90,457
Non-trainable params: 14,715,028
_________________________________________________________________

IV. Train the Models

NO_EPOCHS = 50

history_model1  = model1.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)
history_model2  = model2.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)

Epoch 1/50
90/90 [==============================] - 275s 3s/step - loss: 2.8501 - accuracy: 0.1611 - val_loss: 2.7374 - val_accuracy: 0.1222
Epoch 2/50
90/90 [==============================] - 264s 3s/step - loss: 2.0722 - accuracy: 0.3410 - val_loss: 2.4951 - val_accuracy: 0.2083
Epoch 3/50
90/90 [==============================] - 254s 3s/step - loss: 1.7670 - accuracy: 0.4319 - val_loss: 2.3171 - val_accuracy: 0.2250
Epoch 4/50
90/90 [==============================] - 259s 3s/step - loss: 1.5552 - accuracy: 0.4993 - val_loss: 2.0565 - val_accuracy: 0.3639
Epoch 5/50
90/90 [==============================] - 258s 3s/step - loss: 1.3696 - accuracy: 0.5722 - val_loss: 1.9346 - val_accuracy: 0.3611
Epoch 6/50
90/90 [==============================] - 258s 3s/step - loss: 1.2338 - accuracy: 0.6215 - val_loss: 1.8083 - val_accuracy: 0.3778
Epoch 7/50
90/90 [==============================] - 260s 3s/step - loss: 1.1421 - accuracy: 0.6389 - val_loss: 1.6021 - val_accuracy: 0.4667
Epoch 8/50
90/90 [==============================] - 260s 3s/step - loss: 1.0259 - accuracy: 0.7000 - val_loss: 1.9437 - val_accuracy: 0.3944
Epoch 9/50
90/90 [==============================] - 269s 3s/step - loss: 0.9760 - accuracy: 0.7014 - val_loss: 1.5703 - val_accuracy: 0.5111
Epoch 10/50
90/90 [==============================] - 269s 3s/step - loss: 0.9094 - accuracy: 0.7181 - val_loss: 1.8719 - val_accuracy: 0.4111
Epoch 11/50
90/90 [==============================] - 266s 3s/step - loss: 0.8194 - accuracy: 0.7632 - val_loss: 1.7336 - val_accuracy: 0.4250
Epoch 12/50
90/90 [==============================] - 265s 3s/step - loss: 0.7645 - accuracy: 0.7639 - val_loss: 2.1357 - val_accuracy: 0.4111
Epoch 13/50
90/90 [==============================] - 255s 3s/step - loss: 0.6993 - accuracy: 0.7944 - val_loss: 1.7183 - val_accuracy: 0.5361
Epoch 14/50
90/90 [==============================] - 254s 3s/step - loss: 0.7018 - accuracy: 0.7806 - val_loss: 1.4140 - val_accuracy: 0.5583
Epoch 15/50
90/90 [==============================] - 253s 3s/step - loss: 0.6045 - accuracy: 0.8208 - val_loss: 1.7561 - val_accuracy: 0.5250
Epoch 16/50
90/90 [==============================] - 253s 3s/step - loss: 0.5811 - accuracy: 0.8201 - val_loss: 2.2377 - val_accuracy: 0.4222
Epoch 17/50
90/90 [==============================] - 252s 3s/step - loss: 0.5509 - accuracy: 0.8285 - val_loss: 1.9067 - val_accuracy: 0.4694
Epoch 18/50
90/90 [==============================] - 251s 3s/step - loss: 0.5416 - accuracy: 0.8333 - val_loss: 1.6357 - val_accuracy: 0.5528
Epoch 19/50
90/90 [==============================] - 252s 3s/step - loss: 0.4856 - accuracy: 0.8521 - val_loss: 1.7513 - val_accuracy: 0.5139
Epoch 20/50
90/90 [==============================] - 253s 3s/step - loss: 0.4544 - accuracy: 0.8674 - val_loss: 1.7373 - val_accuracy: 0.5194
Epoch 21/50
90/90 [==============================] - 253s 3s/step - loss: 0.4087 - accuracy: 0.8868 - val_loss: 1.6286 - val_accuracy: 0.5361
Epoch 22/50
90/90 [==============================] - 252s 3s/step - loss: 0.4061 - accuracy: 0.8764 - val_loss: 2.2422 - val_accuracy: 0.4667
Epoch 23/50
90/90 [==============================] - 252s 3s/step - loss: 0.3680 - accuracy: 0.8958 - val_loss: 1.5844 - val_accuracy: 0.5583
Epoch 24/50
90/90 [==============================] - 252s 3s/step - loss: 0.3450 - accuracy: 0.9021 - val_loss: 2.1530 - val_accuracy: 0.5194
Epoch 25/50
90/90 [==============================] - 251s 3s/step - loss: 0.3589 - accuracy: 0.8896 - val_loss: 1.8010 - val_accuracy: 0.5250
Epoch 26/50
90/90 [==============================] - 251s 3s/step - loss: 0.3823 - accuracy: 0.8674 - val_loss: 1.7018 - val_accuracy: 0.5417
Epoch 27/50
90/90 [==============================] - 252s 3s/step - loss: 0.3261 - accuracy: 0.9007 - val_loss: 2.1586 - val_accuracy: 0.4917
Epoch 28/50
90/90 [==============================] - 252s 3s/step - loss: 0.3216 - accuracy: 0.8993 - val_loss: 1.9169 - val_accuracy: 0.5222
Epoch 29/50
90/90 [==============================] - 255s 3s/step - loss: 0.2980 - accuracy: 0.9111 - val_loss: 1.7166 - val_accuracy: 0.5528
Epoch 30/50
90/90 [==============================] - 255s 3s/step - loss: 0.3007 - accuracy: 0.9028 - val_loss: 2.2269 - val_accuracy: 0.4806
Epoch 31/50
90/90 [==============================] - 255s 3s/step - loss: 0.3006 - accuracy: 0.9014 - val_loss: 1.9205 - val_accuracy: 0.5667
Epoch 32/50
90/90 [==============================] - 254s 3s/step - loss: 0.2695 - accuracy: 0.9111 - val_loss: 2.3091 - val_accuracy: 0.5056
Epoch 33/50
90/90 [==============================] - 252s 3s/step - loss: 0.3067 - accuracy: 0.9000 - val_loss: 2.1476 - val_accuracy: 0.5139
Epoch 34/50
90/90 [==============================] - 251s 3s/step - loss: 0.2631 - accuracy: 0.9167 - val_loss: 2.8037 - val_accuracy: 0.4833
Epoch 35/50
90/90 [==============================] - 265s 3s/step - loss: 0.2621 - accuracy: 0.9153 - val_loss: 2.3359 - val_accuracy: 0.4972
Epoch 36/50
90/90 [==============================] - 255s 3s/step - loss: 0.2678 - accuracy: 0.9076 - val_loss: 2.0694 - val_accuracy: 0.5111
Epoch 37/50
90/90 [==============================] - 254s 3s/step - loss: 0.2553 - accuracy: 0.9208 - val_loss: 2.0118 - val_accuracy: 0.5417
Epoch 38/50
90/90 [==============================] - 256s 3s/step - loss: 0.2414 - accuracy: 0.9271 - val_loss: 2.2446 - val_accuracy: 0.5306
Epoch 39/50
90/90 [==============================] - 257s 3s/step - loss: 0.2203 - accuracy: 0.9299 - val_loss: 1.8730 - val_accuracy: 0.5694
Epoch 40/50
90/90 [==============================] - 254s 3s/step - loss: 0.2233 - accuracy: 0.9333 - val_loss: 2.0222 - val_accuracy: 0.5583
Epoch 41/50
90/90 [==============================] - 253s 3s/step - loss: 0.2106 - accuracy: 0.9299 - val_loss: 2.2241 - val_accuracy: 0.5639
Epoch 42/50
90/90 [==============================] - 253s 3s/step - loss: 0.2319 - accuracy: 0.9201 - val_loss: 1.9782 - val_accuracy: 0.5667
Epoch 43/50
90/90 [==============================] - 253s 3s/step - loss: 0.2033 - accuracy: 0.9340 - val_loss: 2.7148 - val_accuracy: 0.4722
Epoch 44/50
90/90 [==============================] - 253s 3s/step - loss: 0.2299 - accuracy: 0.9243 - val_loss: 2.3956 - val_accuracy: 0.5361
Epoch 45/50
90/90 [==============================] - 253s 3s/step - loss: 0.1793 - accuracy: 0.9444 - val_loss: 2.0246 - val_accuracy: 0.5917
Epoch 46/50
90/90 [==============================] - 254s 3s/step - loss: 0.2161 - accuracy: 0.9208 - val_loss: 1.9537 - val_accuracy: 0.5778
Epoch 47/50
90/90 [==============================] - 265s 3s/step - loss: 0.2054 - accuracy: 0.9361 - val_loss: 2.0954 - val_accuracy: 0.5639
Epoch 48/50
90/90 [==============================] - 256s 3s/step - loss: 0.2000 - accuracy: 0.9306 - val_loss: 2.2849 - val_accuracy: 0.5389
Epoch 49/50
90/90 [==============================] - 254s 3s/step - loss: 0.1778 - accuracy: 0.9493 - val_loss: 2.4314 - val_accuracy: 0.5306
Epoch 50/50
90/90 [==============================] - 257s 3s/step - loss: 0.1753 - accuracy: 0.9451 - val_loss: 2.8682 - val_accuracy: 0.5167
Epoch 1/50
90/90 [==============================] - 266s 3s/step - loss: 2.7155 - accuracy: 0.1667 - val_loss: 2.7015 - val_accuracy: 0.1750
Epoch 2/50
90/90 [==============================] - 267s 3s/step - loss: 2.3487 - accuracy: 0.2479 - val_loss: 2.5150 - val_accuracy: 0.2722
Epoch 3/50
90/90 [==============================] - 265s 3s/step - loss: 2.1698 - accuracy: 0.3021 - val_loss: 2.3335 - val_accuracy: 0.2972
Epoch 4/50
90/90 [==============================] - 260s 3s/step - loss: 1.9921 - accuracy: 0.3583 - val_loss: 2.1514 - val_accuracy: 0.3694
Epoch 5/50
90/90 [==============================] - 259s 3s/step - loss: 1.8648 - accuracy: 0.3972 - val_loss: 1.9612 - val_accuracy: 0.3722
Epoch 6/50
90/90 [==============================] - 261s 3s/step - loss: 1.7817 - accuracy: 0.4243 - val_loss: 1.7979 - val_accuracy: 0.4222
Epoch 7/50
90/90 [==============================] - 273s 3s/step - loss: 1.6949 - accuracy: 0.4632 - val_loss: 1.7210 - val_accuracy: 0.4556
Epoch 8/50
90/90 [==============================] - 275s 3s/step - loss: 1.6086 - accuracy: 0.4833 - val_loss: 1.6379 - val_accuracy: 0.4778
Epoch 9/50
90/90 [==============================] - 266s 3s/step - loss: 1.5665 - accuracy: 0.5035 - val_loss: 1.6767 - val_accuracy: 0.4528
Epoch 10/50
90/90 [==============================] - 261s 3s/step - loss: 1.5155 - accuracy: 0.5097 - val_loss: 1.5863 - val_accuracy: 0.4750
Epoch 11/50
90/90 [==============================] - 262s 3s/step - loss: 1.4460 - accuracy: 0.5222 - val_loss: 1.7235 - val_accuracy: 0.4000
Epoch 12/50
90/90 [==============================] - 262s 3s/step - loss: 1.3979 - accuracy: 0.5528 - val_loss: 1.4984 - val_accuracy: 0.5000
Epoch 13/50
90/90 [==============================] - 260s 3s/step - loss: 1.3489 - accuracy: 0.5840 - val_loss: 1.5223 - val_accuracy: 0.4889
Epoch 14/50
90/90 [==============================] - 261s 3s/step - loss: 1.3260 - accuracy: 0.5757 - val_loss: 1.6443 - val_accuracy: 0.4583
Epoch 15/50
90/90 [==============================] - 272s 3s/step - loss: 1.3101 - accuracy: 0.5833 - val_loss: 1.6651 - val_accuracy: 0.4333
Epoch 16/50
90/90 [==============================] - 267s 3s/step - loss: 1.2415 - accuracy: 0.6111 - val_loss: 1.6294 - val_accuracy: 0.4722
Epoch 17/50
90/90 [==============================] - 269s 3s/step - loss: 1.2381 - accuracy: 0.6153 - val_loss: 1.5067 - val_accuracy: 0.4972
Epoch 18/50
90/90 [==============================] - 271s 3s/step - loss: 1.1773 - accuracy: 0.6396 - val_loss: 1.5685 - val_accuracy: 0.4722
Epoch 19/50
90/90 [==============================] - 257s 3s/step - loss: 1.1789 - accuracy: 0.6208 - val_loss: 1.5626 - val_accuracy: 0.4833
Epoch 20/50
90/90 [==============================] - 259s 3s/step - loss: 1.1308 - accuracy: 0.6500 - val_loss: 1.5244 - val_accuracy: 0.4833
Epoch 21/50
90/90 [==============================] - 266s 3s/step - loss: 1.1133 - accuracy: 0.6451 - val_loss: 1.4237 - val_accuracy: 0.5222
Epoch 22/50
90/90 [==============================] - 259s 3s/step - loss: 1.0870 - accuracy: 0.6597 - val_loss: 1.7489 - val_accuracy: 0.4306
Epoch 23/50
90/90 [==============================] - 259s 3s/step - loss: 1.0811 - accuracy: 0.6667 - val_loss: 2.7690 - val_accuracy: 0.3194
Epoch 24/50
90/90 [==============================] - 260s 3s/step - loss: 1.0538 - accuracy: 0.6556 - val_loss: 1.4627 - val_accuracy: 0.5333
Epoch 25/50
90/90 [==============================] - 263s 3s/step - loss: 0.9622 - accuracy: 0.6972 - val_loss: 1.4054 - val_accuracy: 0.5333
Epoch 26/50
90/90 [==============================] - 264s 3s/step - loss: 0.9830 - accuracy: 0.6917 - val_loss: 1.3378 - val_accuracy: 0.5639
Epoch 27/50
90/90 [==============================] - 263s 3s/step - loss: 0.9693 - accuracy: 0.6944 - val_loss: 1.5944 - val_accuracy: 0.5028
Epoch 28/50
90/90 [==============================] - 264s 3s/step - loss: 0.9289 - accuracy: 0.7188 - val_loss: 1.3491 - val_accuracy: 0.5611
Epoch 29/50
90/90 [==============================] - 256s 3s/step - loss: 0.9577 - accuracy: 0.6924 - val_loss: 1.4391 - val_accuracy: 0.5333
Epoch 30/50
90/90 [==============================] - 256s 3s/step - loss: 0.8907 - accuracy: 0.7174 - val_loss: 1.6094 - val_accuracy: 0.5111
Epoch 31/50
90/90 [==============================] - 255s 3s/step - loss: 0.8839 - accuracy: 0.7146 - val_loss: 1.3990 - val_accuracy: 0.5361
Epoch 32/50
90/90 [==============================] - 258s 3s/step - loss: 0.8774 - accuracy: 0.7299 - val_loss: 1.5470 - val_accuracy: 0.5111
Epoch 33/50
90/90 [==============================] - 257s 3s/step - loss: 0.8331 - accuracy: 0.7424 - val_loss: 1.5455 - val_accuracy: 0.5083
Epoch 34/50
90/90 [==============================] - 257s 3s/step - loss: 0.8261 - accuracy: 0.7389 - val_loss: 1.4112 - val_accuracy: 0.5361
Epoch 35/50
90/90 [==============================] - 259s 3s/step - loss: 0.8316 - accuracy: 0.7354 - val_loss: 1.4910 - val_accuracy: 0.5194
Epoch 36/50
90/90 [==============================] - 258s 3s/step - loss: 0.7872 - accuracy: 0.7507 - val_loss: 1.4830 - val_accuracy: 0.5389
Epoch 37/50
90/90 [==============================] - 256s 3s/step - loss: 0.7945 - accuracy: 0.7437 - val_loss: 1.4616 - val_accuracy: 0.5389
Epoch 38/50
90/90 [==============================] - 256s 3s/step - loss: 0.7855 - accuracy: 0.7507 - val_loss: 1.5593 - val_accuracy: 0.5000
Epoch 39/50
90/90 [==============================] - 256s 3s/step - loss: 0.7417 - accuracy: 0.7653 - val_loss: 1.4031 - val_accuracy: 0.5417
Epoch 40/50
90/90 [==============================] - 257s 3s/step - loss: 0.7303 - accuracy: 0.7778 - val_loss: 1.2697 - val_accuracy: 0.5861
Epoch 41/50
90/90 [==============================] - 257s 3s/step - loss: 0.7173 - accuracy: 0.7667 - val_loss: 1.5390 - val_accuracy: 0.5667
Epoch 42/50
90/90 [==============================] - 259s 3s/step - loss: 0.7022 - accuracy: 0.7757 - val_loss: 1.6119 - val_accuracy: 0.5056
Epoch 43/50
90/90 [==============================] - 258s 3s/step - loss: 0.6937 - accuracy: 0.7764 - val_loss: 1.4658 - val_accuracy: 0.5278
Epoch 44/50
90/90 [==============================] - 258s 3s/step - loss: 0.6612 - accuracy: 0.7917 - val_loss: 1.5065 - val_accuracy: 0.5194
Epoch 45/50
90/90 [==============================] - 257s 3s/step - loss: 0.6519 - accuracy: 0.7972 - val_loss: 1.5183 - val_accuracy: 0.5861
Epoch 46/50
90/90 [==============================] - 256s 3s/step - loss: 0.6622 - accuracy: 0.7917 - val_loss: 1.5164 - val_accuracy: 0.5361
Epoch 47/50
90/90 [==============================] - 256s 3s/step - loss: 0.6243 - accuracy: 0.8028 - val_loss: 1.5838 - val_accuracy: 0.5278
Epoch 48/50
90/90 [==============================] - 255s 3s/step - loss: 0.6307 - accuracy: 0.7986 - val_loss: 1.3909 - val_accuracy: 0.5750
Epoch 49/50
90/90 [==============================] - 256s 3s/step - loss: 0.6185 - accuracy: 0.8097 - val_loss: 1.3855 - val_accuracy: 0.5639
Epoch 50/50
90/90 [==============================] - 256s 3s/step - loss: 0.6042 - accuracy: 0.8090 - val_loss: 1.5076 - val_accuracy: 0.5944

V. Evaluate the Models

1. Accuracy and Loss curves

from matplotlib.ticker import MultipleLocator
plt.rcParams['savefig.dpi'] = 300  # saved-figure resolution (DPI)
plt.rcParams['figure.dpi']  = 300  # display resolution (DPI)

acc1     = history_model1.history['accuracy']
acc2     = history_model2.history['accuracy']
val_acc1 = history_model1.history['val_accuracy']
val_acc2 = history_model2.history['val_accuracy']

loss1     = history_model1.history['loss']
loss2     = history_model2.history['loss']
val_loss1 = history_model1.history['val_loss']
val_loss2 = history_model2.history['val_loss']

epochs_range = range(len(acc1))

plt.figure(figsize=(16, 4))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, acc1, label='Training Accuracy-Adam')
plt.plot(epochs_range, acc2, label='Training Accuracy-SGD')
plt.plot(epochs_range, val_acc1, label='Validation Accuracy-Adam')
plt.plot(epochs_range, val_acc2, label='Validation Accuracy-SGD')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
# set the x-axis major tick interval to 1 (one tick per epoch)
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss1, label='Training Loss-Adam')
plt.plot(epochs_range, loss2, label='Training Loss-SGD')
plt.plot(epochs_range, val_loss1, label='Validation Loss-Adam')
plt.plot(epochs_range, val_loss2, label='Validation Loss-SGD')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
   
# set the x-axis major tick interval to 1 (one tick per epoch)
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.show()

[Figure: training/validation accuracy (left) and loss (right) for Adam vs SGD]

2. Model evaluation

def test_accuracy_report(model):
    score = model.evaluate(val_ds, verbose=0)
    print('Loss function: %s, accuracy:' % score[0], score[1])
    
test_accuracy_report(model2)

Loss function: 1.5076216459274292, accuracy: 0.5944444537162781
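
The final-epoch score above can be misleading when validation accuracy fluctuates as much as it does in these runs, so it may be worth also reporting each run's best epoch. A small helper sketch (the `best_epoch` name and the toy numbers are illustrative, not from the runs above):

```python
def best_epoch(history_dict):
    """Return (best val_accuracy, 1-based epoch) from a Keras-style history dict."""
    val_acc = history_dict['val_accuracy']
    i = max(range(len(val_acc)), key=lambda k: val_acc[k])
    return val_acc[i], i + 1

# with the real runs you would pass history_model1.history / history_model2.history
toy = {'val_accuracy': [0.12, 0.35, 0.31, 0.40, 0.38]}
acc, epoch = best_epoch(toy)  # -> (0.40, 4)
```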
