Deep Learning Notes 16: Hollywood Star Recognition with TensorFlow

I. My Environment

1. Language: Python 3.9

2. IDE: PyCharm

3. Deep learning framework: TensorFlow 2.10.0

II. GPU Setup

If you are running on CPU only, this step can be skipped.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # if there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")
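As a quick sanity check (a minimal sketch, assuming a single-GPU machine), you can print which devices TensorFlow will actually use after the calls above:

# Should list exactly one GPU after set_visible_devices() above.
print(tf.config.get_visible_devices("GPU"))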

III. Importing the Data

import pathlib

data_dir = "./data/"
data_dir = pathlib.Path(data_dir)

image_count = len(list(data_dir.glob('*/*/*.jpg')))

print("Total number of images:", image_count)
# Total number of images: 1800
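To double-check the class balance before training, here is a minimal sketch (assuming the ./data/train/&lt;class name&gt;/*.jpg and ./data/test/&lt;class name&gt;/*.jpg layout that image_dataset_from_directory expects below) that counts the images per class:

import pathlib

# Assumption: images live under ./data/train/<class>/ and ./data/test/<class>/
for split_dir in (pathlib.Path("./data/train"), pathlib.Path("./data/test")):
    if split_dir.exists():
        for class_dir in sorted(d for d in split_dir.iterdir() if d.is_dir()):
            print(split_dir.name, class_dir.name, len(list(class_dir.glob("*.jpg"))))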

IV. Data Preprocessing

batch_size = 32
img_height = 224
img_width = 224

"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "./data/train/",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

"""
关于image_dataset_from_directory()的详细介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/117018789
"""
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "./data/test/",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)

Output:

['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']

V. Visualizing the Images

import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])  # labels are integer class indices here

        plt.axis("off")
plt.show()

Output: a 5×10 grid of 20 sample training images, each titled with its class name.

Check the data again:

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

Output:

(32, 224, 224, 3)
(32,)

VI. Configuring the Dataset

  • shuffle(): shuffles the data; for a detailed introduction see https://zhuanlan.zhihu.com/p/42417456
  • prefetch(): prefetches data to speed up execution
  • cache(): caches the dataset in memory to speed up execution

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

VII. Building the CNN Model

The input to a convolutional neural network (CNN) is a tensor of shape (image_height, image_width, color_channels), carrying the image height, width and color information; the batch size is not included. color_channels corresponds to the three (R, G, B) color channels. In this example, the CNN input shape is (224, 224, 3), and we pass it to the first layer through the input_shape argument.
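If the output shapes in the summary further down look opaque, they follow directly from the size formula for 'valid' convolutions and pooling. A small sketch of the arithmetic for this network (assuming stride-1 convolutions and 2×2 pooling, as in the code):

def conv_out(size, kernel=3, stride=1):
    # 'valid' padding: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

def pool_out(size, pool=2):
    return size // pool

s = 224
s = pool_out(conv_out(s))   # 224 -> 222 -> 111  (Conv2D 16 + pooling)
s = pool_out(conv_out(s))   # 111 -> 109 -> 54   (Conv2D 32 + pooling)
s = pool_out(conv_out(s))   # 54  -> 52  -> 26   (Conv2D 64 + pooling)
s = conv_out(s)             # 26  -> 24          (Conv2D 128, no pooling)
print(s * s * 128)          # 73728, which matches the Flatten layer in the summary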

"""
关于卷积核的计算不懂的可以参考文章:https://blog.csdn.net/qq_38251616/article/details/114278995

layers.Dropout(0.4) 作用是防止过拟合,提高模型的泛化能力。
关于Dropout层的更多介绍可以参考文章:https://mtyjkh.blog.csdn.net/article/details/115826689
"""

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),

    layers.Conv2D(16, (3, 3), activation='relu'),   # convolution layer 1, 3x3 kernel
    layers.AveragePooling2D((2, 2)),                # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),   # convolution layer 2, 3x3 kernel
    layers.AveragePooling2D((2, 2)),                # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.5),
    layers.Conv2D(64, (3, 3), activation='relu'),   # convolution layer 3, 3x3 kernel
    layers.AveragePooling2D((2, 2)),                # pooling layer 3, 2x2 downsampling
    layers.Dropout(0.5),
    layers.Conv2D(128, (3, 3), activation='relu'),  # convolution layer 4, 3x3 kernel
    layers.Dropout(0.5),

    layers.Flatten(),                      # Flatten layer, bridges the conv layers and the dense layers
    layers.Dense(128, activation='relu'),  # fully connected layer, further feature extraction
    layers.Dense(len(class_names))         # output layer, one logit per class
])

model.summary()  # print the network architecture

Output:

_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 rescaling (Rescaling)       (None, 224, 224, 3)       0

 conv2d (Conv2D)             (None, 222, 222, 16)      448

 average_pooling2d (AverageP  (None, 111, 111, 16)     0
 ooling2D)

 conv2d_1 (Conv2D)           (None, 109, 109, 32)      4640

 average_pooling2d_1 (Averag  (None, 54, 54, 32)       0
 ePooling2D)

 dropout (Dropout)           (None, 54, 54, 32)        0

 conv2d_2 (Conv2D)           (None, 52, 52, 64)        18496

 average_pooling2d_2 (Averag  (None, 26, 26, 64)       0
 ePooling2D)

 dropout_1 (Dropout)         (None, 26, 26, 64)        0

 conv2d_3 (Conv2D)           (None, 24, 24, 128)       73856

 dropout_2 (Dropout)         (None, 24, 24, 128)       0

 flatten (Flatten)           (None, 73728)             0

 dense (Dense)               (None, 128)               9437312

 dense_1 (Dense)             (None, 17)                2193

=================================================================
Total params: 9,536,945
Trainable params: 9,536,945
Non-trainable params: 0
_________________________________________________________________

VIII. Compiling the Model

Before the model is ready for training, a few more settings need to be added in the compile step:

  • Loss function (loss): measures how accurate the model is during training.
  • Optimizer (optimizer): determines how the model is updated based on the data it sees and its own loss function.
  • Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are classified correctly.

# Set the initial learning rate
initial_learning_rate = 1e-4

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate,
        decay_steps=60,      # note: this counts steps, not epochs
        decay_rate=0.96,     # after each decay the lr becomes decay_rate * lr
        staircase=True)

# Feed the exponentially decaying learning rate into the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

model.compile(optimizer=optimizer,
              # labels are integer-encoded (labels_batch has shape (32,)), so use the sparse loss
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
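As a sanity check on the schedule: with staircase=True, the learning rate at a given training step is initial_learning_rate * decay_rate ** (step // decay_steps). A minimal sketch that evaluates the lr_schedule defined above at a few steps:

# The schedule object is callable; it returns the learning rate used at that step.
for step in [0, 60, 120, 600]:
    print(step, float(lr_schedule(step)))
# 0   -> 1.0e-4
# 60  -> 1e-4 * 0.96     = 9.6e-5
# 120 -> 1e-4 * 0.96**2  ≈ 9.216e-5
# 600 -> 1e-4 * 0.96**10 ≈ 6.65e-5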

Early stopping and saving the best model weights

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

epochs = 100

# Save the best model weights
checkpointer = ModelCheckpoint('best_model.h5',
                                monitor='val_accuracy',
                                verbose=1,
                                save_best_only=True,
                                save_weights_only=True)

# Set up early stopping
earlystopper = EarlyStopping(monitor='val_accuracy',
                             min_delta=0.001,
                             patience=20,
                             verbose=1)
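Note that with this setup, when early stopping fires the in-memory model keeps the weights of the last epoch; the best weights only exist on disk in best_model.h5 (which is why they are reloaded in section XI). If you prefer the model object itself to roll back automatically, EarlyStopping also accepts restore_best_weights. This is an alternative sketch, not what produced the training log below:

earlystopper = EarlyStopping(monitor='val_accuracy',
                             min_delta=0.001,
                             patience=20,
                             restore_best_weights=True,  # roll the model back to its best epoch
                             verbose=1)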

IX. Training the Model

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer, earlystopper])

Output:

Epoch 1/100
50/51 [============================>.] - ETA: 0s - loss: 2.8207 - accuracy: 0.1026
Epoch 1: val_accuracy improved from -inf to 0.13889, saving model to best_model.h5
51/51 [==============================] - 6s 43ms/step - loss: 2.8206 - accuracy: 0.1019 - val_loss: 2.7811 - val_accuracy: 0.1389
Epoch 2/100
51/51 [==============================] - ETA: 0s - loss: 2.7668 - accuracy: 0.1228
Epoch 2: val_accuracy improved from 0.13889 to 0.14444, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 2.7668 - accuracy: 0.1228 - val_loss: 2.7313 - val_accuracy: 0.1444
Epoch 3/100
49/51 [===========================>..] - ETA: 0s - loss: 2.7116 - accuracy: 0.1473
Epoch 3: val_accuracy improved from 0.14444 to 0.18889, saving model to best_model.h5
51/51 [==============================] - 2s 32ms/step - loss: 2.7088 - accuracy: 0.1500 - val_loss: 2.6768 - val_accuracy: 0.1889
Epoch 4/100
51/51 [==============================] - ETA: 0s - loss: 2.6256 - accuracy: 0.1698
Epoch 4: val_accuracy did not improve from 0.18889
51/51 [==============================] - 2s 31ms/step - loss: 2.6256 - accuracy: 0.1698 - val_loss: 2.6234 - val_accuracy: 0.1500
Epoch 5/100
51/51 [==============================] - ETA: 0s - loss: 2.5384 - accuracy: 0.1883
Epoch 5: val_accuracy did not improve from 0.18889
51/51 [==============================] - 2s 31ms/step - loss: 2.5384 - accuracy: 0.1883 - val_loss: 2.6036 - val_accuracy: 0.1667
Epoch 6/100
51/51 [==============================] - ETA: 0s - loss: 2.4647 - accuracy: 0.1981
Epoch 6: val_accuracy did not improve from 0.18889
51/51 [==============================] - 2s 32ms/step - loss: 2.4647 - accuracy: 0.1981 - val_loss: 2.5351 - val_accuracy: 0.1722
Epoch 7/100
51/51 [==============================] - ETA: 0s - loss: 2.3598 - accuracy: 0.2204
Epoch 7: val_accuracy improved from 0.18889 to 0.19444, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 2.3598 - accuracy: 0.2204 - val_loss: 2.5084 - val_accuracy: 0.1944
Epoch 8/100
51/51 [==============================] - ETA: 0s - loss: 2.2737 - accuracy: 0.2667
Epoch 8: val_accuracy improved from 0.19444 to 0.20556, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 2.2737 - accuracy: 0.2667 - val_loss: 2.5591 - val_accuracy: 0.2056
Epoch 9/100
51/51 [==============================] - ETA: 0s - loss: 2.2180 - accuracy: 0.2698
Epoch 9: val_accuracy improved from 0.20556 to 0.21667, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 2.2180 - accuracy: 0.2698 - val_loss: 2.4468 - val_accuracy: 0.2167
Epoch 10/100
51/51 [==============================] - ETA: 0s - loss: 2.1410 - accuracy: 0.3111
Epoch 10: val_accuracy improved from 0.21667 to 0.23333, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 2.1410 - accuracy: 0.3111 - val_loss: 2.5352 - val_accuracy: 0.2333
Epoch 11/100
51/51 [==============================] - ETA: 0s - loss: 2.0839 - accuracy: 0.3327
Epoch 11: val_accuracy did not improve from 0.23333
51/51 [==============================] - 2s 32ms/step - loss: 2.0839 - accuracy: 0.3327 - val_loss: 2.3503 - val_accuracy: 0.1889
Epoch 12/100
51/51 [==============================] - ETA: 0s - loss: 2.0143 - accuracy: 0.3568
Epoch 12: val_accuracy did not improve from 0.23333
51/51 [==============================] - 2s 32ms/step - loss: 2.0143 - accuracy: 0.3568 - val_loss: 2.3954 - val_accuracy: 0.2167
Epoch 13/100
49/51 [===========================>..] - ETA: 0s - loss: 1.9587 - accuracy: 0.3654
Epoch 13: val_accuracy did not improve from 0.23333
51/51 [==============================] - 2s 32ms/step - loss: 1.9502 - accuracy: 0.3691 - val_loss: 2.4125 - val_accuracy: 0.2222
Epoch 14/100
51/51 [==============================] - ETA: 0s - loss: 1.8888 - accuracy: 0.4000
Epoch 14: val_accuracy did not improve from 0.23333
51/51 [==============================] - 2s 32ms/step - loss: 1.8888 - accuracy: 0.4000 - val_loss: 2.3640 - val_accuracy: 0.2278
Epoch 15/100
51/51 [==============================] - ETA: 0s - loss: 1.8186 - accuracy: 0.4191
Epoch 15: val_accuracy did not improve from 0.23333
51/51 [==============================] - 2s 32ms/step - loss: 1.8186 - accuracy: 0.4191 - val_loss: 2.3876 - val_accuracy: 0.2222
Epoch 16/100
51/51 [==============================] - ETA: 0s - loss: 1.7502 - accuracy: 0.4370
Epoch 16: val_accuracy improved from 0.23333 to 0.26667, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 1.7502 - accuracy: 0.4370 - val_loss: 2.3358 - val_accuracy: 0.2667
Epoch 17/100
51/51 [==============================] - ETA: 0s - loss: 1.6739 - accuracy: 0.4617
Epoch 17: val_accuracy did not improve from 0.26667
51/51 [==============================] - 2s 32ms/step - loss: 1.6739 - accuracy: 0.4617 - val_loss: 2.3305 - val_accuracy: 0.2444
Epoch 18/100
51/51 [==============================] - ETA: 0s - loss: 1.6092 - accuracy: 0.4951
Epoch 18: val_accuracy improved from 0.26667 to 0.27778, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 1.6092 - accuracy: 0.4951 - val_loss: 2.3676 - val_accuracy: 0.2778
Epoch 19/100
51/51 [==============================] - ETA: 0s - loss: 1.5549 - accuracy: 0.5080
Epoch 19: val_accuracy did not improve from 0.27778
51/51 [==============================] - 2s 32ms/step - loss: 1.5549 - accuracy: 0.5080 - val_loss: 2.3873 - val_accuracy: 0.2611
Epoch 20/100
51/51 [==============================] - ETA: 0s - loss: 1.5150 - accuracy: 0.5222
Epoch 20: val_accuracy improved from 0.27778 to 0.31111, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 1.5150 - accuracy: 0.5222 - val_loss: 2.3009 - val_accuracy: 0.3111
Epoch 21/100
51/51 [==============================] - ETA: 0s - loss: 1.4628 - accuracy: 0.5420
Epoch 21: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.4628 - accuracy: 0.5420 - val_loss: 2.4080 - val_accuracy: 0.2611
Epoch 22/100
51/51 [==============================] - ETA: 0s - loss: 1.3823 - accuracy: 0.5648
Epoch 22: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.3823 - accuracy: 0.5648 - val_loss: 2.3269 - val_accuracy: 0.3111
Epoch 23/100
51/51 [==============================] - ETA: 0s - loss: 1.3524 - accuracy: 0.5642
Epoch 23: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.3524 - accuracy: 0.5642 - val_loss: 2.3547 - val_accuracy: 0.3056
Epoch 24/100
51/51 [==============================] - ETA: 0s - loss: 1.2668 - accuracy: 0.6012
Epoch 24: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.2668 - accuracy: 0.6012 - val_loss: 2.3703 - val_accuracy: 0.3000
Epoch 25/100
51/51 [==============================] - ETA: 0s - loss: 1.2246 - accuracy: 0.6093
Epoch 25: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.2246 - accuracy: 0.6093 - val_loss: 2.4212 - val_accuracy: 0.2778
Epoch 26/100
51/51 [==============================] - ETA: 0s - loss: 1.1880 - accuracy: 0.6204
Epoch 26: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.1880 - accuracy: 0.6204 - val_loss: 2.4291 - val_accuracy: 0.3111
Epoch 27/100
51/51 [==============================] - ETA: 0s - loss: 1.1134 - accuracy: 0.6395
Epoch 27: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.1134 - accuracy: 0.6395 - val_loss: 2.4733 - val_accuracy: 0.2889
Epoch 28/100
51/51 [==============================] - ETA: 0s - loss: 1.0572 - accuracy: 0.6593
Epoch 28: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.0572 - accuracy: 0.6593 - val_loss: 2.4906 - val_accuracy: 0.3111
Epoch 29/100
51/51 [==============================] - ETA: 0s - loss: 1.0306 - accuracy: 0.6722
Epoch 29: val_accuracy did not improve from 0.31111
51/51 [==============================] - 2s 32ms/step - loss: 1.0306 - accuracy: 0.6722 - val_loss: 2.4765 - val_accuracy: 0.3111
Epoch 30/100
51/51 [==============================] - ETA: 0s - loss: 0.9729 - accuracy: 0.6988
Epoch 30: val_accuracy improved from 0.31111 to 0.32222, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 0.9729 - accuracy: 0.6988 - val_loss: 2.5010 - val_accuracy: 0.3222
Epoch 31/100
51/51 [==============================] - ETA: 0s - loss: 0.9044 - accuracy: 0.7167
Epoch 31: val_accuracy did not improve from 0.32222
51/51 [==============================] - 2s 32ms/step - loss: 0.9044 - accuracy: 0.7167 - val_loss: 2.5270 - val_accuracy: 0.3111
Epoch 32/100
51/51 [==============================] - ETA: 0s - loss: 0.8984 - accuracy: 0.7241
Epoch 32: val_accuracy did not improve from 0.32222
51/51 [==============================] - 2s 32ms/step - loss: 0.8984 - accuracy: 0.7241 - val_loss: 2.5631 - val_accuracy: 0.3222
Epoch 33/100
51/51 [==============================] - ETA: 0s - loss: 0.8361 - accuracy: 0.7340
Epoch 33: val_accuracy improved from 0.32222 to 0.33889, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 0.8361 - accuracy: 0.7340 - val_loss: 2.6231 - val_accuracy: 0.3389
Epoch 34/100
51/51 [==============================] - ETA: 0s - loss: 0.7865 - accuracy: 0.7673
Epoch 34: val_accuracy improved from 0.33889 to 0.34444, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 0.7865 - accuracy: 0.7673 - val_loss: 2.6041 - val_accuracy: 0.3444
Epoch 35/100
51/51 [==============================] - ETA: 0s - loss: 0.7629 - accuracy: 0.7488
Epoch 35: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.7629 - accuracy: 0.7488 - val_loss: 2.6493 - val_accuracy: 0.3222
Epoch 36/100
51/51 [==============================] - ETA: 0s - loss: 0.7633 - accuracy: 0.7451
Epoch 36: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.7633 - accuracy: 0.7451 - val_loss: 2.7031 - val_accuracy: 0.3222
Epoch 37/100
51/51 [==============================] - ETA: 0s - loss: 0.7143 - accuracy: 0.7691
Epoch 37: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.7143 - accuracy: 0.7691 - val_loss: 2.6244 - val_accuracy: 0.3111
Epoch 38/100
51/51 [==============================] - ETA: 0s - loss: 0.6817 - accuracy: 0.7944
Epoch 38: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.6817 - accuracy: 0.7944 - val_loss: 2.7143 - val_accuracy: 0.3000
Epoch 39/100
51/51 [==============================] - ETA: 0s - loss: 0.6598 - accuracy: 0.7852
Epoch 39: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.6598 - accuracy: 0.7852 - val_loss: 2.7183 - val_accuracy: 0.3333
Epoch 40/100
51/51 [==============================] - ETA: 0s - loss: 0.6523 - accuracy: 0.7883
Epoch 40: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.6523 - accuracy: 0.7883 - val_loss: 2.8403 - val_accuracy: 0.3278
Epoch 41/100
51/51 [==============================] - ETA: 0s - loss: 0.6468 - accuracy: 0.7901
Epoch 41: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.6468 - accuracy: 0.7901 - val_loss: 2.8092 - val_accuracy: 0.3389
Epoch 42/100
51/51 [==============================] - ETA: 0s - loss: 0.5937 - accuracy: 0.7994
Epoch 42: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.5937 - accuracy: 0.7994 - val_loss: 2.7973 - val_accuracy: 0.3278
Epoch 43/100
51/51 [==============================] - ETA: 0s - loss: 0.5586 - accuracy: 0.8290
Epoch 43: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.5586 - accuracy: 0.8290 - val_loss: 2.8677 - val_accuracy: 0.3333
Epoch 44/100
51/51 [==============================] - ETA: 0s - loss: 0.5448 - accuracy: 0.8340
Epoch 44: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.5448 - accuracy: 0.8340 - val_loss: 2.8104 - val_accuracy: 0.3222
Epoch 45/100
51/51 [==============================] - ETA: 0s - loss: 0.5233 - accuracy: 0.8358
Epoch 45: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.5233 - accuracy: 0.8358 - val_loss: 2.9351 - val_accuracy: 0.3278
Epoch 46/100
51/51 [==============================] - ETA: 0s - loss: 0.5250 - accuracy: 0.8364
Epoch 46: val_accuracy did not improve from 0.34444
51/51 [==============================] - 2s 32ms/step - loss: 0.5250 - accuracy: 0.8364 - val_loss: 2.9308 - val_accuracy: 0.3389
Epoch 47/100
51/51 [==============================] - ETA: 0s - loss: 0.5010 - accuracy: 0.8463
Epoch 47: val_accuracy improved from 0.34444 to 0.37222, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 0.5010 - accuracy: 0.8463 - val_loss: 3.0079 - val_accuracy: 0.3722
Epoch 48/100
51/51 [==============================] - ETA: 0s - loss: 0.4749 - accuracy: 0.8562
Epoch 48: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.4749 - accuracy: 0.8562 - val_loss: 2.9437 - val_accuracy: 0.3500
Epoch 49/100
51/51 [==============================] - ETA: 0s - loss: 0.4834 - accuracy: 0.8562
Epoch 49: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.4834 - accuracy: 0.8562 - val_loss: 2.9215 - val_accuracy: 0.3611
Epoch 50/100
51/51 [==============================] - ETA: 0s - loss: 0.4626 - accuracy: 0.8549
Epoch 50: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.4626 - accuracy: 0.8549 - val_loss: 3.0028 - val_accuracy: 0.3333
Epoch 51/100
51/51 [==============================] - ETA: 0s - loss: 0.4158 - accuracy: 0.8741
Epoch 51: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.4158 - accuracy: 0.8741 - val_loss: 2.9503 - val_accuracy: 0.3611
Epoch 52/100
51/51 [==============================] - ETA: 0s - loss: 0.4162 - accuracy: 0.8716
Epoch 52: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.4162 - accuracy: 0.8716 - val_loss: 3.0640 - val_accuracy: 0.3500
Epoch 53/100
51/51 [==============================] - ETA: 0s - loss: 0.3978 - accuracy: 0.8790
Epoch 53: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 33ms/step - loss: 0.3978 - accuracy: 0.8790 - val_loss: 3.0535 - val_accuracy: 0.3444
Epoch 54/100
51/51 [==============================] - ETA: 0s - loss: 0.4094 - accuracy: 0.8735
Epoch 54: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.4094 - accuracy: 0.8735 - val_loss: 3.0998 - val_accuracy: 0.3611
Epoch 55/100
51/51 [==============================] - ETA: 0s - loss: 0.3913 - accuracy: 0.8809
Epoch 55: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.3913 - accuracy: 0.8809 - val_loss: 3.0934 - val_accuracy: 0.3556
Epoch 56/100
51/51 [==============================] - ETA: 0s - loss: 0.3913 - accuracy: 0.8809
Epoch 56: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.3913 - accuracy: 0.8809 - val_loss: 3.1855 - val_accuracy: 0.3500
Epoch 57/100
51/51 [==============================] - ETA: 0s - loss: 0.3769 - accuracy: 0.8895
Epoch 57: val_accuracy did not improve from 0.37222
51/51 [==============================] - 2s 32ms/step - loss: 0.3769 - accuracy: 0.8895 - val_loss: 3.1300 - val_accuracy: 0.3556
Epoch 58/100
51/51 [==============================] - ETA: 0s - loss: 0.3702 - accuracy: 0.8858
Epoch 58: val_accuracy improved from 0.37222 to 0.37778, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 0.3702 - accuracy: 0.8858 - val_loss: 3.1742 - val_accuracy: 0.3778
Epoch 59/100
51/51 [==============================] - ETA: 0s - loss: 0.3475 - accuracy: 0.9037
Epoch 59: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 32ms/step - loss: 0.3475 - accuracy: 0.9037 - val_loss: 3.1385 - val_accuracy: 0.3556
Epoch 60/100
51/51 [==============================] - ETA: 0s - loss: 0.3530 - accuracy: 0.8957
Epoch 60: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 32ms/step - loss: 0.3530 - accuracy: 0.8957 - val_loss: 3.1489 - val_accuracy: 0.3667
Epoch 61/100
51/51 [==============================] - ETA: 0s - loss: 0.3593 - accuracy: 0.8914
Epoch 61: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 32ms/step - loss: 0.3593 - accuracy: 0.8914 - val_loss: 3.1417 - val_accuracy: 0.3722
Epoch 62/100
51/51 [==============================] - ETA: 0s - loss: 0.3560 - accuracy: 0.8901
Epoch 62: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 32ms/step - loss: 0.3560 - accuracy: 0.8901 - val_loss: 3.2154 - val_accuracy: 0.3611
Epoch 63/100
51/51 [==============================] - ETA: 0s - loss: 0.3186 - accuracy: 0.9123
Epoch 63: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 32ms/step - loss: 0.3186 - accuracy: 0.9123 - val_loss: 3.1975 - val_accuracy: 0.3556
Epoch 64/100
51/51 [==============================] - ETA: 0s - loss: 0.3230 - accuracy: 0.9093
Epoch 64: val_accuracy did not improve from 0.37778
51/51 [==============================] - 2s 32ms/step - loss: 0.3230 - accuracy: 0.9093 - val_loss: 3.2387 - val_accuracy: 0.3667
Epoch 65/100
51/51 [==============================] - ETA: 0s - loss: 0.3313 - accuracy: 0.8994
Epoch 65: val_accuracy improved from 0.37778 to 0.38333, saving model to best_model.h5
51/51 [==============================] - 2s 33ms/step - loss: 0.3313 - accuracy: 0.8994 - val_loss: 3.2296 - val_accuracy: 0.3833
Epoch 66/100
51/51 [==============================] - ETA: 0s - loss: 0.3136 - accuracy: 0.9086
Epoch 66: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.3136 - accuracy: 0.9086 - val_loss: 3.2133 - val_accuracy: 0.3722
Epoch 67/100
51/51 [==============================] - ETA: 0s - loss: 0.3145 - accuracy: 0.9111
Epoch 67: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.3145 - accuracy: 0.9111 - val_loss: 3.2758 - val_accuracy: 0.3722
Epoch 68/100
51/51 [==============================] - ETA: 0s - loss: 0.2988 - accuracy: 0.9111
Epoch 68: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 33ms/step - loss: 0.2988 - accuracy: 0.9111 - val_loss: 3.2570 - val_accuracy: 0.3778
Epoch 69/100
51/51 [==============================] - ETA: 0s - loss: 0.2994 - accuracy: 0.9105
Epoch 69: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2994 - accuracy: 0.9105 - val_loss: 3.2674 - val_accuracy: 0.3722
Epoch 70/100
51/51 [==============================] - ETA: 0s - loss: 0.3241 - accuracy: 0.9012
Epoch 70: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.3241 - accuracy: 0.9012 - val_loss: 3.2973 - val_accuracy: 0.3778
Epoch 71/100
51/51 [==============================] - ETA: 0s - loss: 0.2920 - accuracy: 0.9142
Epoch 71: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2920 - accuracy: 0.9142 - val_loss: 3.2839 - val_accuracy: 0.3611
Epoch 72/100
51/51 [==============================] - ETA: 0s - loss: 0.2909 - accuracy: 0.9074
Epoch 72: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 33ms/step - loss: 0.2909 - accuracy: 0.9074 - val_loss: 3.3156 - val_accuracy: 0.3722
Epoch 73/100
51/51 [==============================] - ETA: 0s - loss: 0.2752 - accuracy: 0.9241
Epoch 73: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 33ms/step - loss: 0.2752 - accuracy: 0.9241 - val_loss: 3.3520 - val_accuracy: 0.3722
Epoch 74/100
51/51 [==============================] - ETA: 0s - loss: 0.2662 - accuracy: 0.9216
Epoch 74: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2662 - accuracy: 0.9216 - val_loss: 3.3365 - val_accuracy: 0.3722
Epoch 75/100
51/51 [==============================] - ETA: 0s - loss: 0.2778 - accuracy: 0.9185
Epoch 75: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2778 - accuracy: 0.9185 - val_loss: 3.3563 - val_accuracy: 0.3722
Epoch 76/100
51/51 [==============================] - ETA: 0s - loss: 0.2544 - accuracy: 0.9315
Epoch 76: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2544 - accuracy: 0.9315 - val_loss: 3.3706 - val_accuracy: 0.3833
Epoch 77/100
51/51 [==============================] - ETA: 0s - loss: 0.2624 - accuracy: 0.9222
Epoch 77: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2624 - accuracy: 0.9222 - val_loss: 3.3534 - val_accuracy: 0.3722
Epoch 78/100
51/51 [==============================] - ETA: 0s - loss: 0.2526 - accuracy: 0.9222
Epoch 78: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 33ms/step - loss: 0.2526 - accuracy: 0.9222 - val_loss: 3.3645 - val_accuracy: 0.3778
Epoch 79/100
51/51 [==============================] - ETA: 0s - loss: 0.2492 - accuracy: 0.9259
Epoch 79: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 33ms/step - loss: 0.2492 - accuracy: 0.9259 - val_loss: 3.3651 - val_accuracy: 0.3833
Epoch 80/100
51/51 [==============================] - ETA: 0s - loss: 0.2335 - accuracy: 0.9315
Epoch 80: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2335 - accuracy: 0.9315 - val_loss: 3.3623 - val_accuracy: 0.3667
Epoch 81/100
51/51 [==============================] - ETA: 0s - loss: 0.2703 - accuracy: 0.9179
Epoch 81: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2703 - accuracy: 0.9179 - val_loss: 3.3864 - val_accuracy: 0.3722
Epoch 82/100
51/51 [==============================] - ETA: 0s - loss: 0.2445 - accuracy: 0.9259
Epoch 82: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2445 - accuracy: 0.9259 - val_loss: 3.4152 - val_accuracy: 0.3611
Epoch 83/100
51/51 [==============================] - ETA: 0s - loss: 0.2390 - accuracy: 0.9321
Epoch 83: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2390 - accuracy: 0.9321 - val_loss: 3.4169 - val_accuracy: 0.3778
Epoch 84/100
51/51 [==============================] - ETA: 0s - loss: 0.2422 - accuracy: 0.9321
Epoch 84: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 32ms/step - loss: 0.2422 - accuracy: 0.9321 - val_loss: 3.4108 - val_accuracy: 0.3833
Epoch 85/100
51/51 [==============================] - ETA: 0s - loss: 0.2590 - accuracy: 0.9247
Epoch 85: val_accuracy did not improve from 0.38333
51/51 [==============================] - 2s 33ms/step - loss: 0.2590 - accuracy: 0.9247 - val_loss: 3.4434 - val_accuracy: 0.3722
Epoch 85: early stopping
1/1 [==============================] - 1s 509ms/step

X. Model Evaluation

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(loss))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
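The curves only show the per-epoch history; to put a single number on the best checkpoint, here is a quick sketch (assuming the best_model.h5 file written by ModelCheckpoint above; the next section reloads the same file for single-image prediction):

# Reload the best weights saved during training and score them on the validation set.
model.load_weights('best_model.h5')
val_loss, val_acc = model.evaluate(val_ds, verbose=0)
print("best-checkpoint val_accuracy:", val_acc)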

XI. Predicting a Specified Image

# Load the weights of the best-performing model
model.load_weights('best_model.h5')

from PIL import Image
import numpy as np

img = Image.open("./data/Jennifer Lawrence/003_963a3627.jpg")  # choose the image you want to predict
image = tf.image.resize(np.array(img), [img_height, img_width])

img_array = tf.expand_dims(image, 0)

predictions = model.predict(img_array)  # use the model you have already trained
print("Prediction:", class_names[np.argmax(predictions)])

Output:

Prediction: Jennifer Lawrence
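Because the final Dense layer outputs raw logits (there is no softmax in the model), predictions are not probabilities. If you also want a confidence score, a small hedged addition:

probs = tf.nn.softmax(predictions[0])  # convert logits to probabilities
print("Prediction:", class_names[np.argmax(probs)], "confidence:", float(np.max(probs)))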

Building a VGG16 network from scratch
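The VGG16 code itself is not reproduced in this post, only its training log. As a reference, here is a minimal sketch of a hand-built VGG16-style Sequential model, assuming the same 224×224×3 input, 17 classes, and the logits-style head used above (the actual model behind the log below may differ in detail):

from tensorflow.keras import layers, models

def build_vgg16(num_classes, img_height=224, img_width=224):
    # Thirteen 3x3 convolutions in five blocks (64-64, 128-128, 256x3, 512x3, 512x3),
    # each block followed by 2x2 max pooling, then two 4096-unit dense layers and the classifier.
    return models.Sequential([
        layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),

        layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
        layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        layers.Conv2D(128, (3, 3), padding='same', activation='relu'),
        layers.Conv2D(128, (3, 3), padding='same', activation='relu'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        layers.Conv2D(256, (3, 3), padding='same', activation='relu'),
        layers.Conv2D(256, (3, 3), padding='same', activation='relu'),
        layers.Conv2D(256, (3, 3), padding='same', activation='relu'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        layers.Conv2D(512, (3, 3), padding='same', activation='relu'),
        layers.Conv2D(512, (3, 3), padding='same', activation='relu'),
        layers.Conv2D(512, (3, 3), padding='same', activation='relu'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        layers.Conv2D(512, (3, 3), padding='same', activation='relu'),
        layers.Conv2D(512, (3, 3), padding='same', activation='relu'),
        layers.Conv2D(512, (3, 3), padding='same', activation='relu'),
        layers.MaxPooling2D((2, 2), strides=(2, 2)),

        layers.Flatten(),
        layers.Dense(4096, activation='relu'),
        layers.Dense(4096, activation='relu'),
        layers.Dense(num_classes)   # logits, matching from_logits=True in compile
    ])

# model = build_vgg16(len(class_names))

The training log from that run: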

Epoch 1/100
51/51 [==============================] - ETA: 0s - loss: 2.8214 - accuracy: 0.0988
Epoch 1: val_accuracy improved from -inf to 0.13889, saving model to best_model.h5
51/51 [==============================] - 54s 690ms/step - loss: 2.8214 - accuracy: 0.0988 - val_loss: 2.8037 - val_accuracy: 0.1389
Epoch 2/100
51/51 [==============================] - ETA: 0s - loss: 2.7842 - accuracy: 0.1142
Epoch 2: val_accuracy improved from 0.13889 to 0.14444, saving model to best_model.h5
51/51 [==============================] - 24s 478ms/step - loss: 2.7842 - accuracy: 0.1142 - val_loss: 2.7398 - val_accuracy: 0.1444
Epoch 3/100
51/51 [==============================] - ETA: 0s - loss: 2.5815 - accuracy: 0.1346
Epoch 3: val_accuracy improved from 0.14444 to 0.16111, saving model to best_model.h5
51/51 [==============================] - 24s 478ms/step - loss: 2.5815 - accuracy: 0.1346 - val_loss: 2.4529 - val_accuracy: 0.1611
Epoch 4/100
51/51 [==============================] - ETA: 0s - loss: 2.3430 - accuracy: 0.2228
Epoch 4: val_accuracy improved from 0.16111 to 0.23889, saving model to best_model.h5
51/51 [==============================] - 24s 475ms/step - loss: 2.3430 - accuracy: 0.2228 - val_loss: 2.2923 - val_accuracy: 0.2389
Epoch 5/100
51/51 [==============================] - ETA: 0s - loss: 2.1377 - accuracy: 0.2821
Epoch 5: val_accuracy did not improve from 0.23889
51/51 [==============================] - 23s 460ms/step - loss: 2.1377 - accuracy: 0.2821 - val_loss: 2.3213 - val_accuracy: 0.2222
Epoch 6/100
51/51 [==============================] - ETA: 0s - loss: 1.9723 - accuracy: 0.3272
Epoch 6: val_accuracy improved from 0.23889 to 0.28889, saving model to best_model.h5
51/51 [==============================] - 24s 466ms/step - loss: 1.9723 - accuracy: 0.3272 - val_loss: 2.2219 - val_accuracy: 0.2889
Epoch 7/100
51/51 [==============================] - ETA: 0s - loss: 1.7400 - accuracy: 0.4006
Epoch 7: val_accuracy improved from 0.28889 to 0.31667, saving model to best_model.h5
51/51 [==============================] - 24s 476ms/step - loss: 1.7400 - accuracy: 0.4006 - val_loss: 2.1780 - val_accuracy: 0.3167
Epoch 8/100
51/51 [==============================] - ETA: 0s - loss: 1.4288 - accuracy: 0.5167
Epoch 8: val_accuracy improved from 0.31667 to 0.33889, saving model to best_model.h5
51/51 [==============================] - 24s 477ms/step - loss: 1.4288 - accuracy: 0.5167 - val_loss: 2.1190 - val_accuracy: 0.3389
Epoch 9/100
51/51 [==============================] - ETA: 0s - loss: 1.0417 - accuracy: 0.6488
Epoch 9: val_accuracy improved from 0.33889 to 0.34444, saving model to best_model.h5
51/51 [==============================] - 24s 481ms/step - loss: 1.0417 - accuracy: 0.6488 - val_loss: 2.6386 - val_accuracy: 0.3444
Epoch 10/100
51/51 [==============================] - ETA: 0s - loss: 0.6968 - accuracy: 0.7753
Epoch 10: val_accuracy improved from 0.34444 to 0.40000, saving model to best_model.h5
51/51 [==============================] - 25s 492ms/step - loss: 0.6968 - accuracy: 0.7753 - val_loss: 2.9270 - val_accuracy: 0.4000
Epoch 11/100
51/51 [==============================] - ETA: 0s - loss: 0.3429 - accuracy: 0.8932
Epoch 11: val_accuracy did not improve from 0.40000
51/51 [==============================] - 24s 471ms/step - loss: 0.3429 - accuracy: 0.8932 - val_loss: 3.2921 - val_accuracy: 0.3944
Epoch 12/100
51/51 [==============================] - ETA: 0s - loss: 0.1793 - accuracy: 0.9457
Epoch 12: val_accuracy did not improve from 0.40000
51/51 [==============================] - 24s 465ms/step - loss: 0.1793 - accuracy: 0.9457 - val_loss: 4.6613 - val_accuracy: 0.3944
Epoch 13/100
51/51 [==============================] - ETA: 0s - loss: 0.1331 - accuracy: 0.9562
Epoch 13: val_accuracy improved from 0.40000 to 0.40556, saving model to best_model.h5
51/51 [==============================] - 24s 480ms/step - loss: 0.1331 - accuracy: 0.9562 - val_loss: 4.3193 - val_accuracy: 0.4056
Epoch 14/100
51/51 [==============================] - ETA: 0s - loss: 0.0764 - accuracy: 0.9772
Epoch 14: val_accuracy improved from 0.40556 to 0.41667, saving model to best_model.h5
51/51 [==============================] - 25s 491ms/step - loss: 0.0764 - accuracy: 0.9772 - val_loss: 5.4012 - val_accuracy: 0.4167
Epoch 15/100
51/51 [==============================] - ETA: 0s - loss: 0.1288 - accuracy: 0.9562
Epoch 15: val_accuracy did not improve from 0.41667
51/51 [==============================] - 24s 469ms/step - loss: 0.1288 - accuracy: 0.9562 - val_loss: 4.5160 - val_accuracy: 0.3944
Epoch 16/100
51/51 [==============================] - ETA: 0s - loss: 0.0509 - accuracy: 0.9895
Epoch 16: val_accuracy did not improve from 0.41667
51/51 [==============================] - 23s 458ms/step - loss: 0.0509 - accuracy: 0.9895 - val_loss: 4.9735 - val_accuracy: 0.4000
Epoch 17/100
51/51 [==============================] - ETA: 0s - loss: 0.0078 - accuracy: 0.9975
Epoch 17: val_accuracy improved from 0.41667 to 0.43333, saving model to best_model.h5
51/51 [==============================] - 24s 472ms/step - loss: 0.0078 - accuracy: 0.9975 - val_loss: 5.9198 - val_accuracy: 0.4333
Epoch 18/100
51/51 [==============================] - ETA: 0s - loss: 0.0290 - accuracy: 0.9907
Epoch 18: val_accuracy did not improve from 0.43333
51/51 [==============================] - 24s 466ms/step - loss: 0.0290 - accuracy: 0.9907 - val_loss: 5.5717 - val_accuracy: 0.4167
Epoch 19/100
51/51 [==============================] - ETA: 0s - loss: 0.0330 - accuracy: 0.9901
Epoch 19: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 454ms/step - loss: 0.0330 - accuracy: 0.9901 - val_loss: 5.1727 - val_accuracy: 0.4222
Epoch 20/100
51/51 [==============================] - ETA: 0s - loss: 0.0170 - accuracy: 0.9944
Epoch 20: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 454ms/step - loss: 0.0170 - accuracy: 0.9944 - val_loss: 5.5307 - val_accuracy: 0.4000
Epoch 21/100
51/51 [==============================] - ETA: 0s - loss: 0.0102 - accuracy: 0.9975
Epoch 21: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 455ms/step - loss: 0.0102 - accuracy: 0.9975 - val_loss: 5.9334 - val_accuracy: 0.3833
Epoch 22/100
51/51 [==============================] - ETA: 0s - loss: 6.4381e-04 - accuracy: 1.0000
Epoch 22: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 454ms/step - loss: 6.4381e-04 - accuracy: 1.0000 - val_loss: 6.2865 - val_accuracy: 0.4000
Epoch 23/100
51/51 [==============================] - ETA: 0s - loss: 1.4138e-04 - accuracy: 1.0000
Epoch 23: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 454ms/step - loss: 1.4138e-04 - accuracy: 1.0000 - val_loss: 6.4503 - val_accuracy: 0.4167
Epoch 24/100
51/51 [==============================] - ETA: 0s - loss: 9.5713e-05 - accuracy: 1.0000
Epoch 24: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 455ms/step - loss: 9.5713e-05 - accuracy: 1.0000 - val_loss: 6.5534 - val_accuracy: 0.4111
Epoch 25/100
51/51 [==============================] - ETA: 0s - loss: 7.2447e-05 - accuracy: 1.0000
Epoch 25: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 455ms/step - loss: 7.2447e-05 - accuracy: 1.0000 - val_loss: 6.6541 - val_accuracy: 0.4111
Epoch 26/100
51/51 [==============================] - ETA: 0s - loss: 5.6730e-05 - accuracy: 1.0000
Epoch 26: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 460ms/step - loss: 5.6730e-05 - accuracy: 1.0000 - val_loss: 6.7532 - val_accuracy: 0.4167
Epoch 27/100
51/51 [==============================] - ETA: 0s - loss: 4.4725e-05 - accuracy: 1.0000
Epoch 27: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 455ms/step - loss: 4.4725e-05 - accuracy: 1.0000 - val_loss: 6.8532 - val_accuracy: 0.4222
Epoch 28/100
51/51 [==============================] - ETA: 0s - loss: 3.4983e-05 - accuracy: 1.0000
Epoch 28: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 459ms/step - loss: 3.4983e-05 - accuracy: 1.0000 - val_loss: 6.9650 - val_accuracy: 0.4222
Epoch 29/100
51/51 [==============================] - ETA: 0s - loss: 2.6848e-05 - accuracy: 1.0000
Epoch 29: val_accuracy did not improve from 0.43333
51/51 [==============================] - 24s 467ms/step - loss: 2.6848e-05 - accuracy: 1.0000 - val_loss: 7.0813 - val_accuracy: 0.4222
Epoch 30/100
51/51 [==============================] - ETA: 0s - loss: 2.0499e-05 - accuracy: 1.0000
Epoch 30: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 458ms/step - loss: 2.0499e-05 - accuracy: 1.0000 - val_loss: 7.1986 - val_accuracy: 0.4222
Epoch 31/100
51/51 [==============================] - ETA: 0s - loss: 1.5914e-05 - accuracy: 1.0000
Epoch 31: val_accuracy did not improve from 0.43333
51/51 [==============================] - 24s 464ms/step - loss: 1.5914e-05 - accuracy: 1.0000 - val_loss: 7.3141 - val_accuracy: 0.4222
Epoch 32/100
51/51 [==============================] - ETA: 0s - loss: 1.2740e-05 - accuracy: 1.0000
Epoch 32: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 457ms/step - loss: 1.2740e-05 - accuracy: 1.0000 - val_loss: 7.4196 - val_accuracy: 0.4222
Epoch 33/100
51/51 [==============================] - ETA: 0s - loss: 1.0466e-05 - accuracy: 1.0000
Epoch 33: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 454ms/step - loss: 1.0466e-05 - accuracy: 1.0000 - val_loss: 7.5173 - val_accuracy: 0.4278
Epoch 34/100
51/51 [==============================] - ETA: 0s - loss: 8.8179e-06 - accuracy: 1.0000
Epoch 34: val_accuracy did not improve from 0.43333
51/51 [==============================] - 23s 456ms/step - loss: 8.8179e-06 - accuracy: 1.0000 - val_loss: 7.6008 - val_accuracy: 0.4278
Epoch 35/100
51/51 [==============================] - ETA: 0s - loss: 7.5923e-06 - accuracy: 1.0000
Epoch 35: val_accuracy did not improve from 0.43333
51/51 [==============================] - 24s 478ms/step - loss: 7.5923e-06 - accuracy: 1.0000 - val_loss: 7.6853 - val_accuracy: 0.4278
Epoch 36/100
51/51 [==============================] - ETA: 0s - loss: 6.6486e-06 - accuracy: 1.0000
Epoch 36: val_accuracy did not improve from 0.43333
51/51 [==============================] - 24s 476ms/step - loss: 6.6486e-06 - accuracy: 1.0000 - val_loss: 7.7584 - val_accuracy: 0.4278
Epoch 37/100
51/51 [==============================] - ETA: 0s - loss: 5.9099e-06 - accuracy: 1.0000
Epoch 37: val_accuracy did not improve from 0.43333
51/51 [==============================] - 24s 466ms/step - loss: 5.9099e-06 - accuracy: 1.0000 - val_loss: 7.8226 - val_accuracy: 0.4278
Epoch 37: early stopping
1/1 [==============================] - 1s 717ms/step


XII. Summary

Loss functions in detail:

1. binary_crossentropy (log loss for binary classification)

The loss function paired with sigmoid; used for binary classification problems.

2. categorical_crossentropy (log loss for multi-class classification)

The loss function paired with softmax; use categorical_crossentropy when the labels are one-hot encoded.

Usage, option 1:

model.compile(optimizer="adam",
              loss='categorical_crossentropy',
              metrics=['accuracy'])

Usage, option 2:

model.compile(optimizer="adam",
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])

3. sparse_categorical_crossentropy (sparse multi-class log loss)

The loss function paired with softmax; use sparse_categorical_crossentropy when the labels are integer encoded.

Usage, option 1:

model.compile(optimizer="adam",
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

Usage, option 2:

model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])
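A tiny sketch to make the distinction concrete: with the same predictions, categorical_crossentropy on one-hot labels and sparse_categorical_crossentropy on the corresponding integer labels give the same value:

import tensorflow as tf

logits   = tf.constant([[2.0, 0.5, -1.0]])   # one sample, three classes (raw logits)
y_int    = tf.constant([0])                  # integer-encoded label
y_onehot = tf.one_hot(y_int, depth=3)        # one-hot encoded label [[1, 0, 0]]

cce  = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

print(float(cce(y_onehot, logits)), float(scce(y_int, logits)))  # identical values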
