Deep Learning Day 13: An Optimizer Comparison Experiment

 🍨 This article is a study-log post from the [🔗365天深度学习训练营] course
 🍖 Original author: [K同学啊 | 接辅导、项目定制]

I. Environment

  • Language: Python 3.7
  • IDE: PyCharm
  • Framework: TensorFlow 2.4.1

II. Preparation

1. Configure the GPU

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    tf.config.experimental.set_memory_growth(gpus[0], True)  # grow GPU memory allocation on demand
    tf.config.set_visible_devices([gpus[0]], "GPU")

# Print the detected GPUs to confirm availability
print(gpus)

Depending on your hardware, choose GPU or CPU training. If a GPU is available, the output is:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

Because the CUDA version installed on my machine does not match this TensorFlow version, I installed the CPU build of TensorFlow instead, so the output above does not appear.

2. Import the data

The dataset used in this project is not publicly available, so you need to place it in your own directory tree and point the code at the corresponding path before continuing.

Run the following:

import matplotlib.pyplot as plt
import warnings,pathlib

warnings.filterwarnings("ignore")             # suppress warning messages
plt.rcParams['font.sans-serif']    = ['SimHei']  # a font that can render CJK labels in figures
plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly

data_dir    = "../data-2/data"
data_dir    = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Output:

Total number of images: 1800
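The counting line relies on `pathlib.Path.glob('*/*')`, which matches every entry exactly one level below the class directories. A minimal, self-contained sketch of that pattern (hypothetical class names, throwaway files):

```python
import pathlib
import tempfile

# Build a throwaway directory tree that mimics the expected layout:
# <root>/<class_name>/<image files>
root = pathlib.Path(tempfile.mkdtemp())
for class_name, n_files in [("class_a", 3), ("class_b", 2)]:
    class_dir = root / class_name
    class_dir.mkdir()
    for i in range(n_files):
        (class_dir / f"img_{i}.jpg").touch()  # empty stand-in files

# Same pattern as in the post: one wildcard per path level
image_count = len(list(root.glob("*/*")))
print("Total number of images:", image_count)  # 5
```

Because the pattern has one `*` per level, files placed directly in `root` or nested two levels deep would not be counted.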

III. Data Preprocessing

1. Load the data

First, set the basic image parameters:

batch_size = 16
img_height = 336
img_width  = 336

Next, use the image_dataset_from_directory method to load the images from disk into a tf.data.Dataset:

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)


val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

Output:

Found 1800 files belonging to 17 classes.
Using 1440 files for training.

Found 1800 files belonging to 17 classes.
Using 360 files for validation.
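With `validation_split=0.2`, Keras holds out a fixed fraction of the (seed-shuffled) file list for validation; as I understand its split logic, the validation count is the integer part of `fraction × total`. The reported 1440/360 split is just that arithmetic:

```python
image_count = 1800
validation_split = 0.2

# Validation size is the floor of the requested fraction (assumption about
# Keras internals; exact here because 1800 * 0.2 is a whole number)
n_val = int(image_count * validation_split)
n_train = image_count - n_val

print(n_train, n_val)  # 1440 360
```

The matching `seed=12` in both calls is what guarantees the two subsets are disjoint and together cover all 1800 files.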

Next, the class_names attribute gives the dataset's labels, which correspond to the directory names in alphabetical order:

class_names = train_ds.class_names
print(class_names)

Output:

['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']

2. Check the data again

Running:

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

prints the shape of one batch:

(16, 336, 336, 3)
(16,)
  • image_batch is a tensor of shape (16, 336, 336, 3): a batch of 16 images of size 336×336×3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (16,): one label per image in the batch.
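These shapes also explain the `90/90` progress bar that appears later during training: 1440 training images in batches of 16 give 90 steps per epoch. A quick check, mirroring the numbers above:

```python
import math

n_train = 1440
batch_size = 16

# ceil, because a final partial batch would still count as one step
steps_per_epoch = math.ceil(n_train / batch_size)
print(steps_per_epoch)  # 90
```

Here 1440 happens to divide evenly by 16, so there is no partial final batch.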

3. Configure the dataset

  • shuffle(): shuffles the data

  • prefetch(): prefetches batches so preprocessing overlaps with training

  • cache(): caches the dataset in memory, speeding up later epochs

AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image,label):
    return (image/255.0,label)

train_ds = (
    train_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # preprocessing hook
#     .batch(batch_size)           # already batched by image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # preprocessing hook
#     .batch(batch_size)         # already batched by image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)
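The `train_preprocessing` function used in both pipelines only rescales pixel values from `[0, 255]` to `[0, 1]` and passes the label through unchanged. A plain-Python stand-in (lists instead of tensors, so it runs without TensorFlow) shows the behavior:

```python
def train_preprocessing(image, label):
    # Same rescaling as in the tf.data pipeline, applied element-wise;
    # in the real pipeline TensorFlow broadcasts image / 255.0 over the tensor
    scaled = [pixel / 255.0 for pixel in image]
    return scaled, label

image, label = train_preprocessing([0, 127.5, 255], 3)
print(image, label)  # [0.0, 0.5, 1.0] 3
```

Keeping normalization inside the `map()` step means it is cached and prefetched along with the rest of the pipeline.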

4. Data visualization

plt.figure(figsize=(10, 8))  # figure 10 wide by 8 tall
plt.suptitle("Data preview")

for images, labels in train_ds.take(1):
    for i in range(15):
        plt.subplot(4, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)

        # display the image
        plt.imshow(images[i])
        # display its label (labels are already 0-indexed, so no -1 offset is needed)
        plt.xlabel(class_names[labels[i]])

plt.show()

This displays the first batch's images in a 4×5 grid, each labeled with its class name (figure omitted here).

IV. Build the Network

from tensorflow.keras.layers import Dropout,Dense,BatchNormalization
from tensorflow.keras.models import Model

def create_model(optimizer='adam'):
    # load the pretrained VGG16 backbone
    vgg16_base_model = tf.keras.applications.vgg16.VGG16(weights='imagenet',
                                                                include_top=False,
                                                                input_shape=(img_width, img_height, 3),
                                                                pooling='avg')
    for layer in vgg16_base_model.layers:
        layer.trainable = False

    X = vgg16_base_model.output
    
    X = Dense(170, activation='relu')(X)
    X = BatchNormalization()(X)
    X = Dropout(0.5)(X)

    output = Dense(len(class_names), activation='softmax')(X)
    vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

    vgg16_model.compile(optimizer=optimizer,
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])
    return vgg16_model

model1 = create_model(optimizer=tf.keras.optimizers.Adam())
model2 = create_model(optimizer=tf.keras.optimizers.SGD())
model2.summary()

This produces the following summary:

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 336, 336, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 336, 336, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 336, 336, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 168, 168, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 168, 168, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 168, 168, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 84, 84, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 84, 84, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 84, 84, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 84, 84, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 42, 42, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 42, 42, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 42, 42, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 42, 42, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 21, 21, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 21, 21, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 21, 21, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 21, 21, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 10, 10, 512)       0         
_________________________________________________________________
global_average_pooling2d_1 ( (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 170)               87210     
_________________________________________________________________
batch_normalization_1 (Batch (None, 170)               680       
_________________________________________________________________
dropout_1 (Dropout)          (None, 170)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 17)                2907      
=================================================================
Total params: 14,805,485
Trainable params: 90,457
Non-trainable params: 14,715,028
_________________________________________________________________

This uses a VGG16 model pretrained on ImageNet with its classification head removed (include_top=False) and all convolutional layers frozen. A new head (a Dense layer, BatchNormalization, Dropout, and a softmax classifier) is stacked on top, and the model is compiled with whichever optimizer is passed in, so the identical architecture can be trained once with Adam and once with SGD.
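The 90,457 trainable parameters in the summary come entirely from the new head, and can be verified by hand: a Dense layer has `in × out + out` parameters, and BatchNormalization keeps four values per feature, of which only gamma and beta are trainable:

```python
def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

vgg_features = 512   # output width of VGG16 with pooling='avg'
hidden = 170
n_classes = 17

d1 = dense_params(vgg_features, hidden)   # 87,210 (matches dense_2)
bn_trainable = 2 * hidden                 # gamma + beta -> 340
d2 = dense_params(hidden, n_classes)      # 2,907 (matches dense_3)

print(d1 + bn_trainable + d2)  # 90457
```

The remaining 340 BatchNormalization values (moving mean and variance) are counted among the non-trainable parameters along with the frozen VGG16 weights.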

V. Train the Models

Run the following:

NO_EPOCHS = 50

history_model1  = model1.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)
history_model2  = model2.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)

The two fit() calls produce the following (Adam first, then SGD):

Epoch 1/50
90/90 [==============================] - 163s 2s/step - loss: 3.1673 - accuracy: 0.1192 - val_loss: 2.7013 - val_accuracy: 0.1556
Epoch 2/50
90/90 [==============================] - 155s 2s/step - loss: 2.1834 - accuracy: 0.3220 - val_loss: 2.4661 - val_accuracy: 0.2972
Epoch 3/50
90/90 [==============================] - 161s 2s/step - loss: 1.7920 - accuracy: 0.4213 - val_loss: 2.2519 - val_accuracy: 0.3000
Epoch 4/50
90/90 [==============================] - 161s 2s/step - loss: 1.5476 - accuracy: 0.5282 - val_loss: 2.1631 - val_accuracy: 0.3194
Epoch 5/50
90/90 [==============================] - 162s 2s/step - loss: 1.3420 - accuracy: 0.5828 - val_loss: 1.8238 - val_accuracy: 0.4250
Epoch 6/50
90/90 [==============================] - 163s 2s/step - loss: 1.1854 - accuracy: 0.6327 - val_loss: 1.7837 - val_accuracy: 0.3889
Epoch 7/50
90/90 [==============================] - 169s 2s/step - loss: 1.1296 - accuracy: 0.6517 - val_loss: 1.6683 - val_accuracy: 0.4639
Epoch 8/50
90/90 [==============================] - 167s 2s/step - loss: 0.9812 - accuracy: 0.7062 - val_loss: 1.6596 - val_accuracy: 0.4750
Epoch 9/50
90/90 [==============================] - 253s 3s/step - loss: 0.8816 - accuracy: 0.7408 - val_loss: 1.7896 - val_accuracy: 0.4556
Epoch 10/50
90/90 [==============================] - 200s 2s/step - loss: 0.8526 - accuracy: 0.7383 - val_loss: 1.8275 - val_accuracy: 0.4750
Epoch 11/50
90/90 [==============================] - 156s 2s/step - loss: 0.7923 - accuracy: 0.7590 - val_loss: 1.6872 - val_accuracy: 0.4750
Epoch 12/50
90/90 [==============================] - 157s 2s/step - loss: 0.6726 - accuracy: 0.7971 - val_loss: 1.6346 - val_accuracy: 0.5528
Epoch 13/50
90/90 [==============================] - 157s 2s/step - loss: 0.6442 - accuracy: 0.8111 - val_loss: 1.4918 - val_accuracy: 0.5222
Epoch 14/50
90/90 [==============================] - 160s 2s/step - loss: 0.6123 - accuracy: 0.8118 - val_loss: 1.8845 - val_accuracy: 0.5056
Epoch 15/50
90/90 [==============================] - 157s 2s/step - loss: 0.5188 - accuracy: 0.8604 - val_loss: 1.6495 - val_accuracy: 0.5194
Epoch 16/50
90/90 [==============================] - 157s 2s/step - loss: 0.5084 - accuracy: 0.8425 - val_loss: 1.5498 - val_accuracy: 0.5556
Epoch 17/50
90/90 [==============================] - 158s 2s/step - loss: 0.4890 - accuracy: 0.8591 - val_loss: 1.8405 - val_accuracy: 0.5139
Epoch 18/50
90/90 [==============================] - 165s 2s/step - loss: 0.4444 - accuracy: 0.8762 - val_loss: 1.7857 - val_accuracy: 0.4639
Epoch 19/50
90/90 [==============================] - 159s 2s/step - loss: 0.4166 - accuracy: 0.8669 - val_loss: 1.9680 - val_accuracy: 0.4917
Epoch 20/50
90/90 [==============================] - 156s 2s/step - loss: 0.3734 - accuracy: 0.8992 - val_loss: 1.6538 - val_accuracy: 0.5722
Epoch 21/50
90/90 [==============================] - 157s 2s/step - loss: 0.3942 - accuracy: 0.8859 - val_loss: 1.7888 - val_accuracy: 0.5250
Epoch 22/50
90/90 [==============================] - 158s 2s/step - loss: 0.3727 - accuracy: 0.8813 - val_loss: 1.8221 - val_accuracy: 0.5056
Epoch 23/50
90/90 [==============================] - 171s 2s/step - loss: 0.3511 - accuracy: 0.8954 - val_loss: 2.0981 - val_accuracy: 0.4944
Epoch 24/50
90/90 [==============================] - 157s 2s/step - loss: 0.3200 - accuracy: 0.9037 - val_loss: 1.6688 - val_accuracy: 0.5444
Epoch 25/50
90/90 [==============================] - 157s 2s/step - loss: 0.3298 - accuracy: 0.8979 - val_loss: 2.2392 - val_accuracy: 0.4889
Epoch 26/50
90/90 [==============================] - 158s 2s/step - loss: 0.2572 - accuracy: 0.9393 - val_loss: 2.2622 - val_accuracy: 0.5139
Epoch 27/50
90/90 [==============================] - 158s 2s/step - loss: 0.2988 - accuracy: 0.9010 - val_loss: 1.8169 - val_accuracy: 0.5444
Epoch 28/50
90/90 [==============================] - 158s 2s/step - loss: 0.2965 - accuracy: 0.9013 - val_loss: 1.6999 - val_accuracy: 0.5583
Epoch 29/50
90/90 [==============================] - 157s 2s/step - loss: 0.2421 - accuracy: 0.9290 - val_loss: 1.9643 - val_accuracy: 0.5556
Epoch 30/50
90/90 [==============================] - 159s 2s/step - loss: 0.2599 - accuracy: 0.9229 - val_loss: 2.1118 - val_accuracy: 0.4917
Epoch 31/50
90/90 [==============================] - 157s 2s/step - loss: 0.2597 - accuracy: 0.9181 - val_loss: 2.0187 - val_accuracy: 0.5083
Epoch 32/50
90/90 [==============================] - 157s 2s/step - loss: 0.2118 - accuracy: 0.9413 - val_loss: 2.1572 - val_accuracy: 0.5611
Epoch 33/50
90/90 [==============================] - 156s 2s/step - loss: 0.2634 - accuracy: 0.9159 - val_loss: 1.9934 - val_accuracy: 0.5972
Epoch 34/50
90/90 [==============================] - 157s 2s/step - loss: 0.2058 - accuracy: 0.9336 - val_loss: 1.7950 - val_accuracy: 0.5806
Epoch 35/50
90/90 [==============================] - 157s 2s/step - loss: 0.1949 - accuracy: 0.9466 - val_loss: 1.9590 - val_accuracy: 0.5389
Epoch 36/50
90/90 [==============================] - 159s 2s/step - loss: 0.1821 - accuracy: 0.9493 - val_loss: 2.2865 - val_accuracy: 0.5333
Epoch 37/50
90/90 [==============================] - 158s 2s/step - loss: 0.2247 - accuracy: 0.9231 - val_loss: 2.0450 - val_accuracy: 0.5389
Epoch 38/50
90/90 [==============================] - 156s 2s/step - loss: 0.1847 - accuracy: 0.9517 - val_loss: 1.9970 - val_accuracy: 0.5750
Epoch 39/50
90/90 [==============================] - 157s 2s/step - loss: 0.1759 - accuracy: 0.9496 - val_loss: 2.1967 - val_accuracy: 0.5250
Epoch 40/50
90/90 [==============================] - 160s 2s/step - loss: 0.1678 - accuracy: 0.9497 - val_loss: 2.4159 - val_accuracy: 0.5139
Epoch 41/50
90/90 [==============================] - 159s 2s/step - loss: 0.2064 - accuracy: 0.9348 - val_loss: 1.8829 - val_accuracy: 0.5972
Epoch 42/50
90/90 [==============================] - 158s 2s/step - loss: 0.1657 - accuracy: 0.9459 - val_loss: 1.9955 - val_accuracy: 0.5861
Epoch 43/50
90/90 [==============================] - 157s 2s/step - loss: 0.1921 - accuracy: 0.9322 - val_loss: 2.1190 - val_accuracy: 0.5722
Epoch 44/50
90/90 [==============================] - 169s 2s/step - loss: 0.1540 - accuracy: 0.9519 - val_loss: 1.9506 - val_accuracy: 0.5611
Epoch 45/50
90/90 [==============================] - 156s 2s/step - loss: 0.1816 - accuracy: 0.9430 - val_loss: 2.6812 - val_accuracy: 0.5139
Epoch 46/50
90/90 [==============================] - 157s 2s/step - loss: 0.1368 - accuracy: 0.9648 - val_loss: 3.3362 - val_accuracy: 0.4222
Epoch 47/50
90/90 [==============================] - 157s 2s/step - loss: 0.1801 - accuracy: 0.9430 - val_loss: 3.7019 - val_accuracy: 0.3833
Epoch 48/50
90/90 [==============================] - 160s 2s/step - loss: 0.1674 - accuracy: 0.9511 - val_loss: 2.4849 - val_accuracy: 0.5333
Epoch 49/50
90/90 [==============================] - 165s 2s/step - loss: 0.1820 - accuracy: 0.9418 - val_loss: 1.8465 - val_accuracy: 0.6139
Epoch 50/50
90/90 [==============================] - 165s 2s/step - loss: 0.1577 - accuracy: 0.9456 - val_loss: 3.0099 - val_accuracy: 0.4722
Epoch 1/50
90/90 [==============================] - 161s 2s/step - loss: 3.1830 - accuracy: 0.0843 - val_loss: 2.7955 - val_accuracy: 0.0528
Epoch 2/50
90/90 [==============================] - 162s 2s/step - loss: 2.5791 - accuracy: 0.1987 - val_loss: 2.6201 - val_accuracy: 0.1917
Epoch 3/50
90/90 [==============================] - 156s 2s/step - loss: 2.2840 - accuracy: 0.2617 - val_loss: 2.4456 - val_accuracy: 0.2778
Epoch 4/50
90/90 [==============================] - 160s 2s/step - loss: 2.0977 - accuracy: 0.3222 - val_loss: 2.2423 - val_accuracy: 0.3278
Epoch 5/50
90/90 [==============================] - 159s 2s/step - loss: 1.9283 - accuracy: 0.3827 - val_loss: 2.0891 - val_accuracy: 0.3194
Epoch 6/50
90/90 [==============================] - 165s 2s/step - loss: 1.7821 - accuracy: 0.4346 - val_loss: 1.9191 - val_accuracy: 0.3694
Epoch 7/50
90/90 [==============================] - 163s 2s/step - loss: 1.7576 - accuracy: 0.4278 - val_loss: 1.8424 - val_accuracy: 0.4194
Epoch 8/50
90/90 [==============================] - 159s 2s/step - loss: 1.6708 - accuracy: 0.4749 - val_loss: 1.8473 - val_accuracy: 0.4194
Epoch 9/50
90/90 [==============================] - 159s 2s/step - loss: 1.5364 - accuracy: 0.5170 - val_loss: 1.6428 - val_accuracy: 0.4583
Epoch 10/50
90/90 [==============================] - 158s 2s/step - loss: 1.4577 - accuracy: 0.5568 - val_loss: 1.7318 - val_accuracy: 0.4667
Epoch 11/50
90/90 [==============================] - 159s 2s/step - loss: 1.4595 - accuracy: 0.5381 - val_loss: 1.5898 - val_accuracy: 0.4750
Epoch 12/50
90/90 [==============================] - 158s 2s/step - loss: 1.4506 - accuracy: 0.5387 - val_loss: 1.5986 - val_accuracy: 0.4722
Epoch 13/50
90/90 [==============================] - 158s 2s/step - loss: 1.4281 - accuracy: 0.5472 - val_loss: 1.7672 - val_accuracy: 0.4528
Epoch 14/50
90/90 [==============================] - 159s 2s/step - loss: 1.3384 - accuracy: 0.5843 - val_loss: 1.5524 - val_accuracy: 0.5000
Epoch 15/50
90/90 [==============================] - 157s 2s/step - loss: 1.2820 - accuracy: 0.5908 - val_loss: 1.7208 - val_accuracy: 0.4500
Epoch 16/50
90/90 [==============================] - 157s 2s/step - loss: 1.2377 - accuracy: 0.6227 - val_loss: 1.5727 - val_accuracy: 0.4806
Epoch 17/50
90/90 [==============================] - 157s 2s/step - loss: 1.2139 - accuracy: 0.6145 - val_loss: 1.5292 - val_accuracy: 0.4972
Epoch 18/50
90/90 [==============================] - 157s 2s/step - loss: 1.2355 - accuracy: 0.6062 - val_loss: 1.6841 - val_accuracy: 0.4528
Epoch 19/50
90/90 [==============================] - 156s 2s/step - loss: 1.1399 - accuracy: 0.6765 - val_loss: 1.5743 - val_accuracy: 0.4944
Epoch 20/50
90/90 [==============================] - 156s 2s/step - loss: 1.0921 - accuracy: 0.6691 - val_loss: 1.5142 - val_accuracy: 0.5056
Epoch 21/50
90/90 [==============================] - 156s 2s/step - loss: 1.0672 - accuracy: 0.6574 - val_loss: 1.6018 - val_accuracy: 0.4750
Epoch 22/50
90/90 [==============================] - 156s 2s/step - loss: 1.0359 - accuracy: 0.6773 - val_loss: 1.6706 - val_accuracy: 0.4556
Epoch 23/50
90/90 [==============================] - 156s 2s/step - loss: 1.0850 - accuracy: 0.6447 - val_loss: 1.4780 - val_accuracy: 0.5278
Epoch 24/50
90/90 [==============================] - 156s 2s/step - loss: 0.9914 - accuracy: 0.6917 - val_loss: 1.7245 - val_accuracy: 0.4639
Epoch 25/50
90/90 [==============================] - 155s 2s/step - loss: 0.9812 - accuracy: 0.7022 - val_loss: 1.5054 - val_accuracy: 0.5111
Epoch 26/50
90/90 [==============================] - 156s 2s/step - loss: 0.9782 - accuracy: 0.7023 - val_loss: 1.4594 - val_accuracy: 0.5139
Epoch 27/50
90/90 [==============================] - 157s 2s/step - loss: 0.9550 - accuracy: 0.6990 - val_loss: 1.6388 - val_accuracy: 0.5028
Epoch 28/50
90/90 [==============================] - 156s 2s/step - loss: 0.9097 - accuracy: 0.7089 - val_loss: 1.4516 - val_accuracy: 0.5278
Epoch 29/50
90/90 [==============================] - 156s 2s/step - loss: 0.8734 - accuracy: 0.7366 - val_loss: 1.5061 - val_accuracy: 0.5028
Epoch 30/50
90/90 [==============================] - 156s 2s/step - loss: 0.8722 - accuracy: 0.7316 - val_loss: 1.4073 - val_accuracy: 0.5417
Epoch 31/50
90/90 [==============================] - 156s 2s/step - loss: 0.8459 - accuracy: 0.7362 - val_loss: 1.4697 - val_accuracy: 0.5167
Epoch 32/50
90/90 [==============================] - 156s 2s/step - loss: 0.8750 - accuracy: 0.7410 - val_loss: 1.4451 - val_accuracy: 0.5361
Epoch 33/50
90/90 [==============================] - 156s 2s/step - loss: 0.8350 - accuracy: 0.7455 - val_loss: 1.4508 - val_accuracy: 0.5444
Epoch 34/50
90/90 [==============================] - 156s 2s/step - loss: 0.8552 - accuracy: 0.7319 - val_loss: 1.4266 - val_accuracy: 0.5528
Epoch 35/50
90/90 [==============================] - 156s 2s/step - loss: 0.7817 - accuracy: 0.7672 - val_loss: 1.5173 - val_accuracy: 0.5139
Epoch 36/50
90/90 [==============================] - 156s 2s/step - loss: 0.7967 - accuracy: 0.7401 - val_loss: 1.3678 - val_accuracy: 0.6028
Epoch 37/50
90/90 [==============================] - 156s 2s/step - loss: 0.7289 - accuracy: 0.7644 - val_loss: 1.3598 - val_accuracy: 0.5833
Epoch 38/50
90/90 [==============================] - 156s 2s/step - loss: 0.7353 - accuracy: 0.7653 - val_loss: 1.3493 - val_accuracy: 0.5778
Epoch 39/50
90/90 [==============================] - 156s 2s/step - loss: 0.7640 - accuracy: 0.7667 - val_loss: 1.5670 - val_accuracy: 0.5111
Epoch 40/50
90/90 [==============================] - 156s 2s/step - loss: 0.6865 - accuracy: 0.7797 - val_loss: 1.5664 - val_accuracy: 0.5389
Epoch 41/50
90/90 [==============================] - 156s 2s/step - loss: 0.7501 - accuracy: 0.7509 - val_loss: 1.5279 - val_accuracy: 0.5167
Epoch 42/50
90/90 [==============================] - 156s 2s/step - loss: 0.6817 - accuracy: 0.7971 - val_loss: 1.4328 - val_accuracy: 0.5250
Epoch 43/50
90/90 [==============================] - 155s 2s/step - loss: 0.6593 - accuracy: 0.8004 - val_loss: 1.5536 - val_accuracy: 0.5056
Epoch 44/50
90/90 [==============================] - 156s 2s/step - loss: 0.6318 - accuracy: 0.7910 - val_loss: 1.4573 - val_accuracy: 0.5472
Epoch 45/50
90/90 [==============================] - 155s 2s/step - loss: 0.6175 - accuracy: 0.7973 - val_loss: 1.5266 - val_accuracy: 0.5361
Epoch 46/50
90/90 [==============================] - 155s 2s/step - loss: 0.5952 - accuracy: 0.8163 - val_loss: 1.4765 - val_accuracy: 0.5556
Epoch 47/50
90/90 [==============================] - 155s 2s/step - loss: 0.6088 - accuracy: 0.8214 - val_loss: 1.7042 - val_accuracy: 0.5417
Epoch 48/50
90/90 [==============================] - 156s 2s/step - loss: 0.5877 - accuracy: 0.8040 - val_loss: 1.4859 - val_accuracy: 0.5444
Epoch 49/50
90/90 [==============================] - 156s 2s/step - loss: 0.6197 - accuracy: 0.7912 - val_loss: 1.4172 - val_accuracy: 0.5722
Epoch 50/50
90/90 [==============================] - 161s 2s/step - loss: 0.5607 - accuracy: 0.8304 - val_loss: 1.4166 - val_accuracy: 0.5556
Loss function: 1.4166204929351807, accuracy: 0.5555555820465088

Process finished with exit code 0

Compared with a hand-written PyTorch training loop, Keras's fit() shows a progress bar along with per-epoch training progress, loss, and accuracy.

VI. Model Evaluation

1. Loss and accuracy curves

from matplotlib.ticker import MultipleLocator
plt.rcParams['savefig.dpi'] = 300 # DPI for saved figures
plt.rcParams['figure.dpi']  = 300 # DPI for on-screen rendering

acc1     = history_model1.history['accuracy']
acc2     = history_model2.history['accuracy']
val_acc1 = history_model1.history['val_accuracy']
val_acc2 = history_model2.history['val_accuracy']

loss1     = history_model1.history['loss']
loss2     = history_model2.history['loss']
val_loss1 = history_model1.history['val_loss']
val_loss2 = history_model2.history['val_loss']

epochs_range = range(len(acc1))

plt.figure(figsize=(16, 4))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, acc1, label='Training Accuracy-Adam')
plt.plot(epochs_range, acc2, label='Training Accuracy-SGD')
plt.plot(epochs_range, val_acc1, label='Validation Accuracy-Adam')
plt.plot(epochs_range, val_acc2, label='Validation Accuracy-SGD')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
# set the x-axis major tick interval to 1
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss1, label='Training Loss-Adam')
plt.plot(epochs_range, loss2, label='Training Loss-SGD')
plt.plot(epochs_range, val_loss1, label='Validation Loss-Adam')
plt.plot(epochs_range, val_loss2, label='Validation Loss-SGD')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
   
# set the x-axis major tick interval to 1
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.show()

The resulting plots compare the accuracy and loss curves of Adam and SGD side by side (figure omitted here).

2. Evaluate the model

def test_accuracy_report(model):
    score = model.evaluate(val_ds, verbose=0)
    print('Loss function: %s, accuracy:' % score[0], score[1])
    
test_accuracy_report(model2)

Output:

Loss function: 1.4166204929351807, accuracy: 0.5555555820465088

VII. Takeaways

Running controlled comparisons between optimizers is a skill you need both when writing papers and in industry practice. Besides plotting several optimizers' curves in a single figure, this project was also a refresher on loading and modifying a pretrained network. In later work I plan to run the same kind of comparison across different network architectures.
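The two optimizers compared here differ in how they turn a gradient into a parameter update. As a reference, here is one step of each textbook update rule on a single scalar parameter; the hyperparameter values are the Keras defaults as I recall them (lr=0.01 for SGD, lr=0.001 with beta_1=0.9, beta_2=0.999 for Adam), which is an assumption, not something taken from the source:

```python
import math

grad = 0.5    # pretend gradient for one parameter
theta = 1.0   # current parameter value

# --- SGD without momentum: theta <- theta - lr * g ---
sgd_lr = 0.01
theta_sgd = theta - sgd_lr * grad

# --- Adam, first step (t = 1) ---
lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-7
m = (1 - beta1) * grad          # biased first-moment estimate (m0 = 0)
v = (1 - beta2) * grad ** 2     # biased second-moment estimate (v0 = 0)
m_hat = m / (1 - beta1 ** 1)    # bias correction for t = 1
v_hat = v / (1 - beta2 ** 1)
theta_adam = theta - lr * m_hat / (math.sqrt(v_hat) + eps)

print(theta_sgd, theta_adam)  # 0.995 and ~0.999
```

The per-coordinate scaling by `sqrt(v_hat)` is what lets Adam make fast early progress in the logs above, while plain SGD moves more conservatively but, in this experiment, generalized more steadily on the validation set.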
