Reducing CNN overfitting with data augmentation on the cat_dog dataset in TensorFlow 2

Copyright 2018 The TensorFlow Authors.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Image classification


This tutorial shows how to classify images of cats and dogs. It builds an image classifier using a tf.keras.Sequential model and loads data using tf.keras.preprocessing.image.ImageDataGenerator. You will get some practical experience and develop intuition for the following concepts:

  • Building data input pipelines using the tf.keras.preprocessing.image.ImageDataGenerator class to efficiently work with data on disk and feed it to the model.
  • Overfitting: how to identify and prevent it.
  • Data augmentation and dropout: key techniques for fighting overfitting in computer vision tasks, incorporated into the data pipeline and the image classifier model.

This tutorial follows a basic machine learning workflow:

  1. Examine and understand data
  2. Build an input pipeline
  3. Build the model
  4. Train the model
  5. Test the model
  6. Improve the model and repeat the process

Import packages

Let’s start by importing the required packages. The os package is used to read files and the directory structure, NumPy is used to convert Python lists to NumPy arrays and to perform the required matrix operations, and matplotlib.pyplot is used to plot graphs and display images from the training and validation data.

Import TensorFlow and the Keras classes needed to construct the model.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator

import os
import numpy as np
import matplotlib.pyplot as plt

Load data

Begin by downloading the dataset. This tutorial uses a filtered version of the Dogs vs. Cats dataset from Kaggle. Download the archived version of the dataset; tf.keras.utils.get_file stores it in the Keras cache directory (~/.keras/datasets by default) and extracts it there.

_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'

path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)

PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
Downloading data from https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip
68608000/68606236 [==============================] - 7s 0us/step

The dataset has the following directory structure:

cats_and_dogs_filtered
|__ train
    |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....]
    |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]
|__ validation
    |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....]
    |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...]

After extracting its contents, assign variables with the proper file paths for the training and validation sets.

train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats')  # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')  # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')  # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')  # directory with our validation dog pictures

Understand the data

Let’s look at how many cat and dog images are in the training and validation directories:

num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))

num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))

total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)

print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
total training cat images: 1000
total training dog images: 1000
total validation cat images: 500
total validation dog images: 500
--
Total training images: 2000
Total validation images: 1000

For convenience, set up variables to use while pre-processing the dataset and training the network.

batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150

Data preparation

Format the images into appropriately preprocessed floating-point tensors before feeding them to the network:

  1. Read images from the disk.
  2. Decode the contents of these images into RGB pixel grids.
  3. Convert them into floating-point tensors.
  4. Rescale the tensors from values between 0 and 255 to values between 0 and 1, since neural networks prefer small input values.
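
For intuition, here is a minimal hand-rolled sketch of these four steps using tf.io and tf.image (an illustration only, not part of the tutorial pipeline; load_and_preprocess is a made-up helper name):

def load_and_preprocess(path):
    raw = tf.io.read_file(path)                          # 1. read the file from disk
    img = tf.image.decode_jpeg(raw, channels=3)          # 2. decode into an RGB pixel grid (uint8)
    img = tf.image.convert_image_dtype(img, tf.float32)  # 3. and 4. float tensor rescaled to [0, 1]
    return tf.image.resize(img, (IMG_HEIGHT, IMG_WIDTH)) # resize to the model's input size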

Fortunately, all these tasks can be done with the ImageDataGenerator class provided by tf.keras. It can read images from disk and preprocess them into proper tensors. It will also set up generators that convert these images into batches of tensors—helpful when training the network.

train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data

After defining the generators for training and validation images, the flow_from_directory method loads images from the disk, applies the rescaling, and resizes the images to the required dimensions.

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='binary')
Found 2000 images belonging to 2 classes.
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                              directory=validation_dir,
                                                              target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                              class_mode='binary')
Found 1000 images belonging to 2 classes.

Visualize training images

Visualize the training images by extracting a batch of images from the training generator (128 images in this example, per batch_size above), then plot five of them with matplotlib.

sample_training_images, _ = next(train_data_gen)

The next function returns a batch from the dataset, in the form (x_train, y_train), where x_train contains the training features and y_train their labels. Discard the labels to visualize only the training images.
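
As a quick sanity check (optional; not in the original notebook), you can inspect the batch before plotting:

print(sample_training_images.shape)  # (128, 150, 150, 3): a batch of rescaled RGB images
print(sample_training_images.min(), sample_training_images.max())  # values lie in [0, 1]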

# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
    fig, axes = plt.subplots(1, 5, figsize=(20,20))
    axes = axes.flatten()
    for img, ax in zip( images_arr, axes):
        ax.imshow(img)
        ax.axis('off')
    plt.tight_layout()
    plt.show()
plotImages(sample_training_images[:5])

[Figure: five sample training images (output_34_0.png)]

Create the model

The model consists of three convolution blocks, each followed by a max-pooling layer. On top there is a fully connected layer with 512 units, activated by the ReLU activation function, and a final single-unit Dense layer that outputs a logit.

model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])

Compile the model

For this tutorial, choose the Adam optimizer and the binary cross-entropy loss function. Because the final Dense(1) layer outputs a raw logit (no activation), set from_logits=True so the loss applies the sigmoid internally. To view training and validation accuracy for each training epoch, pass the metrics argument.

model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
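
Because from_logits=True means the model itself never applies a sigmoid, one extra step is needed to turn predictions into probabilities. A minimal sketch (image_batch is a hypothetical batch of preprocessed images, e.g. taken via next(val_data_gen)):

# Hypothetical inference sketch; `image_batch` is any preprocessed image batch.
logits = model.predict(image_batch)
probs = tf.nn.sigmoid(logits)  # probability of class 1 ('dogs' under alphabetical class ordering)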

Model summary

View all the layers of the network using the model’s summary method:

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 150, 150, 16)      448       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 75, 75, 16)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 75, 75, 32)        4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 37, 37, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 37, 37, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 18, 18, 64)        0         
_________________________________________________________________
flatten (Flatten)            (None, 20736)             0         
_________________________________________________________________
dense (Dense)                (None, 512)               10617344  
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 513       
=================================================================
Total params: 10,641,441
Trainable params: 10,641,441
Non-trainable params: 0
_________________________________________________________________

Train the model

Use the model's fit_generator method to train the network. Note that fit_generator is deprecated in TensorFlow 2 in favor of Model.fit, which supports generators directly (see the warning and the sketch after the training log).

history = model.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)
WARNING:tensorflow:From <ipython-input-19-01c6f78f4d4f>:6: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
Epoch 1/15
15/15 [==============================] - 14s 925ms/step - loss: 1.1494 - accuracy: 0.4984 - val_loss: 0.6927 - val_accuracy: 0.5011
Epoch 2/15
15/15 [==============================] - 14s 917ms/step - loss: 0.6901 - accuracy: 0.4995 - val_loss: 0.6799 - val_accuracy: 0.5033
Epoch 3/15
15/15 [==============================] - 14s 920ms/step - loss: 0.6581 - accuracy: 0.5577 - val_loss: 0.6552 - val_accuracy: 0.5603
Epoch 4/15
15/15 [==============================] - 14s 925ms/step - loss: 0.6109 - accuracy: 0.6432 - val_loss: 0.6149 - val_accuracy: 0.6473
Epoch 5/15
15/15 [==============================] - 14s 922ms/step - loss: 0.5417 - accuracy: 0.7110 - val_loss: 0.5868 - val_accuracy: 0.7132
Epoch 6/15
15/15 [==============================] - 14s 926ms/step - loss: 0.5096 - accuracy: 0.7447 - val_loss: 0.5808 - val_accuracy: 0.6864
Epoch 7/15
15/15 [==============================] - 14s 927ms/step - loss: 0.4556 - accuracy: 0.7740 - val_loss: 0.5716 - val_accuracy: 0.6775
Epoch 8/15
15/15 [==============================] - 14s 935ms/step - loss: 0.3982 - accuracy: 0.8178 - val_loss: 0.5743 - val_accuracy: 0.7087
Epoch 9/15
15/15 [==============================] - 14s 930ms/step - loss: 0.3909 - accuracy: 0.8259 - val_loss: 0.5571 - val_accuracy: 0.7199
Epoch 10/15
15/15 [==============================] - 14s 927ms/step - loss: 0.3494 - accuracy: 0.8381 - val_loss: 0.5841 - val_accuracy: 0.7321
Epoch 11/15
15/15 [==============================] - 14s 944ms/step - loss: 0.2925 - accuracy: 0.8718 - val_loss: 0.7554 - val_accuracy: 0.6864
Epoch 12/15
15/15 [==============================] - 14s 931ms/step - loss: 0.2786 - accuracy: 0.8723 - val_loss: 0.6516 - val_accuracy: 0.7299
Epoch 13/15
15/15 [==============================] - 14s 930ms/step - loss: 0.2070 - accuracy: 0.9252 - val_loss: 0.6622 - val_accuracy: 0.7143
Epoch 14/15
15/15 [==============================] - 14s 927ms/step - loss: 0.1624 - accuracy: 0.9364 - val_loss: 0.7217 - val_accuracy: 0.7377
Epoch 15/15
15/15 [==============================] - 14s 934ms/step - loss: 0.1164 - accuracy: 0.9631 - val_loss: 0.8251 - val_accuracy: 0.7243
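
As the deprecation warning at the top of the log notes, fit_generator is deprecated. A minimal equivalent sketch using Model.fit, which the warning recommends and which accepts the same generators directly:

# Equivalent call with the non-deprecated API; arguments are unchanged.
history = model.fit(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)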

Visualize training results

Now visualize the results after training the network.

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss=history.history['loss']
val_loss=history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training and validation accuracy and loss curves for the baseline model (output_47_0.png)]

As you can see from the plots, training accuracy and validation accuracy are off by a large margin, and the model achieves only around 70% accuracy on the validation set.

Let’s look at what went wrong and try to increase the overall performance of the model.

Overfitting

In the plots above, the training accuracy increases roughly linearly over time, whereas the validation accuracy stalls around 70% during training. The gap between training and validation accuracy is also noticeable, a sign of overfitting.

When there are only a few training examples, the model sometimes learns noise or unwanted details from them, to an extent that negatively impacts its performance on new examples. This phenomenon is known as overfitting: the model will have a difficult time generalizing to a new dataset.

There are multiple ways to fight overfitting during training. In this tutorial, you’ll use data augmentation and add dropout to the model.

Data augmentation

Overfitting generally occurs when there are a small number of training examples. One way to fix this problem is to augment the dataset so that it has a sufficient number of training examples. Data augmentation generates more training data from existing samples by applying random transformations that yield believable-looking images. The goal is that the model never sees the exact same picture twice during training. This helps expose the model to more aspects of the data and generalize better.

Implement this in tf.keras using the ImageDataGenerator class. Pass different transformations to it, and it will take care of applying them during training.

Augment and visualize data

Begin by applying a random horizontal-flip augmentation to the dataset and see what individual images look like after the transformation.

Apply horizontal flip

Pass horizontal_flip as an argument to the ImageDataGenerator class and set it to True to apply this augmentation.

image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_HEIGHT, IMG_WIDTH))
Found 2000 images belonging to 2 classes.

Take one sample image from the training examples and fetch it five times, so that the random augmentation is applied to the same image five times.

augmented_images = [train_data_gen[0][0][0] for i in range(5)]
# Re-use the same custom plotting function defined and used
# above to visualize the training images
plotImages(augmented_images)

[Figure: the same image with random horizontal flips applied five times (output_61_0.png)]

Randomly rotate the image

Let’s take a look at a different augmentation, rotation, and randomly rotate the training examples by up to 45 degrees.

image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_HEIGHT, IMG_WIDTH))

Found 2000 images belonging to 2 classes.
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

[Figure: the same image with random rotations applied five times (output_66_0.png)]

Apply zoom augmentation

Apply a zoom augmentation to the dataset to randomly zoom images by up to 50%.

# zoom_range=0.5 picks a random zoom factor in [0.5, 1.5], i.e. up to 50%.
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
                                               directory=train_dir,
                                               shuffle=True,
                                               target_size=(IMG_HEIGHT, IMG_WIDTH))

Found 2000 images belonging to 2 classes.
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

[Figure: the same image with random zoom applied five times (output_71_0.png)]

Put it all together

Apply all the previous augmentations together. Here, you apply rescaling, 45-degree rotation, width shift, height shift, horizontal flip, and zoom augmentation to the training images.

image_gen_train = ImageDataGenerator(
                    rescale=1./255,
                    rotation_range=45,
                    width_shift_range=.15,
                    height_shift_range=.15,
                    horizontal_flip=True,
                    zoom_range=0.5
                    )
train_data_gen = image_gen_train.flow_from_directory(batch_size=batch_size,
                                                     directory=train_dir,
                                                     shuffle=True,
                                                     target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                     class_mode='binary')
Found 2000 images belonging to 2 classes.

Visualize how a single image looks on five different draws when these augmentations are applied to it randomly.

augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)

[Figure: the same image with all augmentations applied five times (output_77_0.png)]

Create validation data generator

Generally, only apply data augmentation to the training examples. In this case, only rescale the validation images and convert them into batches using ImageDataGenerator.

image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
                                                 directory=validation_dir,
                                                 target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                 class_mode='binary')
Found 1000 images belonging to 2 classes.

Dropout

Another technique to reduce overfitting is to introduce dropout to the network. It is a form of regularization that randomly zeroes a fraction of a layer's outputs during training, which prevents units from co-adapting too strongly and helps the network overfit less on small training sets. Dropout is one of the regularization techniques used in this tutorial.

When you apply dropout to a layer, it randomly drops out (sets to zero) a fraction of the output units of that layer during training. Dropout takes a fractional number as its input value, such as 0.1, 0.2, or 0.4, meaning 10%, 20%, or 40% of the output units are dropped at random.

When applying 0.1 dropout to a layer, it randomly zeroes 10% of that layer's output units in each training step.
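
To see this zeroing directly, here is a standalone illustration (not part of the tutorial pipeline) that calls a Dropout layer on a tensor of ones with training=True:

drop = tf.keras.layers.Dropout(0.2)
x = tf.ones((1, 10))
# training=True enables dropout: roughly 20% of units become 0, and the
# survivors are scaled by 1 / (1 - 0.2) = 1.25 to preserve the expected sum.
print(drop(x, training=True))
print(drop(x, training=False))  # at inference dropout is the identity: all ones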

Create a network architecture with this new dropout feature, applying it after some of the max-pooling layers.

Creating a new network with Dropouts

Here, you apply dropout after the first and last max-pooling layers. Each Dropout layer randomly sets 20% of its input units to zero during each training step. This helps avoid overfitting on the training dataset.

model_new = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', 
           input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(),
    Dropout(0.2),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Dropout(0.2),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])

Compile the model

After introducing dropout to the network, compile the model and view the layer summary.

model_new.compile(optimizer='adam',
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])

model_new.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_3 (Conv2D)            (None, 150, 150, 16)      448       
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 75, 75, 16)        0         
_________________________________________________________________
dropout (Dropout)            (None, 75, 75, 16)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 75, 75, 32)        4640      
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 37, 37, 32)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 37, 37, 64)        18496     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 18, 18, 64)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 18, 18, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 20736)             0         
_________________________________________________________________
dense_2 (Dense)              (None, 512)               10617344  
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 513       
=================================================================
Total params: 10,641,441
Trainable params: 10,641,441
Non-trainable params: 0
_________________________________________________________________

Train the model

After introducing data augmentation for the training examples and adding dropout to the network, train this new network:

history = model_new.fit_generator(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=epochs,
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)
Epoch 1/15
15/15 [==============================] - 16s 1s/step - loss: 1.0521 - accuracy: 0.4979 - val_loss: 0.6914 - val_accuracy: 0.4955
Epoch 2/15
15/15 [==============================] - 17s 1s/step - loss: 0.6925 - accuracy: 0.5032 - val_loss: 0.6929 - val_accuracy: 0.5033
Epoch 3/15
15/15 [==============================] - 17s 1s/step - loss: 0.6923 - accuracy: 0.5027 - val_loss: 0.6910 - val_accuracy: 0.4978
Epoch 4/15
15/15 [==============================] - 16s 1s/step - loss: 0.6889 - accuracy: 0.5037 - val_loss: 0.6906 - val_accuracy: 0.4989
Epoch 5/15
15/15 [==============================] - 16s 1s/step - loss: 0.6873 - accuracy: 0.4925 - val_loss: 0.6873 - val_accuracy: 0.4978
Epoch 6/15
15/15 [==============================] - 17s 1s/step - loss: 0.6841 - accuracy: 0.5027 - val_loss: 0.6741 - val_accuracy: 0.5056
Epoch 7/15
15/15 [==============================] - 17s 1s/step - loss: 0.6805 - accuracy: 0.4973 - val_loss: 0.6674 - val_accuracy: 0.5045
Epoch 8/15
15/15 [==============================] - 17s 1s/step - loss: 0.6667 - accuracy: 0.5260 - val_loss: 0.6581 - val_accuracy: 0.5391
Epoch 9/15
15/15 [==============================] - 17s 1s/step - loss: 0.6659 - accuracy: 0.5646 - val_loss: 0.6458 - val_accuracy: 0.5703
Epoch 10/15
15/15 [==============================] - 16s 1s/step - loss: 0.6502 - accuracy: 0.5710 - val_loss: 0.6332 - val_accuracy: 0.5971
Epoch 11/15
15/15 [==============================] - 17s 1s/step - loss: 0.6491 - accuracy: 0.5892 - val_loss: 0.6343 - val_accuracy: 0.6038
Epoch 12/15
15/15 [==============================] - 17s 1s/step - loss: 0.6370 - accuracy: 0.6036 - val_loss: 0.6598 - val_accuracy: 0.5558
Epoch 13/15
15/15 [==============================] - 17s 1s/step - loss: 0.6339 - accuracy: 0.6010 - val_loss: 0.6225 - val_accuracy: 0.6496
Epoch 14/15
15/15 [==============================] - 17s 1s/step - loss: 0.6264 - accuracy: 0.6245 - val_loss: 0.6281 - val_accuracy: 0.5792
Epoch 15/15
15/15 [==============================] - 17s 1s/step - loss: 0.6153 - accuracy: 0.6276 - val_loss: 0.5905 - val_accuracy: 0.6685

Visualize the model

Visualize the new model after training; you can see that there is significantly less overfitting than before. The accuracy should go up further if you train the model for more epochs (see the sketch at the end).

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training and validation accuracy and loss curves for the augmented model with dropout (output_95_0.png)]
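
A minimal sketch of that longer training run, continuing from model_new and the generators above (the epoch count is a hypothetical value to tune):

# Continue training the regularized model for more epochs; with augmentation
# generating fresh variations each epoch, validation accuracy typically keeps
# improving before it plateaus.
history = model_new.fit(
    train_data_gen,
    steps_per_epoch=total_train // batch_size,
    epochs=30,  # hypothetical; the run above used 15
    validation_data=val_data_gen,
    validation_steps=total_val // batch_size
)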
