This article is an internal article of the 🔗 365-day deep learning training camp.
Original author: K同学啊
When the original dataset is small, we apply data augmentation so that a small amount of data can still reach a very good recognition accuracy.
1. Loading the Data
import matplotlib.pyplot as plt
import numpy as np
# Suppress warnings
import warnings
warnings.filterwarnings('ignore')
from tensorflow.keras import layers
import tensorflow as tf
data_dir = "./data/"
img_height = 224
img_width = 224
batch_size = 32
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.3,
    subset="training",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 600 files belonging to 2 classes.
Using 420 files for training.
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.3,
    subset="validation",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 600 files belonging to 2 classes. Using 180 files for validation.
Since the original dataset does not contain a test set, we need to create one. Use tf.data.experimental.cardinality to determine how many batches the validation set contains, then move 20% of them into a test set.
val_batches = tf.data.experimental.cardinality(val_ds)
test_ds = val_ds.take(val_batches // 5)
val_ds = val_ds.skip(val_batches // 5)
print('Number of validation batches: %d' % tf.data.experimental.cardinality(val_ds))
print('Number of test batches: %d' % tf.data.experimental.cardinality(test_ds))
Number of validation batches: 12
Number of test batches: 2
There are two classes in total: cat and dog.
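The class names are taken directly from the sub-directory names. A small sketch of reading them (they are used later for the plot titles and the size of the output layer, so this should run before the datasets are mapped; the exact names depend on how the folders are named):
class_names = train_ds.class_names
print(class_names)  # e.g. ['cat', 'dog'], taken from the sub-directory names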
2. Data Preprocessing
Here we normalize the data.
AUTOTUNE = tf.data.AUTOTUNE
def preprocess_image(image, label):
    return (image / 255.0, label)

# Normalize the datasets
train_ds = train_ds.map(preprocess_image, num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(preprocess_image, num_parallel_calls=AUTOTUNE)
test_ds = test_ds.map(preprocess_image, num_parallel_calls=AUTOTUNE)
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
plt.figure(figsize=(15, 10))  # figure width 15, height 10

for images, labels in train_ds.take(1):
    for i in range(8):
        ax = plt.subplot(5, 8, i + 1)
        plt.imshow(images[i])
        plt.title(class_names[labels[i]])
        plt.axis("off")
3. Data Augmentation
We can use tf.keras.layers.experimental.preprocessing.RandomFlip and tf.keras.layers.experimental.preprocessing.RandomRotation for data augmentation.
- tf.keras.layers.experimental.preprocessing.RandomFlip: randomly flips each image horizontally and/or vertically.
- tf.keras.layers.experimental.preprocessing.RandomRotation: randomly rotates each image.
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.experimental.preprocessing.RandomRotation(0.2),
])
The first layer performs random horizontal and vertical flips; the second performs random rotations with a factor of 0.2, i.e. within ±0.2 × 2π radians (about ±72°).
# Add the image to a batch (note: `images[i]` reuses the last image from the visualization loop above).
image = tf.expand_dims(images[i], 0)
plt.figure(figsize=(8, 8))
for i in range(9):
    augmented_image = data_augmentation(image)
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(augmented_image[0])
    plt.axis("off")
4. Ways to Apply Augmentation
Method 1: embed the augmentation layers in the model
model = tf.keras.Sequential([
    data_augmentation,
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
])
The advantage of doing it this way:
- The data augmentation runs inside the model, so it benefits from GPU acceleration (if you train on a GPU).
Note: augmentation is only applied during training (Model.fit); it is not applied during evaluation (Model.evaluate) or prediction (Model.predict), as the sketch below illustrates.
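A minimal sketch of checking that behaviour, assuming the data_augmentation pipeline defined above: calling it with training=False (which is what evaluate/predict do) should return the input unchanged.
# Sketch: preprocessing layers act as a no-op when training=False (as in evaluate/predict)
check = tf.reshape(tf.range(2 * 2 * 3, dtype=tf.float32), (1, 2, 2, 3))
out = data_augmentation(check, training=False)
print(tf.reduce_all(out == check).numpy())  # expected: True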
Method 2: apply the augmentation inside the Dataset pipeline
batch_size = 32
AUTOTUNE = tf.data.AUTOTUNE
def prepare(ds):
    ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y), num_parallel_calls=AUTOTUNE)
    return ds
train_ds = prepare(train_ds)
5. Training the Model
model = tf.keras.Sequential([
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(len(class_names))
])
Before training, the model needs a few more settings, which are added in the compile step:
- Loss function (loss): measures how accurate the model is during training.
- Optimizer (optimizer): determines how the model is updated based on the data it sees and its own loss function.
- Metrics (metrics): used to monitor the training and testing steps. The example below uses accuracy, the fraction of images that are correctly classified.
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
epochs = 20
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs
)
Epoch 1/20  14/14 [==============================] - 9s 522ms/step - loss: 1.2811 - accuracy: 0.4810 - val_loss: 0.6834 - val_accuracy: 0.5473
Epoch 2/20  14/14 [==============================] - 7s 470ms/step - loss: 0.6916 - accuracy: 0.5167 - val_loss: 0.6863 - val_accuracy: 0.5541
Epoch 3/20  14/14 [==============================] - 7s 449ms/step - loss: 0.6725 - accuracy: 0.5976 - val_loss: 0.6804 - val_accuracy: 0.5203
Epoch 4/20  14/14 [==============================] - 7s 442ms/step - loss: 0.6406 - accuracy: 0.6810 - val_loss: 0.6071 - val_accuracy: 0.7770
Epoch 5/20  14/14 [==============================] - 7s 456ms/step - loss: 0.5705 - accuracy: 0.7405 - val_loss: 0.5379 - val_accuracy: 0.7568
Epoch 6/20  14/14 [==============================] - 7s 472ms/step - loss: 0.4925 - accuracy: 0.7857 - val_loss: 0.5400 - val_accuracy: 0.7432
Epoch 7/20  14/14 [==============================] - 7s 480ms/step - loss: 0.4367 - accuracy: 0.8214 - val_loss: 0.5519 - val_accuracy: 0.7297
Epoch 8/20  14/14 [==============================] - 7s 471ms/step - loss: 0.4035 - accuracy: 0.8214 - val_loss: 0.5957 - val_accuracy: 0.7230
Epoch 9/20  14/14 [==============================] - 7s 457ms/step - loss: 0.3758 - accuracy: 0.8333 - val_loss: 0.4349 - val_accuracy: 0.7905
Epoch 10/20  14/14 [==============================] - 7s 452ms/step - loss: 0.2980 - accuracy: 0.8881 - val_loss: 0.4214 - val_accuracy: 0.8311
Epoch 11/20  14/14 [==============================] - 7s 477ms/step - loss: 0.3206 - accuracy: 0.8643 - val_loss: 0.3316 - val_accuracy: 0.8514
Epoch 12/20  14/14 [==============================] - 7s 471ms/step - loss: 0.2666 - accuracy: 0.8929 - val_loss: 0.3477 - val_accuracy: 0.8649
Epoch 13/20  14/14 [==============================] - 7s 452ms/step - loss: 0.2402 - accuracy: 0.9238 - val_loss: 0.3949 - val_accuracy: 0.8716
Epoch 14/20  14/14 [==============================] - 8s 503ms/step - loss: 0.2027 - accuracy: 0.9333 - val_loss: 0.2815 - val_accuracy: 0.8919
Epoch 15/20  14/14 [==============================] - 7s 454ms/step - loss: 0.2168 - accuracy: 0.9381 - val_loss: 0.2824 - val_accuracy: 0.9122
Epoch 16/20  14/14 [==============================] - 7s 443ms/step - loss: 0.2097 - accuracy: 0.9071 - val_loss: 0.3573 - val_accuracy: 0.8784
Epoch 17/20  14/14 [==============================] - 7s 448ms/step - loss: 0.2030 - accuracy: 0.9071 - val_loss: 0.3212 - val_accuracy: 0.8784
Epoch 18/20  14/14 [==============================] - 7s 444ms/step - loss: 0.1749 - accuracy: 0.9310 - val_loss: 0.2591 - val_accuracy: 0.9257
Epoch 19/20  14/14 [==============================] - 7s 445ms/step - loss: 0.1435 - accuracy: 0.9405 - val_loss: 0.2557 - val_accuracy: 0.8986
Epoch 20/20  14/14 [==============================] - 7s 446ms/step - loss: 0.1288 - accuracy: 0.9500 - val_loss: 0.2351 - val_accuracy: 0.9257
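It is also common to visualize how accuracy and loss evolve over the epochs. A quick sketch using the history object returned by model.fit (matplotlib is already imported above):
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()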
Check the accuracy on the test set:
loss, acc = model.evaluate(test_ds)
print("Accuracy", acc)
1/1 [==============================] - 0s 170ms/step - loss: 0.3631 - accuracy: 0.8125
Accuracy 0.8125
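As a quick sanity check, here is a sketch of predicting on one test batch; since the last Dense layer outputs logits, softmax is applied before taking the argmax (this assumes class_names was captured earlier):
for images, labels in test_ds.take(1):
    probs = tf.nn.softmax(model.predict(images), axis=-1)
    preds = tf.argmax(probs, axis=-1).numpy()
    print("Predicted:", [class_names[p] for p in preds[:8]])
    print("Actual:   ", [class_names[l] for l in labels.numpy()[:8]])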
6. Custom Augmentation Function
import random

# This is where you can experiment freely with your own augmentations
def aug_img(image):
    seed = (random.randint(0, 9), 0)
    # Randomly change the image contrast
    stateless_random_contrast = tf.image.stateless_random_contrast(image, lower=0.1, upper=1.0, seed=seed)
    return stateless_random_contrast
# The images were normalized to [0, 1] earlier, so scale back to [0, 255] for display
image = tf.expand_dims(images[3] * 255, 0)
print("Min and max pixel values:", image.numpy().min(), image.numpy().max())
Min and max pixel values: 14.000048 253.28577
plt.figure(figsize=(8, 8))
for i in range(9):
    augmented_image = aug_img(image)
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(augmented_image[0].numpy().astype("uint8"))
    plt.axis("off")
How do we apply the custom augmentation function to our data? Simply nest the augmentation function inside the preprocessing function:
def preprocess_image(image, label):
    # Normalize the image to [0, 1]
    normalized_image = image / 255.0
    # Apply the custom augmentation function
    augmented_image = aug_img(normalized_image)
    return (augmented_image, label)

def aug_img(image):
    seed = (random.randint(0, 9), 0)
    # Randomly change the image contrast
    stateless_random_contrast = tf.image.stateless_random_contrast(image, lower=0.1, upper=1.0, seed=seed)
    return stateless_random_contrast