TensorFlow implementation of image classification based on ResNet50

Task: classify images from a dataset of 5 food categories, with 1,000 images per class (5,000 images in total).

Summary: with the initial 8:2 train/validation split the model underfit somewhat, so the split was later changed to 9:1. The original ResNet50, ResNet101, and ResNet152 as well as the modified ResNet50 and ResNet101 were each tested, but the best validation accuracy only reached about 75%.

Performance of the modified ResNet101:

  • Training and validation accuracy over epochs (figure)

  • Training and validation loss over epochs (figure)

Environment: TensorFlow 2.1.0.

ResNet50 architecture:

Reference: ResNet-50 结构 - 简书 (jianshu.com)

ResNet has two basic blocks. The Identity Block has identical input and output dimensions, so several of them can be chained in a row. The Conv Block has different input and output dimensions, so it cannot be chained directly; its role is to change the dimension of the feature maps.


A CNN gradually transforms the input image into feature maps that are spatially small but very deep, usually with small uniform kernels (e.g. 3×3 in VGG). As the network gets deeper, the number of output channels grows (the learned features become more complex), so before entering a run of Identity Blocks it is necessary to use a Conv Block to change the dimension; after that, Identity Blocks can be stacked consecutively.


Conv Block: (figure)

Identity Block: (figure)

In the Conv Block, a Conv2D layer (1×1 filter size) is added on the shortcut path, so that after the main path changes the dimension, the transformed shortcut output still matches it, as sketched below.
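A minimal runnable sketch of the projection shortcut (my own illustration, not code from this post; shapes and names are arbitrary):

import tensorflow as tf
from tensorflow.keras import layers

x = tf.keras.Input(shape=(56, 56, 64))
# Main path (abbreviated to one conv): stride 2 halves the spatial size and
# the filter count raises the depth to 256.
y = layers.Conv2D(256, (1, 1), strides=(2, 2))(x)   # -> (28, 28, 256)
# Shortcut path: a 1x1 conv with the same stride and filter count projects the
# input to the same shape, so the two branches can be added element-wise.
s = layers.Conv2D(256, (1, 1), strides=(2, 2))(x)   # -> (28, 28, 256)
out = layers.Activation('relu')(layers.add([y, s]))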

The ResNetV1-50 flow is as follows. It does not use bottleneck blocks, and only ResNetV1 applies BN and ReLU after initial_conv:

block_sizes=[3, 4, 6, 3] gives the number of blocks in each of the 4 layers after stage1 (the first pool), corresponding to res2, res3, res4, res5;
    the first block of each layer does conv+BN on the shortcut, i.e. it is a Conv Block
inputs: (1, 720, 1280, 3)
initial_conv:
    conv2d_fixed_padding()  (see the sketch after this walkthrough)
    1. kernel_size=7, pad first: (1, 720, 1280, 3) -> (1, 726, 1286, 3)
    2. conv2d kernels=[7, 7, 3, 64], stride=2, VALID convolution. For the 7x7 kernel the padding is 3 per side, so the kernel center stays aligned with the top-left corner
       (1, 726, 1286, 3) -> (1, 360, 640, 64)
    3. BN, ReLU (only ResNetV1 applies BN and ReLU after the first conv)
initial_max_pool:
    k=3, s=2, padding='SAME', (1, 360, 640, 64) -> (1, 180, 320, 64)
All layers below use the building_block without bottleneck
block_layer1:
    (3 blocks, inter-layer stride=1 (the previous layer already pooled), 64 filters, no bottleneck (with bottleneck the filter count would be multiplied by 4))
    1. First block:
    a Conv Block has a projection_shortcut, and strides can be 1 or 2
    an Identity Block has no projection_shortcut, and strides can only be 1
        `inputs = block_fn(inputs, filters, training, projection_shortcut, strides, data_format)`
        the shortcut does a [1, 1, 64, 64], stride=1 conv and BN; shape unchanged
        it is then added to the input after two convolutions on the main branch, followed by one ReLU; note that the last conv in a block is followed by BN only, no ReLU
        input:    conv-bn-relu-conv-bn  added to shortcut, then ReLU
        shortcut: conv-bn
        shortcut: [1, 1, 64, 64], s=1, (1, 180, 320, 64) -> (1, 180, 320, 64)
        the input undergoes two [3, 3, 64, 64], s=1 convolutions, shape unchanged (1, 180, 320, 64) -> (1, 180, 320, 64) -> (1, 180, 320, 64)
        inputs += shortcut, then ReLU
    2. The remaining 2 blocks are identical:
        `inputs = block_fn(inputs, filters, training, None, 1, data_format)`
        the shortcut is added directly to the conv output of the input, without conv-bn
        the input undergoes two [3, 3, 64, 64], s=1 convolutions, shape unchanged (1, 180, 320, 64) -> (1, 180, 320, 64) -> (1, 180, 320, 64)
        inputs += shortcut, then ReLU
block_layer2/3/4 are like block_layer1; only the number of identity blocks, the filter count, and the inter-layer stride differ, and still only the first conv block's shortcut does conv-bn
block_layer2: 4 blocks, 128 filters, inter-layer stride=2 (because there is no pool after the previous layer)
    1. First block:
        the shortcut does a kernel=[1, 1, 64, 128], s=2 conv and BN, (1, 180, 320, 64) -> (1, 90, 160, 128)
        the main branch first does a kernel=[3, 3, 64, 128], s=2 convolution, padding='VALID', (1, 180, 320, 64) -> (1, 90, 160, 128)
                then a kernel=[3, 3, 128, 128], s=1 convolution, padding='SAME', (1, 90, 160, 128) -> (1, 90, 160, 128)
    2. The remaining 3 blocks are identical:
        the shortcut is added unchanged to the result, followed by ReLU
        the main branch does two [3, 3, 128, 128], s=1 convolutions, padding='SAME', (1, 90, 160, 128) -> (1, 90, 160, 128) -> (1, 90, 160, 128)
block_layer3: 6 blocks, 256 filters, inter-layer stride=2
    1. First block:
        the shortcut does a kernel=[1, 1, 128, 256], s=2 conv and BN, (1, 90, 160, 128) -> (1, 45, 80, 256)
        the main branch first does a kernel=[3, 3, 128, 256], s=2 convolution, padding='VALID', (1, 90, 160, 128) -> (1, 45, 80, 256)
                then a kernel=[3, 3, 256, 256], s=1 convolution, padding='SAME', (1, 45, 80, 256) -> (1, 45, 80, 256)
    2. The remaining 5 blocks are identical:
        the shortcut is added unchanged to the result, followed by ReLU
        the main branch does two [3, 3, 256, 256], s=1 convolutions, padding='SAME', (1, 45, 80, 256) -> (1, 45, 80, 256) -> (1, 45, 80, 256)
block_layer4: 3 blocks, 512 filters, inter-layer stride=2
    1. First block:
        the shortcut does a kernel=[1, 1, 256, 512], s=2 conv and BN, (1, 45, 80, 256) -> (1, 23, 40, 512)
        the main branch first does a kernel=[3, 3, 256, 512], s=2 convolution, padding='VALID', (1, 45, 80, 256) -> (1, 23, 40, 512)
                then a kernel=[3, 3, 512, 512], s=1 convolution, padding='SAME', (1, 23, 40, 512) -> (1, 23, 40, 512)
    2. The remaining 2 blocks are identical:
        the shortcut is added unchanged to the result, followed by ReLU
        the main branch does two [3, 3, 512, 512], s=1 convolutions, padding='SAME', (1, 23, 40, 512) -> (1, 23, 40, 512)
avg_pool, 7x7
FC, output 1000
softmax
output prediction
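A minimal sketch of the conv2d_fixed_padding referenced above, reconstructed from this description (assuming NHWC tensors; it mirrors the official TensorFlow ResNet reference model, not code from this post):

import tensorflow as tf

def conv2d_fixed_padding(inputs, filters, kernel_size, strides):
    # For strides > 1, pad explicitly so the output size depends only on the
    # input size and stride (not on the SAME-padding heuristic), then use VALID.
    if strides > 1:
        pad_total = kernel_size - 1           # e.g. 6 for a 7x7 kernel
        pad_beg = pad_total // 2              # 3 on the leading edge
        pad_end = pad_total - pad_beg         # 3 on the trailing edge
        inputs = tf.pad(inputs, [[0, 0], [pad_beg, pad_end],
                                 [pad_beg, pad_end], [0, 0]])
    return tf.keras.layers.Conv2D(
        filters, kernel_size, strides=strides,
        padding=('SAME' if strides == 1 else 'VALID'), use_bias=False)(inputs)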

In the Keras version, res2a denotes the first block of stage 2; branch1 is the shortcut path, branch2 is the main path, and branch2a is the first convolution on the main path.
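To make the naming concrete (this mirrors the name-building code in conv_block/identity_block below):

# Naming scheme illustration (stage 2, block 'a'):
conv_name_base = 'res' + str(2) + 'a' + '_branch'
print(conv_name_base + '2a')  # res2a_branch2a: first conv on the main path
print(conv_name_base + '1')   # res2a_branch1:  conv on the shortcut path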

Experiments:

1. Original ResNet50 + optimizer ① (RMSprop)

model.compile(
    optimizer='rmsprop',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    metrics=['accuracy']
)

train_count = len(train_path)
test_count = len(test_path)

steps_per_epoch = train_count // BATCH_SIZE
validation_steps = test_count // BATCH_SIZE

history = model.fit(
    train_datasets,
    steps_per_epoch=steps_per_epoch,
    epochs=120,
    #verbose=1,
    validation_data=test_datasets,
    validation_steps=validation_steps
)

Result:

loss: 0.0414 - accuracy: 0.9880 - val_loss: 3.8412 - val_accuracy: 0.5560

The large gap between training and validation accuracy indicates severe overfitting.

2. Original ResNet50 + optimizer ② (Adam)

model.compile(optimizer=tf.keras.optimizers.Adam(0.0001),
             loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
             metrics=['acc']
             )

history = model.fit(train_datasets,
                    epochs = 150,
                    steps_per_epoch = steps_per_epoch,
                    validation_data = test_datasets,
                    validation_steps = validation_steps
                    )

Result:

loss: 0.0322 - acc: 0.9896 - val_loss: 2.4060 - val_acc: 0.5938

3. Modified ResNet50/101

  • Modifications:

Dropout(rate=0.2) was added after each convolution and EarlyStopping was used during training to mitigate the overfitting; momentum=0.8 was added to the SGD optimizer, which sped up training somewhat.

  • Full code:
import tensorflow as tf
import glob
import numpy as np
import matplotlib.pyplot as plt

from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Dense,Conv2D,MaxPooling2D,ZeroPadding2D,AveragePooling2D
from tensorflow.keras.layers import Activation,BatchNormalization,Flatten
from tensorflow.keras.models import Model

from tensorflow.keras.preprocessing import image
import tensorflow.keras.backend as K
#from tensorflow.keras.utils.data_utils import get_file
from tensorflow.keras.applications.imagenet_utils import decode_predictions
from tensorflow.keras.applications.imagenet_utils import preprocess_input

from tensorflow.keras import layers
!unzip 'Food/foods.zip'
glob.glob('*/*')
img_path = glob.glob('foods/train/*/*.jpg')

labels = [img.split('/')[2] for img in img_path]
label = np.unique(labels)

label_to_index = dict((item, index) for (index, item) in enumerate(label))
index_to_label = dict((index, item) for (item, index) in label_to_index.items())

index_labels = [label_to_index.get(name) for name in labels] # map class names to integer indices
index_labels[:3] # index_labels is the list of integer labels
random_index = np.random.permutation(len(img_path))

train_img_path = np.array(img_path)[random_index]
train_img_label = np.array(index_labels)[random_index]

divider = int(len(img_path) * 0.9)

train_path = train_img_path[:divider]
train_label = train_img_label[:divider]
test_path = train_img_path[divider:]
test_label = train_img_label[divider:]
# Build tf.data datasets
train_datasets = tf.data.Dataset.from_tensor_slices((train_path, train_label))
test_datasets = tf.data.Dataset.from_tensor_slices((test_path, test_label))
def load_img(img_path, img_label):
    image = tf.io.read_file(img_path)
    image = tf.image.decode_jpeg(image, channels = 3) # decode the JPEG with 3 channels
    image = tf.image.resize(image, [224, 224])
    image = tf.cast(image, tf.float32)
    image = image / 255 # normalize to [0, 1]
    return image, img_label
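# Note: preprocess_input imported above is never called; inputs are normalized
# to [0, 1] by the /255 division in load_img instead.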

AUTOTUNE = tf.data.experimental.AUTOTUNE # let tf.data tune parallelism
train_datasets = train_datasets.map(load_img, num_parallel_calls=AUTOTUNE)
test_datasets = test_datasets.map(load_img, num_parallel_calls=AUTOTUNE)

BATCH_SIZE = 32
train_datasets = train_datasets.repeat().shuffle(300).batch(BATCH_SIZE)
# <BatchDataset shapes: ((None, 224, 224, 3), (None,)), types: (tf.float32, tf.int64)>

test_datasets = test_datasets.batch(BATCH_SIZE)
# <BatchDataset shapes: ((None, 224, 224, 3), (None,)), types: (tf.float32, tf.int64)>
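# Note (not in the original code): adding prefetch lets input preprocessing
# overlap with training and usually improves throughput:
# train_datasets = train_datasets.prefetch(buffer_size=AUTOTUNE)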
from tensorflow.keras.layers import Dropout

def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):

    # e.g. filters = [64, 64, 256]
    filters1, filters2, filters3 = filters

    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # reduce dimensions (1x1 conv)
    x = Conv2D(filters1, (1, 1), strides=strides,
               name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(name=bn_name_base + '2a')(x)
    x = Dropout(0.2)(x) # dropout
    x = Activation('relu')(x)

    # 3x3 convolution
    x = Conv2D(filters2, kernel_size, padding='same',
               name=conv_name_base + '2b')(x)
    x = BatchNormalization(name=bn_name_base + '2b')(x)
    
    # dropout
    x = Dropout(0.2)(x)
    
    x = Activation('relu')(x)

    # increase dimensions back (1x1 conv)
    x = Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = BatchNormalization(name=bn_name_base + '2c')(x)
    x = Dropout(0.2)(x) # dropout

    # shortcut branch
    shortcut = Conv2D(filters3, (1, 1), strides=strides, name=conv_name_base + '1')(input_tensor) # project input_tensor to the matching shape (h x w x c)
    shortcut = BatchNormalization(name=bn_name_base + '1')(shortcut)


    x = layers.add([x, shortcut])
    x = Activation('relu')(x)
    return x
def identity_block(input_tensor, kernel_size, filters, stage, block):

    # e.g. filters = [512, 512, 1024]
    filters1, filters2, filters3 = filters

    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # reduce dimensions (1x1 conv)
    x = Conv2D(filters1, (1, 1), name=conv_name_base + '2a')(input_tensor)
    x = BatchNormalization(name=bn_name_base + '2a')(x)
    x = Dropout(0.2)(x) # dropout
    x = Activation('relu')(x)
    
    
    # 3x3 convolution
    x = Conv2D(filters2, kernel_size, padding='same', name=conv_name_base + '2b')(x)
    x = BatchNormalization(name=bn_name_base + '2b')(x)
    x = Dropout(0.2)(x) # dropout (moved after BN for consistency with conv_block)
    x = Activation('relu')(x)
    
    
    # increase dimensions back (1x1 conv)
    x = Conv2D(filters3, (1, 1), name=conv_name_base + '2c')(x)
    x = BatchNormalization(name=bn_name_base + '2c')(x)
    x = Dropout(0.2)(x) # dropout

    x = layers.add([x, input_tensor])
    x = Activation('relu')(x)
    return x
# resnet18: ResNet(BasicBlock, [2, 2, 2, 2])

# resnet34: ResNet(BasicBlock, [3, 4, 6, 3])

# resnet50:ResNet(Bottleneck, [3, 4, 6, 3])

# resnet101:ResNet(Bottleneck, [3, 4, 23, 3])

# resnet152:ResNet(Bottleneck, [3, 8, 36, 3])
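# Sketch (not in the original code): the long runs of identity_block calls in
# the builders below can be generated from the block counts above, e.g.:
def make_stage(x, filters, stage, num_blocks, strides=(2, 2)):
    # the first block is a conv_block (projection shortcut), the rest are identity_blocks
    x = conv_block(x, 3, filters, stage=stage, block='a', strides=strides)
    for i in range(1, num_blocks):
        x = identity_block(x, 3, filters, stage=stage, block=chr(ord('a') + i))
    return x
# e.g. stage 4 of ResNet101: x = make_stage(x, [256, 256, 1024], stage=4, num_blocks=23)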

def ResNet50(input_shape=[224,224,3],classes=5):
    # [224,224,3]
    img_input = tf.keras.layers.Input(shape=input_shape)
    x = ZeroPadding2D((3, 3))(img_input)   # [230,230,3]
    # [112,112,64]
    x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(x)   #[112,112,64]
    x = BatchNormalization(name='bn_conv1')(x)
    x = Activation('relu')(x)

    # [56,56,64]
    x = MaxPooling2D((3, 3), strides=(2, 2))(x)

    # [56,56,256]
    x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')

    # [28,28,512]
    x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='b')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='c')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='d')

    # [14,14,1024]
    x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='c')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='d')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='e')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f')
   

    # [7,7,2048]
    x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')

    # global average pooling instead of a fully connected layer
    x = AveragePooling2D((7, 7), name='avg_pool')(x)

    # classification head
    x = Flatten()(x)
    x = Dense(classes, activation='softmax', name='fc5')(x)

    model = Model(img_input, x, name='resnet50')

    return model
def ResNet101(input_shape=[224,224,3],classes=5):
    # [224,224,3]
    img_input = tf.keras.layers.Input(shape=input_shape)
    x = ZeroPadding2D((3, 3))(img_input)   # [230,230,3]
    # [112,112,64]
    x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(x)   #[112,112,64]
    x = BatchNormalization(name='bn_conv1')(x)
    x = Activation('relu')(x)

    # [56,56,64]
    x = MaxPooling2D((3, 3), strides=(2, 2))(x)

    # [56,56,256]
    x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
    x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')

    # [28,28,512]
    x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='b')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='c')
    x = identity_block(x, 3, [128, 128, 512], stage=3, block='d')

    # [14,14,1024]
    x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='c')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='d')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='e')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='g')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='h')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='i')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='j')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='k')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='l')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='m')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='n')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='o')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='p')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='q')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='r')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='s')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='t')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='u')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='v')
    x = identity_block(x, 3, [256, 256, 1024], stage=4, block='w')

    # [7,7,2048]
    x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
    x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')

    # global average pooling instead of a fully connected layer
    x = AveragePooling2D((7, 7), name='avg_pool')(x)

    # classification head
    x = Flatten()(x)
    x = Dense(classes, activation='softmax', name='fc5')(x)

    model = Model(img_input, x, name='resnet101')

    return model
# model = ResNet50(input_shape=[224,224,3],classes=5)
# model = ResNet50()
model = ResNet101()
# model.summary()
sgd = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.8, decay=0.0, nesterov=False)

model.compile(optimizer=sgd,
             #loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
             loss = 'sparse_categorical_crossentropy', # cross-entropy loss
             metrics=['accuracy']
             )

train_count = len(train_path)
test_count = len(test_path)

steps_per_epoch = train_count // BATCH_SIZE
validation_steps = test_count // BATCH_SIZE
from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(
    monitor='val_loss',      # the Keras default, made explicit here
    min_delta=0.0008,        # minimum amount of change to count as an improvement
    patience=40,             # how many epochs to wait before stopping
    restore_best_weights=True,
)
# start training
history = model.fit(train_datasets,
                    epochs = 120,
                    steps_per_epoch = steps_per_epoch,
                    validation_data = test_datasets,
                    validation_steps = validation_steps,
                    callbacks=[early_stopping], # put your callbacks in a list
                    )
# model.save('myModel.h5')

history.history.keys()
# dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
# Plot accuracy curves
plt.figure()
plt.plot(history.epoch, history.history.get('accuracy'), label='accuracy')
plt.plot(history.epoch, history.history.get('val_accuracy'), label='val_accuracy')
plt.legend()
# Plot loss curves (in a separate figure)
plt.figure()
plt.plot(history.epoch, history.history.get('loss'), label='loss')
plt.plot(history.epoch, history.history.get('val_loss'), label='val_loss')
plt.legend()
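Not in the original post: a quick sanity check that maps one batch of validation predictions back to class names (index_to_label was built earlier):

# Sketch: inspect predictions on one validation batch
for imgs, lbls in test_datasets.take(1):
    probs = model.predict(imgs)                   # (batch, 5) class probabilities
    preds = np.argmax(probs, axis=1)
    print('predicted:', [index_to_label[int(i)] for i in preds[:5]])
    print('actual:   ', [index_to_label[int(l)] for l in lbls.numpy()[:5]])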

Result:

loss: 0.0115 - accuracy: 0.9960 - val_loss: 1.8830 - val_accuracy: 0.7542
