Deep Learning Week 22: Implementing the DenseNet Algorithm with TensorFlow

Table of Contents
Deep Learning Week 22: Implementing the DenseNet Algorithm with TensorFlow
1. Preface
2. My Environment
3. Understanding the DenseNet Algorithm
4. Code Reproduction
4.1 Configuring the Dataset
4.2 Building the Model
5. Model Application and Evaluation
5.1 Training Configuration
5.2 Start Training
5.3 Visualizing the Results

1. Preface

Because the advanced track is quite demanding, one week is not enough to study and digest a topic in depth, so each article now spans two weeks of work. Last week we examined the differences between DenseNet and ResNet and studied the PyTorch implementation; this week we complete the extension task and implement the network in TensorFlow.

2. My Environment

  • OS: Windows 10
  • Language: Python 3.8.0
  • IDE: PyCharm 2023.2.3
  • Deep learning framework: TensorFlow 2.4.2
  • GPU: RTX 3060, 8 GB VRAM

3. Understanding the DenseNet Algorithm

By reading K同学啊's code and the original DenseNet paper, I learned that, compared with the architectures studied before, DenseNet proposes a more aggressive connectivity pattern: all layers are connected to one another. Concretely, every layer receives the feature maps of all preceding layers as additional input.
Figure 1 shows standard feed-forward propagation in a plain network, Figure 2 shows ResNet's shortcut (residual) connection mechanism, and, for comparison, Figure 3 shows DenseNet's dense connection mechanism. In ResNet, each layer is short-circuited to a layer a few steps earlier (typically 2-4 layers back), and the two are combined by element-wise addition. In DenseNet, each layer is connected to all preceding layers by concatenation along the channel dimension (concat, i.e. channel stacking rather than element-wise addition), and the result serves as the input to the next layer.
[Figure 1: standard neural network propagation]
[Figure 2: ResNet's shortcut connection mechanism, where + denotes element-wise addition]
[Figure 3: DenseNet's dense connection mechanism, where c denotes channel-wise concatenation]
Informally, imagine a team at work, where each member (a network layer) has a task (feature extraction).

  • In a traditional network, member A finishes the task and hands the result to member B, B hands it to C, and so on.
  • In ResNet, member A not only hands the result to B but also keeps a fast lane open to D (skipping B and C). If B and C do a poor job, D can still use A's result directly, so the information is not lost.
  • In DenseNet, member A hands the result not only to B but simultaneously to C, D, E and every later member, so each member can directly use the work of all predecessors (a minimal tensor-shape sketch follows this list).
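
To make the add-versus-concat distinction concrete, here is a minimal TensorFlow sketch (the shapes are chosen purely for illustration):

import tensorflow as tf

# two feature maps of identical shape (batch, height, width, channels)
a = tf.random.normal((1, 56, 56, 64))
b = tf.random.normal((1, 56, 56, 64))

# ResNet-style shortcut: element-wise addition, channel count unchanged
print((a + b).shape)                     # (1, 56, 56, 64)

# DenseNet-style connection: concatenation along the channel axis
print(tf.concat([a, b], axis=-1).shape)  # (1, 56, 56, 128)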

4. Code Reproduction

4.1 Configuring the Dataset

import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets
import os, PIL, pathlib, warnings

warnings.filterwarnings("ignore")             # suppress warnings
data_dir = "/home/mw/input/data7619//bird_photos/bird_photos"

data_dir = pathlib.Path(data_dir)

data_paths  = list(data_dir.glob('*'))
classeNames = [path.name for path in data_paths]   # keep only the folder (class) name, not the full path
print(classeNames)
['Bananaquit', 'Black Throated Bushtiti', 'Cockatoo', 'Black Skimmer']
image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Total number of images: 565
Data augmentation (note: this block reuses last week's torchvision transforms for reference; the tf.keras input pipeline actually used for training this week is built below)
import torchvision.transforms as transforms
from torchvision import datasets

# Apply the augmentation techniques learned in earlier weeks
train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),      # resize every input image to a uniform size
    transforms.RandomHorizontalFlip(),  # random horizontal flip
    transforms.RandomVerticalFlip(),    # random vertical flip
    transforms.RandomRotation(15),      # random rotation in the range [-15, 15] degrees
    transforms.ToTensor(),              # convert PIL Image / numpy.ndarray to a tensor scaled to [0, 1]
])

test_transform = transforms.Compose([
    transforms.Resize([224, 224]),      # resize every input image to a uniform size
    transforms.ToTensor(),              # convert PIL Image / numpy.ndarray to a tensor scaled to [0, 1]
    transforms.Normalize(               # standardize to roughly zero mean and unit variance so the model converges more easily
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225])      # mean and std estimated from a random sample of the dataset
])

total_data = datasets.ImageFolder("/home/mw/input/data7619//bird_photos/bird_photos", transform=train_transforms)
print(total_data.class_to_idx)
{'Bananaquit': 0, 'Black Skimmer': 1, 'Black Throated Bushtiti': 2, 'Cockatoo': 3}
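
Since the training pipeline this week is tf.keras, a rough TensorFlow counterpart of the torchvision augmentations above could look like the following sketch (in TF 2.4 these preprocessing layers live under layers.experimental.preprocessing; this block is illustrative and is not used below):

import tensorflow as tf
from tensorflow.keras import layers

data_augmentation = tf.keras.Sequential([
    layers.experimental.preprocessing.Resizing(224, 224),                     # resize to a uniform size
    layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),  # random flips
    layers.experimental.preprocessing.RandomRotation(15 / 360),               # factor is a fraction of a full turn, so ~±15 degrees
    layers.experimental.preprocessing.Rescaling(1.0 / 255),                   # map pixel values to [0, 1]
])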
Split into training and validation sets
import tensorflow as tf

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,  # hold out 20% of the images
    subset="training",     # which split to return
    seed=123,
    image_size=(224, 224),
    batch_size=16)
 
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,  # hold out 20% of the images
    subset="validation",   # which split to return
    seed=123,
    image_size=(224, 224),
    batch_size=16)
Found 565 files belonging to 4 classes.
Using 452 files for training.
Found 565 files belonging to 4 classes.
Using 113 files for validation.
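
As a quick check, the class names detected from the directory structure can be read off the dataset object (class_names is a standard attribute of datasets created by image_dataset_from_directory):

class_names = train_ds.class_names
print(class_names)
# ['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']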

4.2 Building the Model

First we implement the BottleNeck layer, which enables efficient feature propagation and reuse:

from tensorflow import keras
from tensorflow.keras import layers

class BottleNeck(keras.Model):
    def __init__(self, growth_rate, bn_size=4, dropout=0.3):
        super().__init__()
        # pre-activation design: BN -> ReLU -> 1x1 conv -> BN -> ReLU -> 3x3 conv
        self.bn1 = layers.BatchNormalization()
        self.relu = layers.Activation("relu")
        self.conv1 = layers.Conv2D(filters=bn_size * growth_rate, kernel_size=(1, 1),
                                   strides=1, padding='same')
        self.bn2 = layers.BatchNormalization()
        self.conv2 = layers.Conv2D(filters=growth_rate, kernel_size=(3, 3),
                                   strides=1, padding='same')
        self.dropout = layers.Dropout(rate=dropout)
 
        self.listLayers = [
            self.bn1,
            self.relu,
            self.conv1,
            self.bn2,
            self.relu,
            self.conv2,
            self.dropout
        ]
 
    def call(self, x):
        tem = x
        for layer in self.listLayers:
            x = layer(x)
        # dense connection: concatenate the new features with the input along the channel axis
        return layers.concatenate([tem, x], axis=-1)
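
A quick shape check (a sketch): a BottleNeck should append exactly growth_rate new channels to its input:

block = BottleNeck(growth_rate=32)
y = block(tf.random.normal((1, 56, 56, 64)))
print(y.shape)   # (1, 56, 56, 96): 64 input channels + 32 new ones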

Building on this, we implement the DenseBlock module, which stacks several BottleNeck layers with dense internal connections, so the number of input channels grows linearly with depth (a channel-count walkthrough follows the code):

class DenseBlock(tf.keras.Model):
    def __init__(self, num_layer, growth_rate, bn_size=4, dropout=0.3, efficient=False):
        super().__init__()
        self.efficient = efficient
        self.listLayers = []
        for _ in range(num_layer):
            self.listLayers.append(BottleNeck(growth_rate, bn_size=bn_size, dropout=dropout))
 
    def call(self, x):
        for layer in self.listLayers:
            if self.efficient:
                # memory-efficient mode: recompute the layer's activations during backprop
                x = tf.recompute_grad(layer)(x)
            else:
                x = layer(x)
        return x
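
To see the linear growth concretely, here is the channel bookkeeping for the DenseNet-121 configuration used later (growth rate k = 32, compression 0.5); the numbers it prints match the layer-by-layer shape check further below:

channels = 64                               # channels after the stem convolution
block_config = (6, 12, 24, 16)
for i, num_layer in enumerate(block_config):
    channels += 32 * num_layer              # each BottleNeck adds k = 32 channels
    print(f'dense block {i}: {channels} channels')
    if i < len(block_config) - 1:           # no transition after the last block
        channels = int(0.5 * channels)      # each transition halves the channel count
        print(f'transition {i}: {channels} channels')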

Then we implement the Transition layer, which mainly consists of a 1x1 convolution and an average-pooling layer:

class Transition(tf.keras.Model):
    def __init__(self, out_channels):   # the compressed channel count, passed in by DenseNet
        super().__init__()
        self.bn1 = layers.BatchNormalization()
        self.relu = layers.Activation('relu')
        self.conv1 = layers.Conv2D(filters=out_channels, kernel_size=(1, 1),
                                   strides=1, activation='relu', padding='same')
        self.pooling = layers.AveragePooling2D(pool_size=(2, 2), strides=2, padding='same')
 
        self.listLayers = [
            self.bn1,
            self.relu,
            self.conv1,
            self.pooling
        ]
 
    def call(self, x):
        for layer in self.listLayers:
            x = layer(x)
        return x
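
Another quick check (a sketch): a Transition should halve both the channel count (via the 1x1 convolution) and the spatial resolution (via the pooling):

t = Transition(128)
y = t(tf.random.normal((1, 56, 56, 256)))
print(y.shape)   # (1, 28, 28, 128)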

Next we implement the full DenseNet network:

class DenseNet(tf.keras.Model):
    def __init__(self, num_init_feature, growth_rate, block_config, num_classes,
                 bn_size=4, dropout=0.3, compression_rate=0.5, efficient=False):
        super().__init__()
        self.num_channels = num_init_feature
        # stem: 7x7 conv -> BN -> ReLU -> 3x3 max pooling
        self.conv = layers.Conv2D(filters=num_init_feature, kernel_size=7,
                                  strides=2, padding='same')
        self.bn = layers.BatchNormalization()
        self.relu = layers.Activation('relu')
        self.max_pool = layers.MaxPool2D(pool_size=3, strides=2, padding='same')
 
        # dense blocks, each followed by a transition except the last one
        self.dense_block_layers = []
        for i in block_config[:-1]:
            self.dense_block_layers.append(DenseBlock(num_layer=i, growth_rate=growth_rate,
                                                      bn_size=bn_size, dropout=dropout, efficient=efficient))
            self.num_channels = int(compression_rate * (self.num_channels + growth_rate * i))
            self.dense_block_layers.append(Transition(self.num_channels))
 
        self.dense_block_layers.append(DenseBlock(num_layer=block_config[-1], growth_rate=growth_rate,
                                                  bn_size=bn_size, dropout=dropout, efficient=efficient))
 
        self.avgpool = layers.GlobalAveragePooling2D()
        self.fc = tf.keras.layers.Dense(units=num_classes, activation=tf.keras.activations.softmax)
 
    def call(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        x = self.max_pool(x)
 
        for layer in self.dense_block_layers:
            x = layer(x)
 
        x = self.avgpool(x)
        return self.fc(x)

Finally we implement DenseNet-121. The pretrained-weight branch below is carried over from torchvision's PyTorch implementation (it remaps PyTorch state-dict keys and requires re, model_zoo and model_urls, so it does not apply to this Keras model and is kept only for reference):

def densenet121(pretrained=False, **kwargs):
    """DenseNet121"""
    model = DenseNet(num_init_feature=64, growth_rate=32, block_config=(6, 12, 24, 16),
                     **kwargs)

    if pretrained:
        # PyTorch-only code path (from torchvision):
        # '.'s are no longer allowed in module names, but the previous _DenseLayer
        # had keys 'norm.1', 'relu.1', 'conv.1', 'norm.2', 'relu.2', 'conv.2',
        # which also appear in the checkpoints in model_urls. This pattern is
        # used to find and rename such keys.
        pattern = re.compile(
            r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
        state_dict = model_zoo.load_url(model_urls['densenet121'])
        for key in list(state_dict.keys()):
            res = pattern.match(key)
            if res:
                new_key = res.group(1) + res.group(2)
                state_dict[new_key] = state_dict[key]
                del state_dict[key]
        model.load_state_dict(state_dict)
    return model
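
For the TensorFlow model itself, weights would instead be saved and restored with the standard Keras checkpoint API (a sketch; the file name is hypothetical):

# after training:
model.save_weights('densenet121_tf.ckpt')
# later, on a freshly built model with the same architecture:
model.load_weights('densenet121_tf.ckpt')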

model = DenseNet(num_init_feature=64,
                 growth_rate=32,
                 block_config=[6, 12, 24, 16],
                 compression_rate=0.5,
                 num_classes=4,
                 dropout=0.0,
                 efficient=True)
 
x = tf.random.normal((1, 224, 224, 3))
for layer in model.layers:
    x = layer(x)
    print(layer.name, 'output shape:\t', x.shape)

conv2d output shape:	 (1, 112, 112, 64)
batch_normalization output shape:	 (1, 112, 112, 64)
activation output shape:	 (1, 112, 112, 64)
max_pooling2d output shape:	 (1, 56, 56, 64)
dense_block output shape:	 (1, 56, 56, 256)
transition output shape:	 (1, 28, 28, 128)
dense_block_1 output shape:	 (1, 28, 28, 512)
transition_1 output shape:	 (1, 14, 14, 256)
dense_block_2 output shape:	 (1, 14, 14, 1024)
transition_2 output shape:	 (1, 7, 7, 512)
dense_block_3 output shape:	 (1, 7, 7, 1024)
global_average_pooling2d output shape:	 (1, 1024)
dense output shape:	 (1, 4)

5. Model Application and Evaluation

5.1 Training Configuration

# Configure the input pipeline and the optimizer
AUTOTUNE = tf.data.AUTOTUNE
 
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
opt = tf.keras.optimizers.Adam(learning_rate=1e-4, decay=0.01)
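
A note on the decay argument: it is the legacy per-step decay, lr = lr0 / (1 + decay * step), and it is deprecated in newer TensorFlow releases. An equivalent setup using a learning-rate schedule (a sketch, assuming TF 2.x) would be:

lr_schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=1e-4,  # same starting learning rate as above
    decay_steps=1,               # apply the decay every optimizer step
    decay_rate=0.01)             # matches decay=0.01
opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule)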

5.2 Start Training

model.compile(optimizer=opt,
              # the final Dense layer already applies softmax, so the loss
              # must be computed from probabilities rather than logits
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
epochs = 30
history = model.fit(train_ds, validation_data=val_ds, epochs=epochs)
print('Done')

Epoch 1/30
29/29 [==============================] - 26s 379ms/step - loss: 1.0768 - accuracy: 0.5479 - val_loss: 5.5336 - val_accuracy: 0.2655
Epoch 2/30
29/29 [==============================] - 6s 195ms/step - loss: 0.4562 - accuracy: 0.8296 - val_loss: 2.3944 - val_accuracy: 0.2655
Epoch 3/30
29/29 [==============================] - 6s 192ms/step - loss: 0.2295 - accuracy: 0.9385 - val_loss: 1.4455 - val_accuracy: 0.3717
Epoch 4/30
29/29 [==============================] - 6s 193ms/step - loss: 0.1099 - accuracy: 0.9784 - val_loss: 1.3085 - val_accuracy: 0.3894
Epoch 5/30
29/29 [==============================] - 6s 193ms/step - loss: 0.0589 - accuracy: 0.9961 - val_loss: 1.2878 - val_accuracy: 0.4602
Epoch 6/30
29/29 [==============================] - 6s 193ms/step - loss: 0.0256 - accuracy: 1.0000 - val_loss: 1.2492 - val_accuracy: 0.4867
Epoch 7/30
29/29 [==============================] - 6s 194ms/step - loss: 0.0188 - accuracy: 1.0000 - val_loss: 1.2144 - val_accuracy: 0.5221
Epoch 8/30
29/29 [==============================] - 6s 195ms/step - loss: 0.0123 - accuracy: 1.0000 - val_loss: 1.1655 - val_accuracy: 0.5310
Epoch 9/30
29/29 [==============================] - 6s 195ms/step - loss: 0.0104 - accuracy: 1.0000 - val_loss: 1.1124 - val_accuracy: 0.5664
Epoch 10/30
29/29 [==============================] - 6s 195ms/step - loss: 0.0078 - accuracy: 1.0000 - val_loss: 1.0159 - val_accuracy: 0.5929
Epoch 11/30
29/29 [==============================] - 6s 196ms/step - loss: 0.0063 - accuracy: 1.0000 - val_loss: 0.9289 - val_accuracy: 0.6283
Epoch 12/30
29/29 [==============================] - 6s 195ms/step - loss: 0.0057 - accuracy: 1.0000 - val_loss: 0.8265 - val_accuracy: 0.6372
Epoch 13/30
29/29 [==============================] - 6s 196ms/step - loss: 0.0047 - accuracy: 1.0000 - val_loss: 0.7328 - val_accuracy: 0.6991
Epoch 14/30
29/29 [==============================] - 6s 196ms/step - loss: 0.0040 - accuracy: 1.0000 - val_loss: 0.6665 - val_accuracy: 0.7257
Epoch 15/30
29/29 [==============================] - 6s 196ms/step - loss: 0.0043 - accuracy: 1.0000 - val_loss: 0.6038 - val_accuracy: 0.7876
Epoch 16/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0039 - accuracy: 1.0000 - val_loss: 0.5667 - val_accuracy: 0.7965
Epoch 17/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0032 - accuracy: 1.0000 - val_loss: 0.5295 - val_accuracy: 0.8142
Epoch 18/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0032 - accuracy: 1.0000 - val_loss: 0.5142 - val_accuracy: 0.8319
Epoch 19/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0028 - accuracy: 1.0000 - val_loss: 0.4950 - val_accuracy: 0.8319
Epoch 20/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0026 - accuracy: 1.0000 - val_loss: 0.4780 - val_accuracy: 0.8407
Epoch 21/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0028 - accuracy: 1.0000 - val_loss: 0.4832 - val_accuracy: 0.8407
Epoch 22/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 0.4737 - val_accuracy: 0.8496
Epoch 23/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 0.4688 - val_accuracy: 0.8584
Epoch 24/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.4646 - val_accuracy: 0.8584
Epoch 25/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.4663 - val_accuracy: 0.8584
Epoch 26/30
29/29 [==============================] - 6s 196ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 0.4673 - val_accuracy: 0.8584
Epoch 27/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 0.4666 - val_accuracy: 0.8584
Epoch 28/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 0.4704 - val_accuracy: 0.8584
Epoch 29/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0017 - accuracy: 1.0000 - val_loss: 0.4723 - val_accuracy: 0.8584
Epoch 30/30
29/29 [==============================] - 6s 197ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 0.4663 - val_accuracy: 0.8584
Done

5.3 Visualizing the Results

import matplotlib.pyplot as plt
# suppress warnings
warnings.filterwarnings("ignore")
plt.rcParams['font.sans-serif']    = ['SimHei'] # render CJK labels correctly
plt.rcParams['axes.unicode_minus'] = False      # render minus signs correctly
plt.rcParams['figure.dpi']         = 100        # figure resolution

# Retrieve the training history
train_acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
train_loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

# Plot accuracy and loss curves side by side
plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training and validation accuracy (left) and loss (right) curves]

The curves show that the model overfits: training accuracy reaches 100% within a few epochs, while validation accuracy plateaus around 86%. This is partly a limit of my current skill; I hope to strengthen my theoretical grounding and resolve it in future weeks.
