MindSpore 25-Day Learning Camp, Day 19 | Garbage Classification Based on MobileNetV2

        This article describes how to use MobileNetV2 for garbage classification: local image data is read as input, the garbage objects in the images are detected, and the detection result images are saved to file.

Experiment Objectives

  1. Become familiar with writing the garbage classification application code (in Python).
  2. Understand the basic use of the Linux operating system.
  3. Master the basic use of the atc command for model conversion.

Introduction to the MobileNetV2 Model

        MobileNet is a lightweight CNN proposed by a Google team in 2017 for mobile, embedded, and IoT devices. Compared with traditional convolutional neural networks, MobileNet uses depthwise separable convolutions to greatly reduce the number of parameters and the amount of computation, at the cost of only a small drop in accuracy. MobileNetV2 builds on this by introducing the inverted residual block and linear bottlenecks, which further optimize the structure so that less information is lost when handling low-dimensional features, making the model smaller and more efficient.
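
        To make the savings from depthwise separable convolution concrete, the following is a minimal back-of-the-envelope sketch (illustrative only, not from the course material; the layer sizes are arbitrary assumptions) comparing the multiply-accumulate cost of a standard convolution with that of a depthwise separable one:

def conv_cost(k, c_in, c_out, h, w):
    """Multiply-accumulates of a standard k x k convolution producing an h x w feature map."""
    return k * k * c_in * c_out * h * w

def dw_separable_cost(k, c_in, c_out, h, w):
    """Depthwise k x k convolution followed by a pointwise 1 x 1 convolution."""
    return k * k * c_in * h * w + c_in * c_out * h * w

# Arbitrary example layer: 3x3 kernel, 32 -> 64 channels, 112x112 output feature map
std = conv_cost(3, 32, 64, 112, 112)
sep = dw_separable_cost(3, 32, 64, 112, 112)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {sep / std:.3f}")
# The ratio is roughly 1/c_out + 1/k**2, i.e. about 8x fewer operations for a 3x3 kernel.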

Experiment Environment

        This case supports win_x86 and Linux, and can run on CPU, GPU, or Ascend. Before starting the hands-on work, make sure MindSpore is installed correctly. For environment preparation on the different platforms, refer to the MindSpore Environment Setup Experiment Handbook.

Data Processing

Data Preparation

        The MobileNetV2 code manages the dataset in ImageFolder format by default, with each class of images placed in its own folder. Download and extract the dataset:

from download import download

# Download the garbage classification dataset
dataset_url = "https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/MindStudio-pc/data_en.zip"
path = download(dataset_url, "./", kind="zip", replace=True)
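
        After extraction, the dataset is expected to follow a layout roughly like the one below (illustrative; the per-class folder names correspond to the English class names in class_en defined in the next section):

data_en/
    train/
        Cardboard/
            ... (images of this class)
        Battery/
        ... (one folder per class)
    test/
        Cardboard/
        ... (one folder per class)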

Data Loading

        Import the required modules, set up the configuration, and define the dataset labels for garbage classification together with the dictionary used for label mapping.

import math
import numpy as np
import os
import random
from matplotlib import pyplot as plt
from easydict import EasyDict
from PIL import Image
import mindspore as ms
import mindspore.nn as nn
import mindspore.dataset as de
import mindspore.dataset.vision as C
import mindspore.dataset.transforms as C2
from mindspore import set_context, load_checkpoint, Tensor, export
from mindspore import ops as P
from mindspore.train import Model

# Configure the execution environment
set_context(mode=ms.GRAPH_MODE, device_target="CPU")

# Garbage classification labels and mapping
garbage_classes = {
    '干垃圾': ['贝壳', '打火机', '旧镜子', '扫把', '陶瓷碗', '牙刷', '一次性筷子', '脏污衣服'],
    '可回收物': ['报纸', '玻璃制品', '篮球', '塑料瓶', '硬纸板', '玻璃瓶', '金属制品', '帽子', '易拉罐', '纸张'],
    '湿垃圾': ['菜叶', '橙皮', '蛋壳', '香蕉皮'],
    '有害垃圾': ['电池', '药片胶囊', '荧光灯', '油漆桶']
}

class_cn = ['贝壳', '打火机', '旧镜子', '扫把', '陶瓷碗', '牙刷', '一次性筷子', '脏污衣服',
            '报纸', '玻璃制品', '篮球', '塑料瓶', '硬纸板', '玻璃瓶', '金属制品', '帽子', '易拉罐', '纸张',
            '菜叶', '橙皮', '蛋壳', '香蕉皮',
            '电池', '药片胶囊', '荧光灯', '油漆桶']
class_en = ['Seashell', 'Lighter', 'Old Mirror', 'Broom', 'Ceramic Bowl', 'Toothbrush', 'Disposable Chopsticks', 'Dirty Cloth',
            'Newspaper', 'Glassware', 'Basketball', 'Plastic Bottle', 'Cardboard', 'Glass Bottle', 'Metalware', 'Hats', 'Cans', 'Paper',
            'Vegetable Leaf', 'Orange Peel', 'Eggshell', 'Banana Peel',
            'Battery', 'Tablet capsules', 'Fluorescent lamp', 'Paint bucket']

index_en = {name: idx for idx, name in enumerate(class_en)}

# Training hyperparameter configuration
config = EasyDict({
    "num_classes": 26,
    "image_height": 224,
    "image_width": 224,
    "batch_size": 16,
    "eval_batch_size": 8,
    "epochs": 10,
    "lr_max": 0.05,
    "momentum": 0.9,
    "weight_decay": 1e-4,
    "dataset_path": "./data_en",
    "class_index": index_en,
    "pretrained_ckpt": "./mobilenetV2-200_1067.ckpt"
})

Data Preprocessing

        Read the garbage classification dataset with the ImageFolderDataset interface and apply the preprocessing operations.

def create_dataset(dataset_path, config, training=True, buffer_size=1000):
    data_path = os.path.join(dataset_path, 'train' if training else 'test')
    ds = de.ImageFolderDataset(data_path, num_parallel_workers=4, class_indexing=config.class_index)
    resize_height, resize_width = config.image_height, config.image_width
    normalize_op = C.Normalize(mean=[0.485*255, 0.456*255, 0.406*255], std=[0.229*255, 0.224*255, 0.225*255])
    change_swap_op = C.HWC2CHW()
    type_cast_op = C2.TypeCast(ms.int32)

    if training:
        train_trans = [
            C.RandomCropDecodeResize(resize_height, scale=(0.08, 1.0), ratio=(0.75, 1.333)),
            C.RandomHorizontalFlip(prob=0.5),
            C.RandomColorAdjust(brightness=0.4, contrast=0.4, saturation=0.4),
            normalize_op,
            change_swap_op
        ]
        ds = ds.map(input_columns="image", operations=train_trans, num_parallel_workers=4)
        ds = ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=4)
        ds = ds.shuffle(buffer_size=buffer_size).batch(config.batch_size, drop_remainder=True)
    else:
        eval_trans = [
            C.Decode(),
            C.Resize((int(resize_width/0.875), int(resize_width/0.875))),
            C.CenterCrop(resize_width),
            normalize_op,
            change_swap_op
        ]
        ds = ds.map(input_columns="image", operations=eval_trans, num_parallel_workers=4)
        ds = ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=4).batch(config.eval_batch_size, drop_remainder=True)

    return ds

# Show a few processed samples
ds = create_dataset(dataset_path=config.dataset_path, config=config, training=False)
print(ds.get_dataset_size())
data = next(ds.create_dict_iterator(output_numpy=True))
images = data['image']
labels = data['label']

for i in range(1, 5):
    plt.subplot(2, 2, i)
    plt.imshow(np.transpose(images[i], (1,2,0)))
    plt.title('label: %s' % class_en[labels[i]])
    plt.xticks([])
plt.show()

Building the MobileNetV2 Model

        Each module of the MobileNetV2 network defined with MindSpore must inherit from mindspore.nn.Cell. The layers are declared in advance in the __init__ method, and the forward computation is defined in the construct method. The original model uses ReLU6 as the activation function, and the pooling module is a global average pooling layer.

# Define the building blocks of MobileNetV2
class GlobalAvgPooling(nn.Cell):
    def __init__(self):
        super(GlobalAvgPooling, self).__init__()

    def construct(self, x):
        return P.ReduceMean(keep_dims=False)(x, (2, 3))

class ConvBNReLU(nn.Cell):
    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
        super(ConvBNReLU, self).__init__()
        padding = (kernel_size - 1) // 2
        if groups == 1:
            conv = nn.Conv2d(in_planes, out_planes, kernel_size, stride, pad_mode='pad', padding=padding)
        else:
            conv = nn.Conv2d(in_planes, in_planes, kernel_size, stride, pad_mode='pad', padding=padding, group=in_planes)
        self.features = nn.SequentialCell([conv, nn.BatchNorm2d(out_planes), nn.ReLU6()])

    def construct(self, x):
        return self.features(x)

class InvertedResidual(nn.Cell):
    def __init__(self, inp, oup, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        hidden_dim = int(round(inp * expand_ratio))
        self.use_res_connect = stride == 1 and inp == oup
        layers = []
        if expand_ratio != 1:
            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
        layers.extend([
            ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim),
            nn.Conv2d(hidden_dim, oup, kernel_size=1, stride=1, has_bias=False),
            nn.BatchNorm2d(oup)
        ])
        self.conv = nn.SequentialCell(layers)

    def construct(self, x):
        if self.use_res_connect:
            return x + self.conv(x)
        else:
            return self.conv(x)
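
# Channel-rounding helper used by MobileNetV2Backbone below. The original post omits its
# definition; this is the standard utility from the MobileNet reference code, which rounds a
# channel count to the nearest multiple of `divisor` without going more than 10% below the
# original value.
def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v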

class MobileNetV2Backbone(nn.Cell):
    def __init__(self, width_mult=1., inverted_residual_setting=None, round_nearest=8, input_channel=32, last_channel=1280):
        super(MobileNetV2Backbone, self).__init__()
        block = InvertedResidual
        if inverted_residual_setting is None:
            inverted_residual_setting = [
                [1, 16, 1, 1],
                [6, 24, 2, 2],
                [6, 32, 3, 2],
                [6, 64, 4, 2],
                [6, 96, 3, 1],
                [6, 160, 3, 2],
                [6, 320, 1, 1]
            ]
        input_channel = _make_divisible(input_channel * width_mult, round_nearest)
        self.out_channels = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
        features = [ConvBNReLU(3, input_channel, stride=2)]
        for t, c, n, s in inverted_residual_setting:
            output_channel = _make_divisible(c * width_mult, round_nearest)
            for i in range(n):
                stride = s if i == 0 else 1
                features.append(block(input_channel, output_channel, stride, expand_ratio=t))
                input_channel = output_channel
        features.append(ConvBNReLU(input_channel, self.out_channels, kernel_size=1))
        self.features = nn.SequentialCell(features)

    def construct(self, x):
        return self.features(x)

class MobileNetV2Head(nn.Cell):
    def __init__(self, input_channel=1280, num_classes=1000, has_dropout=False):
        super(MobileNetV2Head, self).__init__()
        head = [GlobalAvgPooling(), nn.Dense(input_channel, num_classes)]
        if has_dropout:
            head.insert(1, nn.Dropout(0.2))
        self.head = nn.SequentialCell(head)

    def construct(self, x):
        return self.head(x)

class MobileNetV2(nn.Cell):
    def __init__(self, num_classes=1000, width_mult=1., has_dropout=False):
        super(MobileNetV2, self).__init__()
        self.backbone = MobileNetV2Backbone(width_mult=width_mult)
        self.head = MobileNetV2Head(input_channel=self.backbone.out_channels, num_classes=num_classes, has_dropout=has_dropout)

    def construct(self, x):
        x = self.backbone(x)
        x = self.head(x)
        return x
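
        As a quick sanity check (not part of the original post), the assembled network can be run on a random input to confirm that the output shape matches the number of classes:

# Illustrative smoke test: one random 224x224 RGB image should map to num_classes logits
net = MobileNetV2(num_classes=config.num_classes)
dummy = Tensor(np.random.rand(1, 3, 224, 224).astype(np.float32))
print(net(dummy).shape)  # expected: (1, 26)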

Training and Testing the MobileNetV2 Model

Training Strategy

        A cosine annealing schedule is used to adjust the learning rate and improve training. It is implemented as follows:

def cosine_decay(total_steps, lr_init=0.0, lr_end=0.0, lr_max=0.1, warmup_steps=0):
    lr_init, lr_end, lr_max = float(lr_init), float(lr_end), float(lr_max)
    decay_steps = total_steps - warmup_steps
    lr_all_steps = []
    inc_per_step = (lr_max - lr_init) / warmup_steps if warmup_steps else 0
    for i in range(total_steps):
        if i < warmup_steps:
            lr = lr_init + inc_per_step * (i + 1)
        else:
            cosine_decay = 0.5 * (1 + math.cos(math.pi * (i - warmup_steps) / decay_steps))
            lr = (lr_max - lr_end) * cosine_decay + lr_end
        lr_all_steps.append(lr)
    return lr_all_steps
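
        For reference, a small illustrative call (the step counts here are arbitrary) shows how warmup_steps interacts with the decay: the learning rate climbs linearly to lr_max during warmup and then follows the cosine curve down to lr_end:

# Illustrative only: 5 warmup steps towards lr_max, then cosine decay over the remaining 45 steps
demo_lrs = cosine_decay(total_steps=50, lr_max=0.05, warmup_steps=5)
print(round(demo_lrs[0], 4), round(demo_lrs[4], 4), round(demo_lrs[-1], 6))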

        During training, checkpoints can be saved to preserve the model parameters for later inference or for resuming training after an interruption. Training and testing are implemented as follows:

from mindspore.amp import FixedLossScaleManager
LOSS_SCALE = 1024

train_dataset = create_dataset(dataset_path=config.dataset_path, config=config)
eval_dataset = create_dataset(dataset_path=config.dataset_path, config=config, training=False)
step_size = train_dataset.get_dataset_size()

backbone = MobileNetV2Backbone()
for param in backbone.get_parameters():
    param.requires_grad = False
load_checkpoint(config.pretrained_ckpt, backbone)

head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
network = MobileNetV2(num_classes=config.num_classes)
network.backbone = backbone
network.head = head

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
loss_scale = FixedLossScaleManager(LOSS_SCALE, drop_overflow_update=False)
lrs = cosine_decay(config.epochs * step_size, lr_max=config.lr_max)
opt = nn.Momentum(network.trainable_params(), lrs, config.momentum, config.weight_decay, loss_scale=LOSS_SCALE)

def train_loop(model, dataset, loss_fn, optimizer):
    def forward_fn(data, label):
        logits = model(data)
        loss = loss_fn(logits, label)
        return loss

    grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters)

    def train_step(data, label):
        loss, grads = grad_fn(data, label)
        optimizer(grads)
        return loss

    size = dataset.get_dataset_size()
    model.set_train()
    for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):
        loss = train_step(data, label)
        if batch % 10 == 0:
            loss, current = loss.asnumpy(), batch
            print(f"loss: {loss:>7f}  [{current:>3d}/{size:>3d}]")

def test_loop(model, dataset, loss_fn):
    num_batches = dataset.get_dataset_size()
    model.set_train(False)
    total, test_loss, correct = 0, 0, 0
    for data, label in dataset.create_tuple_iterator():
        pred = model(data)
        total += len(data)
        test_loss += loss_fn(pred, label).asnumpy()
        correct += (pred.argmax(1) == label).asnumpy().sum()
    test_loss /= num_batches
    correct /= total
    print(f"Test: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")

print("============== Starting Training ==============")
epochs = 10
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(network, train_dataset, loss, opt)
    ms.save_checkpoint(network, "save_mobilenetV2_model.ckpt")
    test_loop(network, eval_dataset, loss)
print("Done!")

Model Inference

        Load the checkpoint and run inference. When loading parameters with the load_checkpoint interface, they must be loaded into the original network, not into a training network wrapped with an optimizer and loss function.

def image_process(image):
    mean=[0.485*255, 0.456*255, 0.406*255]
    std=[0.229*255, 0.224*255, 0.225*255]
    image = (np.array(image) - mean) / std
    image = image.transpose((2,0,1))
    return Tensor(np.array([image], np.float32))

def infer_one(network, image_path):
    image = Image.open(image_path).resize((config.image_height, config.image_width))
    logits = network(image_process(image))
    pred = np.argmax(logits.asnumpy(), axis=1)[0]
    print(image_path, class_en[pred])

def infer():
    backbone = MobileNetV2Backbone(last_channel=config.backbone_out_channels)
    head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
    network = MobileNetV2(num_classes=config.num_classes)
    network.backbone = backbone
    network.head = head
    load_checkpoint("save_mobilenetV2_model.ckpt", network)
    for i in range(91, 100):
        infer_one(network, f'data_en/test/Cardboard/000{i}.jpg')
infer()

Exporting the Model

        Export the model to an ONNX file for subsequent inference on other platforms.

backbone = MobileNetV2Backbone(last_channel=config.backbone_out_channels)
head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
network = MobileNetV2(num_classes=config.num_classes)
network.backbone = backbone
network.head = head
load_checkpoint("save_mobilenetV2_model.ckpt", network)

input = np.random.uniform(0.0, 1.0, size=[1, 3, 224, 224]).astype(np.float32)
export(network, Tensor(input), file_name='mobilenetv2.onnx', file_format='ONNX')

Summary

Learning takeaways: Through this experiment we learned how to use MobileNetV2 for a garbage classification task. From data preparation and model construction, through training and evaluation, to inference and model export, each step deepened our understanding of how deep learning is applied in practice. In particular, the data preprocessing and training stages gave us hands-on experience with image classification in the MindSpore framework, and we found that tuning the hyperparameters and training strategy can noticeably improve model performance. The process not only sharpened our technical skills but also strengthened our confidence in tackling real problems.

If you found this post helpful, please like, bookmark, and follow me; you can also support me with a tip!

Stay tuned for my upcoming posts, where I will share more on deep learning and computer vision.

Thank you all for your support!
