MindSpore (昇思) 25-Day Learning Check-in Camp, Day 13 | ShuffleNet Image Classification

Overview:

ShuffleNetV1 is a computation-efficient CNN architecture proposed by Megvii (旷视科技). Like MobileNet and SqueezeNet, it is aimed mainly at mobile deployment, so its design goal is to reach the best possible accuracy within a limited compute budget. The core of ShuffleNetV1 is two operations, pointwise group convolution and channel shuffle, which greatly reduce the model's computational cost while maintaining accuracy. In this sense ShuffleNetV1 is similar to MobileNet: both compress and accelerate the model by designing a more efficient network structure.

ShuffleNet's most distinctive feature is that it reorders channels across groups, which removes the main drawback of group convolution: without the shuffle, information never flows between groups. By reworking the ResNet bottleneck unit around these two operations, it reaches high accuracy at a small computational cost.
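
To make the reordering concrete, here is a minimal NumPy sketch (an illustration only; the 6-channel layout and group size are made up) that mirrors the reshape-transpose-reshape trick used by channel_shuffle in the model code further below:

import numpy as np

group, group_channels = 3, 2                        # 6 channels split into 3 groups of 2
x = np.arange(group * group_channels)               # [0 1 2 3 4 5]; groups are [0 1] [2 3] [4 5]
shuffled = x.reshape(group_channels, group).transpose(1, 0).reshape(-1)
print(shuffled)                                     # [0 3 1 4 2 5]: neighbouring channels now come from different groups

After the shuffle, the next grouped convolution sees channels that originated in several different groups, so information can flow across groups.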

Pointwise Group Convolution

In group convolution (分组卷积), each kernel covers only in_channels/g input channels, so a single kernel has (in_channels/g)*k*k parameters; with g groups and out_channels kernels in total, the layer has (in_channels/g)*k*k*out_channels parameters, i.e. 1/g of an ordinary convolution. Each kernel therefore processes only a subset of the input feature map's channels. The benefit is a smaller parameter count, while the number of output channels still equals the number of kernels.
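
As a quick sanity check of the 1/g parameter reduction, the following sketch (the channel counts 12 -> 24 with g = 3 are chosen arbitrarily) compares an ordinary convolution with MindSpore's built-in grouped convolution:

from mindspore import nn

in_c, out_c, k, g = 12, 24, 3, 3
normal = nn.Conv2d(in_c, out_c, k, has_bias=False)
grouped = nn.Conv2d(in_c, out_c, k, group=g, has_bias=False)
count = lambda net: sum(int(p.size) for p in net.trainable_params())
print(count(normal))    # 12 * 24 * 3 * 3 = 2592
print(count(grouped))   # 2592 / 3 = 864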

Walkthrough:

1. Imports

import time
import numpy as np
import matplotlib.pyplot as plt

import mindspore
import mindspore as ms
import mindspore.ops as ops
import mindspore.dataset as ds
from mindspore import Tensor, nn, load_checkpoint, load_param_into_net
from mindspore.dataset import Cifar10Dataset
from mindspore.dataset import vision, transforms
from mindspore.train import ModelCheckpoint, CheckpointConfig, TimeMonitor, LossMonitor, Model, Top1CategoricalAccuracy, Top5CategoricalAccuracy
from download import download

2. Model

class GroupConv(nn.Cell):
    def __init__(self, in_channels, out_channels, kernel_size,
                 stride, pad_mode="pad", pad=0, groups=1, has_bias=False):
        super(GroupConv, self).__init__()
        self.groups = groups
        self.convs = nn.CellList()
        for _ in range(groups):
            self.convs.append(nn.Conv2d(in_channels // groups, out_channels // groups,
                                        kernel_size=kernel_size, stride=stride, has_bias=has_bias,
                                        padding=pad, pad_mode=pad_mode, group=1, weight_init='xavier_uniform'))

    def construct(self, x):
        features = ops.split(x, split_size_or_sections=int(len(x[0]) // self.groups), axis=1)
        outputs = ()
        for i in range(self.groups):
            outputs = outputs + (self.convs[i](features[i].astype("float32")),)
        out = ops.cat(outputs, axis=1)
        return out
class ShuffleV1Block(nn.Cell):
    def __init__(self, inp, oup, group, first_group, mid_channels, ksize, stride):
        super(ShuffleV1Block, self).__init__()
        self.stride = stride
        pad = ksize // 2
        self.group = group
        if stride == 2:
            outputs = oup - inp
        else:
            outputs = oup
        self.relu = nn.ReLU()
        branch_main_1 = [
            GroupConv(in_channels=inp, out_channels=mid_channels,
                      kernel_size=1, stride=1, pad_mode="pad", pad=0,
                      groups=1 if first_group else group),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(),
        ]
        branch_main_2 = [
            nn.Conv2d(mid_channels, mid_channels, kernel_size=ksize, stride=stride,
                      pad_mode='pad', padding=pad, group=mid_channels,
                      weight_init='xavier_uniform', has_bias=False),
            nn.BatchNorm2d(mid_channels),
            GroupConv(in_channels=mid_channels, out_channels=outputs,
                      kernel_size=1, stride=1, pad_mode="pad", pad=0,
                      groups=group),
            nn.BatchNorm2d(outputs),
        ]
        self.branch_main_1 = nn.SequentialCell(branch_main_1)
        self.branch_main_2 = nn.SequentialCell(branch_main_2)
        if stride == 2:
            self.branch_proj = nn.AvgPool2d(kernel_size=3, stride=2, pad_mode='same')

    def construct(self, old_x):
        left = old_x
        right = old_x
        out = old_x
        right = self.branch_main_1(right)
        if self.group > 1:
            right = self.channel_shuffle(right)
        right = self.branch_main_2(right)
        if self.stride == 1:
            out = self.relu(left + right)
        elif self.stride == 2:
            left = self.branch_proj(left)
            out = ops.cat((left, right), 1)
            out = self.relu(out)
        return out

    def channel_shuffle(self, x):
        # reshape -> transpose -> reshape: interleave channels so that the next grouped
        # convolution sees channels that originated in different groups
        batchsize, num_channels, height, width = ops.shape(x)
        group_channels = num_channels // self.group
        x = ops.reshape(x, (batchsize, group_channels, self.group, height, width))
        x = ops.transpose(x, (0, 2, 1, 3, 4))
        x = ops.reshape(x, (batchsize, num_channels, height, width))
        return x
class ShuffleNetV1(nn.Cell):
    def __init__(self, n_class=1000, model_size='2.0x', group=3):
        super(ShuffleNetV1, self).__init__()
        print('model size is ', model_size)
        self.stage_repeats = [4, 8, 4]
        self.model_size = model_size
        if group == 3:
            if model_size == '0.5x':
                self.stage_out_channels = [-1, 12, 120, 240, 480]
            elif model_size == '1.0x':
                self.stage_out_channels = [-1, 24, 240, 480, 960]
            elif model_size == '1.5x':
                self.stage_out_channels = [-1, 24, 360, 720, 1440]
            elif model_size == '2.0x':
                self.stage_out_channels = [-1, 48, 480, 960, 1920]
            else:
                raise NotImplementedError
        elif group == 8:
            if model_size == '0.5x':
                self.stage_out_channels = [-1, 16, 192, 384, 768]
            elif model_size == '1.0x':
                self.stage_out_channels = [-1, 24, 384, 768, 1536]
            elif model_size == '1.5x':
                self.stage_out_channels = [-1, 24, 576, 1152, 2304]
            elif model_size == '2.0x':
                self.stage_out_channels = [-1, 48, 768, 1536, 3072]
            else:
                raise NotImplementedError
        input_channel = self.stage_out_channels[1]
        self.first_conv = nn.SequentialCell(
            nn.Conv2d(3, input_channel, 3, 2, 'pad', 1, weight_init='xavier_uniform', has_bias=False),
            nn.BatchNorm2d(input_channel),
            nn.ReLU(),
        )
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode='same')
        features = []
        for idxstage in range(len(self.stage_repeats)):
            numrepeat = self.stage_repeats[idxstage]
            output_channel = self.stage_out_channels[idxstage + 2]
            for i in range(numrepeat):
                stride = 2 if i == 0 else 1
                first_group = idxstage == 0 and i == 0
                features.append(ShuffleV1Block(input_channel, output_channel,
                                               group=group, first_group=first_group,
                                               mid_channels=output_channel // 4, ksize=3, stride=stride))
                input_channel = output_channel
        self.features = nn.SequentialCell(features)
        self.globalpool = nn.AvgPool2d(7)
        self.classifier = nn.Dense(self.stage_out_channels[-1], n_class)

    def construct(self, x):
        x = self.first_conv(x)
        x = self.maxpool(x)
        x = self.features(x)
        x = self.globalpool(x)
        x = ops.reshape(x, (-1, self.stage_out_channels[-1]))
        x = self.classifier(x)
        return x
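
Before moving on, a quick smoke test (a verification sketch only, assuming the execution context is already configured; it is not part of the training script) builds the 2.0x model and pushes a dummy 224x224 batch through it to confirm the output shape is (batch, n_class):

net = ShuffleNetV1(model_size="2.0x", n_class=10)
net.set_train(False)                                    # inference mode for the check
dummy = Tensor(np.zeros((1, 3, 224, 224)), ms.float32)
print(net(dummy).shape)                                 # expected: (1, 10)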

3. Data processing

def get_dataset(train_dataset_path, batch_size, usage):
    image_trans = []
    if usage == "train":
        image_trans = [
            vision.RandomCrop((32, 32), (4, 4, 4, 4)),
            vision.RandomHorizontalFlip(prob=0.5),
            vision.Resize((224, 224)),
            vision.Rescale(1.0 / 255.0, 0.0),
            vision.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010]),
            vision.HWC2CHW()
        ]
    elif usage == "test":
        image_trans = [
            vision.Resize((224, 224)),
            vision.Rescale(1.0 / 255.0, 0.0),
            vision.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010]),
            vision.HWC2CHW()
        ]
    label_trans = transforms.TypeCast(ms.int32)
    dataset = Cifar10Dataset(train_dataset_path, usage=usage, shuffle=True)
    dataset = dataset.map(image_trans, 'image')
    dataset = dataset.map(label_trans, 'label')
    dataset = dataset.batch(batch_size, drop_remainder=True)
    return dataset
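
The download helper imported in step 1 can fetch the CIFAR-10 binary release before get_dataset is called. This is a hedged sketch: the URL below is the mirror used in the MindSpore tutorials and may need to be swapped for your own source.

url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/cifar-10-binary.tar.gz"
download(url, "./dataset", kind="tar.gz", replace=True)   # extracts to ./dataset/cifar-10-batches-bin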

dataset = get_dataset("./dataset/cifar-10-batches-bin", 128, "train")
batches_per_epoch = dataset.get_dataset_size()

4. Model training

def train():
    mindspore.set_context(mode=mindspore.PYNATIVE_MODE, device_target="Ascend")
    net = ShuffleNetV1(model_size="2.0x", n_class=10)
    loss = nn.CrossEntropyLoss(weight=None, reduction='mean', label_smoothing=0.1)
    min_lr = 0.0005
    base_lr = 0.05
    lr_scheduler = mindspore.nn.cosine_decay_lr(min_lr,
                                                base_lr,
                                                batches_per_epoch*250,
                                                batches_per_epoch,
                                                decay_epoch=250)
    lr = Tensor(lr_scheduler[-1])
    optimizer = nn.Momentum(params=net.trainable_params(), learning_rate=lr, momentum=0.9, weight_decay=0.00004, loss_scale=1024)
    loss_scale_manager = ms.amp.FixedLossScaleManager(1024, drop_overflow_update=False)
    model = Model(net, loss_fn=loss, optimizer=optimizer, amp_level="O3", loss_scale_manager=loss_scale_manager)
    callback = [TimeMonitor(), LossMonitor()]
    save_ckpt_path = "./"
    config_ckpt = CheckpointConfig(save_checkpoint_steps=batches_per_epoch, keep_checkpoint_max=5)
    ckpt_callback = ModelCheckpoint("shufflenetv1", directory=save_ckpt_path, config=config_ckpt)
    callback += [ckpt_callback]

    print("============== Starting Training ==============")
    start_time = time.time()
    # Only 5 epochs are trained here to save time; adjust as needed
    model.train(5, dataset, callbacks=callback)
    use_time = time.time() - start_time
    hour = str(int(use_time // 60 // 60))
    minute = str(int(use_time // 60 % 60))
    second = str(int(use_time % 60))
    print("total time:" + hour + "h " + minute + "m " + second + "s")
    print("============== Train Success ==============")

if __name__ == '__main__':
    train()
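
One detail worth noting: nn.cosine_decay_lr returns one learning-rate value per training step, but the script above passes only the last value, Tensor(lr_scheduler[-1]), to the optimizer, which amounts to a constant learning rate. A hedged alternative (a drop-in change inside train(), not what the script above does) is to hand the whole list to the optimizer so that it actually follows the cosine schedule:

optimizer = nn.Momentum(params=net.trainable_params(),
                        learning_rate=lr_scheduler,   # full per-step schedule instead of a single fixed value
                        momentum=0.9, weight_decay=0.00004, loss_scale=1024)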

5. Model evaluation

def test():
    mindspore.set_context(mode=mindspore.GRAPH_MODE, device_target="Ascend")
    dataset = get_dataset("./dataset/cifar-10-batches-bin", 128, "test")
    net = ShuffleNetV1(model_size="2.0x", n_class=10)
    param_dict = load_checkpoint("shufflenetv1-5_390.ckpt")
    load_param_into_net(net, param_dict)
    net.set_train(False)
    loss = nn.CrossEntropyLoss(weight=None, reduction='mean', label_smoothing=0.1)
    eval_metrics = {'Loss': nn.Loss(), 'Top_1_Acc': Top1CategoricalAccuracy(),
                    'Top_5_Acc': Top5CategoricalAccuracy()}
    model = Model(net, loss_fn=loss, metrics=eval_metrics)
    start_time = time.time()
    res = model.eval(dataset, dataset_sink_mode=False)
    use_time = time.time() - start_time
    hour = str(int(use_time // 60 // 60))
    minute = str(int(use_time // 60 % 60))
    second = str(int(use_time % 60))
    log = "result:" + str(res) + ", ckpt:'" + "./shufflenetv1-5_390.ckpt" \
        + "', time: " + hour + "h " + minute + "m " + second + "s"
    print(log)
    filename = './eval_log.txt'
    with open(filename, 'a') as file_object:
        file_object.write(log + '\n')

if __name__ == '__main__':
    test()

6. Model prediction

net = ShuffleNetV1(model_size="2.0x", n_class=10)
show_lst = []
param_dict = load_checkpoint("shufflenetv1-5_390.ckpt")
load_param_into_net(net, param_dict)
model = Model(net)
dataset_predict = ds.Cifar10Dataset(dataset_dir="./dataset/cifar-10-batches-bin", shuffle=False, usage="train")
dataset_show = ds.Cifar10Dataset(dataset_dir="./dataset/cifar-10-batches-bin", shuffle=False, usage="train")
dataset_show = dataset_show.batch(16)
show_images_lst = next(dataset_show.create_dict_iterator())["image"].asnumpy()
image_trans = [
    vision.RandomCrop((32, 32), (4, 4, 4, 4)),
    vision.RandomHorizontalFlip(prob=0.5),
    vision.Resize((224, 224)),
    vision.Rescale(1.0 / 255.0, 0.0),
    vision.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010]),
    vision.HWC2CHW()
]
dataset_predict = dataset_predict.map(image_trans, 'image')
dataset_predict = dataset_predict.batch(16)
class_dict = {0:"airplane", 1:"automobile", 2:"bird", 3:"cat", 4:"deer", 5:"dog", 6:"frog", 7:"horse", 8:"ship", 9:"truck"}
# Inference demo: the title above each image is the predicted class for that image
plt.figure(figsize=(16, 5))
predict_data = next(dataset_predict.create_dict_iterator())
output = model.predict(ms.Tensor(predict_data['image']))
pred = np.argmax(output.asnumpy(), axis=1)
index = 0
for image in show_images_lst:
    plt.subplot(2, 8, index+1)
    plt.title('{}'.format(class_dict[pred[index]]))
    index += 1
    plt.imshow(image)
    plt.axis("off")
plt.show()

As a lightweight network, ShuffleNet shows how a distinctive design, channel shuffle plus grouped convolution, keeps accuracy high while sharply cutting computation and parameter count. It is a useful reminder that, besides raw performance, model size and efficiency also have to be considered, which matters especially for deep learning on mobile devices and embedded systems.

While practicing image classification with MindSpore and ShuffleNet I also ran into some difficulties, for example how to tune the hyperparameters to improve the model and how to handle class imbalance in the data.
