OpenMMLab AI Camp Lesson 3 Notes and Assignment Log

Environment Overview

The assignment was completed entirely on the Huawei Cloud ECS platform.

Hardware
CPU: Intel® Xeon® Gold 6278C CPU @ 2.60GHz
Memory: 64 GB
GPU: Nvidia Tesla V100S-PCIe

Software
OS: Ubuntu 20.04
CUDA: 11.2
cuDNN: 8.1

Environment Setup

Install Anaconda

Download URL: https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh

Download Anaconda

wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh

Install Anaconda

chmod a+x Anaconda3-2022.10-Linux-x86_64.sh
sh Anaconda3-2022.10-Linux-x86_64.sh

Follow the prompts to complete the installation.

Configure the conda mirror

vim ~/.condarc

Contents of .condarc:

default_channels:
  - https://mirror.sjtu.edu.cn/anaconda/pkgs/r
  - https://mirror.sjtu.edu.cn/anaconda/pkgs/main
custom_channels:
  conda-forge: https://mirror.sjtu.edu.cn/anaconda/cloud/
  pytorch: https://mirror.sjtu.edu.cn/anaconda/cloud/
channels:
  - defaults

Source: https://mirrors.sjtug.sjtu.edu.cn/docs/anaconda

Configure the pip mirror

pip config set global.index-url https://mirror.sjtu.edu.cn/pypi/web/simple

Create a Python 3.8 environment

conda create --name opennmmlab_mmclassification python=3.8

Activate the environment

conda activate opennmmlab_mmclassification

Install PyTorch

pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html

Install the mmcv-full module

pip install mmcv-full==1.7.0 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10/index.html

Install the openmmlab/mmclassification module

# clone the mmclassification source
git clone https://github.com/open-mmlab/mmclassification.git
# install in editable (development) mode
cd mmclassification
pip install -e .

Dataset

The flower dataset
The flower dataset contains images of 5 flower categories: daisy (588 images), dandelion (556), rose (583), sunflower (536), and tulip (585).

Download links:
International: https://www.dropbox.com/s/snom6v4zfky0flx/flower_dataset.zip
China mainland: https://pan.baidu.com/s/1RJmAoxCD_aNPyTRX6w97xQ extraction code: 9x5u

Split the dataset

Split the dataset into training and validation subsets at an 8:2 ratio and organize it in ImageNet format: put the training and validation subsets under the train and val folders, and generate the annotation lists train.txt (optional for training) and val.txt, where each line contains a file name and its corresponding label.
The split script split_data.py:

import os
import sys
import shutil
import numpy as np


def load_data(data_path):
    count = 0
    data = {}
    for dir_name in os.listdir(data_path):
        dir_path = os.path.join(data_path, dir_name)
        if not os.path.isdir(dir_path):
            continue

        data[dir_name] = []
        for file_name in os.listdir(dir_path):
            file_path = os.path.join(dir_path, file_name)
            if not os.path.isfile(file_path):
                continue

            data[dir_name].append(file_path)

        count += len(data[dir_name])
        print("{} :{}".format(dir_name, len(data[dir_name])))

    print("total images: {}".format(count))
    return data


def copy_dataset(src_img_list, data_index, target_path):
    target_img_list = []
    for index in data_index:
        src_img = src_img_list[index]
        img_name = os.path.split(src_img)[-1]

        shutil.copy(src_img, target_path)
        target_img_list.append(os.path.join(target_path, img_name))
    return target_img_list


def write_file(data, file_name):
    if isinstance(data, dict):
        write_data = []
        for lab, img_list in data.items():
            for img in img_list:
                write_data.append("{} {}".format(img, lab))
    else:
        write_data = data

    with open(file_name, "w") as f:
        for line in write_data:
            f.write(line + "\n")

    print("{} write over!".format(file_name))


def split_data(src_data_path, target_data_path, train_rate=0.8):
    src_data_dict = load_data(src_data_path)

    classes = []
    train_dataset, val_dataset = {}, {}
    train_count, val_count = 0, 0
    for i, (cls_name, img_list) in enumerate(src_data_dict.items()):
        img_data_size = len(img_list)
        random_index = np.random.choice(img_data_size, img_data_size, replace=False)
                
        train_data_size = int(img_data_size * train_rate)
        train_data_index = random_index[:train_data_size]
        val_data_index = random_index[train_data_size:]

        train_data_path = os.path.join(target_data_path, "train", cls_name)
        val_data_path = os.path.join(target_data_path, "val", cls_name)
        os.makedirs(train_data_path, exist_ok=True)
        os.makedirs(val_data_path, exist_ok=True)

        classes.append(cls_name)
        train_dataset[i] = copy_dataset(img_list, train_data_index, train_data_path)
        val_dataset[i] = copy_dataset(img_list, val_data_index, val_data_path)

        print("target {} train:{}, val:{}".format(cls_name, len(train_dataset[i]), len(val_dataset[i])))

        train_count += len(train_dataset[i])
        val_count += len(val_dataset[i])

    print("train size:{}, val size:{}, total:{}".format(train_count, val_count, train_count + val_count))

    write_file(classes, os.path.join(target_data_path,"classes.txt"))
    write_file(train_dataset, os.path.join(target_data_path, "train.txt"))
    write_file(val_dataset, os.path.join(target_data_path, "val.txt"))

def main():
    src_data_path = sys.argv[1]
    target_data_path = sys.argv[2]
    split_data(src_data_path, target_data_path, train_rate=0.8)

if __name__ == '__main__':
    main()

Run the script:

python split_data.py [source dataset path] [target dataset path]
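After the split, train.txt and val.txt list one image per line in the "filename label" format described above. A minimal sketch of parsing that format (the sample paths below are illustrative, not taken from a real run):

```python
# Parse one "filename label" annotation line as written by write_file above.
def parse_annotation(line):
    # the label is the last whitespace-separated token; the rest is the path
    path, label = line.rsplit(" ", 1)
    return path, int(label)

# illustrative sample lines (hypothetical paths)
sample = [
    "data/flower_dataset/train/daisy/0001.jpg 0",
    "data/flower_dataset/train/tulip/0042.jpg 4",
]
for line in sample:
    print(parse_annotation(line))
```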

MMCls Configuration File

Configuration files can use the inheritance mechanism: from configs/_base_ you can inherit any ImageNet-pretrained model, the ImageNet dataset config, the learning-rate schedule, and so on.
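The merge behaviour of _base_ inheritance can be sketched as a recursive dict merge: keys in the child config override the base, while nested dicts merge key by key. This is a simplified model of what mmcv.Config does, not the real implementation (which has extra rules, e.g. the _delete_ key):

```python
def merge_config(base, child):
    """Recursively merge a child config dict over a base config dict."""
    merged = dict(base)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)  # merge nested dicts
        else:
            merged[key] = value  # child value overrides the base
    return merged

base = {"model": {"head": {"num_classes": 1000, "topk": (1, 5)}}}
child = {"model": {"head": {"num_classes": 5, "topk": (1,)}}}
print(merge_config(base, child))
```

The head keeps its structure, but num_classes and topk are overridden by the child config, which is exactly how the flower config below only restates the fields it changes.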

Model config
Any model can be used; ResNet serves as the example here.
First create the file resnet18_b16_flower.py under configs/resnet/.
To adapt to the 5-class flower dataset, modify the model's head and its num_classes in the config file. All pretrained weights are reused except the final linear layer.

_base_ = ['../_base_/models/resnet18.py']
model = dict(
        head=dict(
            num_classes=5,
            topk=(1, )
        ))
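Why only the final linear layer is replaced can be sketched by shape matching: a checkpoint parameter is reusable only if its shape still matches the new model. The shapes below are plain tuples standing in for tensors, the parameter names are illustrative, and this simplifies what the real checkpoint loader does:

```python
def reusable_params(pretrained, model):
    """Keep pretrained entries whose shapes match the fine-tuned model."""
    return {name: shape for name, shape in pretrained.items()
            if model.get(name) == shape}

pretrained = {"backbone.conv1.weight": (64, 3, 7, 7),
              "head.fc.weight": (1000, 512)}  # ImageNet head: 1000 classes
finetune   = {"backbone.conv1.weight": (64, 3, 7, 7),
              "head.fc.weight": (5, 512)}     # flower head: 5 classes
print(reusable_params(pretrained, finetune))
# only the backbone weight is reusable; head.fc.weight gets re-initialized
```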

Data config
Still in resnet18_b16_flower.py, inherit the ImageNet data config and then adapt it to the flower dataset.

_base_ = ['../_base_/models/resnet18.py', '../_base_/datasets/imagenet_bs32.py']

data = dict(
        # adjust batch size and worker count to your environment
        samples_per_gpu = 32,
        workers_per_gpu=2,
        # training set paths
        train = dict(
            data_prefix = 'data/flower_dataset/train',
            ann_file = 'data/flower_dataset/train.txt',
            classes = 'data/flower_dataset/classes.txt'
        ),
        # validation set paths
        val = dict(
            data_prefix = 'data/flower_dataset/val',
            ann_file = 'data/flower_dataset/val.txt',
            classes = 'data/flower_dataset/classes.txt'
        ),
    )

# evaluation metric
evaluation = dict(metric_options={'topk': (1, )})
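The top-1 metric configured above counts a prediction as correct when the true class is among the k highest-scoring classes. A small sketch with made-up scores:

```python
def topk_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is in the top-k predictions."""
    correct = 0
    for row, label in zip(scores, labels):
        # indices of the k highest scores for this sample
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        correct += label in topk
    return correct / len(labels)

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]  # two samples, three classes
labels = [1, 2]                              # the second sample is misclassified
print(topk_accuracy(scores, labels, k=1))    # 0.5
```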

Learning rate
A fine-tuning strategy differs greatly from training from scratch: fine-tuning generally calls for a smaller learning rate and fewer training epochs. Again, edit resnet18_b16_flower.py.

# optimizer
optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# learning-rate schedule
lr_config = dict(
        policy='step',
        step=[1])
runner = dict(type='EpochBasedRunner', max_epochs=2)
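The effect of the step policy above can be sketched as follows: with lr=0.001 and step=[1], the learning rate is multiplied by gamma (assumed here to be 0.1, mmcv's default) after epoch 1, so the 2-epoch run trains at 0.001 and then 0.0001:

```python
def step_lr(base_lr, steps, epoch, gamma=0.1):
    """Learning rate under the 'step' policy at a given (0-based) epoch."""
    decays = sum(1 for s in steps if epoch >= s)  # milestones already passed
    return base_lr * gamma ** decays

for epoch in range(2):
    print(epoch, step_lr(0.001, [1], epoch))
```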

Load the pretrained model
Find the matching model weights in the mmcls documentation, download that weight file, and put it in the checkpoints folder.

mkdir checkpoints
wget https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_batch256_imagenet_20200708-34ab8f90.pth -P checkpoints

Then add the path to the pretrained weights in resnet18_b16_flower.py.

load_from = '${YOUPATH}/mmclassification/checkpoints/resnet18_batch256_imagenet_20200708-34ab8f90.pth'

Fine-tuning
Use tools/train.py to fine-tune the model:

python tools/train.py ${CONFIG_FILE} [optional arguments]

To specify where files produced during training are saved, add the argument --work-dir ${YOUR_WORK_DIR}.

python tools/train.py configs/resnet/resnet18_b16_flower.py --work-dir work_dirs/flower

Complete example

_base_ = ['../_base_/models/resnet18.py', '../_base_/datasets/imagenet_bs32.py','../_base_/default_runtime.py']

model = dict(
        head=dict(
            num_classes=5,
            topk = (1,)
        ))

data = dict(
        samples_per_gpu = 32,
        workers_per_gpu = 2,
        train = dict(
            data_prefix = '/home/jeffding/dataset/',
            ann_file = '/home/jeffding/dataset/output/train.txt',
            classes = '/home/jeffding/dataset/output/classes.txt'
        ),
        val = dict(
            data_prefix = '/home/jeffding/dataset/',
            ann_file = '/home/jeffding/dataset/output/val.txt',
            classes = '/home/jeffding/dataset/output/classes.txt'
        )
)

optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
        policy='step',
        step=[1])
runner = dict(type='EpochBasedRunner', max_epochs=100)

# pretrained model
load_from ='/home/jeffding/mmclassification/checkpoints/resnet18_batch256_imagenet_20200708-34ab8f90.pth'

Run example

python tools/train.py configs/resnet/resnet18_b16_flower.py --work-dir work/resnet18_b16_flower