Global AI Technology Innovation Competition [Warm-up Round 1]: Intelligent Fabric Defect Detection (based on mmdetection)


I already made one submission following the yolov5 reference, but since mmdetection is what I use most, I decided to set the pipeline up with mmdetection this time and get some more practice with Docker. PS: this time I am using a Cascade R-CNN model.

1. Installing Docker CE and nvidia-docker2

For the details, see this very thorough write-up: "Ubuntu 18.04安装Docker CE及nvidia-docker2流程" (Installing Docker CE and nvidia-docker2 on Ubuntu 18.04).

2. Data conversion (fabric2coco.py)

The script is adapted from Fabric2COCO. Compared with the earlier competition's 20 classes, this round has 15, and I also added a train/val split during the format conversion. The code:

'''
@bushou
'''
import os
import json
import numpy as np
import shutil
import pandas as pd

defect_name2label = {
    '沾污': 0, '错花': 1,  '水印': 2, '花毛': 3, '缝头': 4, '缝头印': 5, '虫粘': 6, '破洞': 7,
    '褶子': 8, '织疵': 9, '漏印': 10, '蜡斑': 11, '色差': 12, '网折': 13, '其他': 14,
}
# defect_name2label = {
#     '破洞': 1, '水渍': 2, '油渍': 3, '污渍': 4, '三丝': 5, '结头': 6, '花板跳': 7, '百脚': 8, '毛粒': 9,
#     '粗经': 10, '松经': 11, '断经': 12, '吊经': 13, '粗维': 14, '纬缩': 15, '浆斑': 16, '整经结': 17, '星跳': 18, '跳花': 19,
#     '断氨纶': 20, '稀密档': 21, '浪纹档': 22, '色差档': 23, '磨痕': 24, '轧痕': 25, '修痕': 26, '烧毛痕': 27, '死皱': 28, '云织': 29,
#     '双纬': 30, '双经': 31, '跳纱': 32, '筘路': 33, '纬纱不良': 34,
# }


class Fabric2COCO:

    def __init__(self):
        self.train_images = []
        self.train_annotations = []
        self.val_images = []
        self.val_annotations = []
        self.categories = []
        self.img_id = 0
        self.ann_id = 0
        os.makedirs("data/coco/train2017", exist_ok=True)
        os.makedirs("data/coco/val2017", exist_ok=True)

    def to_coco(self, anno_file, img_dir):
        self._init_categories()
        anno_result = pd.read_json(anno_file)
        name_list = anno_result["name"].unique()
        index = sorted(name_list)  # sort so the train/val split is reproducible
        for fold in [0]:
            val_index = index[len(index) * fold // 5:len(index) * (fold + 1) //5]
            print(len(val_index))
            for img_name in name_list:
                if img_name in val_index:
                    img_anno = anno_result[anno_result["name"] == img_name]
                    bboxs = img_anno["bbox"].tolist()
                    defect_names = img_anno["defect_name"].tolist()
                    assert img_anno["name"].unique()[0] == img_name
                    img_defect = img_name.split('.')[0]
                    img_path = os.path.join(img_dir, img_defect, img_name)
                    # img =cv2.imread(img_path)
                    # h,w,c=img.shape
                    h, w = 1000, 2446
                    self.val_images.append(self._image(img_path, h, w))

                    self._cp_img(img_path, 'val2017')

                    for bbox, defect_name in zip(bboxs, defect_names):
                        label = defect_name2label[defect_name]
                        annotation = self._annotation(label, bbox)
                        self.val_annotations.append(annotation)
                        self.ann_id += 1
                    self.img_id += 1

                else:
                    img_anno = anno_result[anno_result["name"] == img_name]
                    bboxs = img_anno["bbox"].tolist()
                    defect_names = img_anno["defect_name"].tolist()
                    assert img_anno["name"].unique()[0] == img_name
                    img_defect = img_name.split('.')[0]
                    img_path = os.path.join(img_dir, img_defect, img_name)
                    # img =cv2.imread(img_path)
                    # h,w,c=img.shape
                    h, w = 1000, 2446
                    self.train_images.append(self._image(img_path, h, w))

                    self._cp_img(img_path, 'train2017')

                    for bbox, defect_name in zip(bboxs, defect_names):
                        label = defect_name2label[defect_name]
                        annotation = self._annotation(label, bbox)
                        self.train_annotations.append(annotation)
                        self.ann_id += 1
                    self.img_id += 1

        train_instance = {}
        train_instance['info'] = 'fabric defect'
        train_instance['license'] = ['none']
        train_instance['images'] = self.train_images
        train_instance['annotations'] = self.train_annotations
        train_instance['categories'] = self.categories
        val_instance = {}
        val_instance['info'] = 'fabric defect'
        val_instance['license'] = ['none']
        val_instance['images'] = self.val_images
        val_instance['annotations'] = self.val_annotations
        val_instance['categories'] = self.categories
        return train_instance, val_instance

    def _init_categories(self):
        # for v in range(1, 16):
        #     print(v)
        #     category = {}
        #     category['id'] = v
        #     category['name'] = str(v)
        #     category['supercategory'] = 'defect_name'
        #     self.categories.append(category)
        for k, v in defect_name2label.items():
            category = {}
            category['id'] = v + 1
            category['name'] = k
            category['supercategory'] = 'defect_name'
            self.categories.append(category)

    def _image(self, path, h, w):
        image = {}
        image['height'] = h
        image['width'] = w
        image['id'] = self.img_id
        image['file_name'] = os.path.basename(path)
        return image

    def _annotation(self, label, bbox):
        area = (bbox[2]-bbox[0])*(bbox[3]-bbox[1])
        points = [[bbox[0], bbox[1]], [bbox[2], bbox[1]], [bbox[2], bbox[3]], [bbox[0], bbox[3]]]
        annotation = {}
        annotation['id'] = self.ann_id
        annotation['image_id'] = self.img_id
        annotation['category_id'] = label + 1  # categories are 1-based (see _init_categories)
        annotation['segmentation'] = [np.asarray(points).flatten().tolist()]
        annotation['bbox'] = self._get_box(points)
        annotation['iscrowd'] = 0
        annotation['area'] = area
        return annotation

    def _cp_img(self, img_path, mode):
        shutil.copy(img_path, os.path.join("data/coco/{}".format(mode), os.path.basename(img_path)))

    def _get_box(self, points):
        min_x = min_y = np.inf
        max_x = max_y = 0
        for x, y in points:
            min_x = min(min_x, x)
            min_y = min(min_y, y)
            max_x = max(max_x, x)
            max_y = max(max_y, y)
        # COCO box format: [x, y, w, h]
        return [min_x, min_y, max_x - min_x, max_y - min_y]

    def save_coco_json(self, instance, save_path):
        # json is already imported at module level
        with open(save_path, 'w') as fp:
            json.dump(instance, fp, indent=1, separators=(',', ': '))

if __name__ == '__main__':
    '''Convert the defect samples to COCO format'''
    img_dir = "/home/yhh/workspace/datawhale/mmdetection/data/guangdong/guangdong1_round2_train2_20191004_images/defect"
    anno_dir = "/home/yhh/workspace/datawhale/mmdetection/data/guangdong/guangdong1_round2_train2_20191004_Annotations/Annotations/anno_train.json"
    fabric2coco = Fabric2COCO()
    train_instance, val_instance = fabric2coco.to_coco(anno_dir, img_dir)
    if not os.path.exists("data/coco/annotations/"):
        os.makedirs("data/coco/annotations/")
    fabric2coco.save_coco_json(train_instance, "data/coco/annotations/"+'instances_{}.json'.format("train2017"))
    fabric2coco.save_coco_json(val_instance, "data/coco/annotations/" + 'instances_{}.json'.format("val2017"))
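
After the conversion it is worth sanity-checking the generated annotation files: every annotation's category_id must match one of the declared category ids, and every image_id must exist. A minimal check you can run on the output (the validate_coco helper is my own illustration, not part of the original script):

```python
import json

def validate_coco(instance):
    """Check referential integrity of a COCO-style dict."""
    cat_ids = {c['id'] for c in instance['categories']}
    img_ids = {im['id'] for im in instance['images']}
    for ann in instance['annotations']:
        assert ann['category_id'] in cat_ids, ann
        assert ann['image_id'] in img_ids, ann
    return len(instance['images']), len(instance['annotations'])

# tiny synthetic example in the same shape fabric2coco produces
demo = {
    'images': [{'id': 0, 'height': 1000, 'width': 2446, 'file_name': 'a.jpg'}],
    'annotations': [{'id': 0, 'image_id': 0, 'category_id': 1,
                     'bbox': [10, 10, 50, 40], 'iscrowd': 0, 'area': 2000}],
    'categories': [{'id': 1, 'name': '沾污', 'supercategory': 'defect_name'}],
}
print(validate_coco(demo))  # (1, 1)
```

To run it on the real output: validate_coco(json.load(open('data/coco/annotations/instances_train2017.json'))).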

3. Installing and modifying MMDetection

  1. Clone the mmdetection repository

git clone https://github.com/open-mmlab/mmdetection.git

  2. Modify the config file
    Mine is cascade_rcnn_r50_fpn_guangdong.py; the code is as follows
model = dict(
    type='CascadeRCNN',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5),
    rpn_head=dict(
        type='RPNHead',
        in_channels=256,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[8],
            ratios=[0.05, 0.1, 0.2, 0.5, 0.5, 1.0, 2.0, 5.0, 10, 20],
            strides=[4, 8, 16, 32, 64]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[0.0, 0.0, 0.0, 0.0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
        loss_bbox=dict(
            type='SmoothL1Loss', beta=0.1111111111111111, loss_weight=1.0)),
    roi_head=dict(
        type='CascadeRoIHead',
        num_stages=3,
        stage_loss_weights=[1, 0.5, 0.25],
        bbox_roi_extractor=dict(
            type='SingleRoIExtractor',
            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
            out_channels=256,
            featmap_strides=[4, 8, 16, 32]),
        bbox_head=[
            dict(
                type='SABLHead',
                num_classes=15,
                cls_in_channels=256,
                reg_in_channels=256,
                roi_feat_size=7,
                reg_feat_up_ratio=2,
                reg_pre_kernel=3,
                reg_post_kernel=3,
                reg_pre_num=2,
                reg_post_num=1,
                cls_out_channels=1024,
                reg_offset_out_channels=256,
                reg_cls_out_channels=256,
                num_cls_fcs=1,
                num_reg_fcs=0,
                reg_class_agnostic=True,
                norm_cfg=None,
                bbox_coder=dict(
                    type='BucketingBBoxCoder',
                    num_buckets=14,
                    scale_factor=1.7),
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox_cls=dict(
                    type='CrossEntropyLoss', use_sigmoid=True,
                    loss_weight=1.0),
                loss_bbox_reg=dict(
                    type='SmoothL1Loss', beta=0.1, loss_weight=1.0)),
            dict(
                type='SABLHead',
                num_classes=15,
                cls_in_channels=256,
                reg_in_channels=256,
                roi_feat_size=7,
                reg_feat_up_ratio=2,
                reg_pre_kernel=3,
                reg_post_kernel=3,
                reg_pre_num=2,
                reg_post_num=1,
                cls_out_channels=1024,
                reg_offset_out_channels=256,
                reg_cls_out_channels=256,
                num_cls_fcs=1,
                num_reg_fcs=0,
                reg_class_agnostic=True,
                norm_cfg=None,
                bbox_coder=dict(
                    type='BucketingBBoxCoder',
                    num_buckets=14,
                    scale_factor=1.5),
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox_cls=dict(
                    type='CrossEntropyLoss', use_sigmoid=True,
                    loss_weight=1.0),
                loss_bbox_reg=dict(
                    type='SmoothL1Loss', beta=0.1, loss_weight=1.0)),
            dict(
                type='SABLHead',
                num_classes=15,
                cls_in_channels=256,
                reg_in_channels=256,
                roi_feat_size=7,
                reg_feat_up_ratio=2,
                reg_pre_kernel=3,
                reg_post_kernel=3,
                reg_pre_num=2,
                reg_post_num=1,
                cls_out_channels=1024,
                reg_offset_out_channels=256,
                reg_cls_out_channels=256,
                num_cls_fcs=1,
                num_reg_fcs=0,
                reg_class_agnostic=True,
                norm_cfg=None,
                bbox_coder=dict(
                    type='BucketingBBoxCoder',
                    num_buckets=14,
                    scale_factor=1.3),
                loss_cls=dict(
                    type='CrossEntropyLoss',
                    use_sigmoid=False,
                    loss_weight=1.0),
                loss_bbox_cls=dict(
                    type='CrossEntropyLoss', use_sigmoid=True,
                    loss_weight=1.0),
                loss_bbox_reg=dict(
                    type='SmoothL1Loss', beta=0.1, loss_weight=1.0))
        ]),
    train_cfg=dict(
        rpn=dict(
            assigner=dict(
                type='MaxIoUAssigner',
                pos_iou_thr=0.7,
                neg_iou_thr=0.3,
                min_pos_iou=0.3,
                match_low_quality=True,
                ignore_iof_thr=-1),
            sampler=dict(
                type='RandomSampler',
                num=256,
                pos_fraction=0.5,
                neg_pos_ub=-1,
                add_gt_as_proposals=False),
            allowed_border=0,
            pos_weight=-1,
            debug=False),
        rpn_proposal=dict(
            nms_across_levels=False,
            nms_pre=2000,
            nms_post=2000,
            max_num=2000,
            nms_thr=0.7,
            min_bbox_size=0),
        rcnn=[
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.3,
                    neg_iou_thr=0.3,
                    min_pos_iou=0.3,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.4,
                    neg_iou_thr=0.4,
                    min_pos_iou=0.4,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                pos_weight=-1,
                debug=False),
            dict(
                assigner=dict(
                    type='MaxIoUAssigner',
                    pos_iou_thr=0.5,
                    neg_iou_thr=0.5,
                    min_pos_iou=0.5,
                    match_low_quality=False,
                    ignore_iof_thr=-1),
                sampler=dict(
                    type='RandomSampler',
                    num=512,
                    pos_fraction=0.25,
                    neg_pos_ub=-1,
                    add_gt_as_proposals=True),
                pos_weight=-1,
                debug=False)
        ]),
    test_cfg=dict(
        rpn=dict(
            nms_across_levels=False,
            nms_pre=1000,
            nms_post=1000,
            max_num=1000,
            nms_thr=0.7,
            min_bbox_size=0),
        rcnn=dict(
            score_thr=0.0,
            nms=dict(type='nms', iou_threshold=0.1),
            max_per_img=300)))
dataset_type = 'CocoDataset'
data_root = 'data/round2_data/'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='CocoDataset',
        ann_file='data/coco_guang/annotations/train2017.json',
        img_prefix='data/coco_guang/train/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations', with_bbox=True),
            dict(type='Resize', img_scale=(800, 600), keep_ratio=True),
            dict(type='RandomFlip', flip_ratio=0.5),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=128),
            dict(type='DefaultFormatBundle'),
            dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
        ],
        classes=('沾污', '错花', '水印', '花毛', '缝头', '缝头印', '虫粘', '破洞', '褶子', '织疵',
                 '漏印', '蜡斑', '色差', '网折', '其他')),
    val=dict(
        type='CocoDataset',
        ann_file='data/coco_guang/annotations/val2017.json',
        img_prefix='data/coco_guang/val/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(800, 600),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=128),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        classes=('沾污', '错花', '水印', '花毛', '缝头', '缝头印', '虫粘', '破洞', '褶子', '织疵',
                 '漏印', '蜡斑', '色差', '网折', '其他')),
    test=dict(
        type='CocoDataset',
        ann_file='data/coco_guang/annotations/val2017.json',
        img_prefix='data/coco_guang/val/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(2500, 1600),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=128),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ],
        classes=('沾污', '错花', '水印', '花毛', '缝头', '缝头印', '虫粘', '破洞', '褶子', '织疵',
                 '漏印', '蜡斑', '色差', '网折', '其他')))
evaluation = dict(interval=1, metric='bbox')
optimizer = dict(type='SGD', lr=0.003, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[16, 19])
runner = dict(type='EpochBasedRunner', max_epochs=20)
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
custom_hooks = [dict(type='NumClassCheckHook')]
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = 'checkpoints/sabl_cascade_rcnn_r50_fpn_1x_coco-e1748e5e.pth'
resume_from = None
workflow = [('train', 1)]
norm_cfg = dict(type='BN', requires_grad=True)
classes = ('沾污', '错花', '水印', '花毛', '缝头', '缝头印', '虫粘', '破洞', '褶子', '织疵', '漏印',
           '蜡斑', '色差', '网折', '其他')
work_dir = './work_dirs/guangdong_r2_101_t3'
gpu_ids = range(0, 1)
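
One pitfall worth double-checking in this config: the classes tuple, num_classes=15 in each bbox head, and the 1-based COCO category ids written by fabric2coco all have to agree, otherwise evaluation silently degrades. A quick standalone check (plain Python, no mmdetection API involved):

```python
# the 15 classes from the config, in label order
classes = ('沾污', '错花', '水印', '花毛', '缝头', '缝头印', '虫粘', '破洞',
           '褶子', '织疵', '漏印', '蜡斑', '色差', '网折', '其他')
num_classes = 15  # num_classes in each SABLHead

# COCO category ids as written by _init_categories (v + 1, so 1-based)
category_ids = [i + 1 for i in range(len(classes))]

assert len(classes) == num_classes
assert category_ids[0] == 1 and category_ids[-1] == num_classes
print(dict(zip(category_ids[:3], classes[:3])))  # {1: '沾污', 2: '错花', 3: '水印'}
```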

Note: to use a pretrained model, you need to download it yourself from the mmdetection GitHub page into checkpoints (create the empty checkpoints directory first) and add the line load_from = "checkpoints/cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth" as shown above. I downloaded cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth; substitute whatever pretrained model you prefer.
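
The download step can be scripted. The URL below follows mmdetection's model-zoo naming pattern, but it is my assumption; verify the exact link on the model zoo page before relying on it:

```shell
# create the checkpoints directory referenced by load_from
mkdir -p checkpoints
# model-zoo URL pattern is an assumption -- check the mmdetection README
# for the exact link, then uncomment to download:
# wget -P checkpoints https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco/cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth
ls -d checkpoints
```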

  3. The inference script detect.py
import os
import json

from mmdet.apis import init_detector, inference_detector


def main():
    config_file = 'configs/cascade_rcnn/cascade_rcnn_r50_fpn_guangdong.py'  # change to your own config file
    checkpoint_file = 'checkpoints/epoch_12_2_last.pth'  # change to your own trained weights

    test_path = 'data/guangdong/tcdata/guangdong1_round2_testB_20191024'  # path to the official test set images

    json_name = "result.json"

    model = init_detector(config_file, checkpoint_file, device='cuda:0')

    img_list = []
    for img_name in os.listdir(test_path):
        # if img_name.endswith('.jpg'):
        img_list.append(img_name)

    result = []
    for img_name in img_list:
        # each test image lives in its own folder: <name>/<name>.jpg
        full_img = os.path.join(test_path, img_name, img_name + '.jpg')
        predict = inference_detector(model, full_img)
        # predict has one bbox array per class; enumerate from 1 so the
        # label matches the 1-based COCO category ids
        for defect_label, bboxes in enumerate(predict, 1):
            if len(bboxes) > 0:
                image_name = img_name + '.jpg'
                for bbox in bboxes:
                    x1, y1, x2, y2, score = bbox.tolist()
                    x1, y1, x2, y2 = round(x1, 2), round(y1, 2), round(x2, 2), round(y2, 2)  # keep two decimals
                    result.append(
                        {'name': image_name, 'category': defect_label, 'bbox': [x1, y1, x2, y2], 'score': score})

    with open(json_name, 'w') as fp:
        json.dump(result, fp, indent=4, separators=(',', ': '))




if __name__ == "__main__":
    main()
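The one thing that trips people up here is the shape of inference_detector's output: it returns a list with one entry per class, each an (N, 5) array of [x1, y1, x2, y2, score]. The conversion to submission records can be isolated and tested without a model; plain tuples stand in for the arrays, and to_records is my own helper name, not an mmdetection API:

```python
def to_records(image_name, predict):
    """Flatten per-class detections into submission dicts.

    predict: list indexed by class (0-based); enumerate from 1 so the
    reported category matches the 1-based COCO category ids.
    """
    records = []
    for defect_label, bboxes in enumerate(predict, 1):
        for x1, y1, x2, y2, score in bboxes:
            records.append({'name': image_name, 'category': defect_label,
                            'bbox': [round(x1, 2), round(y1, 2),
                                     round(x2, 2), round(y2, 2)],
                            'score': score})
    return records

# two classes: class 1 has one detection, class 2 has none
demo = [[(10.123, 20.456, 30.0, 40.0, 0.9)], []]
print(to_records('a.jpg', demo))
```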
  4. Write train.sh and run.sh

1) train.sh
I train on a single GPU here (a GTX 1070):

#!/usr/bin/env bash

python tools/train.py configs/cascade_rcnn/cascade_rcnn_r50_fpn_guangdong.py   --gpus 1
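
If you later have more than one GPU, mmdetection also ships a distributed launcher, tools/dist_train.sh, taking the config path and the GPU count. A sketch (I only assemble and print the command here; drop the echo to actually launch):

```shell
# distributed training via mmdetection's bundled launcher:
# tools/dist_train.sh <config> <num_gpus>
CONFIG=configs/cascade_rcnn/cascade_rcnn_r50_fpn_guangdong.py
GPUS=2
CMD="./tools/dist_train.sh $CONFIG $GPUS"
echo "$CMD"
```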

2) run.sh

python detect.py

4. The Dockerfile and building the Docker image

1. The Dockerfile

Here is my Dockerfile. Note that the Dockerfile you create must sit in the top-level mmdetection directory:

ARG PYTORCH="1.6.0"
ARG CUDA="10.1"
ARG CUDNN="7"

FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel

ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0+PTX"
ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"

RUN apt-get update && apt-get install -y ffmpeg git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install MMCV
RUN pip install mmcv-full==latest+torch1.6.0+cu101 -f https://openmmlab.oss-accelerate.aliyuncs.com/mmcv/dist/index.html

# Install MMDetection
RUN conda clean --all
#RUN git clone https://github.com/open-mmlab/mmdetection.git /mmdetection
ADD . /
WORKDIR /
ENV FORCE_CUDA="1"
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements/build.txt
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple --no-cache-dir -e .
RUN apt-get update && apt-get install -y curl

2. Building the mmdetection image

1) Build the image

docker build -t registry.cn-shenzhen.aliyuncs.com/<your-aliyun-namespace>/<your-aliyun-repo>:1.0 .

2) Training and testing (remember to su into root first)

Training (you can of course also train locally, but then you have to set up the environment yourself):

Option 1: train directly from the command line.
First look up the image id:
       docker images
Then run:
	 docker run --gpus all -v /home/yhh/workspace/datawhale/mmdetection/data:/data -it e69a22f6c832 sh train.sh

In the -v argument, the host dataset path (left) and the path it is mapped to inside the container (right) are separated by ':'. I keep everything under the data folder in the current directory; adjust the paths for your own setup.

Option 2: work inside the Docker container.
Look up the image id:
       docker images
Then run:
	 docker run --gpus all -v /home/yhh/workspace/datawhale/mmdetection/data:/data  -it e69a22f6c832  /bin/bash

Here --gpus all enables GPU access, the -v argument maps the host dataset path to a path inside the container, and e69a22f6c832 is the image id. You are now effectively inside a fresh Linux system with the environment already set up, so training works exactly as it does locally, e.g.: python tools/train.py configs/cascade_rcnn/cascade_rcnn_r50_fpn_guangdong.py --gpus 1

Testing works the same way as training, so I will not repeat it here.

5. Submission

Push the image:

(1) Look up the image id:
docker images
(2) Push the image:
docker push registry.cn-shenzhen.aliyuncs.com/test_for_tianchi/test_for_tianchi_submit:1.0 (replace with your own repository address and tag)

Submit:
Under "提交结果" (Submit Results) on the left, fill in the pushed image path, username, and password to submit. The score and logs under "我的成绩" (My Scores) show how the run went.
I tweaked the config a bit, but the score is still not very high; the road of model tuning is a long one.
