mmdetection: detection training, segmentation training, and Mask R-CNN

1. Environment Setup

Before training, the first step is to set up the mmdetection environment. For details, see my previous post: http://t.csdnimg.cn/YDTpr

2. Data Preparation

Next comes dataset preparation. I use the VOC dataset, link:

This VOC copy seems to be incomplete, but that does not affect detection training. After downloading, the VOC data needs to be converted to COCO format. Create a data folder under the mmdetection directory to hold the dataset; the expected layout is shown below.
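
A typical layout, assuming the standard mmdetection COCO convention (the folder and JSON file names must match whatever ann_file/data_prefix the chosen config expects, so rename accordingly):

mmdetection/
└── data/
    └── coco/
        ├── annotations/
        │   ├── instances_train2017.json
        │   └── instances_val2017.json
        ├── train2017/
        └── val2017/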

Of course, VOC does not come in this format, so use the following script, voc2coco.py (adapted from http://t.csdnimg.cn/IQnh6, with thanks to the original author), to convert it and generate the JSON annotation files. Note that the VOC dataset and voc2coco.py must be in the same directory:

import os
import random
import shutil
import json
import glob
import xml.etree.ElementTree as ET


def get(root, name):
    vars = root.findall(name)
    return vars


def get_and_check(root, name, length):
    vars = root.findall(name)
    if len(vars) == 0:
        raise ValueError("Can not find %s in %s." % (name, root.tag))
    if length > 0 and len(vars) != length:
        raise ValueError(
            "The size of %s is supposed to be %d, but is %d."
            % (name, length, len(vars))
        )
    if length == 1:
        vars = vars[0]
    return vars


def get_filename_as_int(filename):
    try:
        filename = filename.replace("\\", "/")
        filename = os.path.splitext(os.path.basename(filename))[0]
        return int(filename)
    except:
        raise ValueError("Filename %s is supposed to be an integer." % (filename))

# Get the names of all categories present in the dataset
def get_categories(xml_files):
    classes_names = []
    for xml_file in xml_files:
        tree = ET.parse(xml_file)
        root = tree.getroot()
        for member in root.findall("object"):
            classes_names.append(member[0].text)
    classes_names = list(set(classes_names))
    classes_names.sort()
    print(f"Category names: {classes_names}")
    return {name: i for i, name in enumerate(classes_names)}


def convert(xml_files, json_file):
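    """Convert a list of VOC XML annotation files into a single COCO-format JSON file."""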
    json_dict = {"images": [], "type": "instances", "annotations": [], "categories": []}
    if PRE_DEFINE_CATEGORIES is not None:
        categories = PRE_DEFINE_CATEGORIES
    else:
        categories = get_categories(xml_files)
    bnd_id = START_BOUNDING_BOX_ID
    for xml_file in xml_files:
        tree = ET.parse(xml_file)
        root = tree.getroot()
        path = get(root, "path")
        if len(path) == 1:
            filename = os.path.basename(path[0].text)
        elif len(path) == 0:
            filename = get_and_check(root, "filename", 1).text
        else:
            raise ValueError("%d paths found in %s" % (len(path), xml_file))
        ## The filename must be a number
        image_id = get_filename_as_int(filename)
        size = get_and_check(root, "size", 1)
        width = int(get_and_check(size, "width", 1).text)
        height = int(get_and_check(size, "height", 1).text)
        image = {
            "file_name": filename,
            "height": height,
            "width": width,
            "id": image_id,
        }
        json_dict["images"].append(image)
        ## Currently we do not support segmentation.
        #  segmented = get_and_check(root, 'segmented', 1).text
        #  assert segmented == '0'
        for obj in get(root, "object"):
            category = get_and_check(obj, "name", 1).text
            if category not in categories:
                new_id = len(categories)
                categories[category] = new_id
            category_id = categories[category]
            bndbox = get_and_check(obj, "bndbox", 1)
            xmin = int(get_and_check(bndbox, "xmin", 1).text) - 1
            ymin = int(get_and_check(bndbox, "ymin", 1).text) - 1
            xmax = int(get_and_check(bndbox, "xmax", 1).text)
            ymax = int(get_and_check(bndbox, "ymax", 1).text)
            assert xmax > xmin
            assert ymax > ymin
            o_width = abs(xmax - xmin)
            o_height = abs(ymax - ymin)
            ann = {
                "area": o_width * o_height,
                "iscrowd": 0,
                "image_id": image_id,
                "bbox": [xmin, ymin, o_width, o_height],
                "category_id": category_id,
                "id": bnd_id,
                "ignore": 0,
                "segmentation": [],
            }
            json_dict["annotations"].append(ann)
            bnd_id = bnd_id + 1

    for cate, cid in categories.items():
        cat = {"supercategory": "none", "id": cid, "name": cate}
        json_dict["categories"].append(cat)

    os.makedirs(os.path.dirname(json_file), exist_ok=True)
    with open(json_file, "w") as json_fp:
        json_fp.write(json.dumps(json_dict))


# Create a folder if it does not already exist
def mkdir(path):
    path = path.strip()
    path = path.rstrip("\\")
    isExists = os.path.exists(path)
    if not isExists:
        os.makedirs(path)
        print(path + ' ----- folder created')
        return True
    else:
        print(path + ' ----- folder existed')
        return False


if __name__ == '__main__':
    # validation set ratio
    valRatio = 0.2
    # test set ratio
    testRatio = 0
    # current working directory
    main_path = os.getcwd()
    # paths of the VOC-format images and xml annotations
    voc_images = os.path.join(main_path, 'VOC', 'JPEGImages')
    voc_annotations = os.path.join(main_path, 'VOC', 'Annotations')
    # number of xml annotation files
    xmlNum = len(os.listdir(voc_annotations))

    val_files_num = int(xmlNum * valRatio)
    test_files_num = int(xmlNum * testRatio)

    coco_path = os.path.join(main_path, 'COCO')
    # coco_images = os.path.join(main_path, 'COCO', 'images')
    coco_json_annotations = os.path.join(main_path, 'COCO', 'annotations')
    coco_train2017 = os.path.join(main_path, 'COCO', 'train2017')
    coco_val2017 = os.path.join(main_path, 'COCO', 'val2017')
    coco_test2017 = os.path.join(main_path, 'COCO', 'test2017')
    xml_val = os.path.join(main_path, 'xml', 'xml_val')
    xml_test = os.path.join(main_path, 'xml', 'xml_test')
    xml_train = os.path.join(main_path, 'xml', 'xml_train')

    mkdir(coco_path)
    # mkdir(coco_images)
    mkdir(coco_json_annotations)
    mkdir(xml_val)
    mkdir(xml_test)
    mkdir(xml_train)
    mkdir(coco_train2017)
    mkdir(coco_val2017)
    if testRatio:
        mkdir(coco_test2017)


    # copy all VOC images into COCO/train2017 (val/test images are moved out later)
    for i in os.listdir(voc_images):
        img_path = os.path.join(voc_images, i)
        shutil.copy(img_path, coco_train2017)

    # copy all VOC xml annotations into the temporary xml_train folder
    for i in os.listdir(voc_annotations):
        xml_path = os.path.join(voc_annotations, i)
        shutil.copy(xml_path, xml_train)

    print("\n\n %s files copied to %s" % (val_files_num, xml_val))

    for i in range(val_files_num):
        if len(os.listdir(xml_train)) > 0:

            random_file = random.choice(os.listdir(xml_train))
            #         print("%d) %s"%(i+1,random_file))
            source_file = "%s/%s" % (xml_train, random_file)
            # split the filename and extension
            font, ext = random_file.split('.')
            valJpgPathList = [j for j in os.listdir(coco_train2017) if j.startswith(font)]
            if random_file not in os.listdir(xml_val):
                shutil.move(source_file, xml_val)
                shutil.move(os.path.join(coco_train2017, valJpgPathList[0]), coco_val2017)

            else:
                random_file = random.choice(os.listdir(xml_train))
                source_file = "%s/%s" % (xml_train, random_file)
                shutil.move(source_file, xml_val)
                # split the filename and extension
                font, ext = random_file.split('.')
                valJpgPathList = [j for j in os.listdir(coco_train2017) if j.startswith(font)]
                shutil.move(os.path.join(coco_train2017, valJpgPathList[0]), coco_val2017)
        else:
            print('The folders are empty, please make sure there are enough %d files to move' % val_files_num)
            break

    for i in range(test_files_num):
        if len(os.listdir(xml_train)) > 0:

            random_file = random.choice(os.listdir(xml_train))
            #         print("%d) %s"%(i+1,random_file))
            source_file = "%s/%s" % (xml_train, random_file)
            # split the filename and extension
            font, ext = random_file.split('.')
            testJpgPathList = [j for j in os.listdir(coco_train2017) if j.startswith(font)]
            if random_file not in os.listdir(xml_test):
                shutil.move(source_file, xml_test)
                shutil.move(os.path.join(coco_train2017, testJpgPathList[0]), coco_test2017)
            else:
                random_file = random.choice(os.listdir(xml_train))
                source_file = "%s/%s" % (xml_train, random_file)
                shutil.move(source_file, xml_test)
                # split the filename and extension
                font, ext = random_file.split('.')
                testJpgPathList = [j for j in os.listdir(coco_train2017) if j.startswith(font)]
                shutil.move(os.path.join(coco_train2017, testJpgPathList[0]), coco_test2017)
        else:
            print('The folders are empty, please make sure there are enough %d files to move' % test_files_num)
            break

    print("\n\n" + "*" * 27 + "[ Done ! Go check your file ]" + "*" * 28)


    START_BOUNDING_BOX_ID = 1
    PRE_DEFINE_CATEGORIES = None

    xml_val_files = glob.glob(os.path.join(xml_val, "*.xml"))
    xml_test_files = glob.glob(os.path.join(xml_test, "*.xml"))
    xml_train_files = glob.glob(os.path.join(xml_train, "*.xml"))

    convert(xml_val_files, os.path.join(coco_json_annotations, 'val2017.json'))

    convert(xml_train_files, os.path.join(coco_json_annotations, 'train2017.json'))
    if testRatio:
        convert(xml_test_files, os.path.join(coco_json_annotations, 'test2017.json'))

    # remove the temporary xml folders
    try:
        shutil.rmtree(xml_train)
        shutil.rmtree(xml_val)
        shutil.rmtree(xml_test)
        shutil.rmtree(os.path.join(main_path, 'xml'))
    except OSError:
        print(f'Failed to delete the xml folders, please remove them manually: {xml_train, xml_val, xml_test}')

After running the script, a COCO folder is generated, which is the dataset in COCO format.

Then you can start training from the terminal, for example with SSD300:

python tools/train.py configs/ssd/ssd300_coco.py

If you hit the error: ModuleNotFoundError: No module named 'mmdet',

this is not an import-path problem; mmdetection needs to be built/installed first (recent versions of the docs use the equivalent pip install -v -e .):

python setup.py develop

Then run the training command again and it will work.

3. Mask R-CNN Training

In principle, it should just be a matter of switching the config file in the terminal command, i.e.:

python tools/train.py configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py

However, the JSON files generated by voc2coco.py lack segmentation annotation information (the segmentation field is left empty), which causes an error during Mask R-CNN training.
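
For reference, each annotation written by voc2coco.py ends up looking roughly like the entry below (values are illustrative); the empty segmentation list is what Mask R-CNN complains about:

{"area": 1200, "iscrowd": 0, "image_id": 7, "bbox": [48, 240, 40, 30], "category_id": 0, "id": 1, "ignore": 0, "segmentation": []}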

Note: if your JSON files already contain complete segmentation information, this error will not occur. If you followed the method above, use the following code to fill in the segmentation field:

import json


def convert_bbox_to_polygon(bbox):
    # turn a COCO-style [x, y, w, h] bbox into a 4-point rectangular polygon
    x, y, w, h = bbox
    polygon = [x, y, (x + w), y, (x + w), (y + h), x, (y + h)]
    return [polygon]


def main():
    file_path = r"your_path\instances_val2017.json"
    with open(file_path) as f:
        data = json.load(f)
    # fill the empty "segmentation" field of every annotation with its bbox rectangle
    for ann in data["annotations"]:
        ann["segmentation"] = convert_bbox_to_polygon(ann["bbox"])
    with open("name.json", 'w') as f:
        f.write(json.dumps(data))
    print('DONE')


if __name__ == '__main__':
    main()


Finally, replace the original annotation files with the JSON files this script produces (run it once for the train JSON and once for the val JSON), then run in the terminal:

python tools/train.py configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py

 

Of course, some parameters still need adjusting, for example the learning rate, which is set in schedule_1x. The default of 0.02 (tuned for 8-GPU training) caused exploding gradients for me, so lower it; adjust other parameters according to your own setup.
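
A minimal sketch of one way to do this without editing the base files, assuming an mmdetection 3.x (mmengine-style) config placed next to the base config under configs/mask_rcnn/; the file name and the exact value are placeholders, and on 2.x-style configs you would override optimizer = dict(lr=...) instead:

# my_mask_rcnn_lower_lr.py (hypothetical file name)
_base_ = './mask-rcnn_r50_fpn_1x_coco.py'

# lower the learning rate from the 8-GPU default of 0.02 (linear scaling rule);
# with a single GPU, 0.02 / 8 = 0.0025 is a common starting point
optim_wrapper = dict(optimizer=dict(lr=0.0025))

The same kind of nested-dict override can also point the dataset settings (data_root, ann_file, class names) at your converted COCO folder.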

If you want to run train.py directly (for example from an IDE) rather than from the terminal, run tools/train.py and pass it the config file, i.e. configs/ssd/ssd300_coco.py. You can either add the config as a default argument inside train.py or set it in the IDE's run configuration, as sketched below.

If adding the config file this way raises a file-not-found error, change the config path to an absolute path.
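
A minimal sketch of hard-coding the default inside tools/train.py, assuming the parser defines config as a positional argument (the existing line may differ slightly between versions; the absolute path below is a placeholder for your own checkout):

parser.add_argument(
    'config',
    nargs='?',
    default='/home/you/mmdetection/configs/ssd/ssd300_coco.py',
    help='train config file path')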
