[detectron2] Walkthrough of a road water accumulation segmentation task, with dataset (CPU + GPU versions)

Originally I wanted to keep it simple and just grab a PyTorch Mask R-CNN repo, but the ones I found have gone unmaintained for a long time, pinned to torch 1.0 or 1.4.
Facebook's own Mask R-CNN repo is also no longer maintained; it recommends detectron2 instead.

This time the goal is a full pipeline, from dataset preparation all the way to training.

detectron2

Docs: https://detectron2.readthedocs.io/en/latest/
Git: https://github.com/facebookresearch/detectron2

Installation guide

https://detectron2.readthedocs.io/en/latest/tutorials/install.html

The PyTorch build it pairs with:
https://pytorch.org/get-started/previous-versions/

CUDA 11.1

conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge

python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.8/index.html

I used the prebuilt wheels; building from source failed repeatedly despite a lot of fiddling.
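A quick sanity check that the environment matches the cu111/torch1.8 wheel (a minimal sketch, nothing project-specific):

```python
# Verify the installed versions and that the GPU is visible.
import torch
import torchvision
import detectron2

print(torch.__version__, torchvision.__version__)  # expect 1.8.0 / 0.9.0
print(torch.version.cuda)                          # expect 11.1
print(detectron2.__version__)
print(torch.cuda.is_available())                   # True if CUDA is usable
```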

Launch:
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml --input 5.png

Where do the downloaded model weights end up?
/home/jianming_ge/.torch/iopath_cache/detectron2/ImageNetPretrained/MSRA/R-50.pkl
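To relocate that cache: detectron2 fetches detectron2:// weights through fvcore/iopath, and my understanding is that the cache root honors the FVCORE_CACHE environment variable (default ~/.torch/iopath_cache); treat this as an assumption and check it against your installed version:

export FVCORE_CACHE=/data/detectron2_cache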

This one works; the command above probably failed because the config file and the model weights did not match:
python demo.py --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml --input 6.png --output a.jpg --opts MODEL.WEIGHTS ./model_final_2d9806.pkl
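The same inference also works from a script via DefaultPredictor; a minimal sketch, assuming the paths from the demo command above:

```python
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file("../configs/COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml")
cfg.MODEL.WEIGHTS = "./model_final_2d9806.pkl"
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold for the outputs

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("6.png"))  # DefaultPredictor takes a BGR array
print(outputs["instances"].pred_classes)
print(outputs["instances"].pred_boxes)
```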


Model zoo: https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md
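The model_zoo module hands back matching config/weight pairs, which avoids exactly the mismatch above; a short sketch:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
# get_config_file and get_checkpoint_url resolve the same entry, so they always match
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
```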

Heads-up

With the setup above, inference works, but training does not.
The first problem I hit:
A GitHub issue said upgrading torch to 1.8.1 (or downgrading to 1.7) would fix it. Since the detectron2 wheel I installed targets torch 1.8, I went with the upgrade.
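For the record, the upgrade itself is just a pip install against the cu111 wheel index (my best reconstruction of the command from the PyTorch previous-versions page; verify before relying on it):

pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html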
Then came a second problem:
https://github.com/pytorch/pytorch/issues/55027
This one I could not get past, so:

Training on CPU

```python
# -*- coding: UTF-8 -*-
import os

from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer
from detectron2.utils.logger import setup_logger

setup_logger()

# Class names; keep them consistent with the annotation labels
CLASS_NAMES = ["water"]
# Dataset root
DATASET_ROOT = '/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221019/'
# Annotations directory
ANN_ROOT = os.path.join(DATASET_ROOT, 'annotations')
# Training images
TRAIN_PATH = os.path.join(DATASET_ROOT, 'images', 'train')
# Validation images
VAL_PATH = os.path.join(DATASET_ROOT, 'images', 'val')
# Training annotation file
TRAIN_JSON = os.path.join(ANN_ROOT, 'instances_train.json')
# Validation annotation file
VAL_JSON = os.path.join(ANN_ROOT, 'instances_val.json')

register_coco_instances("my_train", {}, TRAIN_JSON, TRAIN_PATH)
MetadataCatalog.get("my_train").set(thing_classes=CLASS_NAMES,  # note: Chinese class names will not render; keep them ASCII
                                    evaluator_type='coco',      # evaluation protocol
                                    json_file=TRAIN_JSON,
                                    image_root=TRAIN_PATH)
register_coco_instances("my_val", {}, VAL_JSON, VAL_PATH)
MetadataCatalog.get("my_val").set(thing_classes=CLASS_NAMES,
                                  evaluator_type='coco',
                                  json_file=VAL_JSON,
                                  image_root=VAL_PATH)
if __name__ == "__main__":
    cfg = get_cfg()
    cfg.merge_from_file(
        "/home/jianming_ge/code/city_manager_20221017/detectron2-main/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"
    )
    cfg.DATASETS.TRAIN = ("my_train",)
    cfg.DATASETS.TEST = ("my_val",)  # leave empty if you have no validation set
    cfg.DATALOADER.NUM_WORKERS = 0
    # Pretrained weights; downloaded automatically if not cached yet
    cfg.MODEL.WEIGHTS = "detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl"
    # Or resume from one of your own checkpoints:
    # cfg.MODEL.WEIGHTS = "../tools/output/model_0003191.pth"
    cfg.SOLVER.IMS_PER_BATCH = 2
    cfg.SOLVER.BASE_LR = 0.0025
    # Training on GPU errored out (the issues above), so run on CPU
    cfg.MODEL.DEVICE = 'cpu'
    # Maximum number of iterations
    cfg.SOLVER.MAX_ITER = 2500
    cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128  # faster, and good enough for this small dataset
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # one class: water

    os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()
```
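Before training it is worth eyeballing the registered data; a sketch using detectron2's Visualizer to dump a few ground-truth overlays (the vis_ output names are my own convention):

```python
import os
import random

import cv2
from detectron2.data import DatasetCatalog, MetadataCatalog
from detectron2.utils.visualizer import Visualizer

dataset_dicts = DatasetCatalog.get("my_train")
metadata = MetadataCatalog.get("my_train")
for d in random.sample(dataset_dicts, 3):
    img = cv2.imread(d["file_name"])
    vis = Visualizer(img[:, :, ::-1], metadata=metadata, scale=0.5)  # Visualizer expects RGB
    out = vis.draw_dataset_dict(d)  # draws the ground-truth boxes and masks
    cv2.imwrite("vis_" + os.path.basename(d["file_name"]), out.get_image()[:, :, ::-1])
```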

Now training runs.

Then another error: the image sizes did not match the COCO annotations. A data problem, so I went back and fixed the data.
The task is instance segmentation of road water accumulation, 550 images in total. The annotations are labelme JSON, which I converted to COCO format.
If you need the dataset you can message me; I will pass it on for a fee, since I paid a high price for it myself.
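A quick way to catch that size mismatch up front (a sketch; paths assume the layout produced by the conversion script below):

```python
import json
import os

from PIL import Image

ann = json.load(open("coco/annotations/instances_train.json", encoding="utf-8"))
for img in ann["images"]:
    w, h = Image.open(os.path.join("coco/images/train", img["file_name"])).size
    if (w, h) != (img["width"], img["height"]):
        print("size mismatch:", img["file_name"], (w, h), "vs", (img["width"], img["height"]))
```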
Conversion script:

```python
import os
import json
import numpy as np
import glob
import shutil
import cv2
from sklearn.model_selection import train_test_split

np.random.seed(41)


# id 0 is reserved for background
classname_to_id = {
    "water": 1,
}


# Note: category ids start from 1; edit classname_to_id above to match your own label names

class Lableme2CoCo:

    def __init__(self):
        self.images = []
        self.annotations = []
        self.categories = []
        self.img_id = 0
        self.ann_id = 0

    def save_coco_json(self, instance, save_path):
        json.dump(instance, open(save_path, 'w', encoding='utf-8'), ensure_ascii=False, indent=1)  # indent=2 prints more readably

    # Build the COCO structure from a list of labelme JSON files
    def to_coco(self, json_path_list):
        self._init_categories()
        for json_path in json_path_list:
            obj = self.read_jsonfile(json_path)
            self.images.append(self._image(obj, json_path))
            shapes = obj['shapes']
            for shape in shapes:
                annotation = self._annotation(shape)
                self.annotations.append(annotation)
                self.ann_id += 1
            self.img_id += 1
        instance = {}
        instance['info'] = 'spytensor created'
        instance['license'] = ['license']
        instance['images'] = self.images
        instance['annotations'] = self.annotations
        instance['categories'] = self.categories
        return instance

    # Build the categories field
    def _init_categories(self):
        for k, v in classname_to_id.items():
            category = {}
            category['id'] = v
            category['name'] = k
            self.categories.append(category)

    # Build a COCO image record from a labelme JSON object
    def _image(self, obj, path):
        image = {}
        from labelme import utils
        img_x = utils.img_b64_to_arr(obj['imageData'])
        h, w = img_x.shape[:2]
        image['height'] = h
        image['width'] = w
        image['id'] = self.img_id
        image['file_name'] = os.path.basename(path).replace(".json", ".jpg")
        return image

    # Build a COCO annotation record from a labelme shape
    def _annotation(self, shape):
        label = shape['label']
        points = shape['points']
        annotation = {}
        annotation['id'] = self.ann_id
        annotation['image_id'] = self.img_id
        annotation['category_id'] = int(classname_to_id[label])
        annotation['segmentation'] = [np.asarray(points).flatten().tolist()]
        annotation['bbox'] = self._get_box(points)
        annotation['iscrowd'] = 0
        annotation['area'] = 1.0  # placeholder; see the note after this script
        return annotation

    # Read a JSON file and return the parsed object
    def read_jsonfile(self, path):
        with open(path, "r", encoding='utf-8') as f:
            return json.load(f)

    # COCO bbox format: [x, y, w, h]
    def _get_box(self, points):
        min_x = min_y = np.inf
        max_x = max_y = 0
        for x, y in points:
            min_x = min(min_x, x)
            min_y = min(min_y, y)
            max_x = max(max_x, x)
            max_y = max(max_y, y)
        return [min_x, min_y, max_x - min_x, max_y - min_y]

if __name__ == '__main__':
    # Point labelme_path at the directory holding your images and labelme JSON files
    labelme_path = "./water_street"
    # The COCO dataset is generated into a coco/ folder under saved_coco_path
    saved_coco_path = "./"
    print('reading...')
    # Create the output directory tree
    os.makedirs("%scoco/annotations/" % saved_coco_path, exist_ok=True)
    os.makedirs("%scoco/images/train/" % saved_coco_path, exist_ok=True)
    os.makedirs("%scoco/images/val/" % saved_coco_path, exist_ok=True)
    # Collect all labelme JSON files under labelme_path
    print(labelme_path + "/*.json")
    json_list_path = glob.glob(labelme_path + "/*.json")
    print('json_list_path: ', len(json_list_path))
    # Train/val split, 9:1 by default; adjust the ratio as needed. Unlike COCO's
    # train2017/val2017 layout, all source images live in one directory here.
    train_path, val_path = train_test_split(json_list_path, test_size=0.1, train_size=0.9)
    print("train_n:", len(train_path), 'val_n:', len(val_path))
    # Convert the training split to COCO JSON
    l2c_train = Lableme2CoCo()
    train_instance = l2c_train.to_coco(train_path)
    l2c_train.save_coco_json(train_instance, '%scoco/annotations/instances_train.json' % saved_coco_path)
    for file in train_path:
        img_name = file.replace('json', 'jpg')
        temp_img = cv2.imread(img_name)
        if temp_img is None:
            print(img_name + " is none!!!!!")
        try:
            # split on "\\" because these paths came from Windows; os.path.basename is more portable
            filenames = img_name.split("\\")[-1]
            # Writes X.jpg to ./coco/images/train/X.jpg
            cv2.imwrite("./coco/images/train/{}".format(filenames), temp_img)
        except Exception as e:
            print(e)
            print('Wrong Image:', img_name)

    for file in val_path:
        img_name = file.replace('json', 'jpg')
        temp_img = cv2.imread(img_name)
        try:
            filenames = img_name.split("\\")[-1]
            print(filenames)
            cv2.imwrite("./coco/images/val/{}".format(filenames), temp_img)
        except Exception as e:
            print(e)
            print('Wrong Image:', img_name)
            continue

    # Convert the validation split to COCO JSON
    l2c_val = Lableme2CoCo()
    val_instance = l2c_val.to_coco(val_path)
    l2c_val.save_coco_json(val_instance, '%scoco/annotations/instances_val.json' % saved_coco_path)
```
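One caveat in the script above: annotation['area'] is hard-coded to 1.0. COCO evaluation buckets AP by object area, so if you care about APs/APm/APl, compute the real polygon area instead; a sketch using the shoelace formula (polygon_area is a hypothetical helper, not part of the script):

```python
import numpy as np

def polygon_area(points):
    # Shoelace formula over [[x1, y1], [x2, y2], ...] polygon vertices
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# e.g. in _annotation: annotation['area'] = polygon_area(points)
```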

Training results

The CPU was completely maxed out during training.

(mydet) [jianming_ge@localhost detectron2-main]$ tensorboard --logdir output/

Watch the metrics evolve over iterations in TensorBoard.
A shout-out to VS Code here: over remote SSH it sets up port forwarding for you, so http://localhost:6006/#timeseries works on the local machine with zero fuss.
By around iteration 2000 the loss is falling very slowly. The code I adapted does not report validation metrics during training, so I have to finish training, take the model, and then check it on the validation set.
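Evaluating afterwards on the registered my_val split can be done with COCOEvaluator; a minimal sketch reusing the cfg from the training script (note the COCOEvaluator signature varies slightly across detectron2 versions):

```python
import os

from detectron2.data import build_detection_test_loader
from detectron2.engine import DefaultPredictor
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")  # the trained weights
predictor = DefaultPredictor(cfg)

evaluator = COCOEvaluator("my_val", output_dir=cfg.OUTPUT_DIR)
val_loader = build_detection_test_loader(cfg, "my_val")
print(inference_on_dataset(predictor.model, val_loader, evaluator))
```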

Thoughts

Training on CPU was not as painfully slow as I expected; that may be because IMS_PER_BATCH = 2 feeds only two images per batch.

About the dataset

If you are interested, contact me; it is already packaged as a COCO instance segmentation dataset.

How to train on GPU

Git: https://github.com/Okery/PyTorch-Simple-MaskRCNN
Let's see how well this one works.
Training command after cloning:

(py38_18) [jianming_ge@localhost PyTorch-Simple-MaskRCNN-master]$ python train.py --use-cuda --dataset coco --data-dir /home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/

The dataset's JSON files need renaming first (the repo expects COCO-2017 naming, as the run below shows):

```bash
(py38_18) [jianming_ge@localhost PyTorch-Simple-MaskRCNN-master]$ python train.py --use-cuda --dataset coco --data-dir /home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/
cuda: True
available GPU(s): 1
0: {'name': 'NVIDIA GeForce RTX 2080 Ti', 'capability': [7, 5], 'total_momory': 10.76, 'sm_count': 68}

device: cuda
loading annotations into memory...
Done (t=0.04s)
creating index...
index created!
Checking the dataset...
checked id file: /home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/checked_train2017.txt
450 samples are OK; 1.0 seconds
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
Checking the dataset...
checked id file: /home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/checked_val2017.txt
51 samples are OK; 0.1 seconds
Namespace(ckpt_path='./maskrcnn_coco.pth', data_dir='/home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/', dataset='coco', epochs=3, iters=10, lr=0.00125, lr_steps=[6, 7], momentum=0.9, print_freq=100, results='./maskrcnn_results.pth', seed=3, use_cuda=True, warmup_iters=1000, weight_decay=0.0001)
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /home/jianming_ge/.cache/torch/hub/checkpoints/resnet50-19c8e357.pth
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 97.8M/97.8M [01:28<00:00, 1.16MB/s]
Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /home/jianming_ge/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 170M/170M [02:36<00:00, 1.14MB/s]

already trained: 0 epochs; to 3 epochs

epoch: 1
lr_epoch: 0.00125, factor: 1.00000
Traceback (most recent call last):
  File "train.py", line 122, in <module>
    main(args)
  File "train.py", line 64, in main
    iter_train = pmr.train_one_epoch(model, optimizer, d_train, device, epoch, args)
  File "/home/kevin_xie/yifeinfs/jianming_ge/code/city_manage_20221017/PyTorch-Simple-MaskRCNN-master/pytorch_mask_rcnn/engine.py", line 24, in train_one_epoch
    for i, (image, target) in enumerate(data_loader):
  File "/home/kevin_xie/yifeinfs/jianming_ge/miniconda3/envs/py38_18/lib/python3.8/site-packages/torch/utils/data/dataset.py", line 330, in __getitem__
    return self.dataset[self.indices[idx]]
  File "/home/kevin_xie/yifeinfs/jianming_ge/code/city_manage_20221017/PyTorch-Simple-MaskRCNN-master/pytorch_mask_rcnn/datasets/generalized_dataset.py", line 20, in __getitem__
    image = self.get_image(img_id)
  File "/home/kevin_xie/yifeinfs/jianming_ge/code/city_manage_20221017/PyTorch-Simple-MaskRCNN-master/pytorch_mask_rcnn/datasets/coco_dataset.py", line 33, in get_image
    image = Image.open(os.path.join(self.data_dir, "{}".format(self.split), img_info["file_name"]))
  File "/home/kevin_xie/yifeinfs/jianming_ge/miniconda3/envs/py38_18/lib/python3.8/site-packages/PIL/Image.py", line 2968, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/train2017/369.jpg'
```
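The traceback shows the repo expects the COCO-2017 layout (train2017/, val2017/, annotations/instances_train2017.json). A rename sketch, assuming the images still sit in images/train and images/val as produced by my conversion script:

```python
import os
import shutil

root = "/home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32"
shutil.move(os.path.join(root, "images", "train"), os.path.join(root, "train2017"))
shutil.move(os.path.join(root, "images", "val"), os.path.join(root, "val2017"))
for split in ("train", "val"):
    shutil.move(
        os.path.join(root, "annotations", "instances_%s.json" % split),
        os.path.join(root, "annotations", "instances_%s2017.json" % split),
    )
```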

(py38_18) [jianming_ge@localhost PyTorch-Simple-MaskRCNN-master]$ python train.py --use-cuda --dataset coco --data-dir /home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/ --epochs 100 --iters -1

Tried a different set of hyperparameters; now it runs, but it does not seem to converge: several loss columns are nan and both bbox AP and mask AP stay at 0.0.
```bash
(py38_18) [jianming_ge@localhost PyTorch-Simple-MaskRCNN-master]$ python train.py --use-cuda --dataset coco --data-dir /home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/ --epochs 100 --iters -1
cuda: True
available GPU(s): 1
0: {'name': 'NVIDIA GeForce RTX 2080 Ti', 'capability': [7, 5], 'total_momory': 10.76, 'sm_count': 68}

device: cuda
loading annotations into memory...
Done (t=0.04s)
creating index...
index created!
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Namespace(ckpt_path='./maskrcnn_coco.pth', data_dir='/home/kevin_xie/yifeinfs/data_share/city_manager_20221017/water_street_coco_version20221020_32/', dataset='coco', epochs=100, iters=-1, lr=0.00125, lr_steps=[6, 7], momentum=0.9, print_freq=100, results='./maskrcnn_results.pth', seed=3, use_cuda=True, warmup_iters=1000, weight_decay=0.0001)
2

already trained: 14 epochs; to 100 epochs

epoch: 15
lr_epoch: 0.00001, factor: 0.01000
6300 0.691 0.480 nan nan nan
6400 0.683 0.018 nan nan nan
6500 0.683 0.387 nan nan nan
6600 0.683 0.270 nan nan nan
6700 0.688 0.315 nan nan nan
iter: 76.8, total: 57.3, model: 32.7, backward: 10.1
iter: 45.2, total: 30.6, model: 28.5
accumulate: 0.0s
training: 34.6 s, evaluation: 2.3 s
{'bbox AP': 0.0, 'mask AP': 0.0}

epoch: 20
lr_epoch: 0.00001, factor: 0.01000
8600 0.678 0.079 nan nan nan
8700 0.678 0.241 nan nan nan
8800 0.679 0.142 nan nan nan
8900 0.677 0.125 nan nan nan
iter: 78.3, total: 57.6, model: 32.2, backward: 10.7
iter: 51.8, total: 30.0, model: 28.3
accumulate: 0.0s
training: 35.2 s, evaluation: 2.7 s
{'bbox AP': 0.0, 'mask AP': 0.0}

epoch: 24
lr_epoch: 0.00001, factor: 0.01000
10400 0.674 0.073 nan nan nan
10500 0.674 0.226 nan nan nan
10600 0.676 0.142 nan nan nan
10700 0.674 0.116 nan nan nan
iter: 78.0, total: 58.0, model: 32.4, backward: 10.4
iter: 49.0, total: 29.6, model: 28.2
accumulate: 0.0s
training: 35.1 s, evaluation: 2.5 s
{'bbox AP': 0.0, 'mask AP': 0.0}

epoch: 26
lr_epoch: 0.00001, factor: 0.01000
11300 0.673 0.070 nan nan nan
11400 0.673 0.219 nan nan nan
11500 0.675 0.142 nan nan nan
11600 0.671 0.111 nan nan nan
iter: 77.1, total: 58.9, model: 32.3, backward: 10.4
iter: 44.5, total: 29.7, model: 28.2
accumulate: 0.0s
training: 34.7 s, evaluation: 2.3 s
{'bbox AP': 0.0, 'mask AP': 0.0}

epoch: 27
lr_epoch: 0.00001, factor: 0.01000
11700 0.690 0.470 nan nan nan
11800 0.671 0.016 nan nan nan
11900 0.672 0.329 nan nan nan
```
Reference:
https://blog.csdn.net/jiaoyangwm/article/details/114845483
