Preface
This article is written for beginners. It walks through setting up mmdetection on a Linux server from scratch and running YOLOv3 on your own dataset, covering environment setup, dataset preparation, and the training process in detail. If you run into any problems, feel free to ask in the comments.
1. Environment Setup
Follow the official mmdetection documentation step by step.
- Installing PyTorch
First run nvidia-smi in a terminal and check the CUDA Version in the top-right corner, e.g. CUDA Version: 12.2. Then go to the PyTorch website, pick the matching PyTorch build, and copy the install command shown there into your terminal. Note that the CUDA version of the PyTorch build must be less than or equal to your driver's CUDA Version, otherwise they may be incompatible; if no command matches your CUDA version, look under previous versions.
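The compatibility rule above (the wheel's CUDA version must not exceed the driver's CUDA Version) can be sketched as a quick check; the version strings here are just examples:

```python
# Compare the driver's CUDA Version (from nvidia-smi) with the CUDA
# version of the PyTorch wheel you plan to install.
def cuda_compatible(driver_cuda: str, wheel_cuda: str) -> bool:
    """True if the wheel's CUDA version is <= the driver's CUDA Version."""
    parse = lambda v: tuple(int(x) for x in v.split('.'))
    return parse(wheel_cuda) <= parse(driver_cuda)

print(cuda_compatible("12.2", "11.8"))  # True:  a cu118 wheel runs on a 12.2 driver
print(cuda_compatible("11.4", "12.1"))  # False: cu121 is too new for an 11.4 driver
```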
- Verifying that PyTorch installed correctly
In your newly created miniconda environment, run python to enter the Python interpreter, execute the three statements below, then type exit() to leave:
import torch
print(torch.__version__)
print(torch.cuda.is_available())
- If git clone fails
If git clone https://github.com/open-mmlab/mmdetection.git fails over HTTPS, try the SSH URL instead: git clone git@github.com:open-mmlab/mmdetection.git (GitHub no longer serves the plain git:// protocol, so the old trick of changing https to git will not work).
2. Verifying the mmdetection Installation
In the mmdetection folder, run mim download mmdet --config yolov3_mobilenetv2_320_300e_coco --dest . to download the config file and model weights. Once the download finishes, run the command below; the detection result is saved to result.jpg:
python demo/image_demo.py demo/demo.jpg yolov3_mobilenetv2_320_300e_coco.py yolov3_mobilenetv2_320_300e_coco_20210719_215349-d18dff72.pth --device cpu --out-file result.jpg
3. Training
Now we can start training on our own dataset. I recommend connecting to the server with VS Code over Remote-SSH.
I also suggest (beginners may skip this) running training inside a screen session: a dropped connection then won't kill the run, and you can keep several windows going at once.
- Create a session:
screen -S name
- List all sessions:
screen -ls
- Reattach to a session:
screen -r name
- Detach from a session:
press Ctrl+A, then D
- Kill a session:
find its pid with screen -ls, then run kill <pid>
3.1 Dataset Preparation
This assumes you already have .jpg images and .xml annotation files. If you have no .xml files, create them yourself with labelImg or labelme, or download a ready-made dataset.
Note: annotations holds the .json files, and the other three folders hold the split .jpg files; if you have no test set, there is no test2017 folder.
The final data folder looks like this:
mmdetection
├── mmdet
├── tools
├── configs
├── data
│ ├── coco
│ │ ├── annotations
│ │ │ ├── instances_train2017.json
│ │ │ ├── instances_val2017.json
│ │ │ ├── instances_test2017.json (not used in this article)
│ │ ├── train2017
│ │ │ ├── train1.jpg
│ │ │ ├── train2.jpg
│ │ ├── val2017
│ │ │ ├── val1.jpg
│ │ │ ├── val2.jpg
│ │ ├── test2017 (not used in this article)
Raw data:
If you have more .jpg files than .xml files, see this article.
|—— jpg
| |—— jpg1.jpg
| |—— jpg2.jpg
|—— xml
| |—— xml1.xml
| |—— xml2.xml
3.1.1 Splitting the Dataset (train/val)
This article simply splits the raw data 9:1 into train and val; if you need a test split, modify the code accordingly.
import os
import shutil
import random

random.seed(0)

def split_data(file_path, xml_path, new_file_path, train_rate, val_rate):
    each_class_image = []
    each_class_label = []
    # sort both listings so that image i and label i refer to the same sample
    for image in sorted(os.listdir(file_path)):
        each_class_image.append(image)
    for label in sorted(os.listdir(xml_path)):
        each_class_label.append(label)
    data = list(zip(each_class_image, each_class_label))
    total = len(each_class_image)
    random.shuffle(data)
    each_class_image, each_class_label = zip(*data)
    train_images = each_class_image[0:int(train_rate * total)]
    val_images = each_class_image[int(train_rate * total):]
    # test_images = each_class_image[int((train_rate + val_rate) * total):]
    train_labels = each_class_label[0:int(train_rate * total)]
    val_labels = each_class_label[int(train_rate * total):]
    # test_labels = each_class_label[int((train_rate + val_rate) * total):]
    for image in train_images:
        print(image)
        old_path = file_path + '/' + image
        new_path1 = new_file_path + '/' + 'train2017'
        if not os.path.exists(new_path1):
            os.makedirs(new_path1)
        new_path = new_path1 + '/' + image
        shutil.copy(old_path, new_path)
    for label in train_labels:
        print(label)
        old_path = xml_path + '/' + label
        new_path1 = new_file_path + '/' + 'train2017'
        if not os.path.exists(new_path1):
            os.makedirs(new_path1)
        new_path = new_path1 + '/' + label
        shutil.copy(old_path, new_path)
    for image in val_images:
        old_path = file_path + '/' + image
        new_path1 = new_file_path + '/' + 'val2017'
        if not os.path.exists(new_path1):
            os.makedirs(new_path1)
        new_path = new_path1 + '/' + image
        shutil.copy(old_path, new_path)
    for label in val_labels:
        old_path = xml_path + '/' + label
        new_path1 = new_file_path + '/' + 'val2017'
        if not os.path.exists(new_path1):
            os.makedirs(new_path1)
        new_path = new_path1 + '/' + label
        shutil.copy(old_path, new_path)
    # for image in test_images:
    #     old_path = file_path + '/' + image
    #     new_path1 = new_file_path + '/' + 'test' + '/' + 'images'
    #     if not os.path.exists(new_path1):
    #         os.makedirs(new_path1)
    #     new_path = new_path1 + '/' + image
    #     shutil.copy(old_path, new_path)
    # for label in test_labels:
    #     old_path = xml_path + '/' + label
    #     new_path1 = new_file_path + '/' + 'test' + '/' + 'labels'
    #     if not os.path.exists(new_path1):
    #         os.makedirs(new_path1)
    #     new_path = new_path1 + '/' + label
    #     shutil.copy(old_path, new_path)

if __name__ == '__main__':
    file_path = "jpg"  # path to the .jpg files (relative to this script)
    xml_path = 'xml'  # path to the .xml files (relative to this script)
    new_file_path = "data/coco"
    split_data(file_path, xml_path, new_file_path, train_rate=0.9, val_rate=0.1)
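The split above is pure index arithmetic on the shuffled list; as a toy check, with 20 image/label pairs and train_rate=0.9 (file names here are made up):

```python
files = [f"img{i}.jpg" for i in range(20)]  # 20 image/label pairs
train_rate = 0.9
total = len(files)
train = files[:int(train_rate * total)]  # first 90% of the list
val = files[int(train_rate * total):]    # remaining 10%
print(len(train), len(val))  # 18 2
```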
3.1.2 Generating the .json Files
Note: both train and val need a .json file. After running the script once, change the two paths at the bottom and run it a second time.
import xml.etree.ElementTree as ET
import os
import json

coco = dict()
coco['images'] = []
coco['type'] = 'instances'
coco['annotations'] = []
coco['categories'] = []

category_set = dict()
image_set = set()

category_item_id = 0
image_id = 20180000000
annotation_id = 0

def addCatItem(name):
    global category_item_id
    category_item = dict()
    category_item['supercategory'] = 'none'
    category_item_id += 1
    category_item['id'] = category_item_id
    category_item['name'] = name
    coco['categories'].append(category_item)
    category_set[name] = category_item_id
    return category_item_id

def addImgItem(file_name, size):
    global image_id
    if file_name is None:
        raise Exception('Could not find filename tag in xml file.')
    if size['width'] is None:
        raise Exception('Could not find width tag in xml file.')
    if size['height'] is None:
        raise Exception('Could not find height tag in xml file.')
    image_id += 1
    image_item = dict()
    image_item['id'] = image_id
    image_item['file_name'] = file_name
    image_item['width'] = size['width']
    image_item['height'] = size['height']
    coco['images'].append(image_item)
    image_set.add(file_name)
    return image_id

def addAnnoItem(object_name, image_id, category_id, bbox):
    global annotation_id
    annotation_item = dict()
    annotation_item['segmentation'] = []
    seg = []
    # bbox[] is x,y,w,h
    # left_top
    seg.append(bbox[0])
    seg.append(bbox[1])
    # left_bottom
    seg.append(bbox[0])
    seg.append(bbox[1] + bbox[3])
    # right_bottom
    seg.append(bbox[0] + bbox[2])
    seg.append(bbox[1] + bbox[3])
    # right_top
    seg.append(bbox[0] + bbox[2])
    seg.append(bbox[1])
    annotation_item['segmentation'].append(seg)
    annotation_item['area'] = bbox[2] * bbox[3]
    annotation_item['iscrowd'] = 0
    annotation_item['ignore'] = 0
    annotation_item['image_id'] = image_id
    annotation_item['bbox'] = bbox
    annotation_item['category_id'] = category_id
    annotation_id += 1
    annotation_item['id'] = annotation_id
    coco['annotations'].append(annotation_item)

def parseXmlFiles(xml_path):
    for f in os.listdir(xml_path):
        if not f.endswith('.xml'):
            continue
        bndbox = dict()
        size = dict()
        current_image_id = None
        current_category_id = None
        file_name = None
        size['width'] = None
        size['height'] = None
        size['depth'] = None
        xml_file = os.path.join(xml_path, f)
        print(xml_file)
        tree = ET.parse(xml_file)
        root = tree.getroot()
        if root.tag != 'annotation':
            raise Exception('pascal voc xml root element should be annotation, rather than {}'.format(root.tag))
        # elem is <folder>, <filename>, <size>, <object>
        for elem in root:
            current_parent = elem.tag
            current_sub = None
            object_name = None
            if elem.tag == 'folder':
                continue
            if elem.tag == 'filename':
                file_name = elem.text
                if file_name in category_set:
                    raise Exception('file_name duplicated')
            # add img item only after parse <size> tag
            elif current_image_id is None and file_name is not None and size['width'] is not None:
                if file_name not in image_set:
                    current_image_id = addImgItem(file_name, size)
                    print('add image with {} and {}'.format(file_name, size))
                else:
                    raise Exception('duplicated image: {}'.format(file_name))
            # subelem is <width>, <height>, <depth>, <name>, <bndbox>
            for subelem in elem:
                bndbox['xmin'] = None
                bndbox['xmax'] = None
                bndbox['ymin'] = None
                bndbox['ymax'] = None
                current_sub = subelem.tag
                if current_parent == 'object' and subelem.tag == 'name':
                    object_name = subelem.text
                    if object_name not in category_set:
                        current_category_id = addCatItem(object_name)
                    else:
                        current_category_id = category_set[object_name]
                elif current_parent == 'size':
                    if size[subelem.tag] is not None:
                        raise Exception('xml structure broken at size tag.')
                    size[subelem.tag] = int(subelem.text)
                # option is <xmin>, <ymin>, <xmax>, <ymax>, when subelem is <bndbox>
                for option in subelem:
                    if current_sub == 'bndbox':
                        if bndbox[option.tag] is not None:
                            raise Exception('xml structure corrupted at bndbox tag.')
                        bndbox[option.tag] = int(option.text)
                # only after parse the <object> tag
                if bndbox['xmin'] is not None:
                    if object_name is None:
                        raise Exception('xml structure broken at bndbox tag')
                    if current_image_id is None:
                        raise Exception('xml structure broken at bndbox tag')
                    if current_category_id is None:
                        raise Exception('xml structure broken at bndbox tag')
                    bbox = []
                    # x
                    bbox.append(bndbox['xmin'])
                    # y
                    bbox.append(bndbox['ymin'])
                    # w
                    bbox.append(bndbox['xmax'] - bndbox['xmin'])
                    # h
                    bbox.append(bndbox['ymax'] - bndbox['ymin'])
                    print('add annotation with {},{},{},{}'.format(object_name, current_image_id, current_category_id, bbox))
                    addAnnoItem(object_name, current_image_id, current_category_id, bbox)

if __name__ == '__main__':
    xml_path = 'data/coco/train2017'  # change train to val for the second run
    json_file = 'data/coco/annotations/instances_train2017.json'  # change train to val for the second run
    os.makedirs(os.path.dirname(json_file), exist_ok=True)  # the split script does not create annotations/
    parseXmlFiles(xml_path)
    json.dump(coco, open(json_file, 'w'))
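To sanity-check a generated instances_*.json, a small integrity check like the following can help; the toy dict below only mimics the shape of what the script writes:

```python
import json

def check_coco(coco):
    """Verify every annotation references an existing image and category."""
    image_ids = {img['id'] for img in coco['images']}
    cat_ids = {cat['id'] for cat in coco['categories']}
    for ann in coco['annotations']:
        assert ann['image_id'] in image_ids, ann
        assert ann['category_id'] in cat_ids, ann
        x, y, w, h = ann['bbox']
        assert w > 0 and h > 0, ann
    return len(coco['images']), len(coco['annotations'])

# toy example shaped like the script's output
coco = {
    'images': [{'id': 20180000001, 'file_name': 'train1.jpg', 'width': 640, 'height': 480}],
    'categories': [{'id': 1, 'name': 'cat', 'supercategory': 'none'}],
    'annotations': [{'id': 1, 'image_id': 20180000001, 'category_id': 1,
                     'bbox': [10, 20, 100, 50], 'area': 5000, 'iscrowd': 0}],
}
print(check_coco(coco))  # (1, 1)
# in practice: check_coco(json.load(open('data/coco/annotations/instances_train2017.json')))
```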
To copy files from a local Windows machine to the server you can use scp, e.g. scp -r data user@server:/path/to/mmdetection/ (paths here are placeholders).
3.2 Editing the Configuration
mmdetection ships many models; this article uses yolov3_d53_8xb8-ms-608-273e_coco.py. Adapt the edits below to whichever model you chose.
3.2.1 mmdetection/configs/yolo/yolov3_d53_8xb8-ms-608-273e_coco.py
Change num_classes (the number of classes) at line 24 and max_epochs at line 148.
3.2.2 mmdetection/mmdet/datasets/coco.py
Change the class names in classes at line 18.
3.2.3 mmdet/evaluation/functional/class_names.py
Change the coco_classes class names at line 72.
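The three edits above must agree with each other: the class names set in coco.py and coco_classes in class_names.py must list the same names, and their length must equal num_classes in the config. A quick self-check (class names here are made up):

```python
# made-up example: a 3-class dataset
num_classes = 3                        # value set in the yolov3 config
classes = ('cat', 'dog', 'bird')       # tuple set in mmdet/datasets/coco.py
coco_classes = ['cat', 'dog', 'bird']  # list returned in class_names.py

assert len(classes) == num_classes
assert list(classes) == coco_classes
print('config, dataset and class_names are consistent')
```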
3.3 Training on GPUs
3.3.1 Single-GPU training
In a Linux terminal, enter the mmdetection folder and run:
python tools/train.py configs/yolo/yolov3_d53_8xb8-ms-608-273e_coco.py
3.3.2 Multi-GPU training
./tools/dist_train.sh configs/yolo/yolov3_d53_8xb8-ms-608-273e_coco.py 2
If a CUDA error prevents multi-GPU training:
In a Linux terminal, enter the mmdetection folder, run vim tools/dist_train.sh, and press a or i to enter insert mode.
Prepend CUDA_VISIBLE_DEVICES="0,1" to the python command on line 11; 0,1 are the IDs of the usable GPUs, which you can check by running nvidia-smi. When done, press Esc, then Shift+:, type wq! to save and quit, and run the training command again.
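After the edit, the launch line in tools/dist_train.sh looks roughly like this (a sketch; the exact launcher arguments vary with your mmdetection version):

```shell
CUDA_VISIBLE_DEVICES="0,1" python -m torch.distributed.launch --nproc_per_node=$GPUS $(dirname "$0")/train.py $CONFIG --launcher pytorch ${@:3}
```

Note there must be no spaces around the = in the environment-variable assignment.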
3.4 Using COCO Pretrained Weights to Speed Up Training
Download a .pth file from the links in mmdetection/configs/yolo/README.md and place it under mmdetection/checkpoints/.
If this raises an error (typically a class-count mismatch), you can trim the pretrained weights with the code below; note that the state_dict key names differ from model to model, and this example targets a Cascade R-CNN checkpoint:
import torch
num_classes = 21
model_coco = torch.load("cascade_rcnn_x101_32x4d_fpn_2x_20181218-28f73c4c.pth")
#weight
model_coco["state_dict"]["bbox_head.0.fc_cls.weight"] = model_coco["state_dict"]["bbox_head.0.fc_cls.weight"][:num_classes, :]
model_coco["state_dict"]["bbox_head.1.fc_cls.weight"] = model_coco["state_dict"]["bbox_head.1.fc_cls.weight"][:num_classes, :]
model_coco["state_dict"]["bbox_head.2.fc_cls.weight"] = model_coco["state_dict"]["bbox_head.2.fc_cls.weight"][:num_classes, :]
#bias
model_coco["state_dict"]["bbox_head.0.fc_cls.bias"] = model_coco["state_dict"]["bbox_head.0.fc_cls.bias"][:num_classes]
model_coco["state_dict"]["bbox_head.1.fc_cls.bias"] = model_coco["state_dict"]["bbox_head.1.fc_cls.bias"][:num_classes]
model_coco["state_dict"]["bbox_head.2.fc_cls.bias"] = model_coco["state_dict"]["bbox_head.2.fc_cls.bias"][:num_classes]
#save new model
torch.save(model_coco,"coco_pretrained_weights_classes_%d.pth"%num_classes)
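The trimming above is just row-slicing of the classifier tensors: keep the first num_classes output rows of the weight and the first num_classes entries of the bias. The same idea on a toy state_dict with plain lists (torch tensors slice the same way):

```python
num_classes = 3

# toy "state_dict": a 5-class classifier weight (5 output rows) and bias
state_dict = {
    'fc_cls.weight': [[i, i, i] for i in range(5)],  # shape (5, 3)
    'fc_cls.bias': [0.0] * 5,
}

# keep only the first num_classes output rows, as the script above does
state_dict['fc_cls.weight'] = state_dict['fc_cls.weight'][:num_classes]
state_dict['fc_cls.bias'] = state_dict['fc_cls.bias'][:num_classes]

print(len(state_dict['fc_cls.weight']), len(state_dict['fc_cls.bias']))  # 3 3
```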
Then set load_from in mmdetection/configs/_base_/default_runtime.py:
load_from = '/...../coco_pretrained_weights_classes_21.pth'
# load_from = None
4. Testing
4.1 Testing on GPUs
4.1.1 Single-GPU testing
epoch_21.pth is generated under work_dirs after training finishes.
python3 tools/test.py ./configs/yolo/yolov3_d53_8xb8-ms-608-273e_coco.py ./work_dirs/yolov3_d53_8xb8-ms-608-273e_coco/epoch_21.pth --show-dir ./result
4.1.2 Multi-GPU testing
./tools/dist_test.sh ./configs/yolo/yolov3_d53_8xb8-ms-608-273e_coco.py work_dirs/yolov3_d53_8xb8-ms-608-273e_coco/epoch_21.pth 4 --show-dir ./result/
5. Other Resources
Closing Remarks
If you have any questions, let's discuss them in the comments. I hope this article helps you!