I haven't written a blog post in over a month; the time went into a major revision of my paper, and writing really doesn't come naturally to me. From my senior year of undergrad to the start of my second year of graduate school, I've been working in computer vision for more than two years now, so I thought I'd organize what I've learned, which should also help those who come after me. Vision is a direction that lends itself well to industry-funded projects, and I suspect many vision graduate students are doing that kind of work. This write-up is mainly meant to help newly enrolled students get started quickly and master the basics of vision.
Datasets are the foundation of vision research, so let's start there. On the applied side, vision projects mostly involve three kinds of datasets: image classification, object detection, and image segmentation (there are of course others, such as tabular or point-cloud datasets). For industry projects we usually have to capture the images and build the dataset ourselves, so let's get these three types straight first.
1. Image Classification
Image classification datasets are the simplest. Given a batch of images containing samples of various categories, we just create one folder per category and put the images of each category inside.
For example, images of industrial equipment leaks fall into two classes, normal and leaking, and we sort the images into those two folders.
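The layout above can be sketched with pathlib; the folder names `normal` and `leak` below are just placeholders for the two classes mentioned:

```python
from pathlib import Path

# Create one folder per class; the class names are placeholders.
root = Path("dataset_raw")
for class_name in ["normal", "leak"]:
    (root / class_name).mkdir(parents=True, exist_ok=True)

# The resulting structure a classification tool expects:
# dataset_raw/
#   normal/  <- images of the "normal" class
#   leak/    <- images of the "leaking" class
print(sorted(p.name for p in root.iterdir()))
```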
Next comes splitting into training, validation, and test sets. There are many ways to do this; here is a script that works out of the box.
import os
import random
from shutil import copy2


def data_set_split(src_data_folder, target_data_folder, train_scale=0.8, val_scale=0.2, test_scale=0.0):
    """
    Read the source data folder and produce split folders: train, val, and test.
    :param src_data_folder: source folder, one sub-folder per class
    :param target_data_folder: target folder
    :param train_scale: fraction of images for the training set
    :param val_scale: fraction of images for the validation set
    :param test_scale: fraction of images for the test set
    """
    print("Starting dataset split")
    class_names = os.listdir(src_data_folder)
    # Create the split folders under the target directory
    split_names = ['train', 'val', 'test']
    for split_name in split_names:
        split_path = os.path.join(target_data_folder, split_name)
        if not os.path.isdir(split_path):
            os.makedirs(split_path)
        # Then create one class folder under each split folder
        for class_name in class_names:
            class_split_path = os.path.join(split_path, class_name)
            if not os.path.isdir(class_split_path):
                os.makedirs(class_split_path)

    # Split each class according to the ratios and copy the images
    for class_name in class_names:
        current_class_data_path = os.path.join(src_data_folder, class_name)
        current_all_data = os.listdir(current_class_data_path)
        current_data_length = len(current_all_data)
        current_data_index_list = list(range(current_data_length))
        random.shuffle(current_data_index_list)

        train_folder = os.path.join(target_data_folder, 'train', class_name)
        val_folder = os.path.join(target_data_folder, 'val', class_name)
        test_folder = os.path.join(target_data_folder, 'test', class_name)
        train_stop_flag = current_data_length * train_scale
        val_stop_flag = current_data_length * (train_scale + val_scale)
        current_idx = 0
        train_num = 0
        val_num = 0
        test_num = 0
        for i in current_data_index_list:
            src_img_path = os.path.join(current_class_data_path, current_all_data[i])
            if current_idx < train_stop_flag:
                copy2(src_img_path, train_folder)
                train_num += 1
            elif current_idx < val_stop_flag:
                copy2(src_img_path, val_folder)
                val_num += 1
            else:
                copy2(src_img_path, test_folder)
                test_num += 1
            current_idx += 1

        print("*********************************{}*************************************".format(class_name))
        print("Class {} split with ratio {}:{}:{}, {} images in total".format(
            class_name, train_scale, val_scale, test_scale, current_data_length))
        print("Train set {}: {} images".format(train_folder, train_num))
        print("Val set {}: {} images".format(val_folder, val_num))
        print("Test set {}: {} images".format(test_folder, test_num))


if __name__ == '__main__':
    src_data_folder = ""
    target_data_folder = ""
    data_set_split(src_data_folder, target_data_folder)
After the split, the target folder contains three sub-folders: train, test, and val. train is the training set; val is evaluated during training so you can watch the results as they come in and judge the learning progress; test is the held-out set used to evaluate the model after training finishes. Strictly speaking only train is required: val is optional and its ratio can be set very small, and test is not needed for training itself, though some data is usually reserved for final evaluation. A commonly recommended ratio is 8:1:1. In industry projects, train and val are often set to a 9:1 ratio, with the test set supplied by the client.
With that, a classification dataset is basically ready. Taking MMCLS (MMClassification) as an example, on top of this layout it expects txt files recording each image's path and class. Here is a working script for that as well:
# -*- coding: utf-8 -*-
import os
from glob import glob
from pathlib import Path


def generate_mmcls_ann(data_dir, img_type='.jpeg'):
    data_dir = str(Path(data_dir)) + '/'
    classes = ['dry', 'rainy']  # edit this to match your own class names
    class2id = dict(zip(classes, range(len(classes))))
    dir_types = ['train', 'val']
    sub_dirs = os.listdir(data_dir)
    ann_dir = data_dir + 'meta/'
    if not os.path.exists(ann_dir):
        os.makedirs(ann_dir)
    for sd in sub_dirs:
        if sd not in dir_types:
            continue
        annotations = []
        target_dir = data_dir + sd + '/'
        for d in os.listdir(target_dir):
            class_id = str(class2id[d])
            images = glob(target_dir + d + '/*' + img_type)
            for img in images:
                img = d + '/' + os.path.basename(img)
                annotations.append(img + ' ' + class_id + '\n')
        annotations[-1] = annotations[-1].strip()
        with open(ann_dir + sd + '.txt', 'w') as f:
            f.writelines(annotations)


if __name__ == '__main__':
    data_dir = '/home/mk/mmclassification-master/data/shuita'
    generate_mmcls_ann(data_dir)
Its job is to generate a meta folder whose txt files record each image's relative path and class id.
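For reference, each line in the generated txt pairs a relative image path with a class id; a minimal sketch of reading one line back (the file name here is made up):

```python
# One line of meta/train.txt: "<relative/path> <class_id>"
line = "dry/0001.jpeg 0"

rel_path, class_id = line.rsplit(" ", 1)
print(rel_path)       # dry/0001.jpeg
print(int(class_id))  # 0
```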
Finally, here is what a complete image dataset for an industrial classification task looks like:
2. Object Detection
2.1 Image Annotation: labelme, COCO, YOLO
Object detection is less self-explanatory than image classification, so let me first describe what it does. Image classification assigns a label to each image as a whole, whereas object detection marks the objects of interest within a single image with bounding boxes, as shown below.
When we only need to discover that a target exists and raise an alarm, classification is enough; when we need to locate the target, point it out, or measure its size, we need object detection, which is also friendlier to end users.
Annotating for detection is much more work than for classification and requires an annotation tool, LabelImg, which is easy to find online; here is the 1.8.1 version I use:
Link: https://pan.baidu.com/s/1eLxt7-oEY7OYAo_QYvARNA
Extraction code: nmit
After downloading, just run the LabelImg exe. Annotation itself is simple: open an image, draw a box around each region of interest, assign a label, and save.
When saving you will notice two format options, VOC and YOLO, corresponding to two dataset formats.
VOC saves an XML file recording the file path, the image size, and each box's label and coordinates.
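To make the XML contents concrete, here is a hand-written VOC-style annotation trimmed down to those fields, parsed with the standard library (the file name and box values are made up; real files contain a few more tags):

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written VOC-style annotation for illustration.
voc_xml = """
<annotation>
  <filename>0001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>dog</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(voc_xml)
for obj in root.iter("object"):
    label = obj.find("name").text
    box = obj.find("bndbox")
    coords = [int(box.find(k).text) for k in ("xmin", "ymin", "xmax", "ymax")]
    print(label, coords)  # dog [48, 240, 195, 371]
```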
YOLO is the annotation format of the YOLO family: a txt file in which every box becomes one line. The first number on each line (e.g. 0) is the class id, and the four values that follow are [x_center, y_center, w, h]. Worth noting is that the coordinates are automatically normalized at save time: each value is divided by the image's width or height, giving coordinates between 0 and 1.
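The normalization works out like this (a small sketch; the pixel box and image size are made-up numbers, and `box_to_yolo` is just an illustrative helper name):

```python
def box_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-coordinate corner box to normalized YOLO format."""
    x_center = (xmin + xmax) / 2 / img_w
    y_center = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_center, y_center, w, h

# A 320x240 box centered in a 640x480 image:
print(box_to_yolo(160, 120, 480, 360, 640, 480))  # (0.5, 0.5, 0.5, 0.5)
```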
The newest labelme no longer saves these two formats, since it also has to handle segmentation and other cases; labelme 5.2.1 saves a json file of coordinate points directly. This json corresponds to the COCO dataset style: it stores the point coordinates and the label class.
Beginners may be a bit lost at this point, but fundamentally these are just different annotation formats. For industrial work, mastering YOLO covers the vast majority of cases: when there is little data we annotate it ourselves, straight into YOLO format; with more data the annotation is outsourced to a Taobao shop or the data provider. For converting between label formats, this detailed Zhihu article is a good reference:
目标检测中数据集格式之间的相互转换--coco、voc、yolo - 知乎 (zhihu.com)
2.2 Building a COCO Dataset
From here on I'll build datasets in the two most common formats, COCO and YOLO. With a recent version of labelme, annotating one image produces one json label file for that image, with the effect shown below.
On top of this we need to split into training, test, and validation sets, and finally merge the per-image labels into xxx_train.json, xxx_test.json, and xxx_val.json, where xxx is your dataset's name.
Inside the json, "images" holds the image information (size, id, path); "categories" holds the class information (label ids and names); and finally "annotations" holds each image's annotations (the corresponding image_id, the box coordinates, and the label's category id).
First, use a labelme2coco script to merge the separate json files into one:
import glob
import json
import os

import cv2
import numpy as np
import PIL.Image
import PIL.ImageDraw


class labelme2coco(object):
    def __init__(self, labelme_folder='', save_json_path='./new.json'):
        """
        Args:
            labelme_folder: folder that contains labelme annotations and image files
            save_json_path: path for coco json to be saved
        """
        self.save_json_path = save_json_path
        self.images = []
        self.categories = []
        self.annotations = []
        self.label = []
        self.annID = 0
        self.height = 0
        self.width = 0
        # create save dir
        save_json_dir = os.path.dirname(save_json_path)
        if not os.path.exists(save_json_dir):
            os.makedirs(save_json_dir)
        # get json list; the sort key assumes file names end in five digits
        # before ".json" -- adjust it to your own naming scheme
        json_list = glob.glob(os.path.join(labelme_folder, "*.json"))
        json_list = sorted(json_list, key=lambda name: int(name[-10:-5]))
        self.labelme_json = json_list
        self.save_json()

    def data_transfer(self):
        for num, json_path in enumerate(self.labelme_json):
            with open(json_path, 'r') as fp:
                data = json.load(fp)
                print(json_path)
                self.images.append(self.image(data, num, json_path))
                for shapes in data['shapes']:
                    label = shapes['label']
                    if label not in self.label:
                        self.categories.append(self.category(label))
                        self.label.append(label)
                    points = shapes['points']
                    self.annotations.append(self.annotation(points, label, num))
                    self.annID += 1

    def image(self, data, num, json_path):
        image = {}
        # the image is assumed to sit next to its json with the same stem
        _, img_extension = os.path.splitext(data["imagePath"])
        image_path = json_path.replace(".json", img_extension)
        im = cv2.imread(image_path)
        height, width, c = im.shape
        image['height'] = height
        image['width'] = width
        image['id'] = int(num)
        image['file_name'] = image_path
        self.height = height
        self.width = width
        return image

    def category(self, label):
        category = {}
        category['supercategory'] = label
        category['id'] = int(len(self.label))
        category['name'] = label
        return category

    def annotation(self, points, label, num):
        annotation = {}
        annotation['iscrowd'] = 0
        annotation['image_id'] = int(num)
        annotation['bbox'] = list(map(float, self.getbbox(points)))
        # coarsely derive segmentation from the polygon points
        annotation['segmentation'] = [np.asarray(points).flatten().tolist()]
        annotation['category_id'] = self.getcatid(label)
        annotation['id'] = int(self.annID)
        # add area info (the area is not used for detection)
        annotation['area'] = self.height * self.width
        return annotation

    def getcatid(self, label):
        for categorie in self.categories:
            if label == categorie['name']:
                return categorie['id']
        return -1

    def getbbox(self, points):
        polygons = points
        mask = self.polygons_to_mask([self.height, self.width], polygons)
        return self.mask2box(mask)

    def mask2box(self, mask):
        # tight bounding box around the mask, as [x1, y1, w, h] (coco box format)
        index = np.argwhere(mask == 1)
        rows = index[:, 0]
        cols = index[:, 1]
        left_top_r = np.min(rows)  # y
        left_top_c = np.min(cols)  # x
        right_bottom_r = np.max(rows)
        right_bottom_c = np.max(cols)
        return [left_top_c, left_top_r,
                right_bottom_c - left_top_c, right_bottom_r - left_top_r]

    def polygons_to_mask(self, img_shape, polygons):
        mask = np.zeros(img_shape, dtype=np.uint8)
        mask = PIL.Image.fromarray(mask)
        xy = list(map(tuple, polygons))
        PIL.ImageDraw.Draw(mask).polygon(xy=xy, outline=1, fill=1)
        mask = np.array(mask, dtype=bool)
        return mask

    def data2coco(self):
        data_coco = {}
        data_coco['images'] = self.images
        data_coco['categories'] = self.categories
        data_coco['annotations'] = self.annotations
        return data_coco

    def save_json(self):
        self.data_transfer()
        self.data_coco = self.data2coco()
        with open(self.save_json_path, 'w', encoding='utf-8') as f:
            json.dump(self.data_coco, f, indent=4, separators=(',', ': '), cls=MyEncoder)


# type check when saving json files
class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return super(MyEncoder, self).default(obj)


if __name__ == "__main__":
    # labelme_folder: folder holding your annotated images and json labels
    labelme_folder = r"/home/mk/nanodet/data/dog"
    # save_json_path: where the merged coco-format label file is saved
    save_json_path = r"/home/mk/nanodet/data/dog.json"
    labelme2coco(labelme_folder, save_json_path)
All the per-image jsons are now merged into one.
Next, use a coco_split script to split it into a training set and a test set; here the split is train:test = 8:2. That yields dog_train.json and dog_test.json, ready for training. When using it, remember to adjust the file paths and categories for your own data.
import json
import os
import random

import numpy as np
from pycocotools.coco import COCO


class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return super(MyEncoder, self).default(obj)


# category list -- add your own classes here
cate = [{
    "supercategory": "dog",
    "id": 0,
    "name": "dog"
}]
# dataset name(s)
dataset = ["dog"]

for mode in dataset:
    dataset_path = os.path.join("/home/mk/nanodet/data", mode + ".json")
    coco = COCO(dataset_path)
    img_ids = coco.getImgIds()
    # random.shuffle(img_ids)  # uncomment to shuffle before splitting
    img_len = len(img_ids)
    img_train_len = int(0.8 * img_len)  # split ratio
    images_train = coco.dataset["images"][:img_train_len]
    images_test = coco.dataset["images"][img_train_len:]
    nimages_test = []
    for image in images_test:
        # re-number test image ids so they start from 0 again
        image["id"] = image["id"] - img_train_len
        nimages_test.append(image)
    train_ann_ids = coco.getAnnIds(imgIds=img_ids[:img_train_len])
    train_anns = coco.loadAnns(ids=train_ann_ids)
    test_ann_ids = coco.getAnnIds(imgIds=img_ids[img_train_len:])
    test_anns = coco.loadAnns(ids=test_ann_ids)
    ntrain_anns = []
    for train_ann in train_anns:
        train_ann["category_id"] = 0  # single-class example
        ntrain_anns.append(train_ann)
    ntest_anns = []
    count = 0
    for test_ann in test_anns:
        test_ann["id"] = count
        test_ann["image_id"] = test_ann["image_id"] - img_train_len
        test_ann["category_id"] = 0
        ntest_anns.append(test_ann)
        count += 1
    ntrain_dict = {"images": images_train, "categories": cate, "annotations": ntrain_anns}
    ntest_dict = {"images": nimages_test, "categories": cate, "annotations": ntest_anns}
    with open(os.path.join("/home/mk/data", f"{mode}_train.json"), 'w', encoding='utf-8') as f:
        json.dump(ntrain_dict, f, indent=4, separators=(',', ': '), cls=MyEncoder)
    with open(os.path.join("/home/mk/data", f"{mode}_test.json"), 'w', encoding='utf-8') as f:
        json.dump(ntest_dict, f, indent=4, separators=(',', ': '), cls=MyEncoder)
There are a few more practical scripts. For instance, to rewrite the label class in every json:
import json
import os

json_in = "/home/mk/nanodet/data1"   # folder with the original json files
json_out = "/home/mk/nanodet/data2"  # folder for the modified json files


def label_update(json_in, json_out):
    filelist = os.listdir(json_in)
    for item in filelist:
        if item.endswith('.json'):
            src = os.path.join(os.path.abspath(json_in), item)
            with open(src) as f:
                jj = json.load(f)  # parse the json into a Python dict
            for i in range(len(jj['shapes'])):
                if jj["shapes"][i]["label"] != 'dog':
                    print(jj["imagePath"])
                jj["shapes"][i]["label"] = 'dog'  # rewrite every label to 'dog'
            out = os.path.join(os.path.abspath(json_out), item)
            with open(out, 'w') as f:
                json.dump(jj, f, indent=2)  # save with indentation


label_update(json_in, json_out)
To merge the train and test json files from several folders:
import json
import os

import numpy as np
from pycocotools.coco import COCO


class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return super(MyEncoder, self).default(obj)


dataset = ["dog1", "dog2"]
for mod in ["train", "test"]:
    ntrain_dict = {"images": [], "categories": [], "annotations": []}
    for mode in dataset:
        dataset_path = os.path.join("/home/mk/nanodet/data", mode + "_" + mod + ".json")
        coco = COCO(dataset_path)
        images = coco.dataset["images"]
        nimages = []
        for image in images:
            # offset ids by the number of images merged so far
            image["id"] = image["id"] + len(ntrain_dict["images"])
            nimages.append(image)
        anns = coco.dataset["annotations"]
        nanns = []
        for ann in anns:
            ann["id"] = ann["id"] + len(ntrain_dict["annotations"])
            ann["image_id"] = ann["image_id"] + len(ntrain_dict["images"])
            nanns.append(ann)
        ntrain_dict["images"] += nimages
        ntrain_dict["annotations"] += nanns
        ntrain_dict["categories"] = coco.dataset["categories"]
    with open(os.path.join("/home/mk/nanodet/data", f"all_{mod}.json"), 'w', encoding='utf-8') as f:
        json.dump(ntrain_dict, f, indent=4, separators=(',', ': '), cls=MyEncoder)
2.3 Building a YOLO Dataset
A YOLO dataset is much simpler. Annotation starts out the same as COCO, one label file per image; the difference is that a YOLO label is a txt file.
The YOLO dataset layout is shown below; you could even split the dataset by hand, placing the images and labels into the corresponding folders.
The automated approach is just as easy: put the images into an images folder and the labels into a labels folder,
then the following script takes care of the rest.
# pip install split-folders
import splitfolders

splitfolders.ratio('/home/mk/yhyolov5/newgarbage', output="/home/mk/yhyolov5/newgarbage",
                   seed=1337, ratio=(0.8, 0.2, 0))
2.4 Converting Between YOLO and COCO Datasets
The conversion discussed earlier was between individual YOLO and COCO label files. Sometimes a dataset is already split and we want it in the other format to compare models; since the train/test split is usually random and the data volume is large, it's easier to convert the whole dataset directly.
2.4.1 YOLO to COCO
Keep in mind that YOLO boxes are normalized [x_center, y_center, w, h], while COCO stores the box in pixel coordinates, so a conversion is needed.
import os

import cv2

# What you need:
# labels: yolo-format labels, txt files named after the images
# images: the original images the labels belong to

# path to the original labels
originLabelsDir = '/home/mk/yhyolov5/data/val/labels'
# path to the images the labels belong to
originImagesDir = '/home/mk/yhyolov5/data/val/images'
# where the converted file is saved
saveDir = '/home/mk/yhyolov5/hongwai/data/label.txt'

txtFileList = os.listdir(originLabelsDir)
with open(saveDir, 'w') as fw:
    for txtFile in txtFileList:
        with open(os.path.join(originLabelsDir, txtFile), 'r') as fr:
            labelList = fr.readlines()
            for label in labelList:
                label = label.strip().split()
                x = float(label[1])
                y = float(label[2])
                w = float(label[3])
                h = float(label[4])
                # convert normalized x,y,w,h to pixel x1,y1,x2,y2
                imagePath = os.path.join(originImagesDir,
                                         txtFile.replace('txt', 'jpg'))
                image = cv2.imread(imagePath)
                H, W, _ = image.shape
                x1 = (x - w / 2) * W
                y1 = (y - h / 2) * H
                x2 = (x + w / 2) * W
                y2 = (y + h / 2) * H
                # class ids start from 1 here, to match the coco convention used below
                fw.write(txtFile.replace('txt', 'jpg') +
                         ' {} {} {} {} {}\n'.format(int(label[0]) + 1, x1, y1, x2, y2))
        print('{} done'.format(txtFile))
YOLO labels refer to classes by bare indices (0, 1, 2, ...), so we have to attach the class names: create a classes.txt file and list the class names in it.
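For example, a two-class classes.txt can be written like this (the class names are placeholders); one name per line, in class-id order:

```python
# Write classes.txt: one class name per line, ordered by class id.
classes = ["dog", "cat"]
with open("classes.txt", "w") as f:
    f.write("\n".join(classes))

print(open("classes.txt").read().splitlines())  # ['dog', 'cat']
```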
Then the script below completes the conversion; note that train and val must be converted separately.
import json
import os

import cv2

# -------------------works as-is-----------------------------------
# What you need:
# classes.txt: one class name per line, nothing else
# annos.txt (label.txt): produced by the previous script
# images: the images annos.txt refers to
# The result is a .json file saved under the annotations folder.

# ------------ list the image names with os and read in the bboxes ------------
# Root path, containing images (image folder), label.txt (bbox annotations),
# classes.txt (class names), and an annotations folder (created automatically
# if missing, used to store the final json)
root_path = '/home/mk/yhyolov5/data'
# which split to build; also the name of the saved json file
phase = 'val'

# dataset collects the image info and annotation info of all the data
dataset = {'categories': [], 'annotations': [], 'images': []}

# read the class names
with open(os.path.join(root_path, 'classes.txt')) as f:
    classes = f.read().strip().split()

# map class names to numeric ids, starting from 1 to match the
# 1-based class ids written by the previous script
for i, cls in enumerate(classes, 1):
    dataset['categories'].append({'id': i, 'name': cls, 'supercategory': 'mark'})

# list the image names in the images folder
indexes = os.listdir(os.path.join(root_path, './val/images'))

# count of processed images
count = 0

# read the bbox lines
with open(os.path.join(root_path, './label.txt')) as tr:
    annos = tr.readlines()

# --------------- convert the above into the format COCO expects ---------------
for k, index in enumerate(indexes):
    count += 1
    # read the image with opencv to get its width and height
    im = cv2.imread(os.path.join(root_path, './val/images/') + index)
    height, width, _ = im.shape
    # add the image info to the dataset
    dataset['images'].append({'file_name': index.replace("\\", "/"),
                              'id': count,  # must be int, not str
                              'width': width,
                              'height': height})
    for ii, anno in enumerate(annos):
        parts = anno.strip().split()
        # if the annotation's file name matches the image name, add the annotation
        if parts[0] == index:
            cls_id = int(parts[1])
            x1 = float(parts[2])  # x_min
            y1 = float(parts[3])  # y_min
            x2 = float(parts[4])  # x_max
            y2 = float(parts[5])  # y_max
            box_w = max(0, x2 - x1)
            box_h = max(0, y2 - y1)
            dataset['annotations'].append({
                'area': box_w * box_h,
                'bbox': [x1, y1, box_w, box_h],
                'category_id': cls_id,
                'id': ii,
                'image_id': count,  # must be int, not str
                'iscrowd': 0,
                # mask: the four rectangle corners, clockwise from top-left
                'segmentation': [[x1, y1, x2, y1, x2, y2, x1, y2]]
            })
    print('{} images handled'.format(count))

# folder for the result
folder = os.path.join(root_path, './annotations')
if not os.path.exists(folder):
    os.makedirs(folder)
json_name = os.path.join(root_path, './annotations/{}.json'.format(phase))
with open(json_name, 'w') as f:
    json.dump(dataset, f, ensure_ascii=False, indent=1)
The result is the xxx_train.json and xxx_val.json of a COCO dataset. Note that the image paths on disk must match what is recorded in the generated json files.
2.4.2 COCO to YOLO
COCO to YOLO is much simpler: following xxx_train.json and xxx_val.json, we copy the images and labels into the corresponding folders.
import json
import os
import shutil

filepath = '/home/mk/yhyolov5'
newpath = '/home/mk/yhyolov5/data/val/images'  # new yolo val image folder

with open('/home/mk/yhyolov5/data/dog_val.json') as f:  # open the coco val json
    Json = json.load(f)

annotations = Json['annotations']
images = Json['images']
image_id_name_dict = {}
image_id_width_dict = {}
image_id_height_dict = {}
count = 0
for image in images:
    image_id_name_dict[image['id']] = image['file_name']
    imagepath = filepath + '/' + image['file_name']
    imagename = imagepath.split('/')[-1]
    newimagepath = newpath + '/' + imagename
    shutil.copy(imagepath, newimagepath)  # copy the image to the new folder
    image_id_height_dict[image['id']] = image['height']
    image_id_width_dict[image['id']] = image['width']
    count = count + 1
print(count)

anncount = 0
for annotation in annotations:  # convert the coordinates
    bbox = annotation['bbox']
    x, y, w, h = bbox
    x = x + w / 2  # top-left corner to box center
    y = y + h / 2
    width = image_id_width_dict[annotation['image_id']]
    height = image_id_height_dict[annotation['image_id']]
    x = str(x / width)
    y = str(y / height)
    w = str(w / width)
    h = str(h / height)
    # open in append mode so an image with several boxes keeps all of them
    # (delete any stale label files before re-running)
    with open('/home/mk/data/val/labels/{}.txt'.format(
            os.path.basename(image_id_name_dict[annotation['image_id']]).split('.')[0]), 'a') as f:
        category = str(annotation['category_id'])
        f.write(category + ' ' + x + ' ' + y + ' ' + w + ' ' + h + '\n')  # write the coordinates
    anncount = anncount + 1
print(anncount)
3. Image Segmentation
Image segmentation goes one step beyond object detection: the objects of interest are annotated with pixel-accurate masks. It splits into semantic segmentation (all objects of the same class share one mask color) and instance segmentation (each individual object gets its own mask color).
I have worked on two segmentation projects so far: semantic segmentation of medical images with U-Net, and semantic segmentation of dynamic gases in infrared images with YOLOv8-Seg.
Annotation for segmentation is similar to detection: labelme does the job.
3.1 VOC Datasets for U-Net Segmentation
Early U-Net-style segmentation needs no extra annotation files. Taking a medical image dataset as an example, we only need the original images
and the label (mask) images.
Splitting the dataset simply means writing the image names into the corresponding txt files.
Here is a script that converts labelme annotations into mask images. If it doesn't work for you, search for a visualization script matching your labelme version.
import base64
import json
import os
import os.path as osp

import numpy as np
import PIL.Image
from labelme import utils

'''
Notes on building your own semantic segmentation dataset:
1. I used labelme 3.16.7 and recommend that version; some labelme versions
   raise an error here, specifically: Too many dimensions: 3 > 2
   Install it with: pip install labelme==3.16.7
2. The label images generated here are 8-bit palette PNGs in which each
   pixel's value is the class index of that pixel.
'''
if __name__ == '__main__':
    jpgs_path = "datasets/JPEGImages"
    pngs_path = "datasets/SegmentationClass"
    classes = ["_background_", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
    # classes = ["_background_", "cat", "dog"]

    count = os.listdir("./datasets/before/")
    for i in range(0, len(count)):
        path = os.path.join("./datasets/before", count[i])
        if os.path.isfile(path) and path.endswith('json'):
            data = json.load(open(path))
            if data['imageData']:
                imageData = data['imageData']
            else:
                imagePath = os.path.join(os.path.dirname(path), data['imagePath'])
                with open(imagePath, 'rb') as f:
                    imageData = f.read()
                    imageData = base64.b64encode(imageData).decode('utf-8')
            img = utils.img_b64_to_arr(imageData)
            label_name_to_value = {'_background_': 0}
            for shape in data['shapes']:
                label_name = shape['label']
                if label_name in label_name_to_value:
                    label_value = label_name_to_value[label_name]
                else:
                    label_value = len(label_name_to_value)
                    label_name_to_value[label_name] = label_value

            # label_values must be dense
            label_values, label_names = [], []
            for ln, lv in sorted(label_name_to_value.items(), key=lambda x: x[1]):
                label_values.append(lv)
                label_names.append(ln)
            assert label_values == list(range(len(label_values)))

            lbl = utils.shapes_to_label(img.shape, data['shapes'], label_name_to_value)
            PIL.Image.fromarray(img).save(osp.join(jpgs_path, count[i].split(".")[0] + '.jpg'))

            # remap the per-file label indices onto the global class list
            new = np.zeros([np.shape(img)[0], np.shape(img)[1]])
            for name in label_names:
                index_json = label_names.index(name)
                index_all = classes.index(name)
                new = new + index_all * (np.array(lbl) == index_json)

            utils.lblsave(osp.join(pngs_path, count[i].split(".")[0] + '.png'), new)
            print('Saved ' + count[i].split(".")[0] + '.jpg and ' + count[i].split(".")[0] + '.png')
Then split the dataset and generate the corresponding txt files:
import os
import random

# ----------------------------------------------------------------------#
#   Increase trainval_percent if you want a separate test set.
#   Change train_percent to adjust the train/validation ratio (here 9:1).
#
#   This repo uses the validation set as the test set and does not
#   split off a separate test set.
# ----------------------------------------------------------------------#
trainval_percent = 1
train_percent = 0.9
# -------------------------------------------------------#
#   Points to the folder holding the VOC dataset,
#   by default the VOC dataset in the repo root.
# -------------------------------------------------------#
VOCdevkit_path = 'VOCdevkit'

if __name__ == "__main__":
    random.seed(0)
    print("Generate txt in ImageSets.")
    segfilepath = os.path.join(VOCdevkit_path, 'VOC2007/SegmentationClass')
    saveBasePath = os.path.join(VOCdevkit_path, 'VOC2007/ImageSets/Segmentation')

    temp_seg = os.listdir(segfilepath)
    total_seg = []
    for seg in temp_seg:
        if seg.endswith(".png"):
            total_seg.append(seg)

    num = len(total_seg)
    index_list = range(num)
    tv = int(num * trainval_percent)
    tr = int(tv * train_percent)
    trainval = random.sample(index_list, tv)
    train = random.sample(trainval, tr)

    print("train and val size", tv)
    print("train size", tr)
    ftrainval = open(os.path.join(saveBasePath, 'trainval.txt'), 'w')
    ftest = open(os.path.join(saveBasePath, 'test.txt'), 'w')
    ftrain = open(os.path.join(saveBasePath, 'train.txt'), 'w')
    fval = open(os.path.join(saveBasePath, 'val.txt'), 'w')

    for i in index_list:
        name = total_seg[i][:-4] + '\n'
        if i in trainval:
            ftrainval.write(name)
            if i in train:
                ftrain.write(name)
            else:
                fval.write(name)
        else:
            ftest.write(name)

    ftrainval.close()
    ftrain.close()
    fval.close()
    ftest.close()
    print("Generate txt in ImageSets done.")
3.2 YOLOv8-Seg Datasets
labelme saves json files, so here we lean on a nice open-source tool for the conversion:
GitHub - KdaiP/labelme2YoloV8-segment: converts labelme annotations into a YOLOv8 segmentation dataset and can split training and validation sets automatically
import argparse
import json
import random
import shutil
from collections import defaultdict
from pathlib import Path

import yaml
from tqdm import tqdm

# fix the random seed for reproducibility
random.seed(114514)

# image formats supported by YOLOv8
# https://docs.ultralytics.com/modes/predict/?h=format+image#images
image_formats = ["jpg", "jpeg", "png", "bmp", "webp", "tif", "dng", "mpo", "pfm"]


def copy_labled_img(json_path: Path, target_folder: Path, task: str):
    # look for the image matching the json among the supported formats and copy it
    for format in image_formats:
        image_path = json_path.with_suffix("." + format)
        if image_path.exists():
            # build the target path inside the output folder
            target_path = target_folder / "images" / task / image_path.name
            shutil.copy(image_path, target_path)


def json_to_yolo(json_path: Path, sorted_keys: list):
    with open(json_path, "r") as f:
        labelme_data = json.load(f)

    width = labelme_data["imageWidth"]
    height = labelme_data["imageHeight"]
    yolo_lines = []

    for shape in labelme_data["shapes"]:
        label = shape["label"]
        points = shape["points"]
        class_idx = sorted_keys.index(label)
        txt_string = f"{class_idx} "
        for x, y in points:
            x /= width
            y /= height
            txt_string += f"{x} {y} "
        yolo_lines.append(txt_string.strip() + "\n")

    return yolo_lines


def create_directory_if_not_exists(directory_path):
    # exist_ok=True saves the explicit existence check
    directory_path.mkdir(parents=True, exist_ok=True)


# create the yaml file used for training
def create_yaml(output_folder: Path, sorted_keys: list):
    train_img_path = Path("images") / "train"
    val_img_path = Path("images") / "val"
    train_label_path = Path("labels") / "train"
    val_label_path = Path("labels") / "val"

    # create the required directories
    for path in [train_img_path, val_img_path, train_label_path, val_label_path]:
        create_directory_if_not_exists(output_folder / path)

    names_dict = {idx: name for idx, name in enumerate(sorted_keys)}
    yaml_dict = {
        "path": output_folder.as_posix(),
        "train": train_img_path.as_posix(),
        "val": val_img_path.as_posix(),
        "names": names_dict,
    }

    yaml_file_path = output_folder / "yolo.yaml"
    with open(yaml_file_path, "w") as yaml_file:
        yaml.dump(yaml_dict, yaml_file, default_flow_style=False, sort_keys=False)

    print(f"yaml created in {yaml_file_path.as_posix()}")


# convert labels to indices
def get_labels_and_json_path(input_folder: Path):
    json_file_paths = list(input_folder.rglob("*.json"))
    label_counts = defaultdict(int)

    for json_file_path in json_file_paths:
        with open(json_file_path, "r") as f:
            labelme_data = json.load(f)
        for shape in labelme_data["shapes"]:
            label = shape["label"]
            label_counts[label] += 1

    # sort labels by how often they occur
    sorted_keys = sorted(label_counts, key=lambda k: label_counts[k], reverse=True)
    return sorted_keys, json_file_paths


def labelme_to_yolo(
    json_file_paths: list, output_folder: Path, sorted_keys: list, split_rate: float
):
    # shuffle the list of json paths
    random.shuffle(json_file_paths)

    # compute the train/val split point
    split_point = int(split_rate * len(json_file_paths))
    train_set = json_file_paths[:split_point]
    val_set = json_file_paths[split_point:]

    for json_file_path in tqdm(train_set):
        txt_name = json_file_path.with_suffix(".txt").name
        yolo_lines = json_to_yolo(json_file_path, sorted_keys)
        output_json_path = Path(output_folder / "labels" / "train" / txt_name)
        with open(output_json_path, "w") as f:
            f.writelines(yolo_lines)
        copy_labled_img(json_file_path, output_folder, task="train")

    for json_file_path in tqdm(val_set):
        txt_name = json_file_path.with_suffix(".txt").name
        yolo_lines = json_to_yolo(json_file_path, sorted_keys)
        output_json_path = Path(output_folder / "labels" / "val" / txt_name)
        with open(output_json_path, "w") as f:
            f.writelines(yolo_lines)
        copy_labled_img(json_file_path, output_folder, task="val")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="labelme2yolo")
    parser.add_argument("input_folder", help="folder with LabelMe-format files")
    parser.add_argument("output_folder", help="folder for the YOLO-format output")
    parser.add_argument("split_rate", help="train/val split ratio")
    args = parser.parse_args()

    input_folder = Path(args.input_folder)
    output_folder = Path(args.output_folder)
    split_rate = float(args.split_rate)

    sorted_keys, json_file_paths = get_labels_and_json_path(input_folder)
    create_yaml(output_folder, sorted_keys)
    labelme_to_yolo(json_file_paths, output_folder, sorted_keys, split_rate)
The result has exactly the same layout as a YOLO detection dataset; the only difference is that each txt line reads
class x0 y0 x1 y1 x2 y2 ...
that is, a class id followed by the normalized polygon vertex coordinates.
And in case the annotations come back from a Taobao annotation service and we have to split the dataset ourselves, here is the script for that too:
# -*- coding: utf-8 -*-
"""
Split a dataset into training, validation, and test sets.
"""
import os
import random
import shutil


# create a folder for the copied files
def makedir(new_dir):
    if not os.path.exists(new_dir):
        os.makedirs(new_dir)


random.seed(1)  # random seed

# 1. path to the original dataset
dataset_dir = "data/"  # original dataset path
# 2. paths where the split dataset is saved
split_dir = "dataset/images"
label_dir = 'dataset/labels'  # output paths after splitting
train_dir = os.path.join(split_dir, "train")
train_label_dir = os.path.join(label_dir, "train")
valid_dir = os.path.join(split_dir, "val")
valid_label_dir = os.path.join(label_dir, "val")
test_dir = os.path.join(split_dir, "test")
test_label_dir = os.path.join(label_dir, "test")
# 3. train/val/test split ratios
train_pct = 0.8
valid_pct = 0.2
test_pct = 0

for root, dirs, files in os.walk(dataset_dir):
    for sub_dir in dirs:  # iterate over the class sub-folders (0, 1, 2, ...)
        labs = os.listdir(os.path.join(root, sub_dir))  # list everything in the folder
        labs = list(filter(lambda x: x.endswith('.txt'), labs))  # keep only the .txt labels
        random.shuffle(labs)  # shuffle
        labs_count = len(labs)  # number of labels
        train_point = int(labs_count * train_pct)
        valid_point = int(labs_count * (train_pct + valid_pct))
        for i in range(labs_count):
            if i < train_point:  # indices 0..train_point go to the training set
                out_dir = train_dir
                out_label_dir = train_label_dir
            elif i < valid_point:  # train_point..valid_point go to the validation set
                out_dir = valid_dir
                out_label_dir = valid_label_dir
            else:  # the rest go to the test set
                out_dir = test_dir
                out_label_dir = test_label_dir
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
            if not os.path.exists(out_label_dir):
                os.makedirs(out_label_dir)
            src_path = os.path.join(dataset_dir, sub_dir, labs[i])  # source label path
            # the matching image; change '.png' if your images use another format
            img_name = os.path.splitext(labs[i])[0] + '.png'
            img_path = os.path.join(dataset_dir, sub_dir, img_name)
            target_img_path = os.path.join(out_dir, img_name)
            shutil.copy(src_path, out_label_dir)
            shutil.copy(img_path, target_img_path)
        print('Class:{}, train:{}, valid:{}, test:{}'.format(sub_dir, train_point,
                                                             valid_point - train_point,
                                                             labs_count - valid_point))
4. Summary
The main goal of this article is to show students who are new to vision and starting on industry projects how to collect data, annotate it, choose a task, and build the dataset that task needs. In grad school this kind of work is called grunt work and usually gets handed off to undergrads, but for someone starting from zero, building datasets is actually a painful process: you run into json, xml, and other data formats you've never seen before and then have to process them. My head hurt plenty at the beginning too. I don't know whether I'll ever escape the sea of vision suffering, but I wish all of you a smooth and happy graduate career!