Yolov5 Object Detection on the INRIA Dataset

1. Preparation

1.1 Yolov5

For an introduction to how the Yolov5 algorithm works, see the CSDN post "yolov5原理详解" (it covers the Yolov5 framework, an analysis of each component, and how feature fusion is implemented in Yolov5).

Yolov5 v6.2 can be used; download address:

https://github.com/ultralytics/yolov5/tree/v6.2 

Download link for yolov5s.pt: https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt
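Assuming a standard setup, the repository and weights can be fetched roughly like this (a sketch based on the links above; wget can be replaced by a browser download):

git clone -b v6.2 https://github.com/ultralytics/yolov5.git
cd yolov5
wget https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt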

1.2 INRIA Dataset

For an introduction to the INRIA dataset, see the CSDN post:

【计算机视觉】INRIA 行人数据集 (INRIA Person Dataset)

The original INRIA dataset website can no longer be reached, so downloading from here is recommended instead:

INRIA Person Dataset (pedestrian detection dataset on HyperAI / 超神经)

The download comes as a .torrent file, so you can use Baidu Netdisk or Thunder (Xunlei) to resolve the torrent and fetch the data.

If you go with Baidu Netdisk, the steps are roughly as follows:

1. Open Baidu Netdisk and go to the torrent page;

2. Upload the downloaded .torrent file to a folder of your choice and select it for offline download;

3. In the new offline BT task, select and check the content to download; once it finishes you will have the dataset.

 

Note that extracting the dataset requires administrator privileges: if you use WinRAR, locate the application, right-click it, and choose "Run as administrator". The extracted dataset looks like this:
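For reference, the extracted INRIAPerson folder roughly contains the following (a sketch listing only the parts used later; the actual archive contains a few extra folders):

INRIAPerson/
    Train/
        pos/              positive images (contain pedestrians)
        neg/              negative images (no pedestrians)
        annotations/      per-image PASCAL-style .txt annotations
        pos.lst, neg.lst, annotations.lst
    Test/
        pos/  neg/  annotations/
        pos.lst, neg.lst, annotations.lst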

2. Model Training

Development is done in PyCharm. Create a conda environment with a recent Python version, e.g. conda create -n yolov5 python=3.9, activate it with conda activate yolov5, and install the dependencies: pip install -r requirements.txt

Next, convert the INRIA dataset into the format YOLO needs. Here the images from the Train and Test folders are used; the first script below handles the images and the second handles the labels (remember to adjust the paths).

import os
import shutil

# Source list files shipped with INRIAPerson
train_annotations_lst = r'../INRIAPerson/Train/annotations.lst'
train_neg_lst = r'../INRIAPerson/Train/neg.lst'
train_pos_lst = r'../INRIAPerson/Train/pos.lst'

test_annotations_lst = r'../INRIAPerson/Test/annotations.lst'
test_neg_lst = r'../INRIAPerson/Test/neg.lst'
test_pos_lst = r'../INRIAPerson/Test/pos.lst'

# Target paths (YOLO layout)
images_train_path = 'data/inria_yolo/images/train/'
labels_train_path = 'data/inria_yolo/labels/train/'
images_val_path = 'data/inria_yolo/images/val/'
labels_val_path = 'data/inria_yolo/labels/val/'

# Make sure the target directories exist before copying
for p in (images_train_path, labels_train_path, images_val_path, labels_val_path):
    os.makedirs(p, exist_ok=True)

# Parse a .lst file and copy the listed images into the target directory
def process_lst_file(lst_file, src_dir, images_path):
    with open(lst_file, 'r') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # entries in the .lst files are paths relative to the INRIAPerson root
            img_src_path = os.path.join(src_dir, line)
            img_filename = os.path.basename(line)
            img_dst_path = os.path.join(images_path, img_filename)
            shutil.copy(img_src_path, img_dst_path)

# Process the train and test list files. Negative images have no annotation
# files, so they get no label files and YOLOv5 treats them as background samples.
process_lst_file(train_neg_lst, r'../INRIAPerson/', images_train_path)
process_lst_file(train_pos_lst, r'../INRIAPerson/', images_train_path)
process_lst_file(test_neg_lst, r'../INRIAPerson/', images_val_path)
process_lst_file(test_pos_lst, r'../INRIAPerson/', images_val_path)

import os
import re

def pascal_to_yolo(pascal_file_path, yolo_file_path, class_name_to_index):
    # INRIA annotation files are not all valid UTF-8; fall back to Latin-1
    try:
        with open(pascal_file_path, 'r', encoding='utf-8') as file:
            content = file.read()
    except UnicodeDecodeError:
        with open(pascal_file_path, 'r', encoding='iso-8859-1') as file:
            content = file.read()

    # Extract the image dimensions
    size_match = re.search(r'Image size \(X x Y x C\) : (\d+) x (\d+) x (\d+)', content)
    img_width = int(size_match.group(1))
    img_height = int(size_match.group(2))

    # Extract each bounding box: class name, (Xmin, Ymin), (Xmax, Ymax)
    object_details = re.findall(
        r'Bounding box for object \d+ "([A-Za-z0-9_]+)" \(Xmin, Ymin\) - \(Xmax, Ymax\) : \((\d+), (\d+)\) - \((\d+), (\d+)\)',
        content)

    yolo_annotations = []
    for obj in object_details:
        class_name = obj[0]
        xmin, ymin, xmax, ymax = map(int, obj[1:])

        # Convert to YOLO format: normalized center x/y, width and height
        x_center = (xmin + xmax) / 2 / img_width
        y_center = (ymin + ymax) / 2 / img_height
        width = (xmax - xmin) / img_width
        height = (ymax - ymin) / img_height

        # Look up the class index; unseen class names are appended automatically
        if class_name not in class_name_to_index:
            class_name_to_index[class_name] = max(class_name_to_index.values()) + 1
        class_index = class_name_to_index[class_name]

        yolo_annotations.append(f"{class_index} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}")

    # Write the YOLO-format label file
    with open(yolo_file_path, 'w') as file:
        for annotation in yolo_annotations:
            file.write(annotation + '\n')

def batch_convert_pascal_to_yolo(pascal_dir, yolo_dir, class_name_to_index):
    if not os.path.exists(yolo_dir):
        os.makedirs(yolo_dir)

    for filename in os.listdir(pascal_dir):
        if filename.endswith('.txt'):
            pascal_file_path = os.path.join(pascal_dir, filename)
            yolo_file_path = os.path.join(yolo_dir, filename)
            pascal_to_yolo(pascal_file_path, yolo_file_path, class_name_to_index)
            print(f"Converted {filename} to YOLO format.")

# Example usage: index 0 is reserved for "bg"; the person class found in the
# annotations is added automatically (at index 1) by pascal_to_yolo
class_name_to_index = {
    "bg": 0
}

pascal_dir = r'INRIAPerson\Test\annotations'
yolo_dir = r'INRIAPerson\yolov5-6.2\inria_yolo\labels\val'
batch_convert_pascal_to_yolo(pascal_dir, yolo_dir, class_name_to_index)
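The example above only converts the Test annotations into validation labels. The Train split can be converted the same way (a sketch; the paths here mirror the ones above and should be adjusted to where your training labels actually live):

# Convert the Train annotations into training labels (paths are illustrative)
batch_convert_pascal_to_yolo(r'INRIAPerson\Train\annotations',
                             r'INRIAPerson\yolov5-6.2\inria_yolo\labels\train',
                             class_name_to_index)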


The final data layout is as follows:
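A sketch of the expected layout, assuming the data/inria_yolo paths used in the copy script:

data/inria_yolo/
    images/
        train/    Train pos + neg images
        val/      Test pos + neg images
    labels/
        train/    converted Train annotations
        val/      converted Test annotations

The training command below also needs the dataset config passed via --data. A minimal inria.yaml sketch consistent with the class mapping above (index 0 reserved for "bg", the person class appended at index 1; adjust nc and names if your mapping differs):

path: data/inria_yolo
train: images/train
val: images/val

nc: 2
names: ['bg', 'person']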

Training command
python train.py --img 640 --batch 16 --epochs 50 --data inria.yaml --weights yolov5s.pt
Testing (inference) command
python detect.py --weights runs/train/exp/weights/best.pt --img 640 --source inria_yolo/images/val
Evaluation command
python val.py --weights runs/train/exp/weights/best.pt --data inria.yaml --img 640
