YOLOv5 Fabric Defect Detection

1. Introduction

1. YOLOv5 source code: https://github.com/ultralytics/yolov5

2. Dataset converter script (a netdisk download link is attached below): https://github.com/datawhalechina/team-learning-cv/tree/master/DefectDetection

The Tianchi official baseline also uses YOLOv5, but the YOLOv5 version bundled in that project is old, so only its convertTrainLabel.py script is reused here; the modifications made to it are described below. The original script is as follows:

import numpy as np # linear algebra
import os
import json
from tqdm.auto import tqdm
import shutil as sh
import cv2

josn_path = "./train_data/guangdong1_round2_train2_20191004_Annotations/Annotations/anno_train.json"
image_path = "./train_data/guangdong1_round2_train2_20191004_images/defect/"

name_list = []
image_h_list = []
image_w_list = []
c_list = []
w_list = []
h_list = []
x_center_list = []
y_center_list = []

with open(josn_path, 'r') as f:
    temps = tqdm(json.loads(f.read()))
    for temp in temps:
        # image_w = temp["image_width"]
        # image_h = temp["image_height"]
        name = temp["name"].split('.')[0]
        path = os.path.join(image_path, name, temp["name"])
        # print('path: ',path)
        im = cv2.imread(path)
        sp = im.shape
        image_h, image_w = sp[0], sp[1]
        # print("image_h, image_w: ", image_h, image_w)
        # print("defect_name: ",temp["defect_name"])
        # bbox: [x_left, y_top, x_right, y_bottom] in pixels
        x_l, y_l, x_r, y_r = temp["bbox"]
        # map the Chinese defect name to a numeric class id
        if temp["defect_name"]=="沾污":
            defect_name = '0'
        elif temp["defect_name"]=="错花":
            defect_name = '1'
        elif temp["defect_name"] == "水印":
            defect_name = '2'
        elif temp["defect_name"] == "花毛":
            defect_name = '3'
        elif temp["defect_name"] == "缝头":
            defect_name = '4'
        elif temp["defect_name"] == "缝头印":
            defect_name = '5'
        elif temp["defect_name"] == "虫粘":
            defect_name = '6'
        elif temp["defect_name"] == "破洞":
            defect_name = '7'
        elif temp["defect_name"] == "褶子":
            defect_name = '8'
        elif temp["defect_name"] == "织疵":
            defect_name = '9'
        elif temp["defect_name"] == "漏印":
            defect_name = '10'
        elif temp["defect_name"] == "蜡斑":
            defect_name = '11'
        elif temp["defect_name"] == "色差":
            defect_name = '12'
        elif temp["defect_name"] == "网折":
            defect_name = '13'
        elif temp["defect_name"] == "其他":
            defect_name = '14'
        else:
            defect_name = '15'
            print("----------------------------------error---------------------------")
            raise ValueError("unknown defect_name: " + temp["defect_name"])
        # convert the bbox to YOLO format: normalized center coordinates and normalized width/height
        x_center = (x_l + x_r)/(2*image_w)
        y_center = (y_l + y_r)/(2*image_h)
        w = (x_r - x_l)/(image_w)
        h = (y_r - y_l)/(image_h)
        # print(x_center, y_center, w, h)
        name_list.append(temp["name"])
        c_list.append(defect_name)
        image_h_list.append(image_h)
        image_w_list.append(image_w)
        x_center_list.append(x_center)
        y_center_list.append(y_center)
        w_list.append(w)
        h_list.append(h)

    index = list(set(name_list))  # unique image names, used to split out a validation fold
    print(len(index))
    for fold in [0]:  # only fold 0 is generated: 1/5 of the image names become the val split
        val_index = index[len(index) * fold // 5:len(index) * (fold + 1) // 5]
        print(len(val_index))
        for num, name in enumerate(name_list):
            print(c_list[num], x_center_list[num], y_center_list[num], w_list[num], h_list[num])
            row = [c_list[num], x_center_list[num], y_center_list[num], w_list[num], h_list[num]]
            if name in val_index:
                path2save = 'val/'
            else:
                path2save = 'train/'
            # print('convertor\\fold{}\\labels\\'.format(fold) + path2save)
            # print('convertor\\fold{}/labels\\'.format(fold) + path2save + name.split('.')[0] + ".txt")
            # print("{}/{}".format(image_path, name))
            # print('convertor\\fold{}\\images\\{}\\{}'.format(fold, path2save, name))
            # write the YOLO-format label file and copy the image into the convertor folder
            if not os.path.exists('convertor/fold{}/labels/'.format(fold) + path2save):
                os.makedirs('convertor/fold{}/labels/'.format(fold) + path2save)
            with open('convertor/fold{}/labels/'.format(fold) + path2save + name.split('.')[0] + ".txt", 'a+') as f:
                for data in row:
                    f.write('{} '.format(data))
                f.write('\n')
                if not os.path.exists('convertor/fold{}/images/{}'.format(fold, path2save)):
                    os.makedirs('convertor/fold{}/images/{}'.format(fold, path2save))
                sh.copy(os.path.join(image_path, name.split('.')[0], name),
                        'convertor/fold{}/images/{}/{}'.format(fold, path2save, name))
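
For reference, each label .txt the script writes contains one line per defect in YOLO format: class x_center y_center w h, all normalized to [0, 1]. A minimal worked example of the normalization used above (the numbers are made up for illustration):

image_w, image_h = 2446, 1000            # image size read with cv2 (made-up values)
x_l, y_l, x_r, y_r = 100, 200, 300, 400  # one bbox from anno_train.json (made-up values)

x_center = (x_l + x_r) / (2 * image_w)   # ≈ 0.0818
y_center = (y_l + y_r) / (2 * image_h)   # 0.3
w = (x_r - x_l) / image_w                # ≈ 0.0818
h = (y_r - y_l) / image_h                # 0.2
# resulting label line: "<class_id> 0.0818 0.3 0.0818 0.2"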


3. Dataset: from Track 1 (赛场一) of the Tianchi 2019 Guangdong Industrial Intelligent Manufacturing Innovation Competition; the dataset page is on Aliyun Tianchi.

4. This writeup also draws on the CSDN article 《从零开始手把手教你利用yolov5训练自己的数据集(含coco128数据集/yolov5权重文件国内下载)》 by orangezs (updated 2020-07-28).

The Tianchi official dataset and code:

Link: https://pan.baidu.com/s/1OgTERkCMCpVCkW5Doux2gg?pwd=onmo
Extraction code: onmo

2. Data Processing

1. Only the train2 dataset provided by Tianchi is used here. Create a train_data folder inside the yolov5 project folder and put the train2 dataset into it.

2. Convert the dataset to YOLO label format (the directory layout follows the coco128 example dataset):

Copy convertTrainLabel.py from the Tianchi official code into the yolov5 folder and make the following changes:

(1) Lines 8-9: change the paths to the following (I renamed the dataset folders here):

josn_path = "./train_data/round2_train2/Annotations/anno_train.json"
image_path = "./train_data/round2_train2/defect_Images/"

(2) Line 26 (in this dataset the images sit directly under defect_Images/ rather than in per-image subfolders, so the extra path component is dropped):

        path = os.path.join(image_path, temp["name"])

(3) Lines 36-69: replace the whole defect-name mapping (including the else branch) with:

        if temp["defect_name"]=="无疵点":
            defect_name = '0'
        elif temp["defect_name"]=="破洞":
            defect_name = '1'
        elif temp["defect_name"] == "水渍":
            defect_name = '2'
        elif temp["defect_name"] == "油渍":
            defect_name = '2'
        elif temp["defect_name"] == "污渍":
            defect_name = '2'
        elif temp["defect_name"] == "三丝":
            defect_name = '3'
        elif temp["defect_name"] == "结头":
            defect_name = '4'
        elif temp["defect_name"] == "花板跳":
            defect_name = '5'
        elif temp["defect_name"] == "百脚":
            defect_name = '6'
        elif temp["defect_name"] == "毛粒":
            defect_name = '7'
        elif temp["defect_name"] == "粗经":
            defect_name = '8'
        elif temp["defect_name"] == "松经":
            defect_name = '9'
        elif temp["defect_name"] == "断经":
            defect_name = '10'
        elif temp["defect_name"] == "吊经":
            defect_name = '11'
        elif temp["defect_name"] == "粗维":
            defect_name = '12'
        elif temp["defect_name"] == "纬缩":
            defect_name = '13'
        elif temp["defect_name"] == "浆斑":
            defect_name = '14'
        elif temp["defect_name"] == "整经结":
            defect_name = '15'
        elif temp["defect_name"] == "星跳":
            defect_name = '16'
        elif temp["defect_name"] == "跳花":
            defect_name = '16'
        elif temp["defect_name"] == "断氨纶":
            defect_name = '17'
        elif temp["defect_name"] == "稀密档":
            defect_name = '18'
        elif temp["defect_name"] == "浪纹档":
            defect_name = '18'
        elif temp["defect_name"] == "色差档":
            defect_name = '18'
        elif temp["defect_name"] == "磨痕":
            defect_name = '19'
        elif temp["defect_name"] == "轧痕":
            defect_name = '19'
        elif temp["defect_name"] == "修痕":
            defect_name = '19'
        elif temp["defect_name"] == "烧毛痕":
            defect_name = '19'
        elif temp["defect_name"] == "死皱":
            defect_name = '20'
        elif temp["defect_name"] == "云织":
            defect_name = '20'
        elif temp["defect_name"] == "双维":
            defect_name = '20'
        elif temp["defect_name"] == "双经":
            defect_name = '20'
        elif temp["defect_name"] == "跳纱":
            defect_name = '20'
        elif temp["defect_name"] == "筘路":
            defect_name = '20'
        elif temp["defect_name"] == "纬纱不良":
            defect_name = '20'

(4) Line 147:

                sh.copy(os.path.join(image_path, name),
                        'convertor/fold{}/images/{}/{}'.format(fold, path2save, name))
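
As an aside (not part of the official script), the long if/elif chain above can be collapsed into a dictionary lookup; a minimal sketch using the same class grouping as the modified chain, with a hypothetical helper map_defect_name:

# Hypothetical alternative to the if/elif chain: a name -> class-id lookup table.
DEFECT_CLASS = {}
for names, cls in [
        (['无疵点'], 0), (['破洞'], 1), (['水渍', '油渍', '污渍'], 2), (['三丝'], 3),
        (['结头'], 4), (['花板跳'], 5), (['百脚'], 6), (['毛粒'], 7), (['粗经'], 8),
        (['松经'], 9), (['断经'], 10), (['吊经'], 11), (['粗维'], 12), (['纬缩'], 13),
        (['浆斑'], 14), (['整经结'], 15), (['星跳', '跳花'], 16), (['断氨纶'], 17),
        (['稀密档', '浪纹档', '色差档'], 18), (['磨痕', '轧痕', '修痕', '烧毛痕'], 19),
        (['死皱', '云织', '双维', '双经', '跳纱', '筘路', '纬纱不良'], 20)]:
    for n in names:
        DEFECT_CLASS[n] = cls

def map_defect_name(defect_name):
    # fail loudly on unknown names instead of silently keeping the previous value
    if defect_name not in DEFECT_CLASS:
        raise ValueError('unknown defect_name: {}'.format(defect_name))
    return str(DEFECT_CLASS[defect_name])

# usage inside the loop: defect_name = map_defect_name(temp["defect_name"])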

3. Run in the terminal: python convertTrainLabel.py. The converted data is written to convertor/fold0/images/{train,val} and convertor/fold0/labels/{train,val}.

4. Move the generated convertor/fold0 folder to the dataset root that coco128.yaml points to; here it is renamed to coco128 and placed one level above the yolov5 folder (matching path: ../coco128 below), so it contains images/train, images/val, labels/train and labels/val.

5. Modify coco128.yaml; the following fields need updating:

1) The path to the training image directory (or to a *.txt file listing the training images)

2) The same, for our validation images

3) The number of classes

4) The list of class names: here the 21 classes are just named with the numbers 0-20 (matching the defect mapping above); proper names can be filled in later.

path: ../coco128  # dataset root dir  /home/csim/HL/yolov5/coco
train: images/train  # train images (relative to 'path')
val: images/train  # val images (relative to 'path')
test:  # test images (optional)

# Classes
nc: 21  # number of classes
names: ['0','1', '2', '3', '4', '5', '6', '7', '8', '9', '10',
        '11', '12', '13', '14', '15', '16', '17', '18', '19', '20']  # class names
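
A quick sanity check (a small helper of my own, not part of YOLOv5) to confirm the converted data sits where the yaml expects it; it assumes the ../coco128 root used above and .jpg images:

import glob
import os

root = '../coco128'  # assumed dataset root, matching the yaml above
for split in ('train', 'val'):
    imgs = glob.glob(os.path.join(root, 'images', split, '*.jpg'))
    lbls = glob.glob(os.path.join(root, 'labels', split, '*.txt'))
    print('{}: {} images, {} labels'.format(split, len(imgs), len(lbls)))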

3. Requirements

Install the dependencies: pip install -U -r requirements.txt

4. Model

Pick a model config from the ./models folder. YOLOv5 provides five model sizes; here we use yolov5s.yaml as the example. Open it and update the parameters to match the classes defined in section 2: normally you only need to change nc to match your own dataset, and the remaining parameters can be left untouched unless you want to tune the architecture.

# parameters
nc: 21  # number of classes -- update to match your own dataset
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

5. Training

You can train from scratch, or start from a pretrained checkpoint by passing a matching weights file, e.g. --cfg ./models/yolov5s.yaml --weights yolov5s.pt; passing --weights '' (as below) trains from randomly initialized weights.

$ python train.py --img 640 --batch 16 --epochs 5 --data ./data/coco128.yaml --cfg ./models/yolov5s.yaml --weights ''
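
For this fabric dataset, a run starting from the pretrained yolov5s.pt checkpoint might look like this (batch size, epoch count and device id are only examples; adjust them to your hardware):

$ python train.py --img 640 --batch 8 --epochs 100 --data ./data/coco128.yaml --cfg ./models/yolov5s.yaml --weights yolov5s.pt --device 0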

The command-line arguments of train.py are defined in parse_opt():

def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--noval', action='store_true', help='only validate final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor')
    parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
    parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
    parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2')
    parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')

    # Weights & Biases arguments
    parser.add_argument('--entity', default=None, help='W&B: Entity')
    parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, "val" option')
    parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
    parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')

    opt = parser.parse_known_args()[0] if known else parser.parse_args()
    return opt

epochs: number of passes over the entire training set; reduce it if your GPU is limited.
batch-size: number of images processed before each weight update (the mini-batch for gradient descent); reduce it if your GPU is limited.
cfg: path to the model structure config file
data: path to the dataset config file describing the training/validation data
imgsz (img-size): input image size in pixels; reduce it if your GPU is limited.
rect: rectangular training
resume: resume training from the most recently saved checkpoint
nosave: only save the final checkpoint
noval: only run validation on the final epoch
evolve: evolve hyperparameters
bucket: gsutil bucket
cache: cache images (in RAM or on disk) to speed up training
weights: path to the initial weights
name: experiment name; results are saved under project/name
device: cuda device, i.e. 0 or 0,1,2,3 or cpu
optimizer: SGD, Adam or AdamW
multi-scale: multi-scale training, img-size +/- 50%
single-cls: train multi-class data as a single class
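
For example, an interrupted run can be picked up again with the --resume flag shown above, which reloads the most recent checkpoint under runs/train/:

$ python train.py --resume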
 

6. Visualization

To be updated.
