HyperLPR High-Performance Chinese License Plate Recognition System: Analysis (Part 7)

This article digs into the SimpleRecognizePlate() function in pipline.py, covering the image preprocessing, sliding-window evaluation, character segmentation, and character recognition steps of the plate-recognition pipeline. Grayscale conversion, histogram equalization, and model prediction are used to locate the plate and draw its bounding box; finally, recognition results are filtered by confidence so that only trustworthy plates are kept.

2021SC@SDUSC

Overview

This installment continues the analysis of the SimpleRecognizePlate() function in pipline.py, gradually digging down into the lower-level code it calls.

It focuses on the following portion of SimpleRecognizePlate():

        if len(val)==3:
            blocks, res, confidence = val
            if confidence/7>0.7:
                image =  drawRectBox(image,rect,res)
                res_set.append(res)
                for i,block in enumerate(blocks):

                    block_ = cv2.resize(block,(25,25))
                    block_ = cv2.cvtColor(block_,cv2.COLOR_GRAY2BGR)
                    image[j * 25:(j * 25) + 25, i * 25:(i * 25) + 25] = block_
                    if image[j*25:(j*25)+25,i*25:(i*25)+25].shape == block_.shape:
                        pass


            if confidence>0:
                print("车牌:",res,"置信度:",confidence/7)
            else:
                pass

                # print "不确定的车牌:", res, "置信度:", confidence

    print(time.time() - t0,"s")
    return image,res_set

Background

Before the code above runs, val has already been produced by the following call, which yields the plate's segmented character blocks, the recognized string, and its confidence:

val = segmentation.slidingWindowsEval(image_gray)

That is, by the slidingWindowsEval() function in the segmentation module:

def slidingWindowsEval(image):
    windows_size = 16
    stride = 1
    height = image.shape[0]
    t0 = time.time()
    data_sets = []

    # Slide a 16-pixel-wide window across the gray plate image with stride 1;
    # each window is resized to 23x23, histogram-equalized and scaled to [0, 1]
    # before being scored by the segmentation CNN (model2).
    for i in range(0,image.shape[1]-windows_size+1,stride):
        data = image[0:height,i:i+windows_size]
        data = cv2.resize(data,(23,23))
        # cv2.imshow("image",data)
        data = cv2.equalizeHist(data)
        data = data.astype(np.float)/255
        data = np.expand_dims(data,3)
        data_sets.append(data)

    res = model2.predict(np.array(data_sets))
    print("分割",time.time() - t0)

    # Column 1 of the prediction is the probability that a window covers a
    # character, so 1 - p peaks in the gaps between characters. Smooth the
    # curve with a 1-D Gaussian and take its local maxima as candidate gaps.
    pin = res
    p = 1 - (res.T)[1]
    p = f.gaussian_filter1d(np.array(p,dtype=np.float),3)
    lmin = l.argrelmax(np.array(p),order = 3)[0]
    interval = []
    for i in range(len(lmin)-1):
        interval.append(lmin[i+1]-lmin[i])

    # The median spacing between neighbouring peaks estimates the character
    # width; give up (return []) when there are too few peaks to segment.
    if(len(interval)>3):
        mid = get_median(interval)
    else:
        return []
    pin = np.array(pin)
    res = searchOptimalCuttingPoint(image,pin,0,mid,3)

    # Close the last character on the right by appending one more cutting point.
    cutting_pts = res[1]
    last = cutting_pts[-1] + mid
    if last < image.shape[1]:
        cutting_pts.append(last)
    else:
        cutting_pts.append(image.shape[1]-1)
    name = ""
    confidence = 0.00
    seg_block = []
    # Crop one block per character; the first and last blocks get a little
    # extra padding so border characters are not clipped.
    for x in range(1,len(cutting_pts)):
        if x != len(cutting_pts)-1 and x!=1:
            section = image[0:36,cutting_pts[x-1]-2:cutting_pts[x]+2]
        elif  x==1:
            c_head = cutting_pts[x - 1]- 2
            if c_head<0:
                c_head=0
            c_tail = cutting_pts[x] + 2
            section = image[0:36, c_head:c_tail]
        elif x==len(cutting_pts)-1:
            end = cutting_pts[x]
            diff = image.shape[1]-end
            c_head = cutting_pts[x - 1]
            c_tail = cutting_pts[x]
            if diff<7 :
                section = image[0:36, c_head-5:c_tail+5]
            else:
                diff-=1
                section = image[0:36, c_head - diff:c_tail + diff]
        elif  x==2:
            section = image[0:36, cutting_pts[x - 1] - 3:cutting_pts[x-1]+ mid]
        else:
            section = image[0:36,cutting_pts[x-1]:cutting_pts[x]]
        seg_block.append(section)
    refined = refineCrop(seg_block,mid-1)

    # Recognize each refined block with the character model; sum the per-character
    # confidences and concatenate the characters into the plate string.
    t0 = time.time()
    for i,one in enumerate(refined):
        res_pre = cRP.SimplePredict(one, i )
        # cv2.imshow(str(i),one)
        # cv2.waitKey(0)
        confidence += res_pre[0]
        name += res_pre[1]
    print("字符识别",time.time() - t0)

    return refined,name,confidence
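
To make the cut-point logic easier to follow, below is a minimal, self-contained sketch of the same idea (illustrative only: the score curve is fabricated rather than taken from model2, and the plate width and gap positions are assumptions):

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmax

# Fabricated "gap score" curve standing in for 1 - P(character): one bump per
# gap between characters on a plate strip assumed to be 136 px wide.
width = 136
x = np.arange(width)
gap_centers = [20, 40, 60, 80, 100, 120]
score = np.zeros(width)
for c in gap_centers:
    score += np.exp(-0.5 * ((x - c) / 2.0) ** 2)

# Same smoothing and peak picking as slidingWindowsEval(): Gaussian filter,
# then local maxima with order=3 give candidate cutting points.
p = gaussian_filter1d(score, 3)
peaks = argrelmax(p, order=3)[0]
intervals = np.diff(peaks)
print("candidate cutting points:", peaks)
print("estimated character width (median spacing):", np.median(intervals))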

A more detailed walkthrough of this function can be found in the article 基于HyperLPR的车牌识别(八) on sdu_qrt's CSDN blog.

Code Analysis

        if len(val)==3:
            blocks, res, confidence = val

If val has length 3, its elements are unpacked into the three variables blocks, res, and confidence, i.e. the segmented character blocks, the recognized plate string, and the summed per-character confidence returned by slidingWindowsEval().
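
The length check matters because slidingWindowsEval() has two possible return shapes, as seen above: a three-element result on success and an empty list when it cannot find enough peak intervals. A minimal sketch of the calling pattern (fake_eval is a hypothetical stand-in for the real call, used only to illustrate both cases):

def fake_eval(found_enough_peaks):
    # Mirrors the two return shapes of slidingWindowsEval(): [] on failure,
    # (blocks, plate string, summed confidence) on success. Values are made up.
    if not found_enough_peaks:
        return []
    return [["blk"] * 7, "鲁A12345", 6.3]

for flag in (True, False):
    val = fake_eval(flag)
    if len(val) == 3:
        blocks, res, confidence = val
        print("plate:", res, "average confidence:", confidence / 7)
    else:
        print("segmentation failed, candidate skipped")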

        if len(val)==3:
            blocks, res, confidence = val
            if confidence/7>0.7:
                image =  drawRectBox(image,rect,res)
                res_set.append(res)
                for i,block in enumerate(blocks):

                    block_ = cv2.resize(block,(25,25))
                    block_ = cv2.cvtColor(block_,cv2.COLOR_GRAY2BGR)
                    image[j * 25:(j * 25) + 25, i * 25:(i * 25) + 25] = block_
                    if image[j*25:(j*25)+25,i*25:(i*25)+25].shape == block_.shape:
                        pass


            if confidence>0:
                print("车牌:",res,"置信度:",confidence/7)
            else:
                pass

                # print "不确定的车牌:", res, "置信度:", confidence

    print(time.time() - t0,"s")
    return image,res_set

If confidence / 7 is greater than 0.7, drawRectBox() is called to annotate the image as shown, and the recognized string res is appended to res_set.
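
As a quick illustration of the threshold (the values below are made up): confidence is the sum of the seven per-character confidences accumulated in slidingWindowsEval(), so confidence / 7 is the average per-character confidence, and the branch keeps only plates whose average exceeds 0.7.

per_char_conf = [0.95, 0.88, 0.91, 0.76, 0.82, 0.90, 0.93]  # hypothetical per-character scores
confidence = sum(per_char_conf)
if confidence / 7 > 0.7:  # same test as the code above
    print("accept plate, average confidence =", round(confidence / 7, 3))
else:
    print("reject plate")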

The drawRectBox() function

This function is defined in the pipline file:

def drawRectBox(image,rect,addText):
    # Red bounding box around the detected plate; rect is (x, y, w, h).
    cv2.rectangle(image, (int(rect[0]), int(rect[1])), (int(rect[0] + rect[2]), int(rect[1] + rect[3])), (0,0, 255), 2, cv2.LINE_AA)
    # Filled red banner above the box to hold the plate text.
    cv2.rectangle(image, (int(rect[0]-1), int(rect[1])-16), (int(rect[0] + 115), int(rect[1])), (0, 0, 255), -1, cv2.LINE_AA)

    # OpenCV's built-in fonts cannot render Chinese characters,
    # so switch to PIL to draw the label, then convert back to a NumPy array.
    img = Image.fromarray(image)
    draw = ImageDraw.Draw(img)
    #draw.text((int(rect[0]+1), int(rect[1]-16)), addText.decode("utf-8"), (255, 255, 255), font=fontC)
    draw.text((int(rect[0]+1), int(rect[1]-16)), addText, (255, 255, 255), font=fontC)
    imagex = np.array(img)

    return imagex

This function draws the bounding box and its text label onto the image; here addText is the plate string res returned by slidingWindowsEval(). When a candidate's average per-character confidence exceeds 0.7, the recognized plate region is framed in the original input image and the plate number is displayed above the box.
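
A hedged usage sketch is shown below. It assumes it runs in the same module as drawRectBox(), so the module-level font object fontC used inside the function exists; the font path, image, and rectangle are illustrative assumptions rather than values taken from the original project.

import cv2
import numpy as np
from PIL import ImageFont

# A Chinese-capable TTF font; the path here is an assumption for illustration.
fontC = ImageFont.truetype("./Font/platech.ttf", 14, 0)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for the input picture
rect = (100, 200, 136, 36)                       # (x, y, w, h) of a detected plate, made up for the demo
frame = drawRectBox(frame, rect, "鲁A12345")     # red bounding box plus the plate label in white text
cv2.imwrite("annotated.jpg", frame)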
