2021SC@SDUSC
For details on configuring the source code, see the first analysis post.
This post analyzes part of the following code:

```python
val = segmentation.slidingWindowsEval(image_gray)
```

This line, at line 217 of pipline.py, invokes the method below:
```python
def slidingWindowsEval(image):
    windows_size = 16
    stride = 1
    height = image.shape[0]
    t0 = time.time()
    data_sets = []
    for i in range(0, image.shape[1] - windows_size + 1, stride):
        data = image[0:height, i:i + windows_size]
        data = cv2.resize(data, (23, 23))
        # cv2.imshow("image",data)
        data = cv2.equalizeHist(data)
        data = data.astype(np.float) / 255
        data = np.expand_dims(data, 3)
        data_sets.append(data)
    res = model2.predict(np.array(data_sets))
    print("分割", time.time() - t0)

    pin = res
    p = 1 - (res.T)[1]
    p = f.gaussian_filter1d(np.array(p, dtype=np.float), 3)
    lmin = l.argrelmax(np.array(p), order=3)[0]
    interval = []
    for i in range(len(lmin) - 1):
        interval.append(lmin[i + 1] - lmin[i])

    if len(interval) > 3:
        mid = get_median(interval)
    else:
        return []
    pin = np.array(pin)
    res = searchOptimalCuttingPoint(image, pin, 0, mid, 3)

    cutting_pts = res[1]
    last = cutting_pts[-1] + mid
    if last < image.shape[1]:
        cutting_pts.append(last)
    else:
        cutting_pts.append(image.shape[1] - 1)
    name = ""
    confidence = 0.00
    seg_block = []
    for x in range(1, len(cutting_pts)):
        if x != len(cutting_pts) - 1 and x != 1:
            section = image[0:36, cutting_pts[x - 1] - 2:cutting_pts[x] + 2]
        elif x == 1:
            c_head = cutting_pts[x - 1] - 2
            if c_head < 0:
                c_head = 0
            c_tail = cutting_pts[x] + 2
            section = image[0:36, c_head:c_tail]
        elif x == len(cutting_pts) - 1:
            end = cutting_pts[x]
            diff = image.shape[1] - end
            c_head = cutting_pts[x - 1]
            c_tail = cutting_pts[x]
            if diff < 7:
                section = image[0:36, c_head - 5:c_tail + 5]
            else:
                diff -= 1
                section = image[0:36, c_head - diff:c_tail + diff]
        elif x == 2:
            section = image[0:36, cutting_pts[x - 1] - 3:cutting_pts[x - 1] + mid]
        else:
            section = image[0:36, cutting_pts[x - 1]:cutting_pts[x]]
        seg_block.append(section)
    refined = refineCrop(seg_block, mid - 1)

    t0 = time.time()
    for i, one in enumerate(refined):
        res_pre = cRP.SimplePredict(one, i)
        # cv2.imshow(str(i),one)
        # cv2.waitKey(0)
        confidence += res_pre[0]
        name += res_pre[1]
    print("字符识别", time.time() - t0)
    return refined, name, confidence
```
The code above uses the template matching method.
Template matching starts from the standard layout of a license plate: based on the structure, size, and spacing of the characters, it designs a fixed template together with an evaluation function that measures how well the template matches. The template is then slid from left to right across the normalized image, the evaluation value is computed at every offset, and the offsets with the highest match scores are taken as the character segmentation positions. The drawback of template matching is that it is demanding on image quality: when the plate has little tilt, the border has been removed cleanly, the image contains little noise, and the character sizes and spacing are close to the standard, the algorithm is fast and segments well; when the plate image is of only average quality, its segmentation results are often unsatisfactory.
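The slide-and-score idea can be sketched minimally (this is an illustration of the principle, not HyperLPR's actual implementation): slide a fixed-width template over a signal, compute an evaluation value at each offset, and keep the best one. The template, the sum-of-squared-differences score, and the toy signal below are all invented for the example:

```python
import numpy as np

def best_match_position(signal, template):
    """Slide `template` over `signal` and return the offset with the
    lowest sum-of-squared-differences score (i.e. the best match)."""
    w = len(template)
    scores = [float(np.sum((signal[i:i + w] - template) ** 2))
              for i in range(len(signal) - w + 1)]
    return int(np.argmin(scores)), scores

# Toy 1-D column-intensity profile: dark gaps (0) and bright strokes (9)
signal = np.array([0, 0, 9, 9, 0, 0, 9, 9, 0, 0], dtype=float)
template = np.array([0, 0, 9, 9], dtype=float)   # expected character pattern
pos, scores = best_match_position(signal, template)
print(pos)   # 0 — the first offset where the template fits exactly
```

Replacing the SSD score with another evaluation function (correlation, edge overlap, etc.) changes the matching criterion without changing the sliding structure.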
HyperLPR applies template matching roughly as follows:
1. Apply a Gaussian filter to the plate image once (to remove noise)
2. Set the width of the matching template
3. Perform the template matching
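Step 1 and the cut-point search show up in `slidingWindowsEval` as `f.gaussian_filter1d` and `l.argrelmax` (judging from these calls, `f` and `l` are presumably aliases for `scipy.ndimage`'s filters and `scipy.signal`): the per-column "gap" probability `1 - (res.T)[1]` is smoothed with a 1-D Gaussian, and its local maxima become candidate cut points whose median spacing estimates the character pitch. A standalone sketch with a synthetic probability curve (the gap positions are assumed for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmax

# Synthetic "gap probability" per column: one bump per inter-character gap
x = np.arange(100, dtype=float)
p = np.zeros(100)
for center in (10, 25, 40, 55, 70, 85):       # assumed gap positions, 15 px apart
    p += np.exp(-0.5 * ((x - center) / 2.0) ** 2)

p_smooth = gaussian_filter1d(p, 3)            # same sigma as in the source
peaks = argrelmax(p_smooth, order=3)[0]       # local maxima = candidate cut points
intervals = np.diff(peaks)                    # spacing between neighbouring cuts
print(len(peaks))                 # 6 candidate cut points
print(int(np.median(intervals)))  # the typical character pitch, ~15 here
```

The median spacing plays the role of `mid` in the source: it fixes the template width for the subsequent `searchOptimalCuttingPoint` step, and the early `return []` guards against plates with too few detected gaps to estimate it.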
recognizeOne and slidingWindowsEval produce different confidence scores, so when the confidence is low, the two approaches may recognize the same plate as different strings.
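When the two results disagree, the pipeline has to pick one. A hypothetical arbitration helper sketching that choice (the function name, parameter names, and threshold are all invented here, not HyperLPR's actual code):

```python
def pick_result(res_e2e, conf_e2e, res_seg, conf_seg, threshold=0.7):
    """Keep the end-to-end (recognizeOne-style) result when it is confident
    enough, otherwise fall back to the sliding-window result.
    All names and the 0.7 threshold are illustrative, not from HyperLPR."""
    if conf_e2e >= threshold and conf_e2e >= conf_seg:
        return res_e2e, conf_e2e
    return res_seg, conf_seg

print(pick_result("京A12345", 0.95, "京A12845", 0.60))   # ('京A12345', 0.95)
print(pick_result("京A12345", 0.40, "京A12845", 0.60))   # ('京A12845', 0.6)
```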