Fine, I'm writing this up after all. This year's problem wasn't hard; it was the contest format that wrecked me.
To be fair, that's partly on me, since this was my first serious NUEDC run. Of my two teammates, one had exams, and the other is the hardware guy: after building the power supply for me he went off to prepare the report, and he had exams too, so there was no way around it. We simply hadn't planned for moving the whole setup online, which was miserable, so the entry most likely sank. Ugh.
Let me say it again: an online NUEDC is something even a dog wouldn't enter!!!
Problem analysis
Too lazy to write out an analysis; here is my PDF instead:
基于互联网的摄像测量系统(D题)(1).pdf
The pendulum length L can be obtained from the period of a simple pendulum: L = gT²/(4π²).
For the angle, since the two cameras are orthogonal, the pixel offsets can be used directly: take the ratio of the pixel differences from the two views (both images are 720×480).
Principle diagrams to be uploaded tomorrow (the angle measurement has a systematic error).
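As a sanity check, the pendulum relation above is easy to express in code (a minimal sketch; the function name and the value of g are my own choices, not from the contest code):

```python
import math

def pendulum_length(T, g=9.8):
    """Ideal-pendulum length in meters from the period T in seconds:
    L = g * T^2 / (4 * pi^2)."""
    return g * T * T / (4 * math.pi ** 2)

# Inverting T = 2*pi*sqrt(L/g): a 1 m pendulum has a period of about 2.007 s,
# and feeding that period back in recovers the 1 m length.
```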
Solution
Overall approach
The system uses two USB cameras and two Raspberry Pis; each Pi streams its camera's frames and joins the LAN we set up.
Communication diagram:
Power-supply diagram omitted.
On the Raspberry Pis I used the open-source MJPG-streamer solution.
Camera video formats are worth reading up on yourself; omitted here.
Each Pi is given a static IP; mine are 192.168.2.22 and 192.168.2.23.
The video stream is then available at: http://192.168.2.22:8080/?action=stream
The streams are pulled and processed directly on the host machine, so the load on the Pis should be small, though they still need enough horsepower to push the stream.
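OpenCV can open such an MJPEG HTTP stream directly with `cv2.VideoCapture`. A small sketch (the helper function is mine; the URL format is the one used above):

```python
def stream_url(ip, port=8080):
    # mjpg-streamer serves the MJPEG stream at this path
    return f"http://{ip}:{port}/?action=stream"

# Usage (needs opencv-python and a reachable Pi):
#   import cv2
#   cap = cv2.VideoCapture(stream_url("192.168.2.22"))
#   grabbed, frame = cap.read()   # frame is a 480x720 BGR image when grabbed is True
```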
Host-side code
For speed I ran everything on Ubuntu (in my experience much faster than Windows for this), and used Python for convenience (mostly laziness).
Tracking
What happens here shouldn't really be called tracking but detection: with true tracking the background changes, while the problem statement specifies an ordinary static lab background. So the job is to detect the laser pointer, i.e., find what is moving.
Detecting motion reduces to comparing images and finding what changed. Comparing consecutive frames is problematic: even a "static" background fluctuates (every pixel jitters within a small range because of the video stream), so it needs filtering, but the frame-to-frame change from the pointer itself is also small and the filter can easily wipe that out too. Instead I compare against an initial reference image, captured before the laser pointer appears. Be sure to blur both images before differencing, otherwise the same pixel-jitter problem shows up again.
From the difference image (which, while the laser pointer moves, is effectively the pointer's location), dilate the blobs to connect them, then use OpenCV's findContours to extract the contours.
Then fit the minimum-area rectangle to each contour and draw the box. Look up the exact signatures and semantics of these functions online.
grabbed, frame_lwpCV = camera_zero.read()
# Preprocess the frame: convert to grayscale, then Gaussian-blur.
# Blurring smooths the noise every feed has (vibration, lighting changes, the sensor itself) so it is not picked up as motion.
frame_lwpCV = undistort(frame_lwpCV, k_1, d_1)
gray_lwpCV = cv2.cvtColor(frame_lwpCV, cv2.COLOR_BGR2GRAY)
gray_lwpCV = cv2.GaussianBlur(gray_lwpCV, (21, 21), 0)
# Every frame read after the background is diffed against it to get a difference map.
# Threshold it to black/white, then dilate to close holes and imperfections.
diff = cv2.absdiff(background, gray_lwpCV)
diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]  # binary threshold
diff = cv2.dilate(diff, es, iterations=0)  # morphological dilation (note: iterations=0 is a no-op; use >= 1 to actually dilate)
image, contours, hierarchy = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # extract the object contours (three return values in OpenCV 3.x)
for c in contours:
    if cv2.contourArea(c) < 300:  # only keep contours above the area threshold so tiny changes are ignored; with stable lighting and a low-noise camera the threshold can be dropped
        continue
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    box = np.int0(box)
    cv2.drawContours(frame_lwpCV, [box], 0, (0, 0, 255), 2)
cv2.imshow('contours', frame_lwpCV)
Period
There are quite a few ways to measure the period.
The two simplest: track the change in y, or track the change in x.
I dropped the y approach in the end: the variation is too small (in my judgment), so the data is too noisy to filter well. I went with x.
One more thing about measuring x: the pendulum isn't ideal. It loses a little height each swing, so the extreme it reaches is a threshold band rather than a fixed value. The band size has to be accounted for, and at each new peak the recorded previous peak and the threshold must be corrected.
Code
It's all in the listing below; read it yourself.
Angle
The principle: with the two cameras orthogonal, project one camera's view onto the backboard and measure the x range seen by each camera; take the ratio of the two x ranges, apply arctan, and multiply by 180/π to convert to degrees. There are caveats: the two cameras must have identical parameters, otherwise the error grows, and even after undistorting the images some error remains (see the figure above for the cause).
Code
Too lazy to write this one up.
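Since this part was skipped, here is a minimal sketch of the formula described above (function and argument names are mine; I use `atan2` rather than `atan` so a zero denominator is handled):

```python
import math

def swing_plane_angle(dx_zero, dx_ninty):
    """Angle of the swing plane in degrees, from the x pixel ranges seen by
    the 0-degree and 90-degree cameras: atan(dx_ninty / dx_zero) * 180 / pi."""
    return math.degrees(math.atan2(dx_ninty, dx_zero))
```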
Merging and time-aligning multiple camera feeds
With two (or more) independent monocular cameras, as opposed to a stereo pair, there is some time skew between the feeds (with MJPG streaming it is actually quite small); the two images also need to be merged.
Merging
Either OpenCV's built-in functions or NumPy's will do the merge; I used NumPy here. In C++, use OpenCV's.
grabbed, frame_zero = camera_zero.read()
grabbed, frame_ninty = camera_ninty.read()
frame_lwpCV = np.hstack([frame_zero,frame_ninty])
# that's the merge done
Time alignment
Time alignment really matters; details to come when I have time.
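Since the write-up stops here, one common approach (my own sketch, not necessarily what was done) is to timestamp each grabbed frame on the host and pair up frames whose timestamps are closest, discarding pairs with too much skew:

```python
def pair_frames(times_a, times_b, max_skew=0.05):
    """Pair frame indices of two timestamp lists (seconds, ascending) so each
    pair's timestamps differ by at most max_skew; two-pointer sweep."""
    pairs, j = [], 0
    for i, ta in enumerate(times_a):
        # advance j while the next b-timestamp is at least as close to ta
        while j + 1 < len(times_b) and abs(times_b[j + 1] - ta) <= abs(times_b[j] - ta):
            j += 1
        if j < len(times_b) and abs(times_b[j] - ta) <= max_skew:
            pairs.append((i, j))
    return pairs
```

With MJPG streaming the skew is small, so in practice a loose `max_skew` mostly just drops dropped-frame mismatches.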
Code (open-sourced)
My setup: Ubuntu + Python 3.8.10 + opencv-contrib-python 3.4.13.47 (I think that was the version).
The code is honestly a mess and I can't be bothered to restructure it; bear with it.
import math
import time
import numpy as np
import cv2
import multiprocessing as mp
import os
# length from the period (pendulum formula with g = 9.79, converted to cm, minus a 9.5 cm calibration offset)
def num(num):
return (num*num*9.79/4/math.pi/math.pi*100-9.5)
# angle from the pixel differences
def get_angel(x,y):
return math.atan(x/y)*180/math.pi
# basic motion detection and box drawing
def camera_move(camera_zero,camera_ninty,background,angle,lengh):
es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))
background = cv2.cvtColor(background,cv2.COLOR_BGR2GRAY)
background = cv2.GaussianBlur(background, (21, 21), 0)
if angle==-1000 and lengh== 0:
while True:
            # read the video streams
grabbed, frame_zero = camera_zero.read()
grabbed, frame_ninty = camera_ninty.read()
frame_lwpCV = np.hstack([frame_zero,frame_ninty])
            # Preprocess the frame: convert to grayscale, then Gaussian-blur.
            # Blurring smooths the noise every feed has (vibration, lighting changes, the sensor itself) so it is not picked up as motion.
# frame_lwpCV=undistort(frame_lwpCV,k_1,d_1)
gray_lwpCV = cv2.cvtColor(frame_lwpCV, cv2.COLOR_BGR2GRAY)
gray_lwpCV = cv2.GaussianBlur(gray_lwpCV, (21, 21), 0)
            # Every frame read after the background is diffed against it to get a difference map.
            # Threshold it to black/white, then dilate to close holes and imperfections.
diff = cv2.absdiff(background, gray_lwpCV)
            diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]  # binary threshold
            diff = cv2.dilate(diff, es, iterations=0)  # morphological dilation (note: iterations=0 is a no-op; use >= 1 to actually dilate)
            image, contours, hierarchy = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # extract the object contours (three return values in OpenCV 3.x)
for c in contours:
                if cv2.contourArea(c) < 300:  # only keep contours above the area threshold so tiny changes are ignored; with stable lighting and a low-noise camera the threshold can be dropped
continue
rect = cv2.minAreaRect(c)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(frame_lwpCV, [box], 0, (0, 0, 255), 2)
cv2.imshow('contours', frame_lwpCV)
# cv2.imshow('dis', diff)
key = cv2.waitKey(1) & 0xFF
            # press 'q' to exit the loop
if key == ord('q'):
break
elif lengh != 0 and angle!=-1000:
while True:
            # read the video streams
grabbed, frame_zero = camera_zero.read()
grabbed, frame_ninty = camera_ninty.read()
frame_lwpCV = np.hstack([frame_zero,frame_ninty])
            # Preprocess the frame: convert to grayscale, then Gaussian-blur.
            # Blurring smooths the noise every feed has (vibration, lighting changes, the sensor itself) so it is not picked up as motion.
# frame_lwpCV=undistort(frame_lwpCV,k_1,d_1)
gray_lwpCV = cv2.cvtColor(frame_lwpCV, cv2.COLOR_BGR2GRAY)
gray_lwpCV = cv2.GaussianBlur(gray_lwpCV, (21, 21), 0)
            # Every frame read after the background is diffed against it to get a difference map.
            # Threshold it to black/white, then dilate to close holes and imperfections.
diff = cv2.absdiff(background, gray_lwpCV)
            diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]  # binary threshold
            diff = cv2.dilate(diff, es, iterations=0)  # morphological dilation (note: iterations=0 is a no-op; use >= 1 to actually dilate)
            image, contours, hierarchy = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # extract the object contours (three return values in OpenCV 3.x)
for c in contours:
                if cv2.contourArea(c) < 300:  # only keep contours above the area threshold so tiny changes are ignored; with stable lighting and a low-noise camera the threshold can be dropped
continue
rect = cv2.minAreaRect(c)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(frame_lwpCV, [box], 0, (0, 0, 255), 2)
cv2.putText(frame_lwpCV,"Angle:"+str(angle),(200,100),cv2.FONT_HERSHEY_COMPLEX,2.0,(0,200,200),5)
cv2.putText(frame_lwpCV,"Lengh:"+str(lengh),(200,200),cv2.FONT_HERSHEY_COMPLEX,2.0,(0,200,200),5)
cv2.imshow('contours', frame_lwpCV)
# cv2.imshow('dis', diff)
key = cv2.waitKey(1) & 0xFF
            # press 'q' to exit the loop
if key == ord('q'):
exit()
# When everything done, release the capture
# camera_zero.release()
# camera_ninty.release()
cv2.destroyAllWindows()
# initialize the background images
def Back_init(background_zero,background_ninty):
cv2.imwrite("Background_zero.png", background_zero)
cv2.imwrite("Background_ninty.png", background_ninty)
img_all = np.hstack((background_zero, background_ninty))
cv2.imwrite("Background_All.png", img_all)
return img_all,background_zero,background_ninty
# length-measurement routine
def get_length(cmera_ip_zero, cmera_ip_ninty,back_zero,back_ninty):
    # background for the left camera
back_zero = cv2.cvtColor(back_zero, cv2.COLOR_BGR2GRAY)
back_zero = cv2.GaussianBlur(back_zero, (21, 21), 0)
    # background for the right camera
back_ninty = cv2.cvtColor(back_ninty, cv2.COLOR_BGR2GRAY)
back_ninty = cv2.GaussianBlur(back_ninty, (21, 21), 0)
    # initialize all the state we need
num_T_left = 0
num_T_right = 0
es_length = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))
kernel = np.ones((5, 5), np.uint8)
x_left_left = 640
x_left_right = 640
time_start_left = time.time()
time_start_right = time.time()
x_last_left = 0
x_last_right = 0
x_left_arr_left = 0
x_arr_all_left = 0
x_change_left = 0
arr_num_time_left = 0
x_left_arr_right = 0
x_arr_all_right = 0
x_change_right = 0
arr_num_time_right = 0
lengh_right= 0
lengh_left =0
flag_angel = 0
flag_zero = 0
flag_ninty =0
time_change_arr_left = []
time_change_arr_right = []
while True:
        # read the video streams
grabbed, frame_lwpCV_left = cmera_ip_zero.read()
grabbed, frame_lwpCV_right = cmera_ip_ninty.read()
        # Preprocess the frames: convert to grayscale, then Gaussian-blur.
        # Blurring smooths the noise every feed has (vibration, lighting changes, the sensor itself) so it is not picked up as motion.
gray_lwpCV_left = cv2.cvtColor(frame_lwpCV_left, cv2.COLOR_BGR2GRAY)
gray_lwpCV_left = cv2.GaussianBlur(gray_lwpCV_left, (21, 21), 0)
gray_lwpCV_right = cv2.cvtColor(frame_lwpCV_right, cv2.COLOR_BGR2GRAY)
gray_lwpCV_right = cv2.GaussianBlur(gray_lwpCV_right, (21, 21), 0)
        # Every frame read after the background is diffed against it to get a difference map.
        # Threshold it to black/white, then dilate to close holes and imperfections.
diff_left = cv2.absdiff(back_zero, gray_lwpCV_left)
        diff_left = cv2.threshold(diff_left, 25, 255, cv2.THRESH_BINARY)[1]  # binary threshold
        diff_left = cv2.dilate(diff_left, es_length, iterations=0)  # morphological dilation (iterations=0 is a no-op)
diff_right = cv2.absdiff(back_ninty, gray_lwpCV_right)
        diff_right = cv2.threshold(diff_right, 25, 255, cv2.THRESH_BINARY)[1]  # binary threshold
        diff_right = cv2.dilate(diff_right, es_length, iterations=0)  # morphological dilation (iterations=0 is a no-op)
        image_left, contours_left, hierarchy_left = cv2.findContours(diff_left.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # extract the object contours
        image_right, contours_right, hierarchy_right = cv2.findContours(diff_right.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # extract the object contours
for c_left in contours_left:
            if cv2.contourArea(c_left) < 300:  # only keep contours above the area threshold so tiny changes are ignored
continue
rect_left = cv2.minAreaRect(c_left)
box_left = cv2.boxPoints(rect_left)
arr_x_left = rect_left[0][0]
if arr_x_left != x_last_left:
if num_T_left >=1:
x_arr_all_left += arr_x_left
x_change_left += 1
                if x_last_left >= arr_x_left * 0.8 + x_last_left * 0.2:  # moving left
if x_left_left >= arr_x_left:
x_left_left = arr_x_left
else:
time_end_left = time.time()
time_change_left = time_end_left - time_start_left
if 1.35 <= time_change_left <= 2.52:
if num_T_left == 0:
num_T_left += 1
continue
x_left_arr_left += x_left_left
print(time_change_left)
print("left")
time_change_left = round(time_change_left, 2)
time_change_arr_left.append(time_change_left)
arr_num_time_left += time_change_left
num_T_left += 1
time_start_left = time.time()
x_left_left = 640
x_last_left = arr_x_left
box_left = np.int0(box_left)
cv2.drawContours(frame_lwpCV_left, [box_left], 0, (0, 0, 255), 2)
for c_right in contours_right:
            if cv2.contourArea(c_right) < 300:  # only keep contours above the area threshold so tiny changes are ignored
continue
rect_right = cv2.minAreaRect(c_right)
box_right = cv2.boxPoints(rect_right)
arr_x_right = rect_right[0][0]
if arr_x_right != x_last_right:
if num_T_right >= 1:
x_arr_all_right += arr_x_right
x_change_right += 1
                if x_last_right >= arr_x_right * 0.8 + x_last_right * 0.2:  # moving left
if x_left_right >= arr_x_right:
x_left_right = arr_x_right
else:
time_end_right = time.time()
time_change_right = time_end_right - time_start_right
# print(time_change)
if 1.42 <= time_change_right <= 2.46:
if num_T_right == 0:
num_T_right += 1
continue
x_left_arr_right += x_left_right
print(time_change_right)
print("right")
time_change_right = round(time_change_right, 2)
time_change_arr_right.append(time_change_right)
arr_num_time_right += time_change_right
num_T_right += 1
time_start_right = time.time()
x_left_right = 640
x_last_right = arr_x_right
box_right = np.int0(box_right)
cv2.drawContours(frame_lwpCV_right, [box_right], 0, (0, 0, 255), 2)
if num_T_left>4 and num_T_right>4 and flag_angel ==0:
            # compute the pixel statistics
flag_angel =1
x_arr_all_left1 = x_arr_all_left / x_change_left
x_left_arr_left1 = x_left_arr_left / (num_T_left-1)
x_arr_all_right1 = x_arr_all_right / x_change_right
x_left_arr_right1 = x_left_arr_right / (num_T_right-1)
angle_first = get_angel(x_arr_all_right1 - x_left_arr_right1,x_arr_all_left1 - x_left_arr_left1)
if angle_first >=67.5:
flag_ninty = 1
elif angle_first<=23.5:
flag_zero = 1
if (num_T_left>15 and num_T_right<4):
arr_num_time_left = round(arr_num_time_left, 3)
num_T_left -= 1
            print("Total time (left):")
            print(arr_num_time_left)
            print("Mean period (before correction, left):")
            print(arr_num_time_left / num_T_left)
time_change_arr_left.sort()
time_arr_left = (time_change_arr_left[4]+time_change_arr_left[2])/2
arr_num_time_left= 0
num_T_left= 0
for num_time_left in time_change_arr_left:
if num_time_left>time_arr_left*1.05 or time_arr_left*0.95 > num_time_left:
continue
num_T_left = num_T_left + 1
arr_num_time_left= arr_num_time_left+num_time_left
            print("Mean period (after correction, left):")
            print(arr_num_time_left/num_T_left)
            print("Computed length (after correction, left):")
            print(num(arr_num_time_left / num_T_left))
lengh_left = num(arr_num_time_left / num_T_left)
angle_fin = 0.0
flag_ninty = 0
flag_zero = 1
break
elif (num_T_right>15 and num_T_left<4):
arr_num_time_right = round(arr_num_time_right, 3)
            print("Total time (right):")
            print(arr_num_time_right)
            print("Mean period (before correction, right):")
            print(arr_num_time_right / num_T_right)
time_change_arr_right.sort()
time_arr_right = (time_change_arr_right[4]+time_change_arr_right[2])/2
arr_num_time_right= 0
num_T_right= 0
for num_time_right in time_change_arr_right:
if num_time_right>time_arr_right*1.05 or time_arr_right*0.95> num_time_right:
continue
num_T_right = num_T_right + 1
arr_num_time_right= arr_num_time_right+num_time_right
            print("Mean period (after correction, right):")
            print(arr_num_time_right/num_T_right)
            print("Computed length (after correction, right):")
            print(num(arr_num_time_right / num_T_right))
lengh_right = num(arr_num_time_right / num_T_right)
angle_fin = 90.0
flag_zero = 0
flag_ninty =1
break
elif (num_T_left > 10 and num_T_right > 10) or (flag_ninty == 1 and num_T_left>10) or (flag_zero==1 and num_T_right>10):
            # round off the accumulated times
arr_num_time_left = round(arr_num_time_left, 3)
arr_num_time_right = round(arr_num_time_right, 3)
            # we incremented once too many earlier; undo it here
num_T_left -= 1
num_T_right -= 1
            # compute the pixel statistics
x_arr_all_left = x_arr_all_left / x_change_left
x_left_arr_left = x_left_arr_left / num_T_left
x_arr_all_right = x_arr_all_right / x_change_right
x_left_arr_right = x_left_arr_right /num_T_right
            print("Extreme-point pixel X (left):")
            print(x_left_arr_left)
            print("Mean pixel X (left):")
            print(x_arr_all_left)
            print("Pixel difference (left):")
            print(x_arr_all_left - x_left_arr_left)
            print("Total time (left):")
            print(arr_num_time_left)
            print("Mean period (before correction, left):")
            print(arr_num_time_left / num_T_left)
time_change_arr_left.sort()
time_arr_left = (time_change_arr_left[4]+time_change_arr_left[2])/2
arr_num_time_left= 0
num_T_left= 0
for num_time_left in time_change_arr_left:
if num_time_left>time_arr_left*1.05 or time_arr_left*0.95 > num_time_left:
continue
num_T_left = num_T_left + 1
arr_num_time_left= arr_num_time_left+num_time_left
            print("Mean period (after correction, left):")
            print(arr_num_time_left/num_T_left)
            print("Computed length (after correction, left):")
            print(num(arr_num_time_left / num_T_left))
lengh_left = num(arr_num_time_left / num_T_left)
            print("Extreme-point pixel X (right):")
            print(x_left_arr_right)
            print("Mean pixel X (right):")
            print(x_arr_all_right)
            print("Pixel difference (right):")
            print(x_arr_all_right - x_left_arr_right)
            print("Total time (right):")
            print(arr_num_time_right)
            print("Mean period (before correction, right):")
            print(arr_num_time_right / num_T_right)
time_change_arr_right.sort()
time_arr_right = (time_change_arr_right[4]+time_change_arr_right[2])/2
arr_num_time_right= 0
num_T_right= 0
for num_time_right in time_change_arr_right:
if num_time_right>time_arr_right*1.05 or time_arr_right*0.95> num_time_right:
continue
num_T_right = num_T_right + 1
arr_num_time_right= arr_num_time_right+num_time_right
            print("Mean period (after correction, right):")
            print(arr_num_time_right/num_T_right)
            print("Computed length (after correction, right):")
            print(num(arr_num_time_right / num_T_right))
lengh_right = num(arr_num_time_right / num_T_right)
angle_second = get_angel(x_arr_all_right - x_left_arr_right,x_arr_all_left - x_left_arr_left)
angle_fin = angle_first*0.4 + angle_second*0.6
# lengh_fin = lengh_left*0.5+lengh_right*0.5
break
frameall = np.hstack((frame_lwpCV_left, frame_lwpCV_right))
# diff_all = np.hstack((diff_left, diff_right))
cv2.imshow('contours', frameall)
# cv2.imshow('dis', diff_all)
key = cv2.waitKey(1) & 0xFF
        # press 'q' to exit the loop
if key == ord('q'):
break
return angle_fin,lengh_left,lengh_right,flag_zero,flag_ninty
# audio notification
def tishi():
    os.system('spd-say "The length and angle have been measured"')
if __name__ =='__main__':
angle,lengh_fin = -1000,0
camera_zero =cv2.VideoCapture("http://192.168.2.23:8080/?action=stream")
camera_ninty =cv2.VideoCapture("http://192.168.2.22:8080/?action=stream")
if (camera_zero.isOpened() and camera_ninty.isOpened()):
print('Open')
else:
        print('Camera not opened')
es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))
flag = 0
flag_long = 0
while True:
        # read the video streams
grabbed, frame_zero = camera_zero.read()
grabbed, frame_ninty = camera_ninty.read()
frame_lwpCV = np.hstack([frame_zero,frame_ninty])
cv2.imshow('contours', frame_lwpCV)
key = cv2.waitKey(1) & 0xFF
        # press 'q' to exit the loop
if key == ord('q'):
break
elif key == ord('s'):
back,back_zero,back_ninty = Back_init(frame_zero,frame_ninty)
flag = 1
elif key == ord('r'):
flag_long = 1
if flag == 1:
angle, lengh_left,lengh_right,flag_zero,flag_ninty = get_length(camera_zero,camera_ninty,back_zero,back_ninty)
if flag_zero == 1:
lengh_fin = lengh_left
elif flag_ninty ==1:
lengh_fin = lengh_right
else:
lengh_fin = lengh_left*0.5+lengh_right*0.5
else:
back_zero = cv2.imread("Background_zero.png")
back_ninty = cv2.imread("Background_ninty.png")
angle, lengh_left,lengh_right,flag_zero,flag_ninty = get_length(camera_zero,camera_ninty,back_zero,back_ninty)
if flag_zero == 1:
lengh_fin = lengh_left
elif flag_ninty ==1:
lengh_fin = lengh_right
else:
lengh_fin = lengh_left*0.5+lengh_right*0.5
tishi()
if flag_long==1:
if flag == 1:
camera_move(camera_zero,camera_ninty,back,angle,lengh_fin)
else:
back = cv2.imread("Background_All.png")
camera_move(camera_zero,camera_ninty,back,angle,lengh_fin)
# When everything done, release the capture
camera_zero.release()
camera_ninty.release()
cv2.destroyAllWindows()
Code analysis
Results
Length error is basically within 0.8 cm.
Angle error is basically within 2.3°.
(No figures.)