Object Detection with Deep Learning

In the previous post on monocular distance estimation I tested a pedestrian ranging algorithm. The biggest problem there was the poor detection quality of the traditional HOG+SVM pedestrian detector, specifically:

1. The bounding box around a person is not precise. With the person standing, there is a large margin above the head and below the feet. Monocular pedestrian ranging needs the target's pixel height in the frame, so an inaccurate box directly degrades the distance estimate, and manual annotation is not an option while the program is running. A precise object detector is therefore essential.

2. The accuracy of HOG+SVM pedestrian detection is relatively low. The method works reasonably well on still images but only moderately on video. In my tests it produced false positives on person-like objects in the lab (for example a camera tripod), so its accuracy on video input cannot be guaranteed.

We therefore need a new detection algorithm to find the target person and obtain the person's pixel height in the frame. That is what this post records: object detection with a deep neural network, using the SSD + Caffe approach (a MobileNet-SSD model loaded through OpenCV's dnn module).
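For context, the ranging step later in the program simply divides a fixed constant by the detected pixel height. A minimal sketch of the underlying pinhole-camera relation, with illustrative (not calibrated) focal length and person height, looks like this:

# Pinhole-camera ranging sketch: distance = focal_length_px * real_height / pixel_height
# The focal length and person height below are assumed values for illustration only.
def estimate_distance(pixel_height, focal_length_px=1000.0, real_height_cm=175.0):
	return focal_length_px * real_height_cm / pixel_height

# e.g. a person whose bounding box is 350 px tall -> about 500 cm away
# print(estimate_distance(350))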

Note that my environment is Ubuntu 14.04 + OpenCV 3.4. OpenCV only integrated the neural network (dnn) module from version 3.3 onward, which is why I use OpenCV 3.4 here; the program also runs under Win7 + OpenCV 3.4.
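Since the dnn module only exists from OpenCV 3.3 onward, a quick version check at the top of the script can save some confusion. This is a small assumed helper, not part of the original program:

import cv2

# cv2.dnn ships with OpenCV 3.3+; fail early on older builds
major, minor = (int(x) for x in cv2.__version__.split(".")[:2])
assert (major, minor) >= (3, 3), "OpenCV 3.3+ is required for the dnn module"
assert hasattr(cv2, "dnn"), "this OpenCV build was compiled without dnn support"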

The program is as follows:

# USAGE
# python real_time_object_detection.py --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel

# import the necessary packages
from imutils.video import VideoStream
from imutils.video import FPS
import numpy as np
import argparse
import imutils
import time
import cv2


# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--prototxt",default="C:\\Users\\MY\\Desktop\\personDetect\\MobileNetSSD_deploy_0.prototxt",
	help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model",default="C:\\Users\\MY\\Desktop\\personDetect\\MobileNetSSD_deploy_0.caffemodel",
	help="path to Caffe pre-trained model")
ap.add_argument("-v", "--video", default="E:\pose-estimation\object-detection-master\test",
	help="path to Caffe video file")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
	help="minimum probability to filter weak detections")
args = vars(ap.parse_args())



# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
	"bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
	"dog", "horse", "motorbike", "person", "pottedplant", "sheep",
	"sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

# load our serialized model from disk
print("[INFO] loading model...")
net2 = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
# net2=cv2.dnn.readNetFromCaffe("VGG_SSD_300.prototxt","VGG_SSD_300.caffemodel")
# net2=cv2.dnn.readNetFromTensorflow("face.pb")
# initialize the video stream, allow the camera sensor to warm up,
# and initialize the FPS counter
print("[INFO] starting video stream...")
#vs = VideoStream(src=0).start()
# vs =cv2.VideoCapture('C:\\Users\\voidking\\Desktop\\real-time-object-detection\\test_video.flv')
# vs =cv2.VideoCapture('./test_video.flv')
# vs =cv2.VideoCapture("video1.mp4")
vs =cv2.VideoCapture(0)
time.sleep(2.0)
fps = FPS().start()

# loop over the frames from the video stream
while True:
	# grab the frame from the threaded video stream and resize it
	# to have a maximum width of 400 pixels
	#frame = vs.read()
	#frame = imutils.resize(frame, width=400)

	# grab the frame from the threaded video file stream
	(grabbed,frame) = vs.read()
	# if the frame was not grabbed, then we have reached the end
	# of the stream
	if not grabbed:
		break
	frame = imutils.resize(frame, width=800)
	# grab the frame dimensions and convert it to a blob
	(h, w) = frame.shape[:2]
	blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
		0.007843, (300, 300), 127.5)

	# pass the blob through the network and obtain the detections and
	# predictions
	net2.setInput(blob)
	detections = net2.forward()
	# print(np.max(detections[0]))

	# print(detections)
	# loop over the detections
	for i in np.arange(0, detections.shape[2]):
		# extract the confidence (i.e., probability) associated with
		# the prediction
		confidence = detections[0, 0, i, 2]

		# filter out weak detections by ensuring the `confidence` is
		# greater than the minimum confidence
		idx = int(detections[0, 0, i, 1])
		label = "{}: {:.2f}%".format(CLASSES[idx],
									 confidence * 100)

		if confidence > args["confidence"]:
			if True:
			#if CLASSES[idx]=="person":
				# extract the index of the class label from the
				# `detections`, then compute the (x, y)-coordinates of
				# the bounding box for the object
				# idx = int(detections[0, 0, i, 1])
				box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
				(startX, startY, endX, endY) = box.astype("int")

				# draw the prediction on the frame

				cv2.rectangle(frame, (startX, startY), (endX, endY),
							  COLORS[idx], 2)
				y = startY - 15 if startY - 15 > 15 else startY + 15
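				# monocular ranging step: distance = (focal_length_px * real_height) / pixel_height,
				# so the constant 174724 below presumably plays the role of
				# focal_length_px * real person height from the earlier calibration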
				pix_person_height = endY - startY
				print ('pix_person_height = ', pix_person_height)
				print ('distance = ' , 174724 / pix_person_height)
				cv2.putText(frame, label, (startX, y),
							cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)


	# show the output frame
	cv2.imshow("Frame", frame)
	key = cv2.waitKey(1) & 0xFF

	# if the `q` key was pressed, break from the loop
	if key == ord("q"):
		break

	# update the FPS counter
	fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
vs.release()
cv2.destroyAllWindows()

The detection results are shown below:


Here I mainly use object detection to detect people, so only detected persons should be marked. Changing the `if True:` in the detection loop to `if CLASSES[idx] == "person":` makes the program show only person detections. The result images show that person detection is far better than the traditional HOG+SVM method, and the box around the target person is quite accurate, which is critical for the monocular ranging program. It is hard not to marvel at the power of deep learning here: the detector runs on the CPU, and on my laptop (i5, 8 GB RAM) it reaches about 12 FPS, which already meets my requirement. Adding a tracking algorithm such as KCF later could raise the speed further; a rough sketch of that idea follows.
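The detect-then-track idea could look roughly like the sketch below: run the SSD detector only every few frames and let a KCF tracker follow the person in between. This is only an illustration, not part of the program above; it reuses CLASSES, net2 and vs from the main script, assumes an OpenCV build that includes the tracking module (opencv-contrib), and DETECT_EVERY is an arbitrary assumed value:

import cv2
import imutils
import numpy as np

DETECT_EVERY = 10  # run the SSD detector only every 10th frame (assumed value)

def detect_person(net, frame, conf_thresh=0.2):
	# run MobileNet-SSD on one frame and return the highest-confidence
	# "person" box as (x, y, w, h), or None if nothing passes the threshold
	(h, w) = frame.shape[:2]
	blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
		0.007843, (300, 300), 127.5)
	net.setInput(blob)
	detections = net.forward()
	best = None
	for i in np.arange(0, detections.shape[2]):
		confidence = detections[0, 0, i, 2]
		idx = int(detections[0, 0, i, 1])
		if CLASSES[idx] == "person" and confidence > conf_thresh:
			if best is None or confidence > best[0]:
				box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
				(startX, startY, endX, endY) = box.astype("int")
				best = (confidence, (startX, startY, endX - startX, endY - startY))
	return None if best is None else best[1]

tracker, frame_id = None, 0
while True:
	(grabbed, frame) = vs.read()
	if not grabbed:
		break
	frame = imutils.resize(frame, width=800)
	if tracker is None or frame_id % DETECT_EVERY == 0:
		# (re)detect and re-initialize the tracker on the fresh box
		box = detect_person(net2, frame)
		if box is not None:
			tracker = cv2.TrackerKCF_create()   # requires opencv-contrib
			tracker.init(frame, tuple(int(v) for v in box))
	else:
		# cheap tracker update between detections
		ok, box = tracker.update(frame)
		if ok:
			(x, y, bw, bh) = (int(v) for v in box)
			cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
	frame_id += 1
	cv2.imshow("Frame", frame)
	if cv2.waitKey(1) & 0xFF == ord("q"):
		break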

The object detection program is mainly based on the linked reference. It is not difficult overall, only about 150 lines, and is largely self-explanatory. The main thing to get right is the model loading paths; absolute paths are safest. The program works well overall, and the key step is loading the model; the model files can be downloaded from my download page.
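As a small safeguard against wrong model paths, the files can be resolved to absolute paths and checked before calling readNetFromCaffe. The file names below are just the defaults mentioned above:

import os
import cv2

prototxt = os.path.abspath("MobileNetSSD_deploy.prototxt")
model = os.path.abspath("MobileNetSSD_deploy.caffemodel")
for path in (prototxt, model):
	if not os.path.isfile(path):
		raise IOError("model file not found: " + path)
net = cv2.dnn.readNetFromCaffe(prototxt, model)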
