Diego 1# Machine Vision - Face Detection


For more maker projects, follow my site 园丁鸟, which collects creative and valuable maker projects from around the world.
For ROS robotics content, follow diegorobot.
Also see SPC, an online statistical process control tool built in my spare time, and the SPC knowledge-sharing site qdo.


Machine vision is a complex topic that requires fairly specialized knowledge of computer graphics and image processing. The book ROS By Example Volume 1 provides good introductory examples, so we will begin Diego's machine vision journey with the examples from that book and gradually add more advanced material later.
We will first implement the features covered in the book: face detection, feature extraction, face tracking, and person following. The book's code depends on OpenCV 2.4, but Diego runs ROS Kinetic, which installs OpenCV 3 by default, so I ported the source code to build correctly against OpenCV 3. The ported code is in the diego_vision directory of Diego1 plus on GitHub.
1. About image formats
ROS has its own image format, but it is common practice to process images and video with OpenCV, since it packages a large number of state-of-the-art vision algorithms and is open source. The first step is therefore to convert ROS images to OpenCV's format. ROS already provides the cv_bridge package for exactly this task, and it supports OpenCV 3. If you installed ros-kinetic-desktop-full, the package is already there; otherwise it must be installed separately. In that case I recommend building it from source (git clone) rather than installing via apt-get, because the apt package pulls in OpenCV 2.4 as a dependency and causes version conflicts. For cv_bridge usage, see the official sample code.
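Conceptually, what cv_bridge does for a color image is simply reinterpret the raw byte buffer of the ROS message as a NumPy array that OpenCV can use directly. A hand-rolled sketch of that idea for the 'bgr8' encoding (ignoring the stride and endianness handling the real cv_bridge performs; in practice use CvBridge().imgmsg_to_cv2(msg, 'bgr8')):

```python
import numpy as np

def imgmsg_to_cv2_bgr8(height, width, data):
    """Reinterpret a raw 'bgr8' byte buffer (as found in sensor_msgs/Image.data)
    as an H x W x 3 uint8 array -- the layout OpenCV expects for color images."""
    return np.frombuffer(data, dtype=np.uint8).reshape(height, width, 3)

# Example: a fake 2x2 'bgr8' frame built from 12 sequential byte values
frame = imgmsg_to_cv2_bgr8(2, 2, bytes(range(12)))
```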
2. The ros2opencv3.py common module
The original ros2opencv2.py, written against OpenCV 2.4, has been ported to OpenCV 3 and renamed ros2opencv3.py, with the class renamed to ROS2OpenCV3. The source is on GitHub. The main functions are:

__init__: ROS2OpenCV3's initializer; sets parameters, initializes the ROS node, and subscribes to the RGB and RGBD message topics

on_mouse_click: mouse event handler; lets the user draw an ROI in the video window
image_callback: callback for RGB images
depth_callback: callback for depth images
convert_image: converts RGB images from ROS format to OpenCV format
convert_depth_image: converts depth images from ROS format to OpenCV format
publish_roi: publishes an ROI (Region Of Interest) message describing the image region we care about
process_image: RGB image processing hook; does nothing here and is meant to be overridden by subclasses
process_depth_image: depth image processing hook; does nothing here and is meant to be overridden by subclasses
display_selection: draws a rectangle around the region the user selected, and a small dot where the user clicked
is_rect_nonzero: checks whether a rectangle is valid
cvBox2D_to_cvRect: converts a cvBox2D structure to a cvRect
cvRect_to_cvBox2D: converts a cvRect structure to a cvBox2D
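The two conversion helpers at the end are plain geometry: a cvBox2D is ((center_x, center_y), (width, height), angle), while a cvRect is (x, y, w, h). A standalone sketch of the conversions (illustrative, not the exact code from ros2opencv3.py):

```python
def cvBox2D_to_cvRect(box):
    """((cx, cy), (w, h), angle) -> axis-aligned (x, y, w, h), ignoring the angle."""
    (cx, cy), (w, h), _angle = box
    return (int(cx - w / 2), int(cy - h / 2), int(w), int(h))

def cvRect_to_cvBox2D(rect):
    """(x, y, w, h) -> ((cx, cy), (w, h), angle), with the angle fixed at 0."""
    x, y, w, h = rect
    return ((x + w / 2, y + h / 2), (w, h), 0)
```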

3. Face detection
3.1 How face detection works
The general OpenCV face detection pipeline runs the image through a trained classifier. (A flowchart of the pipeline appeared here in the original post.)
The classifier is a Haar feature cascade: a face cascade is an XML file that describes the Haar feature values of a face, and the OpenCV algorithm matches image data against the cascade to locate faces. We use three cascade files here:

haarcascade_frontalface_alt2.xml
haarcascade_frontalface_alt.xml
haarcascade_profileface.xml

After reading frames from the camera, we first convert the image to grayscale, then extract features, and finally call OpenCV's detectMultiScale to detect faces.
3.2 Code analysis
The code has been ported to OpenCV 3; see GitHub.
3.2.1 The complete face_detector.py

#!/usr/bin/env python

import rospy
import cv2
from ros2opencv3 import ROS2OpenCV3

class FaceDetector(ROS2OpenCV3):
    def __init__(self, node_name):
        super(FaceDetector, self).__init__(node_name)
                  
        # Get the paths to the cascade XML files for the Haar detectors.
        # These are set in the launch file.
        cascade_1 = rospy.get_param("~cascade_1", "")
        cascade_2 = rospy.get_param("~cascade_2", "")
        cascade_3 = rospy.get_param("~cascade_3", "")
        
        # Initialize the Haar detectors using the cascade files
        self.cascade_1 = cv2.CascadeClassifier(cascade_1)
        self.cascade_2 = cv2.CascadeClassifier(cascade_2)
        self.cascade_3 = cv2.CascadeClassifier(cascade_3)
        
        # Set cascade parameters that tend to work well for faces.
        # Can be overridden in launch file
        self.haar_scaleFactor = rospy.get_param("~haar_scaleFactor", 1.3)
        self.haar_minNeighbors = rospy.get_param("~haar_minNeighbors", 3)
        self.haar_minSize = rospy.get_param("~haar_minSize", 30)
        self.haar_maxSize = rospy.get_param("~haar_maxSize", 150)
        
        # Store all parameters together for passing to the detector
        self.haar_params = dict(scaleFactor = self.haar_scaleFactor,
                                minNeighbors = self.haar_minNeighbors,
                                flags = cv2.CASCADE_DO_CANNY_PRUNING,
                                minSize = (self.haar_minSize, self.haar_minSize),
                                maxSize = (self.haar_maxSize, self.haar_maxSize)
                                )
                        
        # Should we show text on the display?
        self.show_text = rospy.get_param("~show_text", True)
        
        # Initialize the detection box
        self.detect_box = None
        
        # Track the number of hits and misses
        self.hits = 0
        self.misses = 0
        self.hit_rate = 0

    def process_image(self, cv_image):
        # Create a greyscale version of the image
        grey = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)
        
        # Equalize the histogram to reduce lighting effects
        grey = cv2.equalizeHist(grey)
            
        # Attempt to detect a face
        self.detect_box = self.detect_face(grey)
        
        # Did we find one?
        if self.detect_box is not None:
            self.hits += 1
        else:
            self.misses += 1
        
        # Keep tabs on the hit rate so far
        self.hit_rate = float(self.hits) / (self.hits + self.misses)
                    
        return cv_image


    def detect_face(self, input_image):
        # Holds detectMultiScale results; stays empty if no cascade is loaded
        faces = []

        # First check one of the frontal templates
        if not self.cascade_1.empty():
            faces = self.cascade_1.detectMultiScale(input_image, **self.haar_params)
                                         
        # If that fails, check the profile template
        if len(faces) == 0 and not self.cascade_3.empty():
            faces = self.cascade_3.detectMultiScale(input_image, **self.haar_params)

        # If that also fails, check the other frontal template
        if len(faces) == 0 and not self.cascade_2.empty():
            faces = self.cascade_2.detectMultiScale(input_image, **self.haar_params)

        # The faces variable holds a list of face boxes.
        # If one or more faces are detected, return the first one.  
        if len(faces) > 0:
            face_box = faces[0]
        else:
            # If no faces were detected, print the "LOST FACE" message on the screen
            if self.show_text:
                font_face = cv2.FONT_HERSHEY_SIMPLEX
                font_scale = 0.5
                cv2.putText(self.marker_image, "LOST FACE!", 
                            (int(self.frame_size[0] * 0.65), int(self.frame_size[1] * 0.9)), 
                            font_face, font_scale, (255, 50, 50))
            face_box = None

        # Display the hit rate so far
        if self.show_text:
            font_face = cv2.FONT_HERSHEY_SIMPLEX
            font_scale = 0.5
            cv2.putText(self.marker_image, "Hit Rate: " + 
                        str(trunc(self.hit_rate, 2)), 
                        (20, int(self.frame_size[1] * 0.9)), 
                        font_face, font_scale, (255, 255, 0))
        
        return face_box
        
def trunc(f, n):
    '''Truncates/pads a float f to n decimal places without rounding'''
    slen = len('%.*f' % (n, f))
    return float(str(f)[:slen])
    
if __name__ == '__main__':
    try:
        node_name = "face_detector"
        FaceDetector(node_name)
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down face detector node.")
        cv2.destroyAllWindows()

3.2.2 Code walkthrough
The FaceDetector class inherits from ROS2OpenCV3 and overrides process_image, which converts each frame to grayscale; it adds a detect_face function that performs the detection. The initializer loads the classifiers, with cascade_1, cascade_2, and cascade_3 holding the paths of the three classifier files:

        # Get the paths to the cascade XML files for the Haar detectors.
        # These are set in the launch file.
        cascade_1 = rospy.get_param("~cascade_1", "")
        cascade_2 = rospy.get_param("~cascade_2", "")
        cascade_3 = rospy.get_param("~cascade_3", "")
        
        # Initialize the Haar detectors using the cascade files
        self.cascade_1 = cv2.CascadeClassifier(cascade_1)
        self.cascade_2 = cv2.CascadeClassifier(cascade_2)
        self.cascade_3 = cv2.CascadeClassifier(cascade_3)

process_image converts the frame to grayscale and equalizes its histogram:

        # Create a greyscale version of the image
        grey = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)
        
        # Equalize the histogram to reduce lighting effects
        grey = cv2.equalizeHist(grey)

detect_face runs the face detection, trying each cascade in turn:

        # Holds detectMultiScale results; stays empty if no cascade is loaded
        faces = []

        # First check one of the frontal templates
        if not self.cascade_1.empty():
            faces = self.cascade_1.detectMultiScale(input_image, **self.haar_params)
                                         
        # If that fails, check the profile template
        if len(faces) == 0 and not self.cascade_3.empty():
            faces = self.cascade_3.detectMultiScale(input_image, **self.haar_params)

        # If that also fails, check the other frontal template
        if len(faces) == 0 and not self.cascade_2.empty():
            faces = self.cascade_2.detectMultiScale(input_image, **self.haar_params)

detect_face also draws status information on the frame:

        # The faces variable holds a list of face boxes.
        # If one or more faces are detected, return the first one.  
        if len(faces) > 0:
            face_box = faces[0]
        else:
            # If no faces were detected, print the "LOST FACE" message on the screen
            if self.show_text:
                font_face = cv2.FONT_HERSHEY_SIMPLEX
                font_scale = 0.5
                cv2.putText(self.marker_image, "LOST FACE!", 
                            (int(self.frame_size[0] * 0.65), int(self.frame_size[1] * 0.9)), 
                            font_face, font_scale, (255, 50, 50))
            face_box = None

        # Display the hit rate so far
        if self.show_text:
            font_face = cv2.FONT_HERSHEY_SIMPLEX
            font_scale = 0.5
            cv2.putText(self.marker_image, "Hit Rate: " + 
                        str(trunc(self.hit_rate, 2)), 
                        (20, int(self.frame_size[1] * 0.9)), 
                        font_face, font_scale, (255, 255, 0))
        
        return face_box

The parent class ROS2OpenCV3's image_callback then draws a rectangle around the detected face in the video window.
4. Launching the face detection node
First make sure the Xtion camera node is running; it can be started with:

roslaunch diego_vision openni_node.launch

The face_detector.launch file is in the diego_vision/launch directory on GitHub:

<launch>
   <node pkg="diego_vision" name="face_detector" type="face_detector.py" output="screen">

   <remap from="input_rgb_image" to="/camera/rgb/image_color" />

   <rosparam>
       haar_scaleFactor: 1.3
       haar_minNeighbors: 3
       haar_minSize: 30
       haar_maxSize: 150
   </rosparam>

   <param name="cascade_1" value="$(find diego_vision)/data/haar_detectors/haarcascade_frontalface_alt2.xml" />
   <param name="cascade_2" value="$(find diego_vision)/data/haar_detectors/haarcascade_frontalface_alt.xml" />
   <param name="cascade_3" value="$(find diego_vision)/data/haar_detectors/haarcascade_profileface.xml" />

</node>
</launch>

Start the face detection node:

roslaunch diego_vision face_detector.launch

Once launched, a video window will appear on screen; faces captured by the camera are detected and outlined with rectangles.
