OpenCV Image Processing: Keypoint Detection Based on OpenPose

This article shows how to use OpenCV with OpenPose for hand keypoint detection and human pose estimation. Hand keypoint detection uses CMU's hand keypoint model, which is trained on multi-view images to improve accuracy and outputs 22 keypoints. OpenPose body pose estimation works much the same way; code examples for both show how to output heatmaps and keypoints.


OpenCV Hand Keypoint Detection Based on OpenPose

Overview

✔️ Hand keypoint detection aims to locate the finger joints and fingertip points in a given image. Its main application scenarios include:

  • Gesture recognition
  • Sign language recognition and understanding
  • Hand action recognition

✔️ OpenCV's DNN-based hand detection relies on OpenPose, the hand keypoint detection model open-sourced by the CMU Perceptual Computing Lab.

The hand keypoint detector implementation is based on the paper: Hand Keypoint Detection in Single Images using Multiview Bootstrapping (CVPR 2017).


✏️ In the paper, a small set of images with annotated hand keypoints is first used to train a network similar to the CPM (Convolutional Pose Machines) used for body pose keypoints, yielding a rough hand keypoint estimator. The hand is filmed by 31 HD cameras from different viewpoints, and the captured images are fed into this detector to produce many rough keypoint detections. Given keypoints of the same hand from different views, keypoint triangulation recovers the 3D position of each keypoint. The 3D positions are then reprojected onto each 2D view, and these 2D images with their projected keypoints are used to further train the network so that it predicts hand keypoints robustly; this matters most for views where the keypoints are hard to predict. In this way, only a few iterations are needed to obtain a fairly accurate hand keypoint detector.
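The triangulation and reprojection steps can be sketched with OpenCV's cv2.triangulatePoints. The snippet below is illustrative only: the projection matrices and pixel coordinates are made-up placeholders, not values from the paper.

import cv2
import numpy as np

# Two hypothetical calibrated cameras (3x4 projection matrices)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# The same keypoint detected in both views (2x1 pixel coordinates)
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[310.0], [240.0]])

# Triangulate to homogeneous 3D coordinates (4x1), then dehomogenize
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()

# Reproject into view 1; in the paper these reprojections become new
# 2D training labels for views where detection failed
x_h = P1 @ np.append(X, 1.0)
print("3D point:", X, "reprojection:", x_h[:2] / x_h[2])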

⛳ In short, the keypoint detector and the multi-view images together yield a fairly accurate hand keypoint detection model. The detection network is similar in structure to the one used for body keypoints; the main driver of the accuracy gain is the multi-view annotated image dataset.

✔️ The hand keypoint detection model outputs 22 keypoints in total: 21 points on the hand, with the 22nd channel representing the background. As shown below:

[Figure: hand keypoints]
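For reference, the 21 hand points follow the usual OpenPose layout: index 0 is the wrist, and each finger takes four consecutive indices from base to tip. The joint names below are descriptive labels added here, not from the original figure; the chains are consistent with the point_pairs list in the code further down.

# Hand keypoint index layout (channel 21 is the background heatmap)
HAND_KEYPOINT_NAMES = {
    0: "wrist",
    1: "thumb_1",   2: "thumb_2",   3: "thumb_3",   4: "thumb_tip",
    5: "index_1",   6: "index_2",   7: "index_3",   8: "index_tip",
    9: "middle_1", 10: "middle_2", 11: "middle_3", 12: "middle_tip",
   13: "ring_1",   14: "ring_2",   15: "ring_3",   16: "ring_tip",
   17: "pinky_1",  18: "pinky_2",  19: "pinky_3",  20: "pinky_tip",
}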

Code Example

☑️ Model files needed:

[1] - hand/pose_deploy.prototxt

[2] - hand/pose_iter_102000.caffemodel
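A minimal download sketch for the caffemodel. The URL below is an assumption: it follows the same posefs1.perception.cs.cmu.edu pattern as the body-model links later in this post, so verify it before relying on it. The prototxt can be taken from the OpenPose GitHub repository.

import os
import urllib.request

# Assumed URL pattern (matches the body-model links further down)
HAND_MODEL_URL = ("http://posefs1.perception.cs.cmu.edu/OpenPose/models/"
                  "hand/pose_iter_102000.caffemodel")

os.makedirs("hand", exist_ok=True)
dst = os.path.join("hand", "pose_iter_102000.caffemodel")
if not os.path.exists(dst):
    urllib.request.urlretrieve(HAND_MODEL_URL, dst)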

☑️ Python code:

import os
import cv2
import time
import numpy as np
import matplotlib.pyplot as plt


class general_pose_model(object):
    def __init__(self, modelpath):
        self.num_points = 22
        self.point_pairs = [[0,1],[1,2],[2,3],[3,4],
                            [0,5],[5,6],[6,7],[7,8],
                            [0,9],[9,10],[10,11],[11,12],
                            [0,13],[13,14],[14,15],[15,16],
                            [0,17],[17,18],[18,19],[19,20]]
        self.inHeight = 368
        self.threshold = 0.1
        self.hand_net = self.get_hand_model(modelpath)

    # Load the Caffe model files from modelpath
    def get_hand_model(self, modelpath):
        prototxt   = os.path.join(modelpath, "pose_deploy.prototxt")
        caffemodel = os.path.join(modelpath, "pose_iter_102000.caffemodel")
        hand_model = cv2.dnn.readNetFromCaffe(prototxt, caffemodel)

        return hand_model

    # Predict keypoints for a single image file
    def predict(self, imgfile):
        img_cv2 = cv2.imread(imgfile)
        img_height, img_width, _ = img_cv2.shape
        aspect_ratio = img_width / img_height

        # Keep the aspect ratio; round the width down to a multiple of 8
        inWidth = int((aspect_ratio * self.inHeight) // 8 * 8)
        inpBlob = cv2.dnn.blobFromImage(img_cv2, 1.0 / 255, (inWidth, self.inHeight), (0, 0, 0), swapRB=False, crop=False)

        self.hand_net.setInput(inpBlob)

        output = self.hand_net.forward()

        # vis heatmaps
        self.vis_heatmaps(imgfile, output)

        points = []
        for idx in range(self.num_points):
            probMap = output[0, idx, :, :] # confidence map.
            probMap = cv2.resize(probMap, (img_width, img_height))

            # Find global maxima of the probMap.
            minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)

            if prob > self.threshold:
                points.append((int(point[0]), int(point[1])))
            else:
                points.append(None)

        return points

    # Visualize the per-keypoint heatmaps
    def vis_heatmaps(self, imgfile, net_outputs):
        img_cv2 = cv2.imread(imgfile)
        plt.figure(figsize=[10, 10])

        for pdx in range(self.num_points):
            probMap = net_outputs[0, pdx, :, :]
            probMap = cv2.resize(probMap, (img_cv2.shape[1], img_cv2.shape[0]))
            plt.subplot(5, 5, pdx+1)
            plt.imshow(cv2.cvtColor(img_cv2, cv2.COLOR_BGR2RGB))
            plt.imshow(probMap, alpha=0.6)

            plt.colorbar()
            plt.axis("off")
        plt.show()

    # Visualize the detected hand keypoints
    def vis_pose(self, imgfile, points):
        img_cv2 = cv2.imread(imgfile)
        img_cv2_copy = np.copy(img_cv2)
        for idx in range(len(points)):
            if points[idx]:
                cv2.circle(img_cv2_copy, points[idx], 8, (0, 255, 255), thickness=-1,
                           lineType=cv2.FILLED)
                cv2.putText(img_cv2_copy, "{}".format(idx), points[idx], cv2.FONT_HERSHEY_SIMPLEX,
                            1, (0, 0, 0), 2, lineType=cv2.LINE_AA)

        # Draw the skeleton connections
        for pair in self.point_pairs:
            partA = pair[0]
            partB = pair[1]

            if points[partA] and points[partB]:
                cv2.line(img_cv2, points[partA], points[partB], (0, 255, 255), 3)
                cv2.circle(img_cv2, points[partA], 8, (0, 0, 255), thickness=-1, lineType=cv2.FILLED)

        plt.figure(figsize=[10, 10])

        plt.subplot(1, 2, 1)
        plt.imshow(cv2.cvtColor(img_cv2, cv2.COLOR_BGR2RGB))
        plt.axis("off")
        plt.subplot(1, 2, 2)
        plt.imshow(cv2.cvtColor(img_cv2_copy, cv2.COLOR_BGR2RGB))
        plt.axis("off")
        plt.show()


if __name__ == '__main__':
    print("[INFO]Pose estimation.")
    img_files = ['hand.jpg']
    start = time.time()
    modelpath = "hand"  # directory containing the prototxt and caffemodel
    pose_model = general_pose_model(modelpath)
    print("[INFO]Model loads time: ", time.time() - start)

    for img_file in img_files:
        start = time.time()
        res_points = pose_model.predict(img_file)
        print("[INFO]Model predicts time: ", time.time() - start)
        pose_model.vis_pose(img_file, res_points)

    print("[INFO]Done.")

[1] - Heatmap output:


[2] - Keypoint output:


OpenCV Human Pose Estimation Based on OpenPose

✔️ Likewise, OpenCV can perform human pose estimation with OpenPose. The implementation closely follows the hand keypoint detection above; only the model files and loader functions differ. See the code below.

☑️ Model file downloads:

OpenPose body pose model download links:

BODY25: http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/body_25/pose_iter_584000.caffemodel
COCO: http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/coco/pose_iter_440000.caffemodel
MPI: http://posefs1.perception.cs.cmu.edu/OpenPose/models/pose/mpi/pose_iter_160000.caffemodel

COCO prototxt:https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/models/pose/coco/pose_deploy_linevec.prototxt
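A small download sketch using the links above; the target paths match the os.path.join calls in the code below.

import os
import urllib.request

BASE = "http://posefs1.perception.cs.cmu.edu/OpenPose/models"
MODELS = {
    "pose/body_25/pose_iter_584000.caffemodel": BASE + "/pose/body_25/pose_iter_584000.caffemodel",
    "pose/coco/pose_iter_440000.caffemodel":    BASE + "/pose/coco/pose_iter_440000.caffemodel",
    "pose/mpi/pose_iter_160000.caffemodel":     BASE + "/pose/mpi/pose_iter_160000.caffemodel",
}

for rel_path, url in MODELS.items():
    os.makedirs(os.path.dirname(rel_path), exist_ok=True)
    if not os.path.exists(rel_path):
        print("Downloading", url)
        urllib.request.urlretrieve(url, rel_path)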

☑️ Code implementation:

import cv2
import time
import numpy as np
import matplotlib.pyplot as plt
import os


class general_pose_model(object):
    def __init__(self, modelpath, mode="BODY25"):
        # Select which model to load:
        #   BODY25: 25 points
        #   COCO:   18 points
        #   MPI:    15 points
        self.inWidth = 368
        self.inHeight = 368
        self.threshold = 0.1
        if mode == "BODY25":
            self.pose_net = self.general_body25_model(modelpath)
        elif mode == "COCO":
            self.pose_net = self.general_coco_model(modelpath)
        elif mode == "MPI":
            self.pose_net = self.get_mpi_model(modelpath)


    def get_mpi_model(self, modelpath):
        self.points_name = {
            "Head": 0, "Neck": 1,
            "RShoulder": 2, "RElbow": 3, "RWrist": 4,
            "LShoulder": 5, "LElbow": 6, "LWrist": 7,
            "RHip": 8, "RKnee": 9, "RAnkle": 10,
            "LHip": 11, "LKnee": 12, "LAnkle": 13,
            "Chest": 14, "Background": 15 }
        self.num_points = 15
        self.point_pairs = [[0, 1], [1, 2], [2, 3], 
                            [3, 4], [1, 5], [5, 6], 
                            [6, 7], [1, 14],[14, 8], 
                            [8, 9], [9, 10], [14, 11], 
                            [11, 12], [12, 13]
                            ]
        prototxt = os.path.join(
            modelpath,
            "pose/mpi/pose_deploy_linevec_faster_4_stages.prototxt")
        caffemodel = os.path.join(
            modelpath, 
            "pose/mpi/pose_iter_160000.caffemodel")
        mpi_model = cv2.dnn.readNetFromCaffe(prototxt, caffemodel)

        return mpi_model


    def general_coco_model(self, modelpath):
        self.points_name = {
            "Nose": 0, "Neck": 1, 
            "RShoulder": 2, "RElbow": 3, "RWrist": 4,
            "LShoulder": 5, "LElbow": 6, "LWrist": 7, 
            "RHip": 8, "RKnee": 9, "RAnkle": 10, 
            "LHip": 11, "LKnee": 12, "LAnkle": 13, 
            "REye": 14, "LEye": 15, 
            "REar": 16, "LEar": 17, 
            "Background": 18}
        self.num_points = 18
        self.point_pairs = [[1, 0], [1, 2], [1, 5], 
                            [2, 3], [3, 4], [5, 6], 
                            [6, 7], [1, 8], [8, 9],
                            [9, 10], [1, 11], [11, 12], 
                            [12, 13], [0, 14], [0, 15], 
                            [14, 16], [15, 17]]
        prototxt   = os.path.join(
            modelpath,
            "pose/coco/pose_deploy_linevec.prototxt")
        caffemodel = os.path.join(
            modelpath,
            "pose/coco/pose_iter_440000.caffemodel")
        coco_model = cv2.dnn.readNetFromCaffe(prototxt, caffemodel)

        return coco_model


    def general_body25_model(self, modelpath):
        self.num_points = 25
        self.point_pairs = [[1, 0], [1, 2], [1, 5], 
                            [2, 3], [3, 4], [5, 6], 
                            [6, 7], [0, 15], [15, 17], 
                            [0, 16], [16, 18], [1, 8],
                            [8, 9], [9, 10], [10, 11], 
                            [11, 22], [22, 23], [11, 24],
                            [8, 12], [12, 13], [13, 14], 
                            [14, 19], [19, 20], [14, 21]]
        prototxt   = os.path.join(
            modelpath, 
            "pose/body_25/pose_deploy.prototxt")
        caffemodel = os.path.join(
            modelpath, 
            "pose/body_25/pose_iter_584000.caffemodel")
        body25_model = cv2.dnn.readNetFromCaffe(prototxt, caffemodel)

        return body25_model


    def predict(self, imgfile):
        img_cv2 = cv2.imread(imgfile)
        img_height, img_width, _ = img_cv2.shape
        inpBlob = cv2.dnn.blobFromImage(img_cv2, 
                                        1.0 / 255, 
                                        (self.inWidth, self.inHeight),
                                        (0, 0, 0), 
                                        swapRB=False, 
                                        crop=False)
        self.pose_net.setInput(inpBlob)
        # Prefer OpenCL; OpenCV falls back to CPU if it is unavailable
        self.pose_net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
        self.pose_net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)

        output = self.pose_net.forward()

        H = output.shape[2]
        W = output.shape[3]

        # vis heatmaps
        self.vis_heatmaps(imgfile, output)

        points = []
        for idx in range(self.num_points):
            probMap = output[0, idx, :, :] # confidence map.

            # Find global maxima of the probMap.
            minVal, prob, minLoc, point = cv2.minMaxLoc(probMap)

            # Scale the point to fit on the original image
            x = (img_width * point[0]) / W
            y = (img_height * point[1]) / H

            if prob > self.threshold:
                points.append((int(x), int(y)))
            else:
                points.append(None)

        return points


    def vis_heatmaps(self, imgfile, net_outputs):
        img_cv2 = cv2.imread(imgfile)
        plt.figure(figsize=[10, 10])

        for pdx in range(self.num_points):
            probMap = net_outputs[0, pdx, :, :]
            probMap = cv2.resize(
                probMap, 
                (img_cv2.shape[1], img_cv2.shape[0])
            )
            plt.subplot(5, 5, pdx+1)
            plt.imshow(cv2.cvtColor(img_cv2, cv2.COLOR_BGR2RGB))
            plt.imshow(probMap, alpha=0.6)
            plt.colorbar()
            plt.axis("off")
        plt.show()


    def vis_pose(self, imgfile, points):
        img_cv2 = cv2.imread(imgfile)
        img_cv2_copy = np.copy(img_cv2)
        for idx in range(len(points)):
            if points[idx]:
                cv2.circle(img_cv2_copy, 
                           points[idx], 
                           3, 
                           (0, 0, 255), 
                           thickness=-1,
                           lineType=cv2.FILLED)
                cv2.putText(img_cv2_copy, 
                            "{}".format(idx), 
                            points[idx], 
                            cv2.FONT_HERSHEY_SIMPLEX,
                            .6, 
                            (0, 255, 255), 
                            1, 
                            lineType=cv2.LINE_AA)

        # Draw Skeleton
        for pair in self.point_pairs:
            partA = pair[0]
            partB = pair[1]

            if points[partA] and points[partB]:
                cv2.line(img_cv2,
                         points[partA],
                         points[partB],
                         (0, 255, 0), 3)
                cv2.circle(img_cv2,
                           points[partA],
                           3,
                           (0, 0, 255),
                           thickness=-1,
                           lineType=cv2.FILLED)

        # Show the skeleton and the labelled keypoints side by side
        plt.figure(figsize=[10, 10])
        plt.subplot(1, 2, 1)
        plt.imshow(cv2.cvtColor(img_cv2, cv2.COLOR_BGR2RGB))
        plt.axis("off")
        plt.subplot(1, 2, 2)
        plt.imshow(cv2.cvtColor(img_cv2_copy, cv2.COLOR_BGR2RGB))
        plt.axis("off")
        plt.show()
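The post stops at the class definition; here is a minimal usage sketch in the same style as the hand script above. The "models" directory and "body.jpg" are placeholders, not files from the original post.

if __name__ == '__main__':
    modelpath = "models"  # root directory containing pose/body_25, pose/coco, pose/mpi
    pose_model = general_pose_model(modelpath, mode="BODY25")
    res_points = pose_model.predict("body.jpg")
    pose_model.vis_pose("body.jpg", res_points)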

[1] - Heatmap output:


[2] - Pose output:


------------------------------------------------------------------------

For more OpenCV tutorials (Chinese & English), follow the opencv_tutorials repo on GitHub. Stars welcome ❤️❤️❤️

JimmyHHua/opencv_tutorials (github.com)