java seetaface6 人脸跟踪实战,视频流人脸识别优化实战

项目地址

seetaface6SDK地址

前期准备

以下代码可根据自己的实际环境需求进行修改,欢迎大家加群讨论(群地址在gitee上面,请不要忘记点star)。
  1. 下载项目
  2. 下载seetaface6 模型文件: 模型文件1 (code:ngne),模型文件2 (code: t6j0)
  3. 配置模型文件到项目中,运行Test,当作初步了解seetaface6
  4. 将seeta-sdk-platform 打包成 jar,引入到自己项目中。
    上传到本地maven仓库:
	mvn install:install-file -DgroupId=com.seeta.sdk -DartifactId=seeta-sdk-platform -Dversion=1.23 -Dpackaging=jar -Dfile=D:\face\seeta-sdk-platform\seeta-sdk-platform\target\seeta-sdk-platform-1.23.jar

##目的

获得每个人脸在一段时间内出现的最清晰的图片。

人脸跟踪介绍

  1. 普通的照片人脸识别器在视频流的情况下去识别每一帧图片效率底下,而且分辨不出多帧下同一个人。
  2. 人脸跟踪是指在图像序列中确定各帧间人脸的对应关系的过程,即确定每个人脸的运动轨迹及其大小变化的过程。(不同帧同一个人返回的PID一样)

人脸质量介绍

  1. 跟踪的每一个人都有PID,不同帧同一个人返回的PID一样;
  2. 人脸质量检测的目的是区分同一个PID后,再区分照片质量高低,方便将高质量照片最终存库或最终去对比其他库做准备。
  3. 最终目的是减少人脸去底库对比的次数

人脸跟踪代码

 
import com.seeta.pool.SeetaConfSetting;
import com.seeta.proxy.FaceLandmarkerProxy;
import com.seeta.proxy.QualityOfLBNProxy;
import com.seeta.proxy.QualityOfPoseExProxy;
import com.seeta.sdk.*;
import com.seeta.sdk.util.LoadNativeCore;
import me.calvin.example.utils.OpenCVImageUtil;
import org.bytedeco.javacv.CanvasFrame;
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.OpenCVFrameConverter;
import org.bytedeco.javacv.OpenCVFrameGrabber;
import org.bytedeco.opencv.global.opencv_imgcodecs;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.Point;
import org.bytedeco.opencv.opencv_core.Rect;
import org.bytedeco.opencv.opencv_core.Scalar;

import javax.swing.*;
import java.io.IOException;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

import static org.bytedeco.opencv.global.opencv_imgproc.*;

/**
 * 本地摄像头人脸检测
 *
 * @author Calvin
 */
public class Seetaface6Example {


    //模型路径
    public static String CSTA_PATH = "D:\\face\\models";

    /**
     * 输出图片路径
     */
    public static String BASE_OUTPUT_PATH = "F:\\0workSpace\\AIAS\\4_video_sdks\\camera_face_sdk\\output\\";

    /**
     * 人脸识别器模型
     */
    public static String[] detector_cstas = {CSTA_PATH + "/face_detector.csta"};

    /**
     * 68点人脸关键点模型
     */
    public static String[] landmarker68_cstas = {CSTA_PATH + "/face_landmarker_pts68.csta"};
    /**
     * 5点人脸关键点模型
     */
    public static String[] landmarker5_cstas = {CSTA_PATH + "/face_landmarker_pts5.csta"};

    /**
     * 人脸清晰图模型
     */
    public static String[] qualityOfLBN_cstas = {CSTA_PATH + "/quality_lbn.csta"};

    /**
     * 姿态评估模型
     */
    public static String[] pose_estimation_cstas = {CSTA_PATH + "/pose_estimation.csta"};

    // 68关键点识别器
    public static SeetaConfSetting faceLandmarker68PoolSetting = new SeetaConfSetting(new SeetaModelSetting(0, landmarker68_cstas, SeetaDevice.SEETA_DEVICE_AUTO));
    public static FaceLandmarkerProxy faceLandmarker68Proxy = new FaceLandmarkerProxy(faceLandmarker68PoolSetting);


    // 5关键点识别器
    public static SeetaConfSetting faceLandmarker5PoolSetting = new SeetaConfSetting(new SeetaModelSetting(0, landmarker5_cstas, SeetaDevice.SEETA_DEVICE_AUTO));
    public static FaceLandmarkerProxy faceLandmarker5Proxy = new FaceLandmarkerProxy(faceLandmarker5PoolSetting);

    //人脸清晰度评估器
    public static SeetaConfSetting setting = new SeetaConfSetting(new SeetaModelSetting(0, qualityOfLBN_cstas, SeetaDevice.SEETA_DEVICE_AUTO));
    public static QualityOfLBNProxy qualityOfLBNProxy = new QualityOfLBNProxy(setting);

    //人脸姿态评估器
    public static SeetaConfSetting qualityOfPoseExsetting = new SeetaConfSetting(new SeetaModelSetting(0, pose_estimation_cstas, SeetaDevice.SEETA_DEVICE_AUTO));
    public static QualityOfPoseExProxy qualityOfPoseExProxy = new QualityOfPoseExProxy(qualityOfPoseExsetting);


    public static void main(String[] args) throws IOException {
        //加载 dll
        LoadNativeCore.LOAD_NATIVE(SeetaDevice.SEETA_DEVICE_AUTO);
        faceDetection();
    }

    /**
     * 人脸检测
     */
    public static void faceDetection()
            throws IOException {

        // 开启摄像头,获取图像(得到的图像为frame类型,需要转换为mat类型进行检测和识别)
        OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();

        // Frame与Mat转换
        OpenCVFrameConverter.ToMat converter = new OpenCVFrameConverter.ToMat();

        CanvasFrame canvas = new CanvasFrame("人脸检测"); // 新建一个预览窗口

        canvas.setCanvasSize(1000, 800);//设置大小

        canvas.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        canvas.setVisible(true);
        canvas.setFocusable(true);
        // 窗口置顶
        if (canvas.isAlwaysOnTopSupported()) {
            canvas.setAlwaysOnTop(true);
        }
        Frame frame;

        /**
         * 人脸跟踪器
         */
        FaceTracker faceTracker = new FaceTracker(detector_cstas[0], grabber.getImageWidth(), grabber.getImageHeight());

        for (; canvas.isVisible() && (frame = grabber.grab()) != null; ) {

            // 将获取的frame转化成mat数据类型
            Mat img = converter.convert(frame);
            //有可能为空
            if (img == null) {
                continue;
            }

            // 自定义 mat 转 SeetaImageData
            SeetaImageData image = OpenCVImageUtil.mat2SeetaImageData(img);

            //开始
            SeetaTrackingFaceInfo[] tracks = faceTracker.Track(image);

            // 遍历人脸
            for (SeetaTrackingFaceInfo item : tracks) {

                //官方源码没写get set 赋值不是很规范
                SeetaRect seetaRect = new SeetaRect();
                seetaRect.height = item.height;
                seetaRect.width = item.width;
                seetaRect.x = item.x;
                seetaRect.y = item.y;

                // 68点人脸关键点识别,为清晰度评估所用
                SeetaPointF[] point68FS = faceLandmarker68Proxy.mark(image, seetaRect);
                QualityOfLBNProxy.LBNClass lbnClass = qualityOfLBNProxy.detect(image, point68FS);

                // 5点人脸关键点识别,为姿态评估所用
                SeetaPointF[] point5FS = faceLandmarker5Proxy.mark(image, seetaRect);
                QualityOfPoseEx.QualityLevel level = qualityOfPoseExProxy.check(image, seetaRect, point5FS);

                //打印质量检测结果
                System.out.println(item.PID + ": " + "QualityLevel," + level + ", lbnClass===" + lbnClass.getBlurstate() + " === " + lbnClass.getLightstate() + " === " + lbnClass.getNoisestate());

                /**
                 * 保存时 上下左右都加10
                 */
                Rect face = new Rect(item.x - 10, item.y - 10, item.width + 20, item.height + 20);

                /**
                 * 保存图片,比一定真保存,同一个人只有分数高于前面才保存
                 */
                saveRect(item, img, lbnClass, level, face);

                // 绘制人脸矩形区域,scalar色彩顺序:BGR(蓝绿红)
                rectangle(img, face, new Scalar(0, 0, 255, 1));

                int pos_x = Math.max(face.tl().x() - 10, 0);
                int pos_y = Math.max(face.tl().y() - 10, 0);
                // 在人脸矩形上面绘制文字
                putText(
                        img,
                        "PID:" + item.PID,
                        new Point(pos_x, pos_y),
                        FONT_HERSHEY_COMPLEX,
                        1.0,
                        new Scalar(0, 0, 255, 2.0));
            }

            // 显示视频图像
            canvas.showImage(frame);
        }

        canvas.dispose();
        grabber.close();
    }


    /**
     * 将PID 质量评分,质量评估结果,保存到路径上
     *
     * @param pid      人脸PID 同一个人脸PID相同
     * @param score    分数
     * @param lbnClass 人脸清晰度评估结果
     * @param level    人脸姿态评估结果
     * @return path
     */
    public static String getPath(int pid, short score, QualityOfLBNProxy.LBNClass lbnClass, QualityOfPoseEx.QualityLevel level) {
        StringBuffer sb = new StringBuffer();
        sb.append(pid).append(", score ").append(score);
        if (level != null) {
            sb.append(", check ").append(level);
        }
        if (lbnClass != null) {
            sb.append(", lbn ").append(lbnClass.getBlurstate()).append(",").append(lbnClass.getNoisestate()).append(",").append(lbnClass.getLightstate());
        }

        sb.append(".jpg");
        return BASE_OUTPUT_PATH + sb;
    }


    /**
     * 人脸缓存  PID 人脸质量
     * <pid,Quality></>
     */
    public static Map<Integer, Quality> tempMap = Collections.synchronizedMap(new HashMap<>());

    /**
     * 保存正方形区域
     * 根据分数平判是否覆盖之前的图片
     *
     * @param item     跟踪到的1个人脸
     * @param img      原始帧照片
     * @param lbnClass 人脸清晰度结果
     * @param level    人脸姿态评估结果
     * @param face     人脸四方框
     */
    public static void saveRect(SeetaTrackingFaceInfo item, Mat img, QualityOfLBNProxy.LBNClass lbnClass, QualityOfPoseEx.QualityLevel level, Rect face) {
        // 计算分数
        short score = analysisScore(lbnClass, level);
        // 如果不包含 pid ,先直接保存图片和put到map
        if (!tempMap.containsKey(item.PID)) {
            Quality quality = new Quality(score, lbnClass, level);
            tempMap.put(item.PID, quality);
            saveImg(img, face, getPath(item.PID, score, lbnClass, level));
        } else {
            Quality quality = tempMap.get(item.PID);
            if (quality.getScore() < score) {
                quality = new Quality(score, lbnClass, level);
                tempMap.put(item.PID, quality);
                saveImg(img, face, getPath(item.PID, score, lbnClass, level));
            }
        }
    }


    /**
     * 分析分数 给各质量指标加权
     *
     * @param lbnClass 不想要 lbnClass 清晰度评估结果 可以传null
     * @param level    不想要 人脸姿态评估 结果 可以传null
     * @return 分数
     */
    public static short analysisScore(QualityOfLBNProxy.LBNClass lbnClass, QualityOfPoseEx.QualityLevel level) {

        short levelScore = 1;
        short blurstateScore = 1;
        short lightstate = 1;
        short noisestate = 1;

        if (level != null) {
            if (Objects.equals(QualityOfPoseEx.QualityLevel.HIGH, level)) {
                levelScore = 3;
            } else if (Objects.equals(QualityOfPoseEx.QualityLevel.MEDIUM, level)) {
                levelScore = 2;
            }
        }

        //不想要 lbnClass 清晰度评估结果 可以穿null
        if (lbnClass != null) {
            if (Objects.equals(QualityOfLBN.BLURSTATE.CLEAR, lbnClass.getBlurstate())) {
                blurstateScore = 2;
            }

            if (Objects.equals(QualityOfLBN.LIGHTSTATE.BRIGHT, lbnClass.getLightstate())) {
                lightstate = 2;
            }

            if (Objects.equals(QualityOfLBN.NOISESTATE.NONOISE, lbnClass.getNoisestate())) {
                noisestate = 2;
            }
        }

        /**
         * blurstateScore 比较重要,乘以 2
         */
        int lbnScore = blurstateScore * 2 + lightstate + noisestate;

        /**
         * levelScore 比较重要,乘以 2
         */
        return (short) (lbnScore + levelScore * 2);
    }

    /**
     * 保存头像图片
     *
     * @param img  原始帧图片
     * @param face 长方形
     * @param path 保存路径
     */
    public static void saveImg(Mat img, Rect face, String path) {
        Mat roi_img = new Mat(img, face);
        //创建临时的人脸拷贝图形
        Mat tmp_img = new Mat();
        //人脸拷贝
        roi_img.copyTo(tmp_img);
        opencv_imgcodecs.imwrite(path, tmp_img);
    }


    /**
     * 人脸质量对象
     */
    static class Quality {
        /**
         * 质量评分
         */
        private short score;
        /**
         * 人脸清晰度评估 结果
         */
        private QualityOfLBNProxy.LBNClass lbnClass;

        /**
         * 深度学习的 人脸姿态评估 结果
         */
        private QualityOfPoseEx.QualityLevel level;

        private Quality() {
        }

        public Quality(short score, QualityOfLBNProxy.LBNClass lbnClass, QualityOfPoseEx.QualityLevel level) {
            this.score = score;
            this.lbnClass = lbnClass;
            this.level = level;
        }

        public short getScore() {
            return score;
        }

        public void setScore(short score) {
            this.score = score;
        }

        public QualityOfLBNProxy.LBNClass getLbnClass() {
            return lbnClass;
        }

        public void setLbnClass(QualityOfLBNProxy.LBNClass lbnClass) {
            this.lbnClass = lbnClass;
        }

        public QualityOfPoseEx.QualityLevel getLevel() {
            return level;
        }

        public void setLevel(QualityOfPoseEx.QualityLevel level) {
            this.level = level;
        }
    }

}

  public static SeetaImageData mat2SeetaImageData(Mat matrix) {
    int cols = matrix.cols();
    int rows = matrix.rows();
    int elemSize = (int) matrix.elemSize();
    byte[] data = new byte[cols * rows * elemSize];
    matrix.data().get(data);
    switch (matrix.channels()) {
      case 1:
        break;
      case 3:
        byte b;
        for (int i = 0; i < data.length; i = i + 3) {
          b = data[i];
          data[i] = data[i + 2];
          data[i + 2] = b;
        }
        break;
      default:
        return null;
    }
    SeetaImageData seetaImageData = new SeetaImageData(cols, rows, matrix.channels());
    seetaImageData.data = data;

    return seetaImageData;
  }

主要的jar

         <dependency>
            <groupId>org.bytedeco</groupId>
            <artifactId>javacv-platform</artifactId>
            <version>1.5.7</version>
        </dependency>
        <dependency>
            <groupId>com.seeta.sdk</groupId>
            <artifactId>seeta-sdk-platform</artifactId>
            <version>1.23</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId>
            <version>2.5.0</version>
        </dependency>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>30.1.1-jre</version>
        </dependency>

输出结果

  1. 在BASE_OUTPUT_PATH路径下,生成每个人不同分数下的照片(如果只取保留最好的一张,需要修改保存图片时的代码)

图片名称结果如下:

0, score 10, check HIGH, lbn BLUR,HAVENOISE,DARK.jpg
0, score 12, check HIGH, lbn CLEAR,HAVENOISE,DARK.jpg
0, score 13, check HIGH, lbn CLEAR,NONOISE,DARK.jpg
0, score 14, check HIGH, lbn CLEAR,NONOISE,BRIGHT.jpg
1, score 9, check LOW, lbn CLEAR,HAVENOISE,BRIGHT.jpg
1, score 12, check MEDIUM, lbn CLEAR,NONOISE,BRIGHT.jpg
1, score 13, check HIGH, lbn CLEAR,NONOISE,DARK.jpg
2, score 9, check LOW, lbn CLEAR,NONOISE,DARK.jpg
2, score 10, check MEDIUM, lbn CLEAR,HAVENOISE,DARK.jpg
2, score 11, check HIGH, lbn BLUR,NONOISE,DARK.jpg
2, score 11, check MEDIUM, lbn CLEAR,NONOISE,DARK.jpg
2, score 12, check HIGH, lbn CLEAR,HAVENOISE,DARK.jpg
2, score 14, check HIGH, lbn CLEAR,NONOISE,BRIGHT.jpg
3, score 9, check MEDIUM, lbn BLUR,NONOISE,DARK.jpg
3, score 11, check HIGH, lbn BLUR,NONOISE,DARK.jpg
3, score 12, check HIGH, lbn CLEAR,HAVENOISE,DARK.jpg
3, score 13, check HIGH, lbn CLEAR,NONOISE,DARK.jpg
3, score 14, check HIGH, lbn CLEAR,NONOISE,BRIGHT.jpg
4, score 7, check LOW, lbn BLUR,NONOISE,DARK.jpg
5, score 9, check MEDIUM, lbn BLUR,NONOISE,DARK.jpg
5, score 10, check MEDIUM, lbn BLUR,NONOISE,BRIGHT.jpg
  1. 上面有5个PID,每个pid都有几个图片保存下来,经过肉眼对比 score分数对质量区分明显。

总结

  1. 人脸跟踪减少cpu和内存的使用,同时较少查底库次数。
  2. 人脸质量还有完整度评估,遮罩评估,眼睛开闭状态,完整度和尺寸等,可以根据需要进行加权打分。

不懂seetaface6SDK的可以先看gitee项目哦

  • 1
    点赞
  • 7
    收藏
    觉得还不错? 一键收藏
  • 7
    评论

“相关推荐”对你有帮助么?

  • 非常没帮助
  • 没帮助
  • 一般
  • 有帮助
  • 非常有帮助
提交
评论 7
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值