Face Recognition on Embedded Devices

Original link

Welcome to visit this site - AsterCasc

Preface

In the previous article, Face Detection on Embedded Devices, we used the libfacedetection library for simple face detection. Now we will try face recognition using OpenCV's own face library.

Compiling OpenCV

For setting up the embedded cross-compilation environment and the basic OpenCV build, see the earlier articles on building an armv7 cross-compilation environment and the embedded OpenCV build example. However, since the face library is not one of OpenCV's core modules but lives in opencv_contrib, an extra compilation step is needed.

First download opencv_contrib and extract it to a directory of your choice, say /usr/local/opencv/opencv_contrib-4.8.0. The build command given in the earlier article was mkdir build && cd build && cmake -DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake -DCMAKE_INSTALL_PREFIX=/usr/local/opencv/install/ ../../.., and only a small change is needed here. If you want to build all of the opencv_contrib modules, add the option -DOPENCV_EXTRA_MODULES_PATH=/usr/local/opencv/opencv_contrib-4.8.0/modules; if you only need some of them, point the same option at the specific module instead, e.g. -DOPENCV_EXTRA_MODULES_PATH=/usr/local/opencv/opencv_contrib-4.8.0/modules/face
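Putting those options together, a typical configure-and-build sequence might look like the following (the paths are the examples used above; adjust them to your own layout and toolchain):

```shell
# Configure an embedded OpenCV build that also compiles the
# opencv_contrib face module (paths follow the examples above).
mkdir build && cd build
cmake \
    -DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake \
    -DCMAKE_INSTALL_PREFIX=/usr/local/opencv/install/ \
    -DOPENCV_EXTRA_MODULES_PATH=/usr/local/opencv/opencv_contrib-4.8.0/modules/face \
    ../../..
make -j"$(nproc)" && make install
```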

For more about opencv_contrib, see tutorial_contrib_root_4.8.0

Implementation

We can follow the official documentation's tutorial, tutorial_face_main_4.8.0. OpenCV provides three built-in algorithms: Eigenfaces, Fisherfaces, and Local Binary Patterns Histograms (LBPH). Since Eigenfaces and Fisherfaces perform relatively poorly on small training sets, we choose LBPH here. The documentation explains the reason for the poor performance:

Now real life isn’t perfect. You simply can’t guarantee perfect light settings in your images or 10 different images of a person. So what if there’s only one image for each person? Our covariance estimates for the subspace may be horribly wrong, so will the recognition. Remember the Eigenfaces method had a 96% recognition rate on the AT&T Facedatabase? How many images do we actually need to get such useful estimates? Here are the Rank-1 recognition rates of the Eigenfaces and Fisherfaces method on the AT&T Facedatabase, which is a fairly easy image database:

So in order to get good recognition rates you’ll need at least 8(±1) images for each person and the Fisherfaces method doesn’t really help here. The above experiment is a 10-fold cross validated result carried out with the facerec framework at: https://github.com/bytefish/facerec. This is not a publication, so I won’t back these figures with a deep mathematical analysis. Please have a look into [171] for a detailed analysis of both methods, when it comes to small training datasets.

So some research concentrated on extracting local features from images. The idea is to not look at the whole image as a high-dimensional vector, but describe only local features of an object. The features you extract this way will have a low-dimensionality implicitly. A fine idea! But you’ll soon observe the image representation we are given doesn’t only suffer from illumination variations. Think of things like scale, translation or rotation in images - your local description has to be at least a bit robust against those things. Just like SIFT, the Local Binary Patterns methodology has its roots in 2D texture analysis. The basic idea of Local Binary Patterns is to summarize the local structure in an image by comparing each pixel with its neighborhood. Take a pixel as center and threshold its neighbors against. If the intensity of the center pixel is greater-equal its neighbor, then denote it with 1 and 0 if not. You’ll end up with a binary number for each pixel

Image Conversion

First we need to convert our color images to grayscale. This compresses the image toward the minimum pixel data, and by discarding the color planes and keeping only the luminance plane it reduces the dimensionality of the image space we have to reason about. To see why this simplifies things, recall the white-gold versus blue-black dress debate: color perception is ambiguous in ways that luminance is not.

Face recognition presupposes face detection, so here we need to load a pretrained model for face detection. We can use the pretrained models shipped with OpenCV, normally located under data/haarcascades. Then we need some faces for training, ideally each person's face under several angles and lighting conditions; ignoring training cost, more data is better for any of these algorithms. But since this article is about small training sets, we provide just one clear face per person, placing the 8 faces in the face/newpic directory. Example code:

#include <opencv2/opencv.hpp>
#include <opencv2/face.hpp>
#include <iostream>
#include <string>
#include <vector>

void initFace()
{
    // load the Haar cascade face detector
    cv::CascadeClassifier faceClassifier;
    if (!faceClassifier.load("haarcascade_frontalface_default.xml"))
    {
        std::cout << "Xml not found" << std::endl;
        return;
    }
    // convert each source photo into a normalized grayscale face crop
    for (int i = 1; i <= 8; i++)
    {
        cv::Mat     faceGray;
        std::string path("face/newpic/");
        path.append(std::to_string(i));
        path.append(".jpeg");
        cv::Mat img = cv::imread(path);
        if (img.empty())
        {
            std::cout << "Cannot read " << path << std::endl;
            continue;
        }
        cv::cvtColor(img, faceGray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(faceGray, faceGray);

        std::vector<cv::Rect> faces;
        faceClassifier.detectMultiScale(faceGray, faces);

        for (size_t j = 0; j < faces.size(); j++)
        {
            cv::Mat curFace = faceGray(faces[j]);
            cv::Mat convertedFace;
            if (curFace.cols > 100)
            {
                cv::resize(curFace, convertedFace, cv::Size(100, 100));
                std::string savePath("face/100/");
                savePath.append(std::to_string(i));
                savePath.append(".pgm");
                // QString savePath = QString("face/100/%1.bmp").arg(i);
                cv::imwrite(savePath, convertedFace);
            }
        }
    }
    std::cout << "Conversion finish" << std::endl;
}

The code above generates eight 100x100 grayscale face images, one per person

Model Training

We use the LBPH algorithm as the example; the other algorithms perform too poorly on small training sets to recommend. Admittedly, LBPH itself is quite old, but since this article demonstrates OpenCV's native algorithms, it is the best of the available options. Even so, LBPH is entirely adequate when you are not building a large commercial system and do not need especially high accuracy

void generateModel()
{
    std::vector<cv::Mat> faces;
    std::vector<int>     labels;

    // load the eight normalized grayscale faces; label i identifies person i
    for (int i = 1; i <= 8; i++)
    //        for (int i = 1; i <= 7; i++)
    {
        std::string path("face/100/");
        path.append(std::to_string(i));
        path.append(".pgm");

        faces.push_back(cv::imread(path, cv::IMREAD_GRAYSCALE));
        labels.push_back(i);
    }

    cv::Ptr<cv::face::FaceRecognizer> model = cv::face::LBPHFaceRecognizer::create();
    model->train(faces, labels);

    // LBPH also supports incremental training: train on 7 people above,
    // then add the 8th afterwards with update()
    //    std::vector<cv::Mat> faces2;
    //    std::vector<int>     labels2;
    //    std::string          path2("face/100/8.pgm");
    //    faces2.push_back(cv::imread(path2, cv::IMREAD_GRAYSCALE));
    //    labels2.push_back(8);
    //    model->update(faces2, labels2);

    model->save("TestLBPHModel.xml");
    std::cout << "Generation finish" << std::endl;
}

At this point we have obtained TestLBPHModel.xml, a model trained on 8 people

Face Recognition

Finally, we use a different photo of the 8th person to see how well recognition works

void testFace()
{
    // load the Haar cascade face detector
    cv::CascadeClassifier faceClassifier;
    if (!faceClassifier.load("haarcascade_frontalface_default.xml"))
    {
        std::cout << "Xml not found" << std::endl;
        return;
    }
    // load the trained LBPH model
    cv::Ptr<cv::face::FaceRecognizer> model = cv::face::LBPHFaceRecognizer::create();
    model->read("TestLBPHModel.xml");
    // detect the face in the test image, then predict its label
    cv::Mat     faceGray;
    std::string path("face/newpic/test.jpeg");
    cv::Mat     img = cv::imread(path);
    if (img.empty())
    {
        std::cout << "Cannot read " << path << std::endl;
        return;
    }
    cv::cvtColor(img, faceGray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(faceGray, faceGray);

    std::vector<cv::Rect> faces;
    faceClassifier.detectMultiScale(faceGray, faces);

    if (0 == faces.size())
    {
        std::cout << "Test pic does not contain any face" << std::endl;
    }

    for (size_t i = 0; i < faces.size(); ++i)
    {
        cv::Mat curFace = faceGray(faces[i]);
        cv::Mat convertedFace;
        if (curFace.cols > 100)
        {
            cv::resize(curFace, convertedFace, cv::Size(100, 100));

            int    label      = -1;
            double confidence = 0.0;
            model->predict(convertedFace, label, confidence);
            std::cout << "Test pic face is num " << label << " confidence is " << confidence << std::endl;
        }
        else
        {
            std::cout << "Test pic face too small" << std::endl;
        }
    }
}

int main(int argc, char* argv[])
{
    initFace();
    generateModel();
    testFace();
    return 0;
}

Finally, compile with arm-none-linux-gnueabihf-g++ main.cpp -o testFaceStatic -std=c++11 -static $(pkg-config --cflags --libs --static opencv4), then copy the resulting executable to the target machine and run it

The result is Test pic face is num 8 confidence is 66.327: recognition succeeded. Note that for LBPH the confidence is a distance, so lower is better; a value below about 80 can generally be accepted as the same person. Adjust the actual acceptance threshold to your training-set size, accuracy requirements, and parameters

Conclusion

If you want live face recognition from a camera feed and you are using Qt, you can refer to parts of the code in this site's article on using Qt with the OpenCV library for QR code recognition on Windows
