Integrating the SeetaFace6 Framework on iOS: From Capture to Pixel-Format Conversion, Face Detection, Liveness Detection, and Face Recognition

Foreword

I had barely finished integrating iFlytek's voice wake-up and speech recognition into the project when I was told to add face detection, liveness detection, and face recognition as well. Well, fine, let's do it.
Luckily I had already been following a face recognition framework, SeetaFace6. SeetaFace6 is the latest release of SeetaTech's technology stack; this open edition is free for everyone to use and includes face recognition, liveness detection, attribute recognition, and quality assessment modules.
SeetaFace6 is written in C++ and is cross-platform, so it runs anywhere, though on less common platforms you may have to compile it yourself. None of that is a big deal; what matters most is that it is free for commercial use.

1. Preparation

SeetaFace6: download the matching iOS development package, and remember to download the model files as well
SeetaFace6 getting-started tutorial

Once downloaded, open your project, go to TARGETS -> General -> Frameworks, add the downloaded libraries, and then add the required dependency libraries.

[screenshot: adding the SeetaFace6 libraries under Frameworks]

Then, in Build Settings, set Enable Bitcode to No.

[screenshot: Enable Bitcode set to No]

2. Writing the Code

2.1 Capturing 32BGRA frames and converting them to 24BGR

SeetaFace6 expects input images in 24BGR format. When setting kCVPixelBufferPixelFormatTypeKey there is in fact a kCVPixelFormatType_24BGR option, but, perhaps because of my test device, of all the formats I tried only kCVPixelFormatType_32BGRA and kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange actually worked. If your device can capture directly in kCVPixelFormatType_24BGR, skip the conversion step and feed the raw data straight into face detection. Let's get started.
P.S. The relevant explanations are in the code comments, so I won't repeat them here.
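If you want to check what your own device accepts before committing to a conversion path, AVCaptureVideoDataOutput exposes the list of usable pixel formats. A minimal sketch (not from the original project; note that purely numeric format codes will print as control characters):

// Sketch: log every pixel format this output will accept, to see whether
// kCVPixelFormatType_24BGR is actually among them on your device.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
for (NSNumber *fmt in output.availableVideoCVPixelFormatTypes) {
    FourCharCode code = (FourCharCode)fmt.unsignedIntValue;
    NSLog(@"supported format: 0x%08x '%c%c%c%c'", (unsigned int)code,
          (char)(code >> 24), (char)(code >> 16), (char)(code >> 8), (char)code);
}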

FFVideoCapturer.h

#import <UIKit/UIKit.h>
#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

NS_ASSUME_NONNULL_BEGIN

@protocol FFVideoCapturerDelegate <NSObject>

/**
 Camera capture data output

 @param sampleBuffer the captured sample buffer
 */
- (void)videoCaptureOutputDataCallback:(CMSampleBufferRef)sampleBuffer;
/**
 Camera capture data output, converted to BGR

 @param frame the captured frame data
 @param channels number of channels
 @param width frame width
 @param height frame height
 */
- (void)videoCaptureOutputDataBGRCallback:(uint8_t *)frame Channels:(int)channels Width:(int)width Height:(int)height;
@end


@interface FFVideoCapturerParam : NSObject /* video capture parameters */

/** Camera position; defaults to the front camera, AVCaptureDevicePositionFront */
@property (nonatomic, assign) AVCaptureDevicePosition devicePosition;
/** Video resolution; defaults to AVCaptureSessionPreset640x480 */
@property (nonatomic, copy) AVCaptureSessionPreset sessionPreset;
/** Frame rate in frames per second; defaults to 25 fps */
@property (nonatomic, assign) NSInteger frameRate;
/** Camera orientation; defaults to the current screen orientation */
@property (nonatomic, assign) AVCaptureVideoOrientation videoOrientation;

@end


@interface FFVideoCapturer : NSObject
/** Delegate */
@property (nonatomic, weak) id <FFVideoCapturerDelegate> delegate;
/** Preview layer; add it to a view and give it a frame to show the camera feed */
@property (nonatomic, strong, readonly) AVCaptureVideoPreviewLayer *videoPreviewLayer;
/** Capture parameters */
@property (nonatomic, strong) FFVideoCapturerParam *capturerParam;


/** Singleton */
+ (instancetype)shareInstance;

/**
 Initializer

 @param param capture parameters
 @param error populated on failure
 @return 1 on success, -1 on failure
 */
- (int)initWithCaptureParam:(FFVideoCapturerParam *)param error:(NSError **)error;

/** Start capturing */
- (NSError *)startCapture;

/** Stop capturing */
- (NSError *)stopCapture;

/** Take a snapshot; the block returns a UIImage */
- (void)imageCapture:(void(^)(UIImage *image))completion;

/** Adjust the frame rate on the fly */
- (NSError *)adjustFrameRate:(NSInteger)frameRate;

/** Switch between front and back cameras */
- (NSError *)reverseCamera;

/** Change the video resolution while capturing */
- (void)changeSessionPreset:(AVCaptureSessionPreset)sessionPreset;
@end

NS_ASSUME_NONNULL_END
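
Before diving into the implementation, this is roughly how the capturer is meant to be used from a view controller. A sketch (self stands for any controller that adopts FFVideoCapturerDelegate; names beyond the header above are placeholders):

// Sketch: wiring FFVideoCapturer up from a view controller.
FFVideoCapturerParam *param = [[FFVideoCapturerParam alloc] init];
param.devicePosition = AVCaptureDevicePositionFront;
param.sessionPreset = AVCaptureSessionPreset640x480;

NSError *error = nil;
FFVideoCapturer *capturer = [FFVideoCapturer shareInstance];
[capturer initWithCaptureParam:param error:&error];
capturer.delegate = self; // implements FFVideoCapturerDelegate

// Add the preview layer and give it a frame so the camera feed is visible.
capturer.videoPreviewLayer.frame = self.view.bounds;
[self.view.layer addSublayer:capturer.videoPreviewLayer];

[capturer startCapture];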

FFVideoCapturer.m

@implementation FFVideoCapturerParam
-(instancetype)init
{
    if(self = [super init]){
       /* default parameters */
        _devicePosition = AVCaptureDevicePositionFront;// front camera by default
        _sessionPreset = AVCaptureSessionPreset640x480;// default resolution
        _frameRate = 25;
        _videoOrientation = AVCaptureVideoOrientationLandscapeRight;// fallback orientation
        
        switch ([UIDevice currentDevice].orientation) {
            case UIDeviceOrientationPortrait:
            case UIDeviceOrientationPortraitUpsideDown:
                _videoOrientation = AVCaptureVideoOrientationPortrait;
                break;
            case UIDeviceOrientationLandscapeRight:
                _videoOrientation = AVCaptureVideoOrientationLandscapeRight;
                break;
            case UIDeviceOrientationLandscapeLeft:
                _videoOrientation = AVCaptureVideoOrientationLandscapeLeft;
                break;
            default:
                break;
        }
    }

    return self;
}
@end


@interface FFVideoCapturer() <AVCaptureVideoDataOutputSampleBufferDelegate>

/** Capture session */
@property (nonatomic, strong) AVCaptureSession *captureSession;
/** Capture input device, i.e. the camera */
@property (nonatomic, strong) AVCaptureDeviceInput *captureDeviceInput;
/** Video data output */
@property (nonatomic, strong) AVCaptureVideoDataOutput *captureVideoDataOutput;
/** Audio data output */
@property (nonatomic, strong) AVCaptureAudioDataOutput *captureAudioDataOutput;
/** Still-image (snapshot) output */
@property (nonatomic, strong) AVCaptureStillImageOutput *captureStillImageOutput;
/** Preview layer; add it to a view to show the camera feed */
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer;
/** Output connection */
@property (nonatomic, strong) AVCaptureConnection *captureConnection;
/** Whether capture is in progress */
@property (nonatomic, assign) BOOL isCapturing;
/** Timestamp in ms of the last conversion */
@property (nonatomic, assign) UInt64 startRecordTime;
/** Timestamp in ms of the current frame */
@property (nonatomic, assign) UInt64 endRecordTime;
/** Whether to dump one frame of raw data to disk */
@property (nonatomic, assign) BOOL storeState;
@end

static FFVideoCapturer* _instance = nil;
@implementation FFVideoCapturer

- (void)dealloc
{
    NSLog(@"%s",__func__);
    
}


/** Singleton */
+(instancetype) shareInstance
{
    
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        _instance = [[self alloc]init];
    });
    return _instance;

}


- (int)initWithCaptureParam:(FFVideoCapturerParam *)param error:(NSError * _Nullable __autoreleasing *)error
{
    if(param)
    {
        NSError *errorMessage = nil;
        self.storeState = NO;
        self.capturerParam = param;

        /****************** input device ************************/
        // get all cameras
        NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        // pick the camera matching the requested position
        NSArray *captureDeviceArray = [cameras filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"position == %d",_capturerParam.devicePosition]];
        if(captureDeviceArray.count == 0){
            errorMessage = [self p_errorWithDomain:@"MAVideoCapture::Get Camera failed!"];
            if(error) *error = errorMessage;// propagate the failure to the caller
            return -1;
        }
        // wrap it in a device input
        AVCaptureDevice *camera = captureDeviceArray.firstObject;
        self.captureDeviceInput  = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&errorMessage];
        if(errorMessage){
            errorMessage = [self p_errorWithDomain:@"MAVideoCapture::AVCaptureDeviceInput init error"];
            if(error) *error = errorMessage;
            return -1;
        }
        
        /****************** output devices ************************/
        // video data output
        self.captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc]init];
        // kCVPixelFormatType_24BGR did not work on my device, so capture 32BGRA and convert later
        NSDictionary *videoSetting = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],kCVPixelBufferPixelFormatTypeKey, nil];
        [self.captureVideoDataOutput setVideoSettings:videoSetting];

        // serial queue and delegate for the data callback
        dispatch_queue_t outputQueue = dispatch_queue_create("VCVideoCapturerOutputQueue", DISPATCH_QUEUE_SERIAL);
        [self.captureVideoDataOutput setSampleBufferDelegate:self queue:outputQueue];

        // drop late frames
        self.captureVideoDataOutput.alwaysDiscardsLateVideoFrames = YES;

        // still-image (snapshot) output
        self.captureStillImageOutput = [[AVCaptureStillImageOutput alloc]init];
//        [self.captureStillImageOutput setOutputSettings:@{AVVideoCodecKey:AVVideoCodecJPEG}];
        
        /****************** session ************************/
        self.captureSession = [[AVCaptureSession alloc]init];
        self.captureSession.usesApplicationAudioSession = NO;
        // add the input to the session
        if([self.captureSession canAddInput:self.captureDeviceInput]){
            [self.captureSession addInput:self.captureDeviceInput];
        }else{
            [self p_errorWithDomain:@"MAVideoCapture::Add captureDeviceInput failed!"];
            return -1;
        }

        // add the video output to the session
        if([self.captureSession canAddOutput:self.captureVideoDataOutput]){
            [self.captureSession addOutput:self.captureVideoDataOutput];
        }else{
            [self p_errorWithDomain:@"MAVideoCapture::Add captureVideoDataOutput failed!"];
            return -1;
        }

        // add the snapshot output to the session
        if([self.captureSession canAddOutput:self.captureStillImageOutput]){
            [self.captureSession addOutput:self.captureStillImageOutput];
        }else{
            [self p_errorWithDomain:@"MAVideoCapture::Add captureStillImageOutput failed!"];
            return -1;
        }

        // set the resolution
        if([self.captureSession canSetSessionPreset:self.capturerParam.sessionPreset])
        {
            self.captureSession.sessionPreset = self.capturerParam.sessionPreset;
        }
        
        
        /****************** connection ************************/
        self.captureConnection = [self.captureVideoDataOutput connectionWithMediaType:AVMediaTypeVideo];

        // mirror the front camera; without this, front-camera frames come out flipped
        if(self.capturerParam.devicePosition == AVCaptureDevicePositionFront && self.captureConnection.supportsVideoMirroring){
            self.captureConnection.videoMirrored = YES;
        }
        self.captureConnection.videoOrientation = self.capturerParam.videoOrientation;

        // AVCaptureVideoPreviewLayer renders the raw camera feed
        self.videoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
        self.videoPreviewLayer.connection.videoOrientation = self.capturerParam.videoOrientation;
        self.videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
        if(error)
        {
            *error = errorMessage;
        }

        // apply the frame rate
        [self adjustFrameRate:self.capturerParam.frameRate];
    }
    return 1;
}


-(NSError *)startCapture
{// start capturing
    if (self.isCapturing){
        return [self p_errorWithDomain:@"MAVideoCapture::startCapture failed! is capturing!"];
    }
    // check camera permission
    AVAuthorizationStatus videoAuthStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    if(videoAuthStatus != AVAuthorizationStatusAuthorized){
        return [self p_errorWithDomain:@"MAVideoCapture::Camera authorization failed!"];// bail out instead of starting anyway
    }
    [self.captureSession startRunning];
    self.isCapturing = YES;
    _startRecordTime = [[NSDate date] timeIntervalSince1970]*1000;// record the start time in ms
    return nil;
}

-(NSError *)stopCapture
{// stop capturing
    if(!self.isCapturing){
        return [self p_errorWithDomain:@"MAVideoCapture::stopCapture failed! is not capturing!"];
    }
    [self.captureSession stopRunning];
    self.isCapturing = NO;
    
    return nil;
}



-(NSError *)reverseCamera
{// switch between front and back cameras

    // get all cameras
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    // figure out which way we are switching
    AVCaptureDevicePosition currentPosition = self.captureDeviceInput.device.position;
    AVCaptureDevicePosition toPosition = AVCaptureDevicePositionUnspecified;
    if(currentPosition == AVCaptureDevicePositionBack || currentPosition == AVCaptureDevicePositionUnspecified){
        toPosition = AVCaptureDevicePositionFront;
    }else{
        toPosition = AVCaptureDevicePositionBack;
    }
    NSArray *captureDeviceArray = [cameras filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"position == %d",toPosition]];

    if(captureDeviceArray.count == 0){
        return [self p_errorWithDomain:@"MAVideoCapture::reverseCamera failed! get new camera failed!"];
    }

    NSError *error = nil;
    AVCaptureDevice *camera = captureDeviceArray.firstObject;
    AVCaptureDeviceInput *newInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];

    // swap the input device
    [self.captureSession beginConfiguration];
    [self.captureSession removeInput:self.captureDeviceInput];
    if([_captureSession canAddInput:newInput]){
        [_captureSession addInput:newInput];
        self.captureDeviceInput = newInput;
    }
    [self.captureSession commitConfiguration];

    // re-fetch the connection and reapply mirroring and orientation
    self.captureConnection = [self.captureVideoDataOutput connectionWithMediaType:AVMediaTypeVideo];
    if(toPosition == AVCaptureDevicePositionFront && self.captureConnection.supportsVideoMirroring){
        self.captureConnection.videoMirrored = YES;
    }
    self.captureConnection.videoOrientation = self.capturerParam.videoOrientation;


    return nil;
}

- (void)imageCapture:(void (^)(UIImage * _Nonnull))completion
{// snapshot; the block returns a UIImage
    [self.captureStillImageOutput captureStillImageAsynchronouslyFromConnection:self.captureConnection completionHandler:^(CMSampleBufferRef  _Nullable imageDataSampleBuffer, NSError * _Nullable error)
    {
        UIImage *image = [UIImage imageWithData:[AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer]];
        completion(image);
        
    }];
    
}


- (NSError *)adjustFrameRate:(NSInteger)frameRate
{// adjust the frame rate on the fly
    NSError *error = nil;
    AVFrameRateRange *frameRateRange = [self.captureDeviceInput.device.activeFormat.videoSupportedFrameRateRanges objectAtIndex:0];
    if (frameRate > frameRateRange.maxFrameRate || frameRate < frameRateRange.minFrameRate){
        return [self p_errorWithDomain:@"MAVideoCapture::Set frame rate failed! out of range"];
    }

    [self.captureDeviceInput.device lockForConfiguration:&error];
    // use the frameRate argument (not capturerParam) so dynamic changes actually take effect
    self.captureDeviceInput.device.activeVideoMinFrameDuration = CMTimeMake(1, (int)frameRate);
    self.captureDeviceInput.device.activeVideoMaxFrameDuration = CMTimeMake(1, (int)frameRate);
    [self.captureDeviceInput.device unlockForConfiguration];
    return error;
}

- (void)changeSessionPreset:(AVCaptureSessionPreset)sessionPreset
{// change the resolution while capturing
    self.capturerParam.sessionPreset = sessionPreset;
    if([self.captureSession canSetSessionPreset:self.capturerParam.sessionPreset])
    {
        self.captureSession.sessionPreset = self.capturerParam.sessionPreset;
    }
}


- (NSError *)p_errorWithDomain:(NSString *)domain
{
    
    NSLog(@"%@",domain);
    return [NSError errorWithDomain:domain code:1 userInfo:nil];
}


#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate

/**
 Camera capture data callback

 @param output the capture output
 @param sampleBuffer the sample buffer describing the current frame
 @param connection the capture connection
 */
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    
//    if([self.delegate respondsToSelector:@selector(videoCaptureOutputDataCallback:)]){
//        [self.delegate videoCaptureOutputDataCallback:sampleBuffer];
//    }

    _endRecordTime = [[NSDate date] timeIntervalSince1970]*1000;
    if(_endRecordTime-_startRecordTime > 100){// convert and send a frame for detection at most every ~100 ms
        NSLog(@"====>decode start:%llu",_endRecordTime-_startRecordTime);
        [self processVideoSampleBufferToRGB:sampleBuffer];
        _startRecordTime = [[NSDate date] timeIntervalSince1970]*1000;
    }
    
}

/** Convert a kCVPixelFormatType_32BGRA frame to packed 24-bit BGR */
- (void)processVideoSampleBufferToRGB:(CMSampleBufferRef)sampleBuffer
{

    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    //size_t count = CVPixelBufferGetPlaneCount(pixelBuffer);
    //printf("%zud\n", count);

    // lock the buffer before touching its memory
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    int pixelWidth = (int) CVPixelBufferGetWidth(pixelBuffer);
    int pixelHeight = (int) CVPixelBufferGetHeight(pixelBuffer);
    // bytes per row can be larger than width * 4 because of padding, so index rows by it
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

    // BGRA data
    uint8_t *frame = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    uint8_t *bgr = malloc(pixelHeight * pixelWidth * 3);
    int BGRA = 4;
    int BGR  = 3;

    for (int y = 0; y < pixelHeight; y++) {// strip the alpha channel, row by row
        uint8_t *row = frame + y * bytesPerRow;
        for (int x = 0; x < pixelWidth; x++) {

            NSUInteger byteIndex = x * BGRA;
            NSUInteger newByteIndex = (y * pixelWidth + x) * BGR;

            // copy B, G, R and drop A (the alpha value is of no use here);
            // swap the three indices on the right if you need RGB instead of BGR
            bgr[newByteIndex + 0] = row[byteIndex + 0];// B
            bgr[newByteIndex + 1] = row[byteIndex + 1];// G
            bgr[newByteIndex + 2] = row[byteIndex + 2];// R
        }
    }

    // unlock
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

#if 0
    if(self.storeState){// dump one frame of BGR data for debugging

        NSString *dir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
        NSString *documentPath = [NSString stringWithFormat:@"%@/11.bgr", dir];
        FILE* fp = fopen(documentPath.UTF8String, "ab+");
        if(fp)
        {
            size_t size = fwrite(bgr, 1, pixelHeight * pixelWidth * 3, fp);
            NSLog(@"handleVideoData---fwrite:%lu", size);
            fclose(fp);
        }
        self.storeState = NO;// clear the flag so only a single frame is saved
    }
#endif

    // conversion done; hand the data to the delegate
    if([self.delegate respondsToSelector:@selector(videoCaptureOutputDataBGRCallback:Channels:Width:Height:)]){
        [self.delegate videoCaptureOutputDataBGRCallback:bgr Channels:3 Width:pixelWidth Height:pixelHeight];
    }
    if (NULL != bgr)
    {
        free(bgr);
        bgr = NULL;
    }
}

@end
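
The per-pixel loop above is easy to follow, but it burns CPU on every frame. On iOS 9+ the Accelerate framework can, as far as I know, do the same BGRA-to-BGR strip in one vectorized call. A sketch, assuming vImageConvert_BGRA8888toBGR888 is available on your deployment target (verify before relying on it):

#import <Accelerate/Accelerate.h>

// Sketch: SIMD-accelerated BGRA -> BGR using vImage (iOS 9+).
// Assumes the pixel buffer is kCVPixelFormatType_32BGRA.
- (uint8_t *)bgrFromPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    vImage_Buffer src;
    src.data     = CVPixelBufferGetBaseAddress(pixelBuffer);
    src.width    = CVPixelBufferGetWidth(pixelBuffer);
    src.height   = CVPixelBufferGetHeight(pixelBuffer);
    src.rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);// respects row padding

    vImage_Buffer dst;
    dst.width    = src.width;
    dst.height   = src.height;
    dst.rowBytes = src.width * 3;// tightly packed BGR, as SeetaFace expects
    dst.data     = malloc(dst.rowBytes * dst.height);

    vImage_Error err = vImageConvert_BGRA8888toBGR888(&src, &dst, kvImageNoFlags);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    if (err != kvImageNoError) { free(dst.data); return NULL; }
    return dst.data;// caller frees
}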

2.2 Loading the models and initializing face detection, landmark extraction, and liveness detection

FaceRecognizerManagers.h

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
NS_ASSUME_NONNULL_BEGIN

@protocol FaceRecognizerManagersDelegate <NSObject>

/**
 A face was detected

 @param face_frame the detected face rectangle
 @param width frame width
 @param height frame height
 */
- (void)faceDetectSuccessCallback:(CGRect)face_frame Width:(int)width Height:(int)height;

/**
 Liveness (anti-spoofing) result for the detected face

 @param status seeta::FaceAntiSpoofing status (0 real, 1 spoof, 2 fuzzy, 3 detecting)
 */
- (void)facePredictCallback:(int)status;
@end


@interface FaceRecognizerManagers : NSObject

/** Delegate */
@property (nonatomic, weak) id <FaceRecognizerManagersDelegate> delegate;

/** Singleton */
+ (instancetype)shareInstance;
/** Initialize the face recognition objects */
- (void)initFaceRecognizerObject;

/**
 Face detection

 @param frame converted BGR data
 @param channels number of channels, 3 by default
 @param width frame width
 @param height frame height
 */
- (void)faceDetect:(uint8_t *)frame Channels:(int)channels Width:(int)width Height:(int)height;
@end

NS_ASSUME_NONNULL_END

FaceRecognizerManagers.m

#import "FaceRecognizerManagers.h"
#import <SeetaFaceDetector600/seeta/FaceDetector.h>
#import <SeetaFaceAntiSpoofingX600/seeta/FaceAntiSpoofing.h>
#import <SeetaFaceLandmarker600/seeta/FaceLandmarker.h>
#import <SeetaFaceRecognizer610/seeta/FaceRecognizer.h>

@interface FaceRecognizerManagers(){
    seeta::FaceDetector *facedector;//人脸检测
    seeta::FaceLandmarker *faceLandmarker;//人脸关键点
    seeta::FaceAntiSpoofing *faceantspoofing;//活体检测
    seeta::FaceRecognizer *faceRecognizer;//人脸识别
}
/**人脸检测模型路径*/
@property (nonatomic,copy) NSString *faceDector_path;
/**人脸关键点模型路径*/
@property (nonatomic,copy) NSString *faceLandmarker_path;
/**局部活体检测模型路径*/
@property (nonatomic,copy) NSString *fasfirst_path;
/**全局活体检测模型路径*/
@property (nonatomic,copy) NSString *fassecond_path;
/**人脸识别模型路径*/
@property (nonatomic,copy) NSString *faceRecognizer_path;
@end

static FaceRecognizerManagers* _instance = nil;
const char *SPOOF_STATE_STR[] = { "real face","spoof face","unknown","judging" };
@implementation FaceRecognizerManagers

/** Singleton */
+(instancetype) shareInstance
{
    
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        _instance = [[self alloc]init];
    });
    return _instance;

}


-(void) initParam
{

    _faceDector_path = [[NSBundle mainBundle] pathForResource:@"face_detector" ofType:@"csta"];
    _faceLandmarker_path = [[NSBundle mainBundle] pathForResource:@"face_landmarker_pts5" ofType:@"csta"];
    _fasfirst_path = [[NSBundle mainBundle] pathForResource:@"fas_first" ofType:@"csta"];
    _fassecond_path = [[NSBundle mainBundle] pathForResource:@"fas_second" ofType:@"csta"];
    _faceRecognizer_path = [[NSBundle mainBundle] pathForResource:@"face_recognizer" ofType:@"csta"];
//    NSLog(@"===>%@====>%@",path,_faceDector_path);
}
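
// Sketch (my addition, not in the original post): pathForResource returns nil
// when a .csta model hasn't been added to the app target, and the C++
// constructors below will then crash on a NULL path. Cheap to guard first:
- (BOOL)modelPathsValid
{
    NSArray<NSString *> *paths = @[_faceDector_path ?: @"", _faceLandmarker_path ?: @"",
                                   _fasfirst_path ?: @"", _fassecond_path ?: @"",
                                   _faceRecognizer_path ?: @""];
    for (NSString *p in paths) {
        if (p.length == 0) {
            NSLog(@"a model file is missing from the bundle; check Copy Bundle Resources");
            return NO;
        }
    }
    return YES;
}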

- (void)facedector_init
{
    seeta::ModelSetting setting;
    setting.append([_faceDector_path UTF8String]);
    setting.set_device( seeta::ModelSetting::AUTO );
    setting.set_id(0);
    facedector = new seeta::FaceDetector(setting);
//    facedector->set(seeta::FaceDetector::PROPERTY_MIN_FACE_SIZE, 100);
}

- (void)facelandmarker_init
{
    seeta::ModelSetting setting;
    setting.append([_faceLandmarker_path UTF8String]);
    faceLandmarker = new seeta::FaceLandmarker(setting);
}

- (void)faceantspoofing_init:(int)version
{
    seeta::ModelSetting setting;
    switch (version)
    {
        case 0:
            setting.append([_fasfirst_path UTF8String]);
            break;
        case 1:
            setting.append([_fassecond_path UTF8String]);
            break;
        case 2:
            setting.append([_fasfirst_path UTF8String]);
            setting.append([_fassecond_path UTF8String]);
            break;
        default:
            NSLog(@"version input error");
            throw 2;
    }
    
    faceantspoofing = new seeta::FaceAntiSpoofing(setting);
}

- (void)facerecognizer_ini
{
    seeta::ModelSetting setting;
    setting.append([_faceRecognizer_path UTF8String]);
    faceRecognizer = new seeta::FaceRecognizer(setting);
}


- (void)initFaceRecognizerObject
{

    // set up default parameters (model paths)
    [self initParam];

    // face detection
    [self facedector_init];

    // facial landmarks
    [self facelandmarker_init];

    // liveness detection: 0 local, 1 global, 2 local + global
    [self faceantspoofing_init:0];

    // face recognition
//    [self facerecognizer_ini];

}

// In video mode, once a result has been produced and you want to start a new
// video, call ResetVideo to reset the recognizer state before feeding new frames
- (void) reset_video {
    faceantspoofing->ResetVideo();
}

// set how many video frames the liveness detector accumulates before judging
- (void) set_frame:(int32_t)number
{
    faceantspoofing->SetVideoFrameCount(number);// default is 10
}
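
// Sketch (my addition): how the video-mode pieces fit together. With way = 1,
// face_predict calls PredictVideo, which keeps returning DETECTING until
// SetVideoFrameCount frames have been accumulated; once it settles on
// REAL / SPOOF / FUZZY, call reset_video before starting on the next person:
//
//   [manager set_frame:10];                       // judge over 10 frames
//   // ...feed one frame per capture callback, reading back status...
//   if (status != seeta::FaceAntiSpoofing::DETECTING) {
//       [manager reset_video];                    // ready for a new video
//   }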


// face detection: detect faces and return them as an array
- (SeetaFaceInfoArray) face_detect:(SeetaImageData)image
{
    if (facedector == NULL)
    {
        NSLog(@"facedector not initialized");
        throw 1;
    }
    return facedector->detect(image);
}


// landmark extraction: extract the key points of a face in the image
- (std::vector<SeetaPointF>) face_mark:(const SeetaImageData)image WithSeetaRect:(const SeetaRect)face
{
    if (faceLandmarker == NULL)
    {
        NSLog(@"facelandmarker not initialized");
        throw 1;
    }
    // the five points come back in order: left eye center, right eye center,
    // nose tip, left mouth corner, right mouth corner
    return faceLandmarker->mark(image, face);

}



// liveness detection: way 0 = single-frame, way 1 = multi-frame (video)
- (int) face_predict:(const SeetaImageData)image WithSeetaRect:(const SeetaRect)face WithSeetaPointF:(std::vector<SeetaPointF>)v_points WithWay:(int)way
{

    if (faceantspoofing == NULL)
    {
        NSLog(@"faceantspoofing not initialized");
        throw 1;
    }

    SeetaPointF points[5];
    for (int i = 0; i < 5; i++)
    {
        points[i] = v_points.at(i);
    }

    int status;
    switch (way)
    {
        case 0:
            status = faceantspoofing->Predict(image, face, points);
            break;
        case 1:
            status = faceantspoofing->PredictVideo(image, face, points);
            break;
        default:
            NSLog(@"way input error");
            throw 2;
    }
    switch (status) {
        case seeta::FaceAntiSpoofing::REAL:
            NSLog(@"real face"); break;
        case seeta::FaceAntiSpoofing::SPOOF:
            NSLog(@"spoof face"); break;
        case seeta::FaceAntiSpoofing::FUZZY:
            NSLog(@"cannot judge"); break;
        case seeta::FaceAntiSpoofing::DETECTING:
            NSLog(@"still detecting"); break;
    }
    return status;
}

// face comparison: extract the feature vector of a face in the image
// (the returned buffer is allocated with new[]; the caller must delete[] it)
- (float*) fase_extract_feature:(const SeetaImageData)image WithSeetaPointF:(std::vector<SeetaPointF>)faces
{
    if (faceRecognizer == NULL)
    {
        NSLog(@"facerecognizer not initialized");
        throw 1;
    }
    SeetaPointF points[5];
    for (int i = 0; i < 5; i++)
    {
        points[i] = faces.at(i);
    }
    float* feature = new float[faceRecognizer->GetExtractFeatureSize()];
    faceRecognizer->Extract(image, points, feature);
    return feature;
}

// face comparison: compare two feature vectors and return their similarity
- (float) fase_compare:(float*)feature1 With:(float*)feature2
{
    return faceRecognizer->CalculateSimilarity(feature1, feature2);
}
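
// Sketch (my addition): using the two methods above to compare faces. The
// feature buffers are allocated with new[], so delete[] them when done. The
// 0.62 acceptance threshold is, to my knowledge, the value the SeetaFace6
// docs cite for face_recognizer.csta; treat it as a starting point only:
//
//   float *f1 = [manager fase_extract_feature:img1 WithSeetaPointF:points1];
//   float *f2 = [manager fase_extract_feature:img2 WithSeetaPointF:points2];
//   BOOL samePerson = [manager fase_compare:f1 With:f2] > 0.62f;
//   delete [] f1;
//   delete [] f2;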


// sort the face array by face size
- (void) face_sort:(SeetaFaceInfoArray)face_sfia
{
    int m = face_sfia.size;
    std::vector<SeetaFaceInfo> faces(m);
    for (int i = 0; i < face_sfia.size; i++)
    {
        faces.at(i) = face_sfia.data[i];
    }
    // partial_sort with begin() + 1 only guarantees the widest face ends up first
    std::partial_sort(faces.begin(), faces.begin() + 1, faces.end(), [](SeetaFaceInfo a, SeetaFaceInfo b) {
        return a.pos.width > b.pos.width;
    });
    for (int i = 0; i < face_sfia.size; i++)
    {
        face_sfia.data[i] = faces.at(i);
    }
}

// wrap BGR data in a SeetaImageData
- (SeetaImageData) frame_to_seetaImageData:(uint8_t *)frame Channels:(int)channels Width:(int)width Height:(int)height
{
    SeetaImageData img;
    img.width = width;
    img.height = height;
    img.channels = channels;
    img.data = frame;
    return img;
}

// detection data has arrived
- (void)faceDetect:(uint8_t *)frame Channels:(int)channels Width:(int)width Height:(int)height
{

    SeetaImageData img = [self frame_to_seetaImageData:frame Channels:channels Width:width Height:height];
    SeetaFaceInfoArray infoArray = [self face_detect:img];
    if (infoArray.size <= 0)
    {
        NSLog(@"no face detected");
        return;
    }
    if(infoArray.size > 1){
        [self face_sort:infoArray];
    }

    for (int i = 0; i < infoArray.size; i++) {// for each detected face: report it, extract landmarks, run liveness detection, and eventually face comparison
        SeetaFaceInfo faceInfo = infoArray.data[i];
        if(self.delegate && [self.delegate respondsToSelector:@selector(faceDetectSuccessCallback:Width:Height:)]){
            CGRect faceRect = CGRectMake(faceInfo.pos.x, faceInfo.pos.y, faceInfo.pos.width, faceInfo.pos.height);// renamed so it no longer shadows the frame parameter
            [self.delegate faceDetectSuccessCallback:faceRect Width:width Height:height];
        }

        std::vector<SeetaPointF> spf = [self face_mark:img WithSeetaRect:infoArray.data[i].pos];
        int status = [self face_predict:img WithSeetaRect:infoArray.data[i].pos WithSeetaPointF:spf WithWay:1];
        if(self.delegate && [self.delegate respondsToSelector:@selector(facePredictCallback:)]){
            [self.delegate facePredictCallback:status];
        }
        NSLog(@"status->%d,SPOOF_STATE_STR->%s",status,SPOOF_STATE_STR[status]);

        // once a live face is confirmed, run face comparison; the comparison
        // code is implemented above but not yet wired in for testing

    }

}

@end
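
To tie the two halves together, the capturer's BGR callback just forwards into the manager. A sketch of the glue in a view controller (the controller itself is a placeholder; only the delegate methods come from the headers above):

// Setup, e.g. in viewDidLoad:
[FaceRecognizerManagers shareInstance].delegate = self;
[[FaceRecognizerManagers shareInstance] initFaceRecognizerObject];

// FFVideoCapturerDelegate: hand each converted BGR frame to the detector.
- (void)videoCaptureOutputDataBGRCallback:(uint8_t *)frame Channels:(int)channels Width:(int)width Height:(int)height
{
    [[FaceRecognizerManagers shareInstance] faceDetect:frame Channels:channels Width:width Height:height];
}

// FaceRecognizerManagersDelegate: this fires on the capture queue, so hop to
// the main thread before touching UIKit, and scale the rect from pixel
// coordinates to the preview layer's coordinate space.
- (void)faceDetectSuccessCallback:(CGRect)face_frame Width:(int)width Height:(int)height
{
    dispatch_async(dispatch_get_main_queue(), ^{
        // draw or move a highlight view using face_frame here
    });
}

- (void)facePredictCallback:(int)status
{
    // update the UI with the liveness verdict (0 real, 1 spoof, 2 fuzzy, 3 detecting)
}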

3. Errors I Ran Into

If you don't set Enable Bitcode to No:

[screenshot]

This error complains that C++ files cannot be found. Rename any .m file that imports the SeetaFace6 headers to .mm. Alternatively you can set Build Settings -> Compile Sources As -> Objective-C++, but I don't recommend that, because forcing everything to compile as Objective-C++ can break plain C sources.

[screenshot]

报"Unknown type name ‘SeetaFaceDetector’", 这错真的很奇葩,奇葩到我都想哭了,突然之间就出现的,我把人脸识别库都删掉还是搞不到,最后新建一个项目,重新导入就可以了😢

[screenshot]

4. Results

Finally, here is the detection result on a photo of "yours truly":
[screenshot: face detection result]
Demo:https://download.csdn.net/download/FF_lz/15039639
