Firebase ML Kit: Building a Facial Gesture Detecting App in iOS (Part Two)

This article demonstrates how to detect different facial gestures (head nods, eye blinks, smiles, etc.) with the help of the Firebase ML Kit Face Detection API. Here we will mainly focus on using the Firebase ML Kit Vision API to detect different facial gestures. For the initial setup of the project, you can visit Part One of this series.

Face Detection Using ML Kit

With ML Kit’s face detection API, you can detect faces in an image, identify key facial features, and get the contours of detected faces.

With face detection, you can get the information you need to perform tasks like embellishing selfies and portraits, or generating avatars from a user’s photo. Because ML Kit can perform face detection in real time, you can use it in applications like video chat or games that respond to the player’s expressions.

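To make the API shape concrete before we wire up the live camera feed, here is a minimal sketch of a one-off detection call on a still image. The function name, its parameter, and the printed fields are illustrative assumptions rather than part of the original project; the real, camera-based pipeline is built step by step in the tutorial below.

import UIKit
import FirebaseMLVision

// Illustrative sketch: run a single face detection pass on a still image
// and log the classification probabilities for each detected face.
func detectFacialFeatures(in image: UIImage) {
    let options = VisionFaceDetectorOptions()
    options.classificationMode = .all // needed for smiling / eye-open probabilities

    let faceDetector = Vision.vision().faceDetector(options: options)
    let visionImage = VisionImage(image: image)

    faceDetector.process(visionImage) { faces, error in
        guard error == nil, let faces = faces, !faces.isEmpty else { return }
        for face in faces {
            print("Smiling probability: \(face.smilingProbability)")
            print("Left eye open probability: \(face.leftEyeOpenProbability)")
        }
    }
}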

You can learn more about the Firebase Face Detection API by clicking the link here.

Let's not waste any more time and get started.

Tutorial

In Part One of this series, we completed the initial setup of the application. In this article, we will see how to make use of the Firebase ML Vision API to detect different facial gestures.

Creating a FacialGestureCameraView.swift File

  1. Let's create a “FacialGestureCameraView.swift” file containing a subclass of UIView, and import the frameworks below at the top of the file.

import AVFoundation
import FirebaseMLVision

2. Then let's create the threshold variables below, which are used to determine the different facial gestures.

// Euler Z-angle thresholds (in degrees) used to detect left/right head tilts
public var leftNodThreshold: CGFloat = 20.0
public var rightNodThreshold: CGFloat = -4

// Probability cut-offs used to classify smiles and open/closed eyes
public var smileProbality: CGFloat = 0.8
public var openEyeMaxProbability: CGFloat = 0.95
public var openEyeMinProbability: CGFloat = 0.1

// Tracks whether the face is currently in a neutral (resting) state
private var restingFace: Bool = true

There is no need to explain these variables, as they are self-explanatory.

3. Let's create a few more lazy variables that will be used for computing facial gestures, as shown below.

// Firebase ML Vision entry point
private lazy var vision: Vision = {
    return Vision.vision()
}()

// Face detector options: accurate mode with classification enabled
// (needed for smiling and eye-open probabilities)
private lazy var options: VisionFaceDetectorOptions = {
    let option = VisionFaceDetectorOptions()
    option.performanceMode = .accurate
    option.landmarkMode = .none
    option.classificationMode = .all
    option.isTrackingEnabled = false
    option.contourMode = .none
    return option
}()

// Video output that delivers camera frames on a background queue
private lazy var videoDataOutput: AVCaptureVideoDataOutput = {
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.alwaysDiscardsLateVideoFrames = true
    videoOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
    videoOutput.connection(with: .video)?.isEnabled = true
    return videoOutput
}()

private let videoDataOutputQueue: DispatchQueue = DispatchQueue(label: Constants.videoDataOutputQueue)

// Preview layer that renders the camera feed inside this view
private lazy var previewLayer: AVCaptureVideoPreviewLayer = {
    let layer = AVCaptureVideoPreviewLayer(session: session)
    layer.videoGravity = .resizeAspectFill
    return layer
}()

// Front-facing wide-angle camera
private let captureDevice: AVCaptureDevice? = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                                      for: .video,
                                                                      position: .front)

private lazy var session: AVCaptureSession = {
    return AVCaptureSession()
}()
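
Note that the video data output queue above is labelled with Constants.videoDataOutputQueue, a type that is not shown in this article (it presumably comes from the Part One setup). If your project does not already define it, a minimal placeholder might look like the sketch below; the label string is arbitrary and purely illustrative.

// Hypothetical placeholder for the Constants type referenced above.
// Any unique string works as a dispatch queue label.
enum Constants {
    static let videoDataOutputQueue = "com.example.facialgesture.videoDataOutputQueue"
}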

4. Now let's write the logic to begin and end the capture session, as shown below.

func beginSession() {
    guard let captureDevice = captureDevice else { return }
    guard let deviceInput = try? AVCaptureDeviceInput(device: captureDevice) else { return }

    if session.canAddInput(deviceInput) {
        session.addInput(deviceInput)
    }

    if session.canAddOutput(videoDataOutput) {
        session.addOutput(videoDataOutput)
    }

    layer.masksToBounds = true
    layer.addSublayer(previewLayer)
    previewLayer.frame = bounds

    session.startRunning()
}

func stopSession() {
    session.stopRunning()
}
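
Note: starting an AVCaptureSession requires camera permission. If it was not already added during the Part One setup, make sure your Info.plist contains an NSCameraUsageDescription entry; otherwise the app will terminate the first time it tries to access the front camera.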

5. Now let's implement the “AVCaptureVideoDataOutputSampleBufferDelegate” delegate method and its helper methods, as shown below.

extension FacialGestureCameraView: AVCaptureVideoDataOutputSampleBufferDelegate {

    public func captureOutput(_ output: AVCaptureOutput,
                              didOutput sampleBuffer: CMSampleBuffer,
                              from connection: AVCaptureConnection) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            print("Failed to get image buffer from sample buffer.")
            return
        }

        // Wrap the camera frame in a VisionImage with the correct orientation metadata
        let visionImage = VisionImage(buffer: sampleBuffer)
        let metadata = VisionImageMetadata()
        let visionOrientation = visionImageOrientation(from: imageOrientation())
        metadata.orientation = visionOrientation
        visionImage.metadata = metadata

        let imageWidth = CGFloat(CVPixelBufferGetWidth(imageBuffer))
        let imageHeight = CGFloat(CVPixelBufferGetHeight(imageBuffer))

        // Run face detection off the main thread
        DispatchQueue.global().async {
            self.detectFacesOnDevice(in: visionImage,
                                     width: imageWidth,
                                     height: imageHeight)
        }
    }

    private func visionImageOrientation(from imageOrientation: UIImage.Orientation) -> VisionDetectorImageOrientation {
        switch imageOrientation {
        case .up:
            return .topLeft
        case .down:
            return .bottomRight
        case .left:
            return .leftBottom
        case .right:
            return .rightTop
        case .upMirrored:
            return .topRight
        case .downMirrored:
            return .bottomLeft
        case .leftMirrored:
            return .leftTop
        case .rightMirrored:
            return .rightBottom
        @unknown default:
            fatalError()
        }
    }

    private func imageOrientation(fromDevicePosition devicePosition: AVCaptureDevice.Position = .front) -> UIImage.Orientation {
        var deviceOrientation = UIDevice.current.orientation
        if deviceOrientation == .faceDown ||
            deviceOrientation == .faceUp ||
            deviceOrientation == .unknown {
            deviceOrientation = currentUIOrientation()
        }

        switch deviceOrientation {
        case .portrait:
            return devicePosition == .front ? .leftMirrored : .right
        case .landscapeLeft:
            return devicePosition == .front ? .downMirrored : .up
        case .portraitUpsideDown:
            return devicePosition == .front ? .rightMirrored : .left
        case .landscapeRight:
            return devicePosition == .front ? .upMirrored : .down
        case .faceDown, .faceUp, .unknown:
            return .up
        @unknown default:
            fatalError()
        }
    }

    private func currentUIOrientation() -> UIDeviceOrientation {
        let deviceOrientation = { () -> UIDeviceOrientation in
            switch UIApplication.shared.windows.first?.windowScene?.interfaceOrientation {
            case .landscapeLeft:
                return .landscapeRight
            case .landscapeRight:
                return .landscapeLeft
            case .portraitUpsideDown:
                return .portraitUpsideDown
            case .portrait, .unknown, .none:
                return .portrait
            @unknown default:
                fatalError()
            }
        }

        guard Thread.isMainThread else {
            var currentOrientation: UIDeviceOrientation = .portrait
            DispatchQueue.main.sync {
                currentOrientation = deviceOrientation()
            }
            return currentOrientation
        }

        return deviceOrientation()
    }
}

6. Now let's create a delegate protocol whose methods will be triggered when a particular gesture is detected, as shown below.

@objc public protocol FacialGestureCameraViewDelegate: class {
    @objc optional func doubleEyeBlinkDetected()
    @objc optional func smileDetected()
    @objc optional func nodLeftDetected()
    @objc optional func nodRightDetected()
    @objc optional func leftEyeBlinkDetected()
    @objc optional func rightEyeBlinkDetected()
}

7. Now let's create a “delegate” property in the “FacialGestureCameraView” class; the class that conforms to this protocol will implement the delegate methods, as shown below.

public weak var delegate: FacialGestureCameraViewDelegate?

8. Now let's write the most important method, where the facial gesture detection logic is implemented.

private func detectFacesOnDevice(in image: VisionImage, width: CGFloat, height: CGFloat) {
    let faceDetector = vision.faceDetector(options: options)

    faceDetector.process(image, completion: { features, error in
        if let error = error {
            print(error.localizedDescription)
            return
        }

        guard error == nil, let features = features, !features.isEmpty else {
            return
        }

        if let face = features.first {
            let leftEyeOpenProbability = face.leftEyeOpenProbability
            let rightEyeOpenProbability = face.rightEyeOpenProbability

            if face.headEulerAngleZ > self.leftNodThreshold {
                // Left head nod (tilt)
                if self.restingFace {
                    self.restingFace = false
                    self.delegate?.nodLeftDetected?()
                }
            } else if face.headEulerAngleZ < self.rightNodThreshold {
                // Right head nod (tilt)
                if self.restingFace {
                    self.restingFace = false
                    self.delegate?.nodRightDetected?()
                }
            } else if leftEyeOpenProbability > self.openEyeMaxProbability &&
                        rightEyeOpenProbability < self.openEyeMinProbability {
                // Right eye blink
                if self.restingFace {
                    self.restingFace = false
                    self.delegate?.rightEyeBlinkDetected?()
                }
            } else if rightEyeOpenProbability > self.openEyeMaxProbability &&
                        leftEyeOpenProbability < self.openEyeMinProbability {
                // Left eye blink
                if self.restingFace {
                    self.restingFace = false
                    self.delegate?.leftEyeBlinkDetected?()
                }
            } else if face.smilingProbability > self.smileProbality {
                // Smile detected
                if self.restingFace {
                    self.restingFace = false
                    self.delegate?.smileDetected?()
                }
            } else if leftEyeOpenProbability < self.openEyeMinProbability &&
                        rightEyeOpenProbability < self.openEyeMinProbability {
                // Both eyes blinked
                if self.restingFace {
                    self.restingFace = false
                    self.delegate?.doubleEyeBlinkDetected?()
                }
            } else {
                // Face returned to its resting state
                self.restingFace = true
            }
        }
    })
}
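
A quick note on the restingFace flag: it acts as a simple debounce. Once a gesture is reported, no further callbacks fire until the face returns to a neutral state (the final else branch), which prevents a single held expression from triggering the same delegate method on every frame.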

I know the article is getting lengthy, but we are almost done with our logic. The only thing pending is to implement the delegate methods in our “ViewController.swift” class. Let's implement that as well.

Implementing Logic In ViewController.swift File

  1. In this file, we need to implement the FacialGestureCameraViewDelegate methods so that we receive callbacks when a particular facial gesture is detected. Create an extension of ViewController and implement the delegate methods as shown below.

extension ViewController: FacialGestureCameraViewDelegate {

    func doubleEyeBlinkDetected() {
        print("Double Eye Blink Detected")
    }

    func smileDetected() {
        print("Smile Detected")
    }

    func nodLeftDetected() {
        print("Nod Left Detected")
    }

    func nodRightDetected() {
        print("Nod Right Detected")
    }

    func leftEyeBlinkDetected() {
        print("Left Eye Blink Detected")
    }

    func rightEyeBlinkDetected() {
        print("Right Eye Blink Detected")
    }
}

2. Add the remaining code to the “ViewController.swift” file, which starts the camera session and conforms to the “FacialGestureCameraViewDelegate” protocol.

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        addCameraViewDelegate()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        startGestureDetection()
    }

    override func viewDidDisappear(_ animated: Bool) {
        super.viewDidDisappear(animated)
        stopGestureDetection()
    }
}

extension ViewController {

    func addCameraViewDelegate() {
        cameraView.delegate = self
    }

    func startGestureDetection() {
        cameraView.beginSession()
    }

    func stopGestureDetection() {
        cameraView.stopSession()
    }
}

3. Then we need to create an IBOutlet of “FacialGestureCameraView” in our view controller. To do that, we first need to add a view to the ViewController scene in the “Main.storyboard” file and set its class to “FacialGestureCameraView”, as shown below.

[Screenshot: setting the view's custom class to FacialGestureCameraView in Main.storyboard]
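
One layout detail worth noting: beginSession() sets previewLayer.frame = bounds only once, so make sure the camera view already has its final size (for example, by pinning it with Auto Layout constraints in the storyboard) by the time the session starts in viewDidAppear.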

4. Once done, create an IBOutlet in the “ViewController.swift” file as shown below.

@IBOutlet weak var cameraView: FacialGestureCameraView!

Awesome. We are finally done with the implementation of our delegate methods, which will be triggered when a particular facial gesture is detected.

5. Now run the code and check whether the delegate methods are getting triggered. If everything runs successfully, you will see the output printed in the console.
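
For example, based on the delegate implementation above, blinking both eyes should print “Double Eye Blink Detected” and smiling should print “Smile Detected” in the Xcode console.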

Conclusion

In this article, we made use of the Firebase ML Kit Vision API and implemented custom delegate methods that get triggered when a particular facial gesture is detected. In Part Three of the series, we will learn how to make use of these delegate methods to implement some of the use cases.

The source code for this tutorial can be found here, and don't forget to run the “pod install” command before building the project.
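
For reference, below is a minimal Podfile sketch. It assumes the same Firebase ML Kit pods used in the Part One setup, and the target name and platform version are placeholders, so adjust them to match your own project.

platform :ios, '12.0'

target 'FacialGestureDetection' do
  use_frameworks!

  # Firebase ML Kit Vision APIs (assumed from the Part One setup)
  pod 'Firebase/MLVision'

  # On-device face detection model
  pod 'Firebase/MLVisionFaceModel'
end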

If you find this useful, feel free to share it. Thanks for reading!

Till Then

Image Credit: https://keepcalms.com/p/keep-learning-and-happy-coding/

Translated from: https://medium.com/swlh/firebase-ml-kit-building-a-facial-gesture-detecting-app-in-ios-part-two-2f5322906d1d
