CIDetector Class Reference

Inherits from
NSObject
Conforms to
NSObject
Framework
Library/Frameworks/CoreImage.framework
Availability
Available in iOS 5.0 and later.
Declared in
CIDetector.h

Overview

A CIDetector object uses image processing to look for features (for example, faces) in a picture. You might also want to use the CIFaceFeature class, which can find eye and mouth positions in faces detected by a CIDetector object.

This class can maintain many state variables that can impact performance. So for best performance, reuse CIDetector instances instead of creating new ones.
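
A minimal Swift sketch of that workflow, assuming a UIKit app and an input UIImage (the function and variable names here are illustrative, not part of the API):

```swift
import CoreImage
import UIKit

// Detect faces in a UIImage and log details from each CIFaceFeature.
func logFaces(in photo: UIImage) {
    guard let ciImage = CIImage(image: photo) else { return }

    // Create the detector once and reuse it; creation is relatively expensive.
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    let faces = detector?.features(in: ciImage) as? [CIFaceFeature] ?? []
    for face in faces {
        print("Face bounds:", face.bounds)
        if face.hasLeftEyePosition { print("Left eye:", face.leftEyePosition) }
        if face.hasRightEyePosition { print("Right eye:", face.rightEyePosition) }
        if face.hasMouthPosition { print("Mouth:", face.mouthPosition) }
    }
}
```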

Tasks

Creating a Detector Object

+ detectorOfType:context:options:

Using a Detector Object to Find Features

- featuresInImage:
- featuresInImage:options:

Class Methods

detectorOfType:context:options:

Creates and returns a configured detector.

+ (CIDetector *)detectorOfType:(NSString *)type context:(CIContext *)context options:(NSDictionary *)options
Parameters
type

A string indicating the kind of detector you are interested in. See “Detector Types”.

context

A Core Image context that the detector can use when analyzing an image.

options

A dictionary containing details on how you want the detector to be configured. See “Detector Configuration Keys”.

Return Value

A configured detector.

Discussion

A CIDetector object can potentially create and hold a significant amount of resources. Where possible, reuse the same CIDetector instance. Also, your application performs better if the CIContext used to initialize the detector is the same context used to process the CIImage objects the detector will analyze.
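
For example, a brief Swift sketch of that reuse pattern (names are illustrative):

```swift
import CoreImage

// Create one CIContext and one CIDetector up front, then reuse both for
// every CIImage that is processed with that same context.
let sharedContext = CIContext(options: nil)
let faceDetector = CIDetector(ofType: CIDetectorTypeFace,
                              context: sharedContext,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

// Reusing faceDetector avoids re-creating its internal resources per image.
func detectFaces(in images: [CIImage]) -> [[CIFeature]] {
    return images.map { faceDetector?.features(in: $0) ?? [] }
}
```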

Availability
  • Available in iOS 5.0 and later.
Declared In
CIDetector.h

Instance Methods

featuresInImage:

Searches for features in an image.

- (NSArray *)featuresInImage:(CIImage *)image
Parameters
image

The image you want to examine.

Return Value

An array of CIFeature objects. Each object represents a feature detected in the image.
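
A small Swift sketch, assuming you already have a configured detector (the helper name is illustrative):

```swift
import CoreImage

// The returned CIFeature objects report their bounds in the image's
// coordinate space (origin at the lower-left corner).
func featureRects(in image: CIImage, using detector: CIDetector) -> [CGRect] {
    return detector.features(in: image).map { $0.bounds }
}
```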

Availability
  • Available in iOS 5.0 and later.
Declared In
CIDetector.h

featuresInImage:options:

Searches for features in an image based on the specified image orientation.

- (NSArray *)featuresInImage:(CIImage *)image options:(NSDictionary *)options
Parameters
image

The image you want to examine.

options

A dictionary that specifies face detection options. See “Feature Detection Keys” for allowed keys and their possible values.

Return Value

An array of CIFeature objects. Each object represents a feature detected in the image.

Discussion

The options dictionary should contain a value for the key CIDetectorImageOrientation, and may contain other values specifying optional face-recognition features.
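
A Swift sketch of that usage, assuming the image carries EXIF orientation metadata (the helper name and fallback value are illustrative):

```swift
import CoreImage
import ImageIO

// Pass the image's EXIF orientation so faces that are not upright in
// pixel space can still be detected.
func detectOrientedFaces(in image: CIImage, with detector: CIDetector) -> [CIFaceFeature] {
    // Fall back to 1 (origin in the top-left corner) when the image
    // carries no orientation metadata.
    let orientation = image.properties[kCGImagePropertyOrientation as String] as? Int ?? 1
    let options: [String: Any] = [CIDetectorImageOrientation: orientation]
    return detector.features(in: image, options: options) as? [CIFaceFeature] ?? []
}
```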

Availability
  • Available in iOS 5.0 and later.
Declared In
CIDetector.h

Constants

Detector Types

Strings used to specify the type of detector you are interested in.

NSString* const CIDetectorTypeFace;
Constants
CIDetectorTypeFace

A detector that searches for faces in a photograph.

Available in iOS 5.0 and later.

Declared in CIDetector.h.

Detector Configuration Keys

Keys used in the options dictionary to configure a detector.

NSString* const CIDetectorAccuracy;
Constants
CIDetectorAccuracy

A key used to specify the desired accuracy for the detector.

The value associated with the key should be one of the values found in “Detector Accuracy Options”.

Available in iOS 5.0 and later.

Declared in CIDetector.h.

CIDetectorTracking

A key used to enable or disable face tracking for the detector. Use this option when you want to track faces across frames in a video.

Available in iOS 6.0 and later.

Declared in CIDetector.h.

CIDetectorMinFeatureSize

A key used to specify the minimum size that the detector will recognize as a feature.

The value for this key is an NSNumber object ranging from 0.0 through 1.0 that represents a fraction of the minor dimension of the image.

Available in iOS 6.0 and later.

Declared in CIDetector.h.
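
A Swift sketch combining these configuration keys for a video use case (the option values shown are illustrative choices, not requirements):

```swift
import CoreImage

// A detector configured for live video: favor speed over accuracy, ask
// Core Image to track faces across frames, and ignore faces smaller than
// 10% of the frame's shorter dimension.
let videoOptions: [String: Any] = [
    CIDetectorAccuracy: CIDetectorAccuracyLow,
    CIDetectorTracking: true,
    CIDetectorMinFeatureSize: 0.1
]
let videoFaceDetector = CIDetector(ofType: CIDetectorTypeFace,
                                   context: nil,
                                   options: videoOptions)
```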

Detector Accuracy Options

Value options used to specify the desired accuracy of the detector.

NSString* const CIDetectorAccuracyLow;
NSString* const CIDetectorAccuracyHigh;
Constants
CIDetectorAccuracyLow

Indicates that the detector should choose techniques that are lower in accuracy, but can be processed more quickly.

Available in iOS 5.0 and later.

Declared in CIDetector.h.

CIDetectorAccuracyHigh

Indicates that the detector should choose techniques that are higher in accuracy, even if it requires more processing time.

Available in iOS 5.0 and later.

Declared in CIDetector.h.
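
A short Swift sketch of that tradeoff (the helper name is illustrative):

```swift
import CoreImage

// Low accuracy is usually adequate for live preview; high accuracy suits
// one-shot analysis of still photographs.
func makeFaceDetector(forStillImages: Bool) -> CIDetector? {
    let accuracy = forStillImages ? CIDetectorAccuracyHigh : CIDetectorAccuracyLow
    return CIDetector(ofType: CIDetectorTypeFace,
                      context: nil,
                      options: [CIDetectorAccuracy: accuracy])
}
```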

Feature Detection Keys

Keys used in the options dictionary for featuresInImage:options:.

NSString* const CIDetectorImageOrientation;
NSString* const CIDetectorEyeBlink;
NSString* const CIDetectorSmile;
Constants
CIDetectorImageOrientation

An option for the display orientation of the image whose features you want to detect.

The value of this key is an NSNumber object whose value is an integer between 1 and 8. The TIFF and EXIF specifications define these values to indicate where the pixel coordinate origin (0,0) of the image should appear when it is displayed. The default value is 1, indicating that the origin is in the top left corner of the image. For further details, see kCGImagePropertyOrientation.

Core Image only detects faces whose orientation matches that of the image. You should provide a value for this key if you want to detect faces in a different orientation.

Available in iOS 5.0 and later.

Declared in CIDetector.h.

CIDetectorEyeBlink

An option for whether Core Image will perform additional processing to recognize closed eyes in detected faces.

Available in iOS 7.0 and later.

Declared in CIDetector.h.

CIDetectorSmile

An option for whether Core Image will perform additional processing to recognize smiles in detected faces.

Available in iOS 7.0 and later.

Declared in CIDetector.h.
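
A Swift sketch using these keys together (assumes an upright image; names are illustrative):

```swift
import CoreImage

// Request smile and eye-blink analysis along with the image orientation,
// then read the extra flags from each CIFaceFeature.
func describeFaces(in image: CIImage, with detector: CIDetector) {
    let options: [String: Any] = [
        CIDetectorImageOrientation: 1,   // assumes the origin is in the top-left corner
        CIDetectorSmile: true,
        CIDetectorEyeBlink: true
    ]
    let faces = detector.features(in: image, options: options) as? [CIFaceFeature] ?? []
    for face in faces {
        print("smiling:", face.hasSmile,
              "left eye closed:", face.leftEyeClosed,
              "right eye closed:", face.rightEyeClosed)
    }
}
```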

On iOS, you can use the CIDetector class from the Core Image framework to detect faces and then extract them from a photo. The following code detects faces in a UIImage and marks each one with a red border; the detected bounds can also be used to crop the face out of the image, as sketched after this example:

```swift
func detectFace(withImage image: UIImage) {
    // Core Image works on CIImage, so convert the UIImage first.
    guard let personCIImg = CIImage(image: image) else { return }

    // Configure the detector for high accuracy.
    let opts: [String: Any] = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: opts)

    // features(in:) returns [CIFeature]; cast to CIFaceFeature for face details.
    let faces = detector?.features(in: personCIImg) as? [CIFaceFeature] ?? []
    for face in faces {
        // Note: CIFaceFeature bounds use Core Image's bottom-left origin;
        // convert them to UIKit's top-left coordinate space (and scale them
        // to the image view) before displaying in a real app.
        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear

        // Add the red box on top of the image view (imgView is assumed to be
        // the UIImageView displaying the photo).
        imgView.addSubview(faceBox)
        print("Face bounds ------> \(faceBox.frame)")
    }
}
```

This code converts the UIImage to a CIImage, runs face detection with CIDetector, and marks each detected face by adding a red border view to the UIImageView. You can modify and extend it to fit your needs.
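
To actually crop a detected face out of the source image, one option is to crop the CIImage to the feature's bounds and render that region with a CIContext. A minimal sketch, assuming iOS 10 or later for cropped(to:) (the function name is illustrative):

```swift
import CoreImage
import UIKit

// Sketch: return one UIImage per detected face, cropped to the face's bounds.
func croppedFaces(from image: UIImage) -> [UIImage] {
    guard let ciImage = CIImage(image: image) else { return [] }

    // Reuse one context for both detection and rendering.
    let context = CIContext(options: nil)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: context,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    let faces = detector?.features(in: ciImage) as? [CIFaceFeature] ?? []
    return faces.compactMap { face -> UIImage? in
        // Keep only the pixels inside the face rectangle, then render them.
        let faceImage = ciImage.cropped(to: face.bounds)
        guard let cgImage = context.createCGImage(faceImage, from: face.bounds) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}
```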