iOS Camera Capture with AVFoundation (Media Capture)

Scope of AVFoundation

AVFoundation is Apple's Objective-C framework for working with time-based media on iOS and OS X. It is the foundation for building media-centric applications.

Although you can already play audio and video on iOS and OS X directly through a WebView or HTML tags, anything beyond plain playback — recording, audio/video stream control, extra animation and image effects, and so on — requires a dedicated media framework.
(Figure: the AVFoundation capture architecture — an AVCaptureSession connecting device inputs to outputs, with AVCaptureConnection objects as the arrows between components.)

  • AVCaptureSession — capture session

AVCaptureSession acts like a virtual patch bay connecting input and output resources. It manages the data flowing from the physical devices (cameras and microphones) and routes it to its outputs.
You can additionally configure a session preset to control the format and quality of the captured data; the default is AVCaptureSessionPresetHigh.
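A minimal sketch of creating a session and choosing a preset (the session still needs inputs and outputs, which the full example below adds):

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    // AVCaptureSessionPresetHigh is already the default; shown here for illustration
    if ([session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
        session.sessionPreset = AVCaptureSessionPresetHigh;
    }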

  • AVCaptureDevice — capture device

AVCaptureDevice defines an interface to a physical device, with a large set of control methods for focus, exposure, white balance, flash, and more.
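For example, a sketch of grabbing the default camera and checking one of its capabilities (any actual configuration must happen inside a lock — see the lockForConfiguration note further down):

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([camera isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        // safe to set camera.focusMode between lockForConfiguration: / unlockForConfiguration
    }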

  • AVCaptureDeviceInput — capture device input

Before an AVCaptureDevice can be used, it must be wrapped in an AVCaptureDeviceInput instance, because a capture device cannot be added to an AVCaptureSession directly.
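A sketch of the wrapping step, assuming the `camera` and `session` from the sketches above:

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input && [session canAddInput:input]) {
        [session addInput:input];
    }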

  • AVCaptureOutput — capture output

AVCaptureOutput itself is only an abstract base class with many concrete subclasses; it represents a destination for the data coming out of the session. The architecture figure above shows what each subclass is for. Worth calling out separately: AVCaptureAudioDataOutput and AVCaptureVideoDataOutput give direct access to the digital samples captured by the hardware, which makes them suitable for real-time processing of audio and video streams.
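A sketch of attaching a video data output. Note that Apple recommends delivering sample buffers on a dedicated serial queue; the full example later in this post uses the main queue for simplicity, but the recommended setup looks roughly like this (the queue label is illustrative):

    AVCaptureVideoDataOutput *videoOutput = [AVCaptureVideoDataOutput new];
    // Sample buffers must be delivered on a serial queue
    dispatch_queue_t sampleQueue = dispatch_queue_create("com.example.video-samples", DISPATCH_QUEUE_SERIAL);
    [videoOutput setSampleBufferDelegate:self queue:sampleQueue];
    if ([session canAddOutput:videoOutput]) {
        [session addOutput:videoOutput];
    }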

  • AVCaptureConnection — capture connection

This class is what the connecting arrows between components in the figure above represent. Access to these connections gives the developer low-level control over the signal flow, such as disabling specific connections or limiting an audio connection to individual audio tracks.
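For instance, a sketch of looking up an output's video connection and disabling it (assuming the `videoOutput` from the previous sketch, after it has been added to a session):

    AVCaptureConnection *connection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
    connection.enabled = NO; // stop data from flowing through this connection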

  • AVCaptureVideoPreviewLayer — capture preview

This class is not in the figure above; it provides a live preview of the captured video. Its role for capture is similar to what AVPlayerLayer is for playback.

Capture session workflow

a. Check camera permission before using the camera:

    // Check whether the app has permission to use the camera
    AVAuthorizationStatus authStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    if (authStatus == AVAuthorizationStatusDenied){
        
        UIAlertController *alertC = [UIAlertController alertControllerWithTitle:@"Notice" message:@"Please allow 云眸 to access your camera under Settings > Privacy > Camera" preferredStyle:UIAlertControllerStyleAlert];
        
        UIAlertAction *alertA = [UIAlertAction actionWithTitle:@"OK" style:(UIAlertActionStyleDefault) handler:^(UIAlertAction * _Nonnull action) {
            
        }];
        
        [alertC addAction:alertA];
        [currentController presentViewController:alertC animated:YES completion:nil];
        return;
    }
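The snippet above only handles the Denied case. On first launch the status is AVAuthorizationStatusNotDetermined, where you should request access before building the session; a sketch extending the `if` above:

    else if (authStatus == AVAuthorizationStatusNotDetermined) {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            // The handler may run on an arbitrary queue; hop to the main queue for UI work
            dispatch_async(dispatch_get_main_queue(), ^{
                if (granted) {
                    // proceed to set up the capture session
                }
            });
        }];
        return;
    }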

b. Create the session

@property (nonatomic,strong) AVCaptureDevice *cam_back;

@property (nonatomic, strong) AVCaptureSession *session;

@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput;

@property (nonatomic, strong) AVCaptureVideoPreviewLayer *videoPreviewLayer;

    // Get the list of camera devices
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *deviceF;
    for (AVCaptureDevice *device in devices )
    {
        // Back camera
        if (self.isPositionBack) {
            if (device.position == AVCaptureDevicePositionBack) {
                deviceF = device;
                break;
            }
        }
        // Front camera
        else{
            if (device.position == AVCaptureDevicePositionFront) {
                deviceF = device;
                break;
            }
        }
    }
    self.cam_back = deviceF;
    // Input object that carries the camera's capture data
    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:deviceF error:nil];

    // Create the video data output. It produces video frames suitable for processing
    // with other media APIs; frames are delivered through the
    // captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
    AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
    // Set the sample buffer delegate and the dispatch queue it is called on
    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    // Create the metadata output (used below for face detection)
    AVCaptureMetadataOutput *metaout = [[AVCaptureMetadataOutput alloc] init];
    [metaout setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // Create the capture session
    self.session = [[AVCaptureSession alloc] init];
    
    // Wrap the configuration in beginConfiguration / commitConfiguration
    [self.session beginConfiguration];
    if ([self.session canAddInput:input]) {
        [self.session addInput:input];
    }
    
    if (self.isPositionBack) {
        // Set the capture quality: AVCaptureSessionPresetHigh / AVCaptureSessionPresetMedium / AVCaptureSessionPresetLow
        if ([self.session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
            [self.session setSessionPreset:AVCaptureSessionPresetHigh];
        }else{
            [self.session setSessionPreset:AVCaptureSessionPreset640x480];
        }
    }else{
        if ([self.session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
            [self.session setSessionPreset:AVCaptureSessionPreset640x480];
        }
    }
    // Add a tap gesture to get the CGPoint of the touch (used for focusing later)
    UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc]initWithTarget:self action:@selector(focusGesture:)];
    [currentController.view addGestureRecognizer:tapGesture];
    
    // Note: before changing device properties you must call lockForConfiguration:,
    // and call unlockForConfiguration when done; otherwise the app will crash
    NSError *error;
    if ([self.cam_back lockForConfiguration:&error]) {
        self.cam_back.subjectAreaChangeMonitoringEnabled = YES;
        // Automatic flash
        if ([self.cam_back isFlashModeSupported:AVCaptureFlashModeAuto]) {
            [self.cam_back setFlashMode:AVCaptureFlashModeAuto];
        }
        
        // Automatic white balance
        if ([self.cam_back isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeAutoWhiteBalance]) {
            [self.cam_back setWhiteBalanceMode:AVCaptureWhiteBalanceModeAutoWhiteBalance];
        }
        [self.cam_back unlockForConfiguration];
    }else{
        NSLog(@"Error while configuring the device: %@", error.localizedDescription);
    }
    // Configure the session's outputs
    // Video frame output
    if ([self.session canAddOutput:output]) {
        [self.session addOutput:output];
    }
    
    if ([self.session canAddOutput:metaout]) {
        [self.session addOutput:metaout];
    }
    [self.session commitConfiguration];
    
    // Set the pixel format for the video output
    NSString     *key           = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber     *value         = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [output setVideoSettings:videoSettings];
    self.videoDataOutput = output;
    
    if (!self.isPositionBack) {
        // Tell the metadata output which objects to report. AVMetadataObjectTypeFace asks
        // for faces; AVMetadataObjectTypeQRCode and others can be added too. When the video
        // stream contains a match, the captureOutput:didOutputMetadataObjects:fromConnection:
        // delegate method fires.
        [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];
        AVCaptureSession* session = (AVCaptureSession *)self.session;
        // The front camera must be mirrored, otherwise the preview is flipped
        for (AVCaptureVideoDataOutput* output in session.outputs) {
            for (AVCaptureConnection * av in output.connections) {
                // Check whether this connection supports mirroring
                if (av.supportsVideoMirroring) {
                    // Mirror and orient the video
                    av.videoOrientation = AVCaptureVideoOrientationPortrait;
                    av.videoMirrored = YES;
                }
            }
        }
    }else{
        AVCaptureSession* session = (AVCaptureSession *)self.session;
        // For the back camera only the orientation needs to be set
        for (AVCaptureVideoDataOutput* output in session.outputs) {
            for (AVCaptureConnection * av in output.connections) {
                if (av.supportsVideoMirroring) {
                    av.videoOrientation = AVCaptureVideoOrientationPortrait;
                }
            }
        }
    }
    
    // Create the preview layer; passing _session tells the layer what it will display
    _videoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
    // Preserve the aspect ratio and fill the layer's bounds
    _videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    CGFloat x = 0;
    CGFloat y = 0;
    CGFloat w = [UIScreen mainScreen].bounds.size.width;
    CGFloat h = [UIScreen mainScreen].bounds.size.height;
    _videoPreviewLayer.frame = CGRectMake(x, y, w, h);
    [currentController.view.layer insertSublayer:_videoPreviewLayer atIndex:0];
    
    // Start the session
    [_session startRunning];

c. Stopping, starting, and checking the session

    - (void)startRunning {
        [_session startRunning];
    }

    - (void)stopRunning {
        [_session stopRunning];
    }

    - (BOOL)isScanRunning {
        return [_session isRunning];
    }
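One caveat: startRunning (and stopRunning) are blocking calls, so Apple recommends invoking them off the main thread. A sketch, assuming a hypothetical dedicated serial queue property named sessionQueue:

    // @property (nonatomic, strong) dispatch_queue_t sessionQueue; // created once, e.g. in init
    - (void)startRunning {
        dispatch_async(self.sessionQueue, ^{
            [self.session startRunning]; // blocking; keep it off the main thread
        });
    }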

Other details (switching cameras, focusing)

a. Switching cameras

// The camera currently active in the capture session.
- (AVCaptureDevice *)activeCamera {                                         
    return self.activeVideoInput.device;
}
// The camera that is currently inactive
- (AVCaptureDevice *)inactiveCamera {                                      
    AVCaptureDevice *device = nil;
    if (self.cameraCount > 1) {
        if ([self activeCamera].position == AVCaptureDevicePositionBack) {  
            device = [self cameraWithPosition:AVCaptureDevicePositionFront];
        } else {
            device = [self cameraWithPosition:AVCaptureDevicePositionBack];
        }
    }
    return device;
}
// Switch between cameras
- (BOOL)switchCameras {

    if (![self canSwitchCameras]) {                                         
        return NO;
    }

    // 1. Get the currently unused camera and create a new input for it.
    NSError *error;
    AVCaptureDevice *videoDevice = [self inactiveCamera];                   
    AVCaptureDeviceInput *videoInput =
    [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    // 2. Swap the active input for the new one
    if (videoInput) {
        [self.captureSession beginConfiguration];    // begin atomic configuration, for thread safety

        [self.captureSession removeInput:self.activeVideoInput];            

        if ([self.captureSession canAddInput:videoInput]) {                 
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        } else { // Play it safe: if the new input cannot be added, keep using the old one
            [self.captureSession addInput:self.activeVideoInput];
        }

        [self.captureSession commitConfiguration];    // commit the atomic configuration

    } else {    // error handling
        [self.delegate deviceConfigurationFailedWithError:error];           
        return NO;
    }

    return YES;
}

- (BOOL)canSwitchCameras {                                                  
    return self.cameraCount > 1;
}

- (NSUInteger)cameraCount {                                                 
    return [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count];
}

// Get the AVCaptureDevice at the given position
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position { 
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {  
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
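Note that devicesWithMediaType: was deprecated in iOS 10. On newer systems the same lookup can be done with AVCaptureDeviceDiscoverySession; a sketch:

    AVCaptureDeviceDiscoverySession *discovery =
    [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                           mediaType:AVMediaTypeVideo
                                                            position:AVCaptureDevicePositionFront];
    AVCaptureDevice *frontCamera = discovery.devices.firstObject;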

b. Handling data in the delegate (taking a photo)

@property (nonatomic,copy) NSArray *currentMetadata;

typedef void (^ShutterImageBlock)(UIImage *image);

@property (nonatomic,copy) ShutterImageBlock callBack;

@interface VideoScanManager()
<
AVCaptureMetadataOutputObjectsDelegate,
AVCaptureVideoDataOutputSampleBufferDelegate
>
#pragma mark - - - AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    _currentMetadata = metadataObjects;
}

#pragma mark - - - AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (self.callBack) {
        self.callBack([self imageFromSampleBuffer:sampleBuffer]);
        self.callBack = nil;
    }
    if (self.currentMetadata.count > 0) {
        // face is the detected face object (AVMetadataFaceObject), converted into
        // preview-layer coordinates
        AVMetadataFaceObject *face = (AVMetadataFaceObject *)[_videoPreviewLayer transformedMetadataObjectForMetadataObject:self.currentMetadata.lastObject];
        // Process the face here
        // ...
    }
}
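With this setup, "taking a photo" simply means grabbing the next frame from the stream: assign callBack, and the sample buffer delegate above fires it once and clears it. A hypothetical trigger method:

    // Hypothetical helper: deliver the next video frame as a still image
    - (void)takePhotoWithCompletion:(ShutterImageBlock)completion {
        self.callBack = completion;
    }

Since the delegate was registered on the main queue, the block is also invoked on the main thread here.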

Extracting image data from the received sampleBuffer

// Create a UIImage from the sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
    // Get the Core Video image buffer for the media data in the CMSampleBuffer.
    // Note: this assumes the kCVPixelFormatType_32BGRA format set on the output above.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    
    // Get the number of bytes per row of the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the width and height of the pixel buffer
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    
    // Create a bitmap graphics context from the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);
    
    // Release the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    
    // Create a UIImage from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    
    // Release the Quartz image
    CGImageRelease(quartzImage);
   
    return (image);
}

c. Focusing

Manual focus: tap the screen to get the touch point, then call setFocusPointOfInterest: followed by setFocusMode: on the capture device. The order of those two calls must not be reversed.

_focusView is a UIView that highlights the tapped area.

Note ⚠️: wrap any device property changes in lockForConfiguration / unlockForConfiguration, otherwise the app will crash.

When initializing the session, add a tap gesture to capture the tapped CGPoint:

// Add a tap gesture to get the CGPoint of the touch
    UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc]initWithTarget:self action:@selector(focusGesture:)];
    [currentController.view addGestureRecognizer:tapGesture];
- (void)focusGesture:(UITapGestureRecognizer*)gesture{
    CGPoint point = [gesture locationInView:gesture.view];
    [self focusAtPoint:point];
}

- (void)focusAtPoint:(CGPoint)point{
    CGSize size = _videoPreviewLayer.bounds.size;
    // focusPointOfInterest uses a normalized coordinate space running from (0,0) at the
    // top-left to (1,1) at the bottom-right, defined relative to the sensor's native
    // (landscape) orientation. For a portrait preview the tapped point therefore maps to
    // x = point.y / layer height and y = 1 - point.x / layer width.
    CGPoint focusPoint = CGPointMake( point.y /size.height ,1 - point.x/size.width );
    
    if ([self.cam_back lockForConfiguration:nil]) {
        
        if ([self.cam_back isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            [self.cam_back setFocusPointOfInterest:focusPoint];
            [self.cam_back setFocusMode:AVCaptureFocusModeAutoFocus];
        }
        
        if ([self.cam_back isExposureModeSupported:AVCaptureExposureModeAutoExpose ]) {
            [self.cam_back setExposurePointOfInterest:focusPoint];
            // Exposure adjustment
            [self.cam_back setExposureMode:AVCaptureExposureModeAutoExpose];
        }
        
        [self.cam_back unlockForConfiguration];
        self.focusView.center = point;
        self.focusView.hidden = NO;
        [UIView animateWithDuration:0.2 animations:^{
            self.focusView.transform = CGAffineTransformMakeScale(1.25, 1.25);
        }completion:^(BOOL finished) {
            [UIView animateWithDuration:0.3 animations:^{
                self.focusView.transform = CGAffineTransformIdentity;
            } completion:^(BOOL finished) {
                self.focusView.hidden = YES;
            }];
        }];
    }
}

- (UIView *)focusView{
    if (!_focusView) {
        _focusView = [[UIView alloc]initWithFrame:CGRectMake(0, 0, 60, 60)];
        _focusView.layer.borderWidth = 1.0;
        _focusView.layer.borderColor = [UIColor yellowColor].CGColor;
        _focusView.hidden = YES;
    }
    return _focusView;
}
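As an aside, AVCaptureVideoPreviewLayer can do this coordinate conversion for you, taking videoGravity and orientation into account, which is less fragile than the hand-written mapping above:

    CGPoint focusPoint = [_videoPreviewLayer captureDevicePointOfInterestForPoint:point];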

Finally, don't forget to add the camera usage description to the app's Info.plist, otherwise the app will crash when it requests the camera.
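The key is NSCameraUsageDescription ("Privacy - Camera Usage Description" in Xcode); the description string below is only an example:

    <key>NSCameraUsageDescription</key>
    <string>This app uses the camera to take photos and video.</string>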


That concludes these study notes.
