Mos_Tec Tutorial : 001 - Video Recording in iOS

HELLO WORLD!

Our company is building a video-based social app. Since the product is built around video, recording and playback are naturally the key features, and the product manager also wants some flashy filters during recording, such as beautification and stickers. Although video social apps are hardly rare on the market, many developers are still unfamiliar with the capture-record-play pipeline. This series walks through capture, recording, composition, playback, and filters in that order, and will also take a look at how video filters are implemented on Android. Let's get straight to it.

Recording:

Essential pieces from AVFoundation, iOS's multimedia framework: AVCaptureSession, AVCaptureDevice, AVCaptureDeviceInput, AVCaptureVideoDataOutput, and AVCaptureVideoPreviewLayer.

On iOS, video recording boils down to three approaches:

1. UIImagePickerController for video capture. Very practical and very simple, but the capture UI cannot be customized and filters cannot be applied, so it is not an option for highly customized capture features (a sketch follows this list).
2. AVCaptureMovieFileOutput for outputting a movie file. This is basic capture-and-record to a file: filters still cannot be applied, but the capture UI can be customized, so straightforward capture projects can record video this way.
3. AVAssetWriterInputPixelBufferAdaptor for frame-by-frame writing. You can process each video frame at will, stacking filters and other effects.

This article explains recording with methods 2 and 3. All code for this series will be uploaded to GitHub once the series is complete; interested readers are welcome to download it and point out any shortcomings.
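For reference, here is a minimal sketch of method 1. This is my own illustration rather than code from this project: it assumes self is a view controller adopting UIImagePickerControllerDelegate and UINavigationControllerDelegate, and presentSystemCamera is a made-up name.

@import MobileCoreServices; // for kUTTypeMovie

// Method 1 sketch: let the system UI handle capture end to end.
-(void)presentSystemCamera
{
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return; // no camera available (e.g. the simulator)
    }
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.mediaTypes = @[(NSString *)kUTTypeMovie]; // capture video rather than stills
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}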

Step 1: Video Capture

Create a new project and a view controller, import AVFoundation, and declare some instance variables (those who prefer properties can use properties instead; either works fine):
@import AVFoundation;
@interface NormalCameraCaptureViewController ()
{
    AVCaptureSession * captureSession;

    AVCaptureDevice * videoCaptureDevice;
    AVCaptureDevice * audioCaptureDevice;

    AVCaptureDeviceInput * videoDeviceInput;
    AVCaptureDeviceInput * audioDeviceInput;

    AVCaptureVideoPreviewLayer* previewLayer;
}
@end
First, AVCaptureSession is the capture class: all audio and video capture work happens here. AVCaptureDevice represents a capture device, such as the front/back camera or the microphone. AVCaptureDeviceInput is the input class; it feeds the data captured by a capture device into the capture session for further processing. AVCaptureVideoDataOutput is the video frame output class: it delivers the raw video data frame by frame. Likewise, AVCaptureAudioDataOutput outputs the audio data. Capture itself is fairly simple and easy to follow.
Create the session:
-(void)createSession
{
    // captureSession
    captureSession = [[AVCaptureSession alloc]init];
    captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    // previewLayer
    previewLayer =  [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    dispatch_async(dispatch_get_main_queue(), ^{
        previewLayer.frame = self.view.layer.bounds;
        [self.view.layer insertSublayer:previewLayer atIndex:0];
    });
    [captureSession startRunning];
}
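A side note on startRunning: it is a blocking call, so in a real app Apple's guidance is to start the session away from the main thread. A small variant sketch:

    // -startRunning blocks until the session is live; dispatching it to a
    // background queue keeps the UI responsive during startup.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [captureSession startRunning];
    });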
The session has quite a few configurable parameters. sessionPreset is particularly important: it is preset information such as the capture resolution and input quality. The following options are available (see the API reference for details on each); a safer way to pick one is shown after the list:

NSString *const AVCaptureSessionPresetPhoto;
NSString *const AVCaptureSessionPresetHigh; 
NSString *const AVCaptureSessionPresetMedium; 
NSString *const AVCaptureSessionPresetLow; 
NSString *const AVCaptureSessionPreset352x288; 
NSString *const AVCaptureSessionPreset640x480; 
NSString *const AVCaptureSessionPreset1280x720; 
NSString *const AVCaptureSessionPreset1920x1080; 
NSString *const AVCaptureSessionPresetiFrame960x540; 
NSString *const AVCaptureSessionPresetiFrame1280x720; 
NSString *const AVCaptureSessionPresetInputPriority;
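Not every device supports every preset, so it is safer to check before assigning one. A small sketch (the fallback preset here is my own arbitrary choice):

    // Prefer 720p, but fall back if this device cannot deliver it.
    if ([captureSession canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
        captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    } else {
        captureSession.sessionPreset = AVCaptureSessionPresetMedium;
    }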
Requesting camera permission (note: since iOS 10 you must also add an NSCameraUsageDescription entry to Info.plist, and NSMicrophoneUsageDescription for the microphone below, or the app will be terminated when it requests access):
-(void)cameraPermission
{
    void (^requestCameraPermission)(void) = ^{
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            if (granted) {
                // userAllowUseCamera
                [self addCameraInputOutput];
            } else {
                // userNotAllowUseCamera
            }
        }];
    };
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    switch (status) {
        case AVAuthorizationStatusAuthorized:
            // allow
            [self addCameraInputOutput];
            break;
        case AVAuthorizationStatusNotDetermined:
            requestCameraPermission();
            break;
        case AVAuthorizationStatusDenied:
        case AVAuthorizationStatusRestricted:
        default:
            // not allow
            break;
    }
}
 
Adding the camera input:
-(void)addCameraInputOutput
{
    [captureSession beginConfiguration];

    // captureDevice
    NSArray *devices = [AVCaptureDevice devices];
    for (AVCaptureDevice *device in devices) {
        if ([device hasMediaType:AVMediaTypeVideo] && AVCaptureDevicePositionBack == device.position) {
            videoCaptureDevice = device;
            NSError *error;
            if ([device lockForConfiguration:&error]) {
                // lock succeeded: pin the capture frame rate to 30 fps
                device.activeVideoMinFrameDuration = CMTimeMake(1, 30);
                device.activeVideoMaxFrameDuration = CMTimeMake(1, 30);
                [device unlockForConfiguration];
            }
            break;
        }
    }

    if (!videoCaptureDevice) {
        // no camera available; bail out before creating an input with a nil device
        [captureSession commitConfiguration];
        return;
    }

    // deviceInput
    NSError * videoDeviceErr = nil;
    videoDeviceInput = [[AVCaptureDeviceInput alloc]initWithDevice:videoCaptureDevice error:&videoDeviceErr];

    // addInput
    if ([captureSession canAddInput:videoDeviceInput]) {
        [captureSession addInput:videoDeviceInput];
    }

    [captureSession commitConfiguration];
}
Requesting microphone permission:
-(void)microphonePermission{
    void (^requestMicrophonePermission)(void) = ^{
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted) {
            if (granted) {
                // userAllowUseMicrophone
                [self addAudioInputOutput];
            } else {
                // userNotAllowUseMicrophone
            }
        }];
    };
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
    switch (status) {
        case AVAuthorizationStatusAuthorized:
            // allow
            [self addAudioInputOutput];
            break;
        case AVAuthorizationStatusNotDetermined:
            requestMicrophonePermission();
            break;
        case AVAuthorizationStatusDenied:
        case AVAuthorizationStatusRestricted:
        default:
            // not allow
            break;
    }
}
Adding the microphone input:
-(void)addAudioInputOutput
{
    [captureSession beginConfiguration];
    // captureDevice
    NSArray *devices = [AVCaptureDevice devices];
    for (AVCaptureDevice *device in devices) {
        if ([device hasMediaType:AVMediaTypeAudio]) {
            audioCaptureDevice = device;
            break;
        }
    }
    if (!audioCaptureDevice) {
        // no microphone available; bail out before creating an input with a nil device
        [captureSession commitConfiguration];
        return;
    }
    // deviceInput

    NSError * audioDeviceErr = nil;
    audioDeviceInput = [[AVCaptureDeviceInput alloc]initWithDevice:audioCaptureDevice error:&audioDeviceErr];

    // addInput&Output    
    if ([captureSession canAddInput:audioDeviceInput]) {
        [captureSession addInput:audioDeviceInput];
    }
    [captureSession commitConfiguration];
}
Rotating and switching the camera:
- (void)reorientCamera:(AVCaptureVideoOrientation)orientation
{
    if (!captureSession) {
       return;
    }
    for (AVCaptureOutput *output in captureSession.outputs) {
        for (AVCaptureConnection *connection in output.connections) {
            if (connection.isVideoOrientationSupported) {
                connection.videoOrientation = orientation;
            }
        }
    }
}
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for ( AVCaptureDevice *device in devices )
        if ( device.position == position )
            return device;
    return nil;
}
// rotateCamera
-(void)rotateCamera
{
    if (!captureSession) {
        return;
    }
    NSArray *inputs = captureSession.inputs;
    for ( AVCaptureDeviceInput *input in inputs )
    {
        AVCaptureDevice *device = input.device;
        if ( [device hasMediaType:AVMediaTypeVideo] )
        {
            AVCaptureDevicePosition position = device.position;
            AVCaptureDevice *newCamera = nil;
            AVCaptureDeviceInput *newInput = nil;

            if (position == AVCaptureDevicePositionFront)
                newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
            else
                newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
            newInput = [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:nil];
            // beginConfiguration ensures that pending changes are not applied immediately
            [captureSession beginConfiguration];
            [captureSession removeInput:input];
            [captureSession addInput:newInput];
            // Changes take effect once the outermost commitConfiguration is invoked.
            [captureSession commitConfiguration];
            break;
        }
    }
    // always call this to get correct Orientation
    [self reorientCamera:AVCaptureVideoOrientationPortrait];
}
A note here: reorientCamera must be implemented and called, because the frames coming off the camera are not upright: the front camera's output is rotated 90° to the left and the back camera's 90° to the right. If I had to guess at the reason, it is probably the physical mounting orientation of the camera hardware inside the phone. With the code above in place you can capture video; run the app now and you will see the camera feed on screen. Nothing is being recorded yet, though, so for now you are just watching.
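To show how the pieces fit together, here is one possible wiring, a sketch under my own assumptions: it lives in the view controller from step 1, and relies on the permission callbacks to add the inputs once access is granted.

-(void)viewDidLoad
{
    [super viewDidLoad];
    // Build the session and preview layer first; the permission
    // callbacks then add the camera and microphone inputs.
    [self createSession];
    [self cameraPermission];
    [self microphonePermission];
}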

Step 2.1: Video Recording (Method 2)

The recording class AVCaptureMovieFileOutput records video and writes it to a file, and it is very simple to use. Add a declaration to the instance variables:
    // fileOutput
    AVCaptureMovieFileOutput * output;
Initialize it in the createSession method:
    // movieFileOutput
    output = [[AVCaptureMovieFileOutput alloc]init];
    if ([captureSession canAddOutput:output]) {
        [captureSession addOutput:output];
    }
Adopt AVCaptureFileOutputRecordingDelegate and implement its delegate methods (the ALAssetsLibrary used below also requires @import AssetsLibrary;). Here I want the finished recording to be saved to the local photo album automatically:
-(void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:outputFileURL]) {
        [library writeVideoAtPathToSavedPhotosAlbum:outputFileURL completionBlock:^(NSURL *assetURL, NSError *error){
            dispatch_async(dispatch_get_main_queue(), ^{
                if (error) {
                    // error
                }else
                {
                    // success
                }
            });
        }];
    }
    NSLog(@"recordEnd");
}
-(void)captureOutput:(AVCaptureFileOutput *)captureOutput didStartRecordingToOutputFileAtURL:(NSURL *)fileURL fromConnections:(NSArray *)connections
{
    NSLog(@"reocrdStarted");
}
Finally, add a record button and wire up starting and stopping the recording:
-(void)captureAction
{
    startCapture.selected = !startCapture.selected;
    if (startCapture.selected)
    {
        NSString * movieUrl = [NSHomeDirectory() stringByAppendingString:@"/Documents/001.m4v"];
        unlink([movieUrl UTF8String]);
        [output startRecordingToOutputFileURL:[NSURL fileURLWithPath:movieUrl] recordingDelegate:self];
    }
    else
    {
        [output stopRecording];
    }
}
One thing worth noting: if the directory for movieUrl does not exist, or a file with the same name already exists there, the recording will fail. So whenever you create the output path, call unlink() on it first to delete whatever is at that path and make sure no leftover file with the same name remains.

Step 2.2: Video Recording (Method 3)

Add audio and video stream outputs to the session. First declare two more instance variables, then adopt and implement the two delegate protocols:
    // dataOutput
    AVCaptureVideoDataOutput * videoDataOutput;
    AVCaptureAudioDataOutput * audioDataOutput;

<AVCaptureVideoDataOutputSampleBufferDelegate,
AVCaptureAudioDataOutputSampleBufferDelegate>
Add the data outputs at the same time as the corresponding video and audio inputs:
  // videoDataOutput
    dispatch_queue_t cameraQueue = dispatch_queue_create("mos_tec_video", DISPATCH_QUEUE_SERIAL);

    videoDataOutput = [[AVCaptureVideoDataOutput alloc]init];
    [videoDataOutput setSampleBufferDelegate:self queue:cameraQueue];

    if ([captureSession canAddOutput:videoDataOutput]) {
        [captureSession addOutput:videoDataOutput];
    }

 // audioDataOutput
    dispatch_queue_t audioQueue = dispatch_queue_create("mos_tec_audio", DISPATCH_QUEUE_SERIAL);

    audioDataOutput = [[AVCaptureAudioDataOutput alloc]init];
    [audioDataOutput setSampleBufferDelegate:self queue:audioQueue];

    if ([captureSession canAddOutput:audioDataOutput]) {
        [captureSession addOutput:audioDataOutput];
    }
Both data outputs deliver samples through the same delegate callback; compare captureOutput against your outputs to tell the audio and video sources apart:
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (captureOutput == audioDataOutput)
    {
        // audio
    }
    else
    {
        // video
    }
}
In this callback we receive the audio/video stream; the CMSampleBufferRef is the actual video (or audio) frame data. This data is the starting point for all filter-related work: once you understand it, you can see how to implement filters. First let's look at how a CMSampleBufferRef is structured; the API header describes it in detail:
/*!
 @function    CMSampleBufferCreate
 @abstract    Creates a CMSampleBuffer.
 @discussion  Array parameters (sampleSizeArray, sampleTimingArray) should have only one element if that same
 element applies to all samples. All parameters are copied; on return, the caller can release them,
 free them, reuse them or whatever.  On return, the caller owns the returned CMSampleBuffer, and
 must release it when done with it.
 Example of usage for in-display-order video frames:
 <ul> dataBuffer: contains 7 Motion JPEG frames
 <li> dataFormatDescription: describes Motion JPEG video
 <li> numSamples: 7
 <li> numSampleTimingEntries: 1
 <li> sampleTimingArray: one entry = {duration = 1001/30000, presentationTimeStamp = 0/30000, decodeTimeStamp = invalid }
 <li> numSampleSizeEntries: 7
 <li> sampleSizeArray: {105840, 104456, 103464, 116460, 100412, 94808, 120400}
 </ul>
 Example of usage for out-of-display-order video frames:
 <ul> dataBuffer: contains 6 H.264 frames in decode order (P2,B0,B1,I5,B3,B4)
 <li> dataFormatDescription: describes H.264 video
 <li> numSamples: 6
 <li> numSampleTimingEntries: 6
 <li> sampleTimingArray: 6 entries = {
 <ul> {duration = 1001/30000, presentationTimeStamp = 12012/30000, decodeTimeStamp = 10010/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 10010/30000, decodeTimeStamp = 11011/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 11011/30000, decodeTimeStamp = 12012/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 15015/30000, decodeTimeStamp = 13013/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 13013/30000, decodeTimeStamp = 14014/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 14014/30000, decodeTimeStamp = 15015/30000}}
 </ul>
 <li> numSampleSizeEntries: 6
 <li> sampleSizeArray: {10580, 1234, 1364, 75660, 1012, 988}
 </ul>
 Example of usage for compressed audio:
 <ul> dataBuffer: contains 24 compressed AAC packets
 <li> dataFormatDescription: describes 44.1kHz AAC audio
 <li> numSamples: 24
 <li> numSampleTimingEntries: 1
 <li> sampleTimingArray: one entry = {
 <ul> {duration = 1024/44100, presentationTimeStamp = 0/44100, decodeTimeStamp = invalid }}
 </ul>
 <li> numSampleSizeEntries: 24
 <li> sampleSizeArray:
 <ul> {191, 183, 208, 213, 202, 206, 209, 206, 204, 192, 202, 277,
 <li> 282, 240, 209, 194, 193, 197, 196, 198, 168, 199, 171, 194}
 </ul>
 </ul>
 Example of usage for uncompressed interleaved audio:
 <ul> dataBuffer: contains 24000 uncompressed interleaved stereo frames, each containing 2 Float32s =
 <ul> {{L,R},
 <li> {L,R},
 <li> {L,R}, ...}
 </ul>
 <li> dataFormatDescription: describes 48kHz Float32 interleaved audio
 <li> numSamples: 24000
 <li> numSampleTimingEntries: 1
 <li> sampleTimingArray: one entry = {
 <ul> {duration = 1/48000, presentationTimeStamp = 0/48000, decodeTimeStamp = invalid }}
 </ul>
 <li> numSampleSizeEntries: 1
 <li> sampleSizeArray: {8}
 </ul>
 Example of usage for uncompressed non-interleaved audio:
 <ul> dataBuffer: contains 24000 uncompressed non-interleaved stereo frames, each containing 2 (non-contiguous) Float32s =
 <ul> {{L,L,L,L,L,...},
 <li> {R,R,R,R,R,...}}
 </ul>
 <li> dataFormatDescription: describes 48kHz Float32 non-interleaved audio
 <li> numSamples: 24000
 <li> numSampleTimingEntries: 1
 <li> sampleTimingArray: one entry = {duration = 1/48000, presentationTimeStamp = 0/48000, decodeTimeStamp = invalid }
 <li> numSampleSizeEntries: 0
 <li> sampleSizeArray: NULL (because the samples are not contiguous)
 </ul>
 */
That is heavy reading even for me, so here it is in plain terms: a CMSampleBufferRef contains timing information, a data buffer, and a description of that data. If you are curious, try overlaying an image view on the current view controller, then grab the image out of the video CMSampleBufferRef (remember: the video stream, not the audio stream!) and display it in the image view:
        CVImageBufferRef buffer;
        buffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Lock the pixel buffer before touching its memory.
        CVPixelBufferLockBaseAddress(buffer, 0);
        uint8_t *base;
        size_t width, height, bytesPerRow;
        base = CVPixelBufferGetBaseAddress(buffer);
        width = CVPixelBufferGetWidth(buffer);
        height = CVPixelBufferGetHeight(buffer);
        bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

        // Wrap the BGRA pixel data in a bitmap context and snapshot it as a CGImage.
        CGColorSpaceRef colorSpace;
        CGContextRef cgContext;
        colorSpace = CGColorSpaceCreateDeviceRGB();
        cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);

        CGImageRef finalCGImage;
        finalCGImage = CGBitmapContextCreateImage(cgContext);
        CGContextRelease(cgContext);

        // Balance the lock now that we are done reading the buffer.
        CVPixelBufferUnlockBaseAddress(buffer, 0);

        dispatch_async(dispatch_get_main_queue(), ^{
            <#yourImageView#>.layer.contents = (__bridge id)finalCGImage;
            CGImageRelease(finalCGImage); // the layer retains its contents; drop our reference
        });
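One caveat about the snippet above: the CGBitmapContextCreate call assumes the pixel buffer arrives as 32-bit BGRA, but by default AVCaptureVideoDataOutput delivers biplanar YUV frames. When creating videoDataOutput in the setup code earlier, request BGRA explicitly:

    // Ask for BGRA frames so the bitmap-context code can read the buffer directly.
    videoDataOutput.videoSettings = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
    };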
And with that, the capture side is more or less covered. As for what we do with the CMSampleBufferRef we are now receiving, stay tuned for the next installment: writing video. Original post: http://mostec.cn-hangzhou.aliapp.com/mos_tec-tutorial-001-video-capture-in-ios/