iOS audio/video streams

Reposted from: http://mostec.cn-hangzhou.aliapp.com


In the previous installment we covered iOS video capture, recording with the AVCaptureMovieFileOutput class, and obtaining the audio/video streams with AVCaptureVideoDataOutput and AVCaptureAudioDataOutput. This installment covers how to write that data to a video file.

Recording the captured streams:

Last time we added audio and video outputs to the capture session, but nothing was ever written to disk; now we need to write the captured audio/video data out. First, add the following instance variables.

    // assetWriter

    AVAssetWriter * asserWriter;
    AVAssetWriterInput * videoWriterInput;
    AVAssetWriterInput * audioWriterInput;

    BOOL recording;
    CMTime lastSampleTime;
    NSString * videoFileUrl;

AVAssetWriter is the writing class: it outputs the video file to disk, and you specify its inputs. AVAssetWriterInput is that input class; once configured correctly, it accepts the raw audio/video samples. We also need a flag for the current recording state, the current recording time (a CMTime), and a file path for the recording.

step1: create the writer

-(void)createWriter
{

1. First, decide the width and height of the recorded video

    CGSize size = CGSizeMake(720, 1280);

2. Build the video file path and delete any existing file with the same name

    videoFileUrl = [NSHomeDirectory() stringByAppendingString:@"/Documents/test.mov"];
    unlink([videoFileUrl UTF8String]);

3. Create the AVAssetWriter object, along with an error pointer to collect error information

    NSError * error = nil;
    asserWriter = [[AVAssetWriter alloc]initWithURL:[NSURL fileURLWithPath:videoFileUrl] fileType:AVFileTypeQuickTimeMovie error:&error];

3.1-1 Here we use an assertion to fail fast. Assertions are a good habit, but frankly I find them annoying: when one actually fires it rarely tells you anything useful, and if you're careless you'll leave assertions in the code, forget to remove them, and pay for it after release.

    NSParameterAssert(asserWriter);

3.1-2 The friendlier approach is to check the error yourself; at least that gives the program a chance to react.

    if(error)
    {
        NSLog(@"error = %@", [error localizedDescription]);
    }

4. Configure the video input

4.1 First, the video compression settings: the bitrate

    // add video input
    NSDictionary * videoCompressionPropertys = @{AVVideoAverageBitRateKey:[NSNumber numberWithDouble:128.0 * 1024.0]};

4.2 The video output settings: H.264 encoding

    NSDictionary * videoSettings = @{AVVideoCodecKey:AVVideoCodecH264,
                                     AVVideoWidthKey:[NSNumber numberWithFloat:size.width],
                                     AVVideoHeightKey:[NSNumber numberWithFloat:size.height],
              AVVideoCompressionPropertiesKey:videoCompressionPropertys};

4.3 Initialize the video input

    videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

4.4 Assert that the input was created

    NSParameterAssert(videoWriterInput);

4.5 Mark the input as expecting real-time data

    videoWriterInput.expectsMediaDataInRealTime = YES;

5. Make sure the writer can actually accept the video input

    NSParameterAssert([asserWriter canAddInput:videoWriterInput]);              

    if ([asserWriter canAddInput:videoWriterInput])
        NSLog(@"I can add this input");
    else
        NSLog(@"i can't add this input");

6. Initialize the audio input

I'm no audio expert, so I won't walk through the following settings line by line; they configure mono AAC at 64 kbps with a 44.1 kHz sample rate.

    // Add audio input

    AudioChannelLayout acl;
    bzero( &acl, sizeof(acl));
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;

    NSDictionary* audioOutputSettings = nil;

    audioOutputSettings = [ NSDictionary dictionaryWithObjectsAndKeys:
                           [ NSNumber numberWithInt: kAudioFormatMPEG4AAC ], AVFormatIDKey,
                           [ NSNumber numberWithInt:64000], AVEncoderBitRateKey,
                           [ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
                           [ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
                           [ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
                           nil ];  


    audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings];
    audioWriterInput.expectsMediaDataInRealTime = YES;

7. The last step: add the inputs and log the writer's status

    // add input  
    [asserWriter addInput:audioWriterInput];
    [asserWriter addInput:videoWriterInput];

    NSLog(@"%ld",(long)asserWriter.status);

}

step2: add recording in the capture callback

Once the asset writer is created and its status is 0, recording can proceed. Recording happens inside the -(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection callback, where we have to tell the audio stream from the video stream and keep updating a session timestamp that we'll use when starting the write.

 @autoreleasepool {
        lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

        if (!recording) {
            return;
        }

        if (captureOutput == videoDataOutput) {

            // video

            if (asserWriter.status > AVAssetWriterStatusWriting)
            {
                NSLog(@"Warning: writer status is %ld", (long)asserWriter.status);

                if (asserWriter.status == AVAssetWriterStatusFailed)
                {
                    NSLog(@"Error: %@", asserWriter.error);
                    return;
                }
            }
            if ([videoWriterInput isReadyForMoreMediaData])
            {
                // writer buffer
                if (![videoWriterInput appendSampleBuffer:sampleBuffer])
                {
                    NSLog(@"unable to write video frame : %lld",lastSampleTime.value);
                }
                else
                {
                    NSLog(@"recorded frame time %lld",lastSampleTime.value/lastSampleTime.timescale);
                }
            }
        }
        else
        {
            // audio

            if (asserWriter.status > AVAssetWriterStatusWriting)
            {
                NSLog(@"Warning: writer status is %ld", (long)asserWriter.status);

                if (asserWriter.status == AVAssetWriterStatusFailed)
                {
                    NSLog(@"Error: %@", asserWriter.error);
                    return;
                }
            }

            if ([audioWriterInput isReadyForMoreMediaData])
            {
                // writer buffer
                if (![audioWriterInput appendSampleBuffer:sampleBuffer])
                {
                    NSLog(@"unable to write audio frame : %lld",lastSampleTime.value);
                }
                else
                {
                    NSLog(@"recorded audio frame time %lld",lastSampleTime.value/lastSampleTime.timescale);
                }
            }
        }
    }

The code here was written and tested by hand before being pasted in, so if you follow the steps in order it should work. Still, I'd urge you to treat it only as a reference and implement it yourself: AVAssetWriter is full of pitfalls, as all the defensive ifs and assertions above suggest, and you only really internalize them by stepping into each one. Reading other people's code alone won't get you there.

"Truly capable people aren't the ones who know everything, but the ones who can quickly get a handle on what they don't know. Nor are they the ones who make the fewest mistakes, but the ones who can quickly spot a mistake and find a way to fix it. Remember: nobody is born knowing everything; the experts you look up to are people who climbed out of one pit after another and remembered every one of them."

The final step: wire everything up to the record button. We also want to save the finished recording to the photo library when recording ends, to make testing easier.

    startCapture.selected = !startCapture.selected;
    if (startCapture.selected)
    {
        recording = YES;
        if (recording && asserWriter.status != AVAssetWriterStatusWriting)
        {
            [asserWriter startWriting];
            [asserWriter startSessionAtSourceTime:lastSampleTime];
        }
    }
    else
    {
        recording = NO;
        [asserWriter finishWritingWithCompletionHandler:^{
            ALAssetsLibrary * library = [[ALAssetsLibrary alloc] init];
            if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:videoFileUrl]]) {
                [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:videoFileUrl] completionBlock:^(NSURL *assetURL, NSError *error){

                    dispatch_async(dispatch_get_main_queue(), ^{

                        if (error) {
                            // error
                        }else
                        {
                            // success
                        }

                    });
                }];
            }
        }];
    }

You'll have noticed that I check asserWriter.status over and over. That's not just timid coding style; it's because this thing is extremely prone to the following error:

2016-02-17 15:57:52.443 videoCompresserDemo[1539:776248] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetWriter startWriting] Cannot call method when status is 2' (the status may also be 3, and the exact message can vary)
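
A minimal defensive sketch (my own addition, using the same asserWriter and lastSampleTime ivars defined above): check the status before calling startWriting, so a writer that has already started, failed, or finished is never started twice.

    // Hedged sketch: AVAssetWriterStatusUnknown (0) is the only state in which
    // -startWriting may legally be called.
    if (asserWriter.status == AVAssetWriterStatusUnknown)
    {
        if ([asserWriter startWriting])
        {
            [asserWriter startSessionAtSourceTime:lastSampleTime];
        }
        else
        {
            NSLog(@"startWriting failed: %@", asserWriter.error);
        }
    }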

With that, we've written the captured audio/video streams to a file using an asset writer. At this point you may feel this approach is needlessly convoluted compared with approach 2 from the previous installment. Careful readers will also notice an error when switching cameras: it comes from the orientation adjustment we made on the video output. Remember the issue from last time, the front camera output rotated 90° to the left and the back camera 90° to the right? The root of that problem is this:

/*!

 @property videoOrientation
 @abstract
    Indicates whether the video flowing through the connection should be rotated
    to a given orientation.
 @discussion
    This property is only applicable to AVCaptureConnection instances involving video.  If -isVideoOrientationSupported returns YES, videoOrientation may be set to rotate the video buffers being consumed by the connection's output.  Note that setting videoOrientation does not necessarily result in a physical rotation of video buffers.  For instance, a video connection to an AVCaptureMovieFileOutput handles orientation using a Quicktime track matrix.  In the AVCaptureStillImageOutput,orientation is handled using Exif tags.

*/

This property only takes effect when you use a movie file output, and some players ignore it anyway, because the pixels themselves are never rotated; we're merely asking the connection's output to honor a target orientation. In practice the method is unreliable (frankly, it rarely works well), so at minimum guard the call:

    if ([av isVideoOrientationSupported])
    {
        av.videoOrientation = orientation;
    }

The more rigorous among you will see that this guard digs its own hole: if some connections support it and others don't, what do you do with the ones that don't? And if there is a workaround, do you now also have to check whether orientation handling is even needed? It feels like a fight between the academic and the pragmatic (code elegance versus code efficiency). My position: don't rely on this system property at all; instead, correct the orientation of every output frame yourself. That's what GPUImage does, at least, unless you know better.
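
If you do want the file to play back upright without rotating any pixels, one hedged option (my own aside, not something this article's code does) is to set a transform on the writer input before writing starts; players that honor the track matrix will rotate the video at playback time.

    // Hedged sketch: must be set before -startWriting; rotates the track 90° at
    // playback time, while the stored pixels themselves stay untouched.
    videoWriterInput.transform = CGAffineTransformMakeRotation(M_PI_2);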

To be honest, if you don't need real-time effects while recording, none of this complexity is necessary: approach 2 from the previous installment, the movie file output, is a perfectly good recording solution. So if AVCaptureMovieFileOutput is that good, why bother with the troublesome AVAssetWriter at all?

Generating a video file from an array of images:

The title of this piece is video writing, so writing captured camera footage to a file is only part of the story. Let's now forget the camera for a moment: how about generating a video from a sequence of images?

Create a new view controller (name it whatever you like, lay it out however you like), prepare an array and a set of image assets, and we'll build a video slideshow. Add the following instance variables.

    NSArray * imageArr;

    AVAssetWriter * videoWriter;
    AVAssetWriterInput * writerInput;
    AVAssetWriterInputPixelBufferAdaptor * adaptor;

    NSString * fileUrl;

Initialize the images:

-(instancetype)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil])
    {
        imageArr = @[[UIImage imageNamed:@"welcome1"],
                     [UIImage imageNamed:@"welcome2"],
                     [UIImage imageNamed:@"welcome3"],
                     [UIImage imageNamed:@"welcome4"],
                     [UIImage imageNamed:@"welcome5"],
                     [UIImage imageNamed:@"welcome6"]];
    }
    return self;
}

step1: initialize the asset writer

File writing still uses an asset writer, initialized much as before. But where we previously fed it from videoDataOutput, our data source is now a series of images, so we need an adaptor that can stitch images into the writer input.

AVAssetWriterInputPixelBufferAdaptor always amuses me; the name is so long it sounds like a power adapter.

-(void)createMovieWriter
{
    fileUrl = [NSHomeDirectory() stringByAppendingString:@"/Documents/001.mov"];
    unlink([fileUrl UTF8String]);

    NSError * err = nil;
    videoWriter = [[AVAssetWriter alloc]initWithURL:[NSURL fileURLWithPath:fileUrl] fileType:AVFileTypeQuickTimeMovie error:&err];

    NSParameterAssert(videoWriter);

    if (err) 
    {
        NSLog(@"videoWriterFailed");
    }

    NSDictionary * videoSettings = @{AVVideoCodecKey:AVVideoCodecH264,
                                     AVVideoWidthKey:[NSNumber numberWithInt:640],
                                     AVVideoHeightKey:[NSNumber numberWithInt:640]};

    writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

    adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:nil];

    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);

    if ([videoWriter canAddInput:writerInput]) 
    {
        [videoWriter addInput:writerInput];
    }
}

AVAssetWriterInputPixelBufferAdaptor's job is to append CVPixelBufferRef video frames to the video. There is one parameter worth noting, sourcePixelBufferAttributes. From the documentation:

Pixel buffer attributes keys for the pixel buffer pool are defined in <CoreVideo/CVPixelBuffer.h>. To specify the pixel format type, the pixelBufferAttributes dictionary should contain a value for kCVPixelBufferPixelFormatTypeKey.  For example, use [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] for 8-bit-per-channel BGRA. See the discussion under appendPixelBuffer:withPresentationTime: for advice on choosing a pixel format.
Clients that do not need a pixel buffer pool for allocating buffers should set sourcePixelBufferAttributes to nil.
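
For reference, a hedged sketch of what passing attributes would look like if you did want the adaptor to manage a pixel buffer pool for you (this article simply passes nil, as in createMovieWriter above); the 640x640 size just mirrors the video settings used here.

    // Hedged sketch: ask the adaptor for a pool of 32BGRA buffers matching the output size.
    NSDictionary *pixelBufferAttributes = @{
        (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
        (__bridge NSString *)kCVPixelBufferWidthKey           : @640,
        (__bridge NSString *)kCVPixelBufferHeightKey          : @640
    };
    adaptor = [AVAssetWriterInputPixelBufferAdaptor
               assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
               sourcePixelBufferAttributes:pixelBufferAttributes];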

This part is fairly advanced, so just take note of it for now; I'm collecting material and will cover image processing in detail when we get to filters. Back to AVAssetWriterInputPixelBufferAdaptor: it has one key method:

- (BOOL)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime;

This method is where the magic happens: it appends a CVPixelBufferRef to the video at the given presentationTime. Which means we need to take a careful look at CMTime.

step1-1:CMTime

CMTime structs are non-opaque mutable structs representing times (either timestamps or durations).
CMTime is represented as a rational number, with a numerator (an int64_t value), and a denominator (an int32_t timescale). Conceptually, the timescale specifies the fraction of a second each unit in the numerator occupies. Thus if the timescale is 4, each unit represents a quarter of a second; if the timescale is 10, each unit represents a tenth of a second, and so on. In addition to a simple time value, a CMTime can represent non-numeric values: +infinity, -infinity, and indefinite. Using a flag CMTime indicates whether the time has been rounded at some point.
CMTimes contain an epoch number, which is usually set to 0, but can be used to distinguish unrelated timelines: for example, it could be incremented each time through a presentation loop, to differentiate between time N in loop 0 from time N in loop 1.
You can convert CMTimes to and from immutable CFDictionaries (see CFDictionaryRef) using CMTimeCopyAsDictionary and CMTimeMakeFromDictionary, for use in annotations and various Core Foundation containers.

You've probably seen old movie film: a long reel carrying frame after frame of images, and playing those frames back in sequence produces continuous motion. So how do we describe movie time precisely?

typedef struct { 
    CMTimeValue value;
    CMTimeScale timescale;
    CMTimeFlags flags;
    CMTimeEpoch epoch; 
} CMTime;

Apple defines this struct to represent movie time; that is the CMTime data structure, and the API documents each field:

value

The value of the CMTime. // which frame we are on
value/timescale = seconds.

timescale

The timescale of the CMTime. // how many units there are per second
value/timescale = seconds.

flags

A bitfield representing the flags set for the CMTime. // the time's state
For example, kCMTimeFlags_Valid. See CMTime Flags for possible values.

epoch

The epoch of the CMTime.
You use the epoch to differentiate between equal timestamps that are actually different because of looping, multi-item sequencing, and so on.
The epoch is used during comparison: greater epochs happen after lesser ones. Addition or subtraction is only possible within a single epoch, however, since the epoch length may be unknown or variable.
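
A quick worked example (my own illustration): with a timescale of 60, value counts sixtieths of a second, so frame numbers and timestamps line up neatly for a 60 fps video.

    // CMTime arithmetic at timescale 60, i.e. 60 units per second.
    CMTime frameDuration = CMTimeMake(1, 60);    // 1/60 s, one frame of a 60 fps video
    CMTime tenSeconds    = CMTimeMake(600, 60);  // 600/60 = 10 s
    CMTime frame90       = CMTimeMake(90, 60);   // frame 90 lands 1.5 s into the video
    NSLog(@"%f %f %f",
          CMTimeGetSeconds(frameDuration),
          CMTimeGetSeconds(tenSeconds),
          CMTimeGetSeconds(frame90));            // 0.016667 10.000000 1.500000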

Now that you understand CMTime, you can determine the timestamp of every frame you append. So what is a CVPixelBufferRef? For now you don't need a deep understanding; this is enough:

A Core Video pixel buffer is an image buffer that holds pixels in main memory. Applications generating frames, compressing or decompressing video, or using Core Image can all make use of Core Video pixel buffers.

In iOS video processing, a CVPixelBuffer plays much the same role that a CGImageRef plays for still images. Also:

/*!
    @typedef CVPixelBufferRef
    @abstract   Based on the image buffer type. The pixel buffer implements the memory storage for an image buffer.
*/

typedef CVImageBufferRef CVPixelBufferRef;

A CVPixelBuffer is, loosely speaking, the frame image we want to append (not a rigorous statement, but an intuitive one).

step2:startAppendImage

- (BOOL)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime;

We know this method writes an image into the video, but where does that image come from? On iOS the familiar image class is UIImage, so how do we turn a UIImage into the CVPixelBuffer we need?

First define a method that returns a CVPixelBuffer, taking a CGImageRef as the input:

-(CVPixelBufferRef)imageToPixelBuffer:(CGImageRef)image

Start by declaring a CVPixelBuffer:

    CVPixelBufferRef pixelBuffer = NULL;

Creating a CVPixelBuffer is a bit involved:

CVReturn CVPixelBufferCreate ( CFAllocatorRef allocator, size_t width, size_t height, OSType pixelFormatType, CFDictionaryRef pixelBufferAttributes, CVPixelBufferRef _Nullable *pixelBufferOut );

The return value is a CVReturn code that tells us whether creation succeeded; if it failed, the code tells us why. The possible values are:

/*
     kCVReturnSuccess                         = 0,
     kCVReturnFirst                           = -6660,

     kCVReturnError                           = kCVReturnFirst,
     kCVReturnInvalidArgument                 = -6661,
     kCVReturnAllocationFailed                = -6662,
     kCVReturnUnsupported                     = -6663,

     // DisplayLink related errors
     kCVReturnInvalidDisplay                  = -6670,
     kCVReturnDisplayLinkAlreadyRunning       = -6671,
     kCVReturnDisplayLinkNotRunning           = -6672,
     kCVReturnDisplayLinkCallbacksNotSet      = -6673,

     // Buffer related errors
     kCVReturnInvalidPixelFormat              = -6680,
     kCVReturnInvalidSize                     = -6681,
     kCVReturnInvalidPixelBufferAttributes    = -6682,
     kCVReturnPixelBufferNotOpenGLCompatible  = -6683,
     kCVReturnPixelBufferNotMetalCompatible   = -6684,

     // Buffer Pool related errors
     kCVReturnWouldExceedAllocationThreshold  = -6689,
     kCVReturnPoolAllocationFailed            = -6690,
     kCVReturnInvalidPoolAttributes           = -6691,

     kCVReturnLast                            = -6699
*/

The list of errors looks long, but as you develop you'll gradually run into them one by one.

CVPixelBufferCreate takes six parameters; the API documents them fairly thoroughly:

allocator

The allocator to use to create the pixel buffer. Pass NULL to specify the default allocator.

width

Width of the pixel buffer, in pixels.

height

Height of the pixel buffer, in pixels.

pixelFormatType

The pixel format identified by its respective four-character code (type OSType).

pixelBufferAttributes

A dictionary with additional attributes for a pixel buffer. This parameter is optional. See Pixel Buffer Attribute Keys for more details.

pixelBufferOut

On output, the newly created pixel buffer. Ownership follows the The Create Rule.

So creation looks like this:

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    int width = 640;
    int height = 640;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pixelBuffer);

The pixel format is the most complicated parameter:

 kCVPixelFormatType_1Monochrome    = 0x00000001, /* 1 bit indexed */
  kCVPixelFormatType_2Indexed       = 0x00000002, /* 2 bit indexed */
  kCVPixelFormatType_4Indexed       = 0x00000004, /* 4 bit indexed */
  kCVPixelFormatType_8Indexed       = 0x00000008, /* 8 bit indexed */
  kCVPixelFormatType_1IndexedGray_WhiteIsZero = 0x00000021, /* 1 bit indexed gray, white is zero */
  kCVPixelFormatType_2IndexedGray_WhiteIsZero = 0x00000022, /* 2 bit indexed gray, white is zero */
  kCVPixelFormatType_4IndexedGray_WhiteIsZero = 0x00000024, /* 4 bit indexed gray, white is zero */
  kCVPixelFormatType_8IndexedGray_WhiteIsZero = 0x00000028, /* 8 bit indexed gray, white is zero */
  kCVPixelFormatType_16BE555        = 0x00000010, /* 16 bit BE RGB 555 */
  kCVPixelFormatType_16LE555        = 'L555',     /* 16 bit LE RGB 555 */
  kCVPixelFormatType_16LE5551       = '5551',     /* 16 bit LE RGB 5551 */
  kCVPixelFormatType_16BE565        = 'B565',     /* 16 bit BE RGB 565 */
  kCVPixelFormatType_16LE565        = 'L565',     /* 16 bit LE RGB 565 */
  kCVPixelFormatType_24RGB          = 0x00000018, /* 24 bit RGB */
  kCVPixelFormatType_24BGR          = '24BG',     /* 24 bit BGR */
  kCVPixelFormatType_32ARGB         = 0x00000020, /* 32 bit ARGB */
  kCVPixelFormatType_32BGRA         = 'BGRA',     /* 32 bit BGRA */
  kCVPixelFormatType_32ABGR         = 'ABGR',     /* 32 bit ABGR */
  kCVPixelFormatType_32RGBA         = 'RGBA',     /* 32 bit RGBA */
  kCVPixelFormatType_64ARGB         = 'b64a',     /* 64 bit ARGB, 16-bit big-endian samples */
  kCVPixelFormatType_48RGB          = 'b48r',     /* 48 bit RGB, 16-bit big-endian samples */
  kCVPixelFormatType_32AlphaGray    = 'b32a',     /* 32 bit AlphaGray, 16-bit big-endian samples, black is zero */
  kCVPixelFormatType_16Gray         = 'b16g',     /* 16 bit Grayscale, 16-bit big-endian samples, black is zero */
  kCVPixelFormatType_30RGB          = 'R10k',     /* 30 bit RGB, 10-bit big-endian samples, 2 unused padding bits (at least significant end). */
  kCVPixelFormatType_422YpCbCr8     = '2vuy',     /* Component Y'CbCr 8-bit 4:2:2, ordered Cb Y'0 Cr Y'1 */
  kCVPixelFormatType_4444YpCbCrA8   = 'v408',     /* Component Y'CbCrA 8-bit 4:4:4:4, ordered Cb Y' Cr A */
  kCVPixelFormatType_4444YpCbCrA8R  = 'r408',     /* Component Y'CbCrA 8-bit 4:4:4:4, rendering format. full range alpha, zero biased YUV, ordered A Y' Cb Cr */
  kCVPixelFormatType_4444AYpCbCr8   = 'y408',     /* Component Y'CbCrA 8-bit 4:4:4:4, ordered A Y' Cb Cr, full range alpha, video range Y'CbCr. */
  kCVPixelFormatType_4444AYpCbCr16  = 'y416',     /* Component Y'CbCrA 16-bit 4:4:4:4, ordered A Y' Cb Cr, full range alpha, video range Y'CbCr, 16-bit little-endian samples. */
  kCVPixelFormatType_444YpCbCr8     = 'v308',     /* Component Y'CbCr 8-bit 4:4:4 */
  kCVPixelFormatType_422YpCbCr16    = 'v216',     /* Component Y'CbCr 10,12,14,16-bit 4:2:2 */
  kCVPixelFormatType_422YpCbCr10    = 'v210',     /* Component Y'CbCr 10-bit 4:2:2 */
  kCVPixelFormatType_444YpCbCr10    = 'v410',     /* Component Y'CbCr 10-bit 4:4:4 */
  kCVPixelFormatType_420YpCbCr8Planar = 'y420',   /* Planar Component Y'CbCr 8-bit 4:2:0.  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrPlanar struct */
  kCVPixelFormatType_420YpCbCr8PlanarFullRange    = 'f420',   /* Planar Component Y'CbCr 8-bit 4:2:0, full range.  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrPlanar struct */
  kCVPixelFormatType_422YpCbCr_4A_8BiPlanar = 'a2vy', /* First plane: Video-range Component Y'CbCr 8-bit 4:2:2, ordered Cb Y'0 Cr Y'1; second plane: alpha 8-bit 0-255 */
  kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
  kCVPixelFormatType_420YpCbCr8BiPlanarFullRange  = '420f', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma=[0,255] chroma=[1,255]).  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */ 
  kCVPixelFormatType_422YpCbCr8_yuvs = 'yuvs',     /* Component Y'CbCr 8-bit 4:2:2, ordered Y'0 Cb Y'1 Cr */
  kCVPixelFormatType_422YpCbCr8FullRange = 'yuvf', /* Component Y'CbCr 8-bit 4:2:2, full range, ordered Y'0 Cb Y'1 Cr */
  kCVPixelFormatType_OneComponent8  = 'L008',     /* 8 bit one component, black is zero */
  kCVPixelFormatType_TwoComponent8  = '2C08',     /* 8 bit two component, black is zero */
  kCVPixelFormatType_OneComponent16Half  = 'L00h',     /* 16 bit one component IEEE half-precision float, 16-bit little-endian samples */
  kCVPixelFormatType_OneComponent32Float = 'L00f',     /* 32 bit one component IEEE float, 32-bit little-endian samples */
  kCVPixelFormatType_TwoComponent16Half  = '2C0h',     /* 16 bit two component IEEE half-precision float, 16-bit little-endian samples */
  kCVPixelFormatType_TwoComponent32Float = '2C0f',     /* 32 bit two component IEEE float, 32-bit little-endian samples */
  kCVPixelFormatType_64RGBAHalf          = 'RGhA',     /* 64 bit RGBA IEEE half-precision float, 16-bit little-endian samples */
  kCVPixelFormatType_128RGBAFloat        = 'RGfA',     /* 128 bit RGBA IEEE float, 32-bit little-endian samples */

These pixel formats matter a great deal; we'll cover them in detail in the filter installments. If you're curious, look them up now, or try swapping in one of the formats above and see what happens (you can get a black-and-white image, for instance).

Always check the return status after creation. Here, notably, an assertion really is the right tool, yes, an assertion, because if creation truly failed there is no point in continuing.

    NSParameterAssert(status == kCVReturnSuccess && pixelBuffer != NULL);

Now we have a CVPixelBuffer, but so far it's just a block of memory; we still have to fill it with content:

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef context = CGBitmapContextCreate(pxdata, width,height, 8, 4*width, rgbColorSpace,kCGImageAlphaNoneSkipFirst);

    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));

    CGContextDrawImage(context, CGRectMake(0, 0,CGImageGetWidth(image),CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

That fills the memory we allocated; just return pixelBuffer and the method is done. If the Core Graphics code above is unclear, I'll write a separate Core Graphics tutorial later.

Now that we know how to convert a UIImage into a pixel buffer, we can start appending. I prepared six images and want a 60-second, 60 fps video that switches to a new image every 10 seconds. There isn't much to explain here except the memory management: video composition is memory-hungry, this loop touches 3600 frames' worth of image data, and if you're careless you'll watch the simulator balloon to 1 GB of memory or the app crash outright on a device.

-(void)createMovieFileWithImageSequence
{
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    for (int i = 0; i < 60 * 60; i ++)
    {
        @autoreleasepool {
            CGImageRef inputImage = [[imageArr objectAtIndex:i/600]CGImage];

            if (writerInput.readyForMoreMediaData)
            {
                [self appendNewFrame:inputImage frame:i];
            }
            else
            {
                i--;
            }
        }
    }

    [writerInput markAsFinished];
    [videoWriter finishWritingWithCompletionHandler:^{

        ALAssetsLibrary * library = [[ALAssetsLibrary alloc] init];
        if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:fileUrl]])
        {
            [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:fileUrl] completionBlock:^(NSURL *assetURL, NSError *error){
                dispatch_async(dispatch_get_main_queue(), ^{

                    if (error)
                    {
                        // error
                    }else
                    {
                        // success
                        UIAlertView * aleart = [[UIAlertView alloc]initWithTitle:@"saved" message:nil delegate:nil cancelButtonTitle:@"ok" otherButtonTitles:nil, nil];
                        [aleart show];
                    }
                });
            }];
        }
    }];
}
-(void)appendNewFrame:(CGImageRef)inputImage frame:(int)frame
{
    NSLog(@"frameTime::::%d",frame);

    CVPixelBufferRef pixelBuffer = [self imageToPixelBuffer:inputImage];
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(frame, 60)];
    CFRelease(pixelBuffer);
}
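
The loop above retries with i-- whenever the input isn't ready, which amounts to busy-waiting. As an aside (a hedged sketch, not what the original code does), AVAssetWriterInput also offers a pull model via requestMediaDataWhenReadyOnQueue:usingBlock: that could replace the loop; frameIndex and the queue label are placeholder names of my own.

    // Hedged sketch of the pull model: AVFoundation invokes the block whenever
    // writerInput can accept more frames, so there is no need to busy-wait.
    __block int frameIndex = 0;
    dispatch_queue_t frameQueue = dispatch_queue_create("mos_tec_frames", 0);
    [writerInput requestMediaDataWhenReadyOnQueue:frameQueue usingBlock:^{
        while (writerInput.readyForMoreMediaData && frameIndex < 60 * 60) {
            CGImageRef inputImage = [[imageArr objectAtIndex:frameIndex / 600] CGImage];
            [self appendNewFrame:inputImage frame:frameIndex];
            frameIndex++;
        }
        if (frameIndex >= 60 * 60) {
            [writerInput markAsFinished];
            // finishWritingWithCompletionHandler: would follow here, exactly as above.
        }
    }];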

With that we've merged an image sequence into a single video. The quicker among you can already see where this is going: if we can get at a video's frame data, and we can compose frames into a new video ourselves, then we can process each captured frame and write it into a new file, right? Yes. But that isn't the next post; that's the exciting filter part. Before filters, I want the pipeline to be complete, so next time I'll cover playing video and streaming media (HLS) on iOS.

 

Mos_Tec Tutorial : 001 – Video Capture in iOS

HELLO WORLD!

My company is building a video-based social app; since it's video social, recording and playback are the core features, and the product manager also wants fancy real-time filters while recording: beautification, stickers, and so on. There are plenty of video social apps on the market, yet many developers still aren't very familiar with the record-process-play pipeline. This series walks through capture, recording, composition, playback, and filters in that order, and will also touch on how video filters are implemented on Android. Let's get straight to it.

Recording:

Required framework: AVFoundation, the iOS multimedia library

Key classes: AVCaptureSession, AVCaptureDevice, AVCaptureDeviceInput, AVCaptureVideoDataOutput, AVCaptureVideoPreviewLayer

On iOS, video recording can basically be implemented in three ways:

1. UIImagePickerController for shooting video

2. AVCaptureMovieFileOutput for writing a movie file

3. AVAssetWriterInputPixelBufferAdaptor for frame-by-frame video writing

The first approach is very practical and very simple, but the capture UI cannot be customized and no filters can be applied, so it won't do for a highly customized capture feature (a minimal sketch of it follows just below). The second approach is basic capture-and-record: it outputs a movie file and, while it can't apply filters, it does allow a custom capture UI, so straightforward recording projects can use it. The third approach writes video frame by frame, which lets you process frames freely and layer on filter effects. This article covers approaches 2 and 3. All the code for this series will be uploaded to GitHub once it's complete; feel free to download it, and please point out anything that could be better.
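
Since approach 1 isn't covered again in this series, here is a minimal hedged sketch of it for completeness, assuming a view controller that adopts UINavigationControllerDelegate and UIImagePickerControllerDelegate; presentSystemCamera is a name made up for this illustration.

@import MobileCoreServices; // for kUTTypeMovie

// Hedged sketch of approach 1: let the system camera UI handle the recording.
-(void)presentSystemCamera
{
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return; // no camera available (e.g. the simulator)
    }
    UIImagePickerController * picker = [[UIImagePickerController alloc]init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.mediaTypes = @[(__bridge NSString *)kUTTypeMovie]; // record movies, not photos
    picker.delegate = self; // the recorded file URL arrives through the delegate callback
    [self presentViewController:picker animated:YES completion:nil];
}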

step1: video capture

Create a new project and a view controller, import AVFoundation, and declare the instance variables below (use properties instead if you prefer; it really doesn't matter).

@import AVFoundation;
@interface NormalCameraCaptureViewController ()
{
    AVCaptureSession * captureSession;

    AVCaptureDevice * videoCaptureDevice;
    AVCaptureDevice * audioCaptureDevice;

    AVCaptureDeviceInput * videoDeviceInput;
    AVCaptureDeviceInput * audioDeviceInput;

    AVCaptureVideoPreviewLayer* previewLayer;
}
@end

AVCaptureSession is the audio/video capture class; all capture work happens inside it. AVCaptureDevice represents a capture device, such as the front or back camera or the microphone. AVCaptureDeviceInput is the input class, which hands the data captured by a device over to the capture session for further work. AVCaptureVideoDataOutput is the video data output class: it delivers the raw video frame by frame. Likewise, AVCaptureAudioDataOutput is the audio data output class.

Video capture itself is relatively simple and easy to follow.

Create the session
-(void)createSession
{
    // captureSession
    captureSession = [[AVCaptureSession alloc]init];
    captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    // previewLayer
    previewLayer =  [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    dispatch_async(dispatch_get_main_queue(), ^{
        previewLayer.frame = self.view.layer.bounds;
        [self.view.layer insertSublayer:previewLayer atIndex:0];
    });
    [captureSession startRunning];
}

The session has quite a few configurable parameters; sessionPreset is the important one, presetting the capture resolution and input format. The following presets are available (see the API for the details of each):


NSString *const AVCaptureSessionPresetPhoto;
NSString *const AVCaptureSessionPresetHigh; 
NSString *const AVCaptureSessionPresetMedium; 
NSString *const AVCaptureSessionPresetLow; 
NSString *const AVCaptureSessionPreset352x288; 
NSString *const AVCaptureSessionPreset640x480; 
NSString *const AVCaptureSessionPreset1280x720; 
NSString *const AVCaptureSessionPreset1920x1080; 
NSString *const AVCaptureSessionPresetiFrame960x540; 
NSString *const AVCaptureSessionPresetiFrame1280x720; 
NSString *const AVCaptureSessionPresetInputPriority;
Request camera permission:
-(void)cameraPermission
{
    void (^requestCameraPermission)(void) = ^{
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            if (granted) {
                // userAllowUseCamera
                [self addCameraInputOutput];
            } else {
                // userNotAllowUseCamera
            }
        }];
    };
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    switch (status) {
        case AVAuthorizationStatusAuthorized:
            // allow
            [self addCameraInputOutput];
            break;
        case AVAuthorizationStatusNotDetermined:
            requestCameraPermission();
            break;
        case AVAuthorizationStatusDenied:
        case AVAuthorizationStatusRestricted:
        default:
            // not allow
            break;
    }
}

 

Add the camera:
-(void)addCameraInputOutput
{
    [captureSession beginConfiguration];

    // captureDevice
    NSArray *devices = [AVCaptureDevice devices];
    for (AVCaptureDevice *device in devices) {
        if ([device hasMediaType:AVMediaTypeVideo] && AVCaptureDevicePositionBack == device.position) {
            videoCaptureDevice = device;
            NSError *error;
            [device lockForConfiguration:&error];
            device.activeVideoMinFrameDuration = CMTimeMake(1, 30);
            device.activeVideoMaxFrameDuration = CMTimeMake(1, 30);
            [device unlockForConfiguration];
            break;
        }
    }

    if (!videoCaptureDevice) {
        // no camera
    }

    // deviceInput
    NSError * videoDeviceErr = nil;
    videoDeviceInput = [[AVCaptureDeviceInput alloc]initWithDevice:videoCaptureDevice error:&videoDeviceErr];

    // addInput
    if ([captureSession canAddInput:videoDeviceInput]) {
        [captureSession addInput:videoDeviceInput];
    }

    [captureSession commitConfiguration];
}
Request microphone permission:
-(void)micorphonePermission{
    void (^requestMicorphonePermission)(void) = ^{
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio
completionHandler:^(BOOL granted) {
            if (granted) {                
                // userAllowUseMicorphone
                [self addAudioInputOutput];
            } else {
                // userNotAllowUseMicorphone
            }
        }];
    };
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
    switch (status) {
        case AVAuthorizationStatusAuthorized:
            // allow
            [self addAudioInputOutput];
            break;
        case AVAuthorizationStatusNotDetermined:
            requestMicorphonePermission();
            break;
        case AVAuthorizationStatusDenied:
        case AVAuthorizationStatusRestricted:
        default:
            // not allow
            break;
    }
}
Add the microphone:
-(void)addAudioInputOutput
{
    [captureSession beginConfiguration];
    // captureDevice
    NSArray *devices = [AVCaptureDevice devices];
    for (AVCaptureDevice *device in devices) {
        if ([device hasMediaType:AVMediaTypeAudio]) {
            audioCaptureDevice = device;
        }
    }
    if (!audioCaptureDevice) {
        // no mic
    }
    // deviceInput

    NSError * audioDeviceErr = nil;
    audioDeviceInput = [[AVCaptureDeviceInput alloc]initWithDevice:audioCaptureDevice error:&audioDeviceErr];

    // addInput&Output    
    if ([captureSession canAddInput:audioDeviceInput]) {
        [captureSession addInput:audioDeviceInput];
    }
    [captureSession commitConfiguration];
}
Rotating / switching the camera:
- (void)reorientCamera:(AVCaptureVideoOrientation)orientation
{
    if (!captureSession) {
       return;
    }
    AVCaptureSession* session = (AVCaptureSession *)captureSession;
    for (AVCaptureVideoDataOutput* output in session.outputs) {
        for (AVCaptureConnection * av in output.connections) {
            av.videoOrientation = orientation;
        }
    }
}
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for ( AVCaptureDevice *device in devices )
        if ( device.position == position )
            return device;
    return nil;
}
// rotateCamera
-(void)rotateCamera
{
    if (!captureSession) {
        return;
    }
    NSArray *inputs = captureSession.inputs;
    for ( AVCaptureDeviceInput *input in inputs )
    {
        AVCaptureDevice *device = input.device;
        if ( [device hasMediaType:AVMediaTypeVideo] )
        {
            AVCaptureDevicePosition position = device.position;
            AVCaptureDevice *newCamera = nil;
            AVCaptureDeviceInput *newInput = nil;

            if (position == AVCaptureDevicePositionFront)
                newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
            else
                newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
            newInput = [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:nil];
            // beginConfiguration ensures that pending changes are not applied immediately
            [captureSession beginConfiguration];
            [captureSession removeInput:input];
            [captureSession addInput:newInput];
            // Changes take effect once the outermost commitConfiguration is invoked.
            [captureSession commitConfiguration];
            break;
        }
    }
    // always call this to get correct Orientation
    [self reorientCamera:AVCaptureVideoOrientationPortrait];
}

One thing to note here: reorientCamera must be implemented and called, because the raw camera output is not upright; the front camera comes out rotated 90° to the left and the back camera 90° to the right. My guess at the reason is simply the physical orientation in which the sensor is mounted in the phone.

With the code above you can capture the camera feed; run it and you'll see the live picture on screen. Nothing is being recorded yet, though, so for now you're just looking at it.

step2.1: video recording (approach 2)

The recording class AVCaptureMovieFileOutput records video straight to a file and is very simple to use.

Add a declaration to the instance variables:

    // fileOutput
    AVCaptureMovieFileOutput * output;

Initialize it in the create-session method:

    // movieFileOutput
    output = [[AVCaptureMovieFileOutput alloc]init];
    if ([captureSession canAddOutput:output]) {
        [captureSession addOutput:output];
    }

Adopt AVCaptureFileOutputRecordingDelegate and implement its delegate methods; here I want the finished recording to be saved to the photo library automatically when recording stops.

-(void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error
{
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:outputFileURL]) {
        [library writeVideoAtPathToSavedPhotosAlbum:outputFileURL completionBlock:^(NSURL *assetURL, NSError *error){
            dispatch_async(dispatch_get_main_queue(), ^{
                if (error) {
                    // error
                }else
                {
                    // success
                }
            });
        }];
    }
    NSLog(@"recordEnd");
}
-(void)captureOutput:(AVCaptureFileOutput *)captureOutput didStartRecordingToOutputFileAtURL:(NSURL *)fileURL fromConnections:(NSArray *)connections
{
    NSLog(@"reocrdStarted");
}

Finally, add a record button that starts and stops recording:

-(void)captureAction
{
    startCapture.selected = !startCapture.selected;
    if (startCapture.selected)
    {
        NSString * movieUrl = [NSHomeDirectory() stringByAppendingString:@"/Documents/001.m4v"];
        unlink([movieUrl UTF8String]);
        [output startRecordingToOutputFileURL:[NSURL fileURLWithPath:movieUrl] recordingDelegate:self];
    }
    else
    {
        [output stopRecording];
    }
}

Note that if the directory in movieUrl doesn't exist, or a file with the same name is already there, recording will fail. So every time you build an output path, call unlink() to clear it first and make sure no leftover file gets in the way. (An NSFileManager-based alternative is sketched below.)
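
Equivalently (a hedged aside, not from the original), NSFileManager can do the same cleanup if you prefer Foundation over the C call:

    // Remove any leftover file at the output path before recording starts.
    NSString * movieUrl = [NSHomeDirectory() stringByAppendingString:@"/Documents/001.m4v"];
    if ([[NSFileManager defaultManager] fileExistsAtPath:movieUrl]) {
        [[NSFileManager defaultManager] removeItemAtPath:movieUrl error:nil];
    }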

step2.2: video recording (approach 3)

Add audio/video stream outputs to the session. First declare two more instance variables, and adopt and implement two delegate protocols:

    // dataOutput
    AVCaptureVideoDataOutput * videoDataOutput;
    AVCaptureAudioDataOutput * audioDataOutput;

<AVCaptureVideoDataOutputSampleBufferDelegate,
AVCaptureAudioDataOutputSampleBufferDelegate>

When setting up the video and audio sources, also add the corresponding data outputs:

  // videoDataOutput
    dispatch_queue_t cameraQueue = dispatch_queue_create("mos_tec_video", 0);

    videoDataOutput = [[AVCaptureVideoDataOutput alloc]init];
    [videoDataOutput setSampleBufferDelegate:self queue:cameraQueue];

    if ([captureSession canAddOutput:videoDataOutput]) {
        [captureSession addOutput:videoDataOutput];
    }

 // audioDataOutput
    dispatch_queue_t audioQueue = dispatch_queue_create("mos_tec_audio", 0);

    audioDataOutput = [[AVCaptureAudioDataOutput alloc]init];
    [audioDataOutput setSampleBufferDelegate:self queue:audioQueue];

    if ([captureSession canAddOutput:audioDataOutput]) {
        [captureSession addOutput:audioDataOutput];
    }

Both stream outputs call the same delegate callback; you tell the audio source from the video source by comparing captureOutput.

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (captureOutput == audioDataOutput)
    {
        // audio
    }
    else
    {
        // video
    }
}

In this callback we receive the audio/video streams; CMSampleBufferRef is the stream data itself. This data is the starting point for everything filter-related: understand it and you'll see how filters can be implemented. First let's look at how a CMSampleBufferRef is put together; the API describes it in detail:

/*!
@function    CMSampleBufferCreate
@abstract    Creates a CMSampleBuffer.
@discussion  Array parameters (sampleSizeArray, sampleTimingArray) should have only one element if that same
 element applies to all samples. All parameters are copied; on return, the caller can release them,
 free them, reuse them or whatever.  On return, the caller owns the returned CMSampleBuffer, and
 must release it when done with it.
 Example of usage for in-display-order video frames:
 <ul> dataBuffer: contains 7 Motion JPEG frames
 <li> dataFormatDescription: describes Motion JPEG video
 <li> numSamples: 7
 <li> numSampleTimingEntries: 1
 <li> sampleTimingArray: one entry = {duration = 1001/30000, presentationTimeStamp = 0/30000, decodeTimeStamp = invalid }
 <li> numSampleSizeEntries: 7
 <li> sampleSizeArray: {105840, 104456, 103464, 116460, 100412, 94808, 120400}
 </ul>
 Example of usage for out-of-display-order video frames:
 <ul> dataBuffer: contains 6 H.264 frames in decode order (P2,B0,B1,I5,B3,B4)
 <li> dataFormatDescription: describes H.264 video
 <li> numSamples: 6
 <li> numSampleTimingEntries: 6
 <li> sampleTimingArray: 6 entries = {
 <ul> {duration = 1001/30000, presentationTimeStamp = 12012/30000, decodeTimeStamp = 10010/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 10010/30000, decodeTimeStamp = 11011/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 11011/30000, decodeTimeStamp = 12012/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 15015/30000, decodeTimeStamp = 13013/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 13013/30000, decodeTimeStamp = 14014/30000},
 <li> {duration = 1001/30000, presentationTimeStamp = 14014/30000, decodeTimeStamp = 15015/30000}}
 </ul>
 <li> numSampleSizeEntries: 6
 <li> sampleSizeArray: {10580, 1234, 1364, 75660, 1012, 988}
 </ul>
 Example of usage for compressed audio:
 <ul> dataBuffer: contains 24 compressed AAC packets
 <li> dataFormatDescription: describes 44.1kHz AAC audio
 <li> numSamples: 24
 <li> numSampleTimingEntries: 1
 <li> sampleTimingArray: one entry = {
 <ul> {duration = 1024/44100, presentationTimeStamp = 0/44100, decodeTimeStamp = invalid }}
 </ul>
 <li> numSampleSizeEntries: 24
 <li> sampleSizeArray:
 <ul> {191, 183, 208, 213, 202, 206, 209, 206, 204, 192, 202, 277,
 <li> 282, 240, 209, 194, 193, 197, 196, 198, 168, 199, 171, 194}
 </ul>
 </ul>
 Example of usage for uncompressed interleaved audio:
 <ul> dataBuffer: contains 24000 uncompressed interleaved stereo frames, each containing 2 Float32s =
 <ul> {{L,R},
 <li> {L,R},
 <li> {L,R}, ...}
 </ul>
 <li> dataFormatDescription: describes 48kHz Float32 interleaved audio
 <li> numSamples: 24000
 <li> numSampleTimingEntries: 1
 <li> sampleTimingArray: one entry = {
 <ul> {duration = 1/48000, presentationTimeStamp = 0/48000, decodeTimeStamp = invalid }}
 </ul>
 <li> numSampleSizeEntries: 1
 <li> sampleSizeArray: {8}
 </ul>
 Example of usage for uncompressed non-interleaved audio:
 <ul> dataBuffer: contains 24000 uncompressed non-interleaved stereo frames, each containing 2 (non-contiguous) Float32s =
 <ul> {{L,L,L,L,L,...},
 <li> {R,R,R,R,R,...}}
 </ul>
 <li> dataFormatDescription: describes 48kHz Float32 non-interleaved audio
 <li> numSamples: 24000
 <li> numSampleTimingEntries: 1
 <li> sampleTimingArray: one entry = {duration = 1/48000, presentationTimeStamp = 0/48000, decodeTimeStamp = invalid }
 <li> numSampleSizeEntries: 0
 <li> sampleSizeArray: NULL (because the samples are not contiguous)
 </ul>
 */

That wall of documentation is tiresome even for me; put plainly, a CMSampleBufferRef carries timing information, a block of data, and a description of that data. If you're curious, try overlaying a UIImageView on the current view controller and feeding it the image pulled from the video stream (remember: the video stream, not the audio stream!):

        // Grab the pixel buffer that backs this video sample
        CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        CVPixelBufferLockBaseAddress(buffer, 0);
        uint8_t *base = CVPixelBufferGetBaseAddress(buffer);
        size_t width = CVPixelBufferGetWidth(buffer);
        size_t height = CVPixelBufferGetHeight(buffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

        // Wrap the raw pixels in a bitmap context and snapshot them as a CGImage
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef cgContext = CGBitmapContextCreate(base, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);

        CGImageRef finalCGImage = CGBitmapContextCreateImage(cgContext);

        // Don't leak: release the context and unlock the buffer once the image is copied out
        CGContextRelease(cgContext);
        CVPixelBufferUnlockBaseAddress(buffer, 0);

        dispatch_async(dispatch_get_main_queue(), ^{
            <#yourImageView#>.layer.contents = (__bridge id)finalCGImage;
            CGImageRelease(finalCGImage);
        });

That more or less wraps up the capture side. As for what we do with the CMSampleBufferRef once we have it, see the next installment: video writing.

iOS HTTPS Request Tutorial

App Transport Security

App Transport Security (ATS) enforces best practices in the secure connections between an app and its back end. ATS prevents accidental disclosure, provides secure default behavior, and is easy to adopt; it is also on by default in iOS 9 and OS X v10.11. You should adopt ATS as soon as possible, regardless of whether you’re creating a new app or updating an existing one.

If you're developing a new app, you should use HTTPS exclusively. If you have an existing app, you should use HTTPS as much as you can right now, and create a plan for migrating the rest of your app as soon as possible. In addition, your communication through higher-level APIs needs to be encrypted using TLS version 1.2 with forward secrecy. If you try to make a connection that doesn't follow this requirement, an error is thrown. If your app needs to make a request to an insecure domain, you have to specify this domain in your app's Info.plist file

In other words, starting with iOS 9, HTTP requests are expected to become HTTPS using TLS 1.2, to strengthen data security. If you can't migrate yet, you can temporarily declare in Info.plist that insecure requests are allowed.

If your app still relies on plain HTTP, adding the declarations below to the plist lets it keep working; URL schemes must also be declared as trusted (LSApplicationQueriesSchemes) for app-to-app jumps (social login or payment features) to keep working.

The declarations are as follows:

<key>LSApplicationQueriesSchemes</key>
<array>
<string>sinaweibosso</string>
<string>mqqOpensdkSSoLogin</string>
<string>mqzone</string>
<string>sinaweibo</string>
<string>alipayauth</string>
<string>alipay</string>
<string>safepay</string>
<string>mqq</string>
<string>mqqapi</string>
<string>mqqopensdkapiV3</string>
<string>mqqopensdkapiV2</string>
<string>mqqapiwallet</string>
<string>mqqwpa</string>
<string>mqqbrowser</string>
<string>wtloginmqq2</string>
<string>weixin</string>
<string>wechat</string>
</array>

Insert the block above into Info.plist to enable jumps to the common third-party apps.

<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>

Insert the block above into Info.plist to allow plain HTTP requests again; note that this blanket switch is not the officially recommended approach (a narrower per-domain exception is sketched below).
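
A narrower, hedged alternative (the domain name below is only a placeholder): whitelist just the insecure domains you still depend on, instead of allowing arbitrary loads everywhere.

<key>NSAppTransportSecurity</key>
<dict>
<key>NSExceptionDomains</key>
<dict>
<key>example.com</key>
<dict>
<key>NSExceptionAllowsInsecureHTTPLoads</key>
<true/>
<key>NSIncludesSubdomains</key>
<true/>
</dict>
</dict>
</dict>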

Switching to HTTPS requests

After switching to HTTPS I won't go into the server-side configuration; on the iOS client the request code changes as follows (using AFNetworking as the example).

HTTPS one-way (server) verification

AFHTTPRequestOperationManager *manager = [[AFHTTPRequestOperationManager alloc]init];

// path to the server certificate

NSString * cerPath = [[NSBundle mainBundle]pathForResource:@"server" ofType:@"cer"];

NSData * certData = [NSData dataWithContentsOfFile:cerPath];

AFSecurityPolicy * securityPolicy = [AFSecurityPolicy policyWithPinningMode:AFSSLPinningModeCertificate];

[securityPolicy setAllowInvalidCertificates:YES];
[securityPolicy setPinnedCertificates:@[certData]];

 manager.securityPolicy = securityPolicy;
 [manager GET:WFHTTPURL parameters:nil success:^(AFHTTPRequestOperation *operation, id responseObject)        {
       NSLog(@"response object: %@",responseObject);
  } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
        NSLog(@"error: %@",error);
    }];

HTTPS two-way (mutual) verification

 AFHTTPRequestSerializer *reqSerializer = [AFHTTPRequestSerializer serializer];
    NSMutableURLRequest *request;
    request = [reqSerializer requestWithMethod:@"GET" URLString:WFHTTPURL parameters:nil error:nil];
    AFSecurityPolicy *securityPolicy = [[AFSecurityPolicy alloc] init];
    [securityPolicy setAllowInvalidCertificates:kAllowsInvalidSSLCertificate];
    AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:request];
    operation.responseSerializer = [AFHTTPResponseSerializer serializer];
    [operation setSecurityPolicy:securityPolicy];
    [operation setWillSendRequestForAuthenticationChallengeBlock:^(NSURLConnection *connection, NSURLAuthenticationChallenge *challenge) {
        if ([challenge previousFailureCount] > 0) {
            //this will cause an authentication failure
            [[challenge sender] cancelAuthenticationChallenge:challenge];
            NSLog(@"Bad Username Or Password");
            return;
        }
        //this is checking the server certificate
        if ([challenge.protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]) {
            SecTrustResultType result;
            //This takes the serverTrust object and checkes it against your keychain
            SecTrustEvaluate(challenge.protectionSpace.serverTrust, &result);
            //if we want to ignore invalid server for certificates, we just accept the server
            if (kAllowsInvalidSSLCertificate) {
                [challenge.sender useCredential:[NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge];
                return;
            } else if(result == kSecTrustResultProceed || result == kSecTrustResultUnspecified) {
                //When testing this against a trusted server I got kSecTrustResultUnspecified every time. But the other two match the description of a trusted server
                [challenge.sender useCredential:[NSURLCredential credentialForTrust: challenge.protectionSpace.serverTrust] forAuthenticationChallenge: challenge];
                return;
            }
        } else if ([[[challenge protectionSpace] authenticationMethod] isEqualToString:NSURLAuthenticationMethodClientCertificate]) {
            //this handles authenticating the client certificate
            /*
             What we need to do here is get the certificate and an an identity so we can do this:
             NSURLCredential *credential = [NSURLCredential credentialWithIdentity:identity certificates:myCerts persistence:NSURLCredentialPersistencePermanent];
             [[challenge sender] useCredential:credential forAuthenticationChallenge:challenge];
             It's easy to load the certificate using the code in -installCertificate
             It's more difficult to get the identity.
             We can get it from a .p12 file, but you need a passphrase:
             */
            NSData *p12Data = [NSData dataWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"ios" ofType:@"pfx"]];
            // your p12 password
            CFStringRef password = CFSTR("p12 PASSPHRASE");
            const void *keys[] = { kSecImportExportPassphrase };
            const void *values[] = { password };
            CFDictionaryRef optionsDictionary = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);
            CFArrayRef p12Items;
            OSStatus result = SecPKCS12Import((__bridge CFDataRef)p12Data, optionsDictionary, &p12Items);
            if(result == noErr) {
                CFDictionaryRef identityDict = CFArrayGetValueAtIndex(p12Items, 0);
                SecIdentityRef identityApp =(SecIdentityRef)CFDictionaryGetValue(identityDict,kSecImportItemIdentity);
                SecCertificateRef certRef;
                SecIdentityCopyCertificate(identityApp, &certRef);
                SecCertificateRef certArray[1] = { certRef };
                CFArrayRef myCerts = CFArrayCreate(NULL, (void *)certArray, 1, NULL);
                CFRelease(certRef);
                NSURLCredential *credential = [NSURLCredential credentialWithIdentity:identityApp certificates:(__bridge NSArray *)myCerts persistence:NSURLCredentialPersistencePermanent];
                CFRelease(myCerts);
                [[challenge sender] useCredential:credential forAuthenticationChallenge:challenge];
            } else {
                [[challenge sender] cancelAuthenticationChallenge:challenge];
            }
        } else if ([[[challenge protectionSpace] authenticationMethod] isEqualToString:NSURLAuthenticationMethodDefault] || [[[challenge protectionSpace] authenticationMethod] isEqualToString:NSURLAuthenticationMethodNTLM]) {
            // For normal authentication based on username and password. This could be NTLM or Default.
            /*
             DAVCredentials *cred = _parentSession.credentials;
             NSURLCredential *credential = [NSURLCredential credentialWithUser:cred.username password:cred.password persistence:NSURLCredentialPersistenceForSession];
             [[challenge sender] useCredential:credential forAuthenticationChallenge:challenge];
             */
            NSLog(@"BASIC AUTHENTICATION");
        } else {
            //If everything fails, we cancel the challenge.
            [[challenge sender] cancelAuthenticationChallenge:challenge];
        }
    }];
    [operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
        // Take the responseObject returned by AFHTTPRequestOperation as a string, then convert it to JSON
        NSString *html = operation.responseString;
        NSData* data = [html dataUsingEncoding:NSUTF8StringEncoding];
        id dict = [NSJSONSerialization  JSONObjectWithData:data options:0 error:nil];
        NSLog(@"获取到的数据为:%@",dict);
    } failure:^(AFHTTPRequestOperation *operation, NSError *error) {
        NSLog(@"error:%@",error);
        }];
    [[NSOperationQueue mainQueue] addOperation:operation];