Implementing Video Recording on iOS

This article is reposted from: http://mostec.cn-hangzhou.aliapp.com

Note: the original article did not explain the audio portion; this post adds that explanation.

In the previous post we covered capturing video on iOS, recording with the AVCaptureMovieFileOutput class, and grabbing the raw audio/video streams with AVCaptureVideoDataOutput and AVCaptureAudioDataOutput. This time I will show how to write those streams into a video file.

Recording the captured streams:

Last time we added audio and video outputs to the capture session, but we never wrote their data to a file. Now we need to write the captured audio/video data. Start by adding the following member variables.

    // assetWriter

    AVAssetWriter * asserWriter;
    AVAssetWriterInput * videoWriterInput;
    AVAssetWriterInput * audioWriterInput;

    BOOL recording;
    CMTime lastSampleTime;
    NSString * videoFileUrl;

AVAssetWriter is the writing class: it is responsible for producing the video file on disk, and you attach inputs to it. AVAssetWriterInput is that input class; once it is configured with the right parameters you can feed it audio or video sample data. We also need a flag for the current recording state, the most recent sample time (a CMTime), and a path for the recorded file.

step1: Create the writer

-(void)createWriter
{

1. First, decide the width and height of the recorded video

    CGSize size = CGSizeMake(720, 1280);

2. Build the video file path, and delete any existing file with the same name

    videoFileUrl = [NSHomeDirectory() stringByAppendingString:@"/Documents/test.mov"];
    unlink([videoFileUrl UTF8String]);

3. Create the AVAssetWriter object, along with an error pointer to collect any error information

    NSError * error = nil;
    asserWriter = [[AVAssetWriter alloc]initWithURL:[NSURL fileURLWithPath:videoFileUrl] fileType:AVFileTypeQuickTimeMovie error:&error];

3.1-1 Here an assertion catches failure quickly. Assertions may be a good habit, but I find them rather hateful: even when one does fire it doesn't tell you much, and if you're careless and forget to remove one, it will make your life miserable after the app ships.

    NSParameterAssert(asserWriter);

3.1-2 The friendlier approach is to check the error, which at least gives the program some room to react.

    if(error)
    {
        NSLog(@"error = %@", [error localizedDescription]);
    }

4. Configure the video input

4.1 First the video compression settings: the bit rate

    // add video input
    NSDictionary * videoCompressionPropertys = @{AVVideoAverageBitRateKey:[NSNumber numberWithDouble:128.0 * 1024.0]};

4.2 The video settings: H.264 encoding

    NSDictionary * videoSettings = @{AVVideoCodecKey:AVVideoCodecH264,
                                     AVVideoWidthKey:[NSNumber numberWithFloat:size.width],
                                     AVVideoHeightKey:[NSNumber numberWithFloat:size.height],
              AVVideoCompressionPropertiesKey:videoCompressionPropertys};

4.3 Initialize the video input

    videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

4.4 Guard it with another assertion

    NSParameterAssert(videoWriterInput);

4.5 Mark it as expecting real-time data

    videoWriterInput.expectsMediaDataInRealTime = YES;

5. We need to make sure the video input can actually be added

    NSParameterAssert([asserWriter canAddInput:videoWriterInput]);              

    if ([asserWriter canAddInput:videoWriterInput])
        NSLog(@"I can add this input");
    else
        NSLog(@"i can't add this input");

6. Initialize the audio input

Note: this is the audio portion added by this post

    // Add audio input

    AudioChannelLayout acl;
    bzero(&acl, sizeof(acl));                              // zero out the audio channel layout
    acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;   // mono

    // Configure the audio settings
    NSDictionary * audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                          [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,        /* AAC codec */
                                          [NSNumber numberWithInt:64000], AVEncoderBitRateKey,                 /* 64 kbps */
                                          [NSNumber numberWithFloat:44100.0], AVSampleRateKey,                 /* 44.1 kHz sample rate */
                                          [NSNumber numberWithInt:1], AVNumberOfChannelsKey,                   /* mono */
                                          [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                                          nil];

    // Initialize the AVAssetWriterInput for audio
    audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings];
    audioWriterInput.expectsMediaDataInRealTime = YES;     // real-time mode

7. The last step: add the inputs and print the writer's status

    // add input  
    [asserWriter addInput:audioWriterInput];
    [asserWriter addInput:videoWriterInput];

    NSLog(@"%ld",(long)asserWriter.status);

}

step2: Add recording in the capture callback

Once the asset writer is created and its status is 0, recording can proceed. The writing happens inside the -(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection callback. Inside it we have to tell the audio stream apart from the video stream, and we also keep updating a session timestamp that will be used when writing starts.

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

        if (!recording) {
            return;
        }

        if (captureOutput == videoDataOutput) {

            // video

            if (asserWriter.status > AVAssetWriterStatusWriting)
            {
                NSLog(@"Warning: writer status is %ld", (long)asserWriter.status);

                if (asserWriter.status == AVAssetWriterStatusFailed)
                {
                    NSLog(@"Error: %@", asserWriter.error);
                    return;
                }
            }
            if ([videoWriterInput isReadyForMoreMediaData])
            {
                // writer buffer
                if (![videoWriterInput appendSampleBuffer:sampleBuffer])
                {
                    NSLog(@"unable to write video frame : %lld",lastSampleTime.value);
                }
                else
                {
                    NSLog(@"recorded frame time %lld",lastSampleTime.value/lastSampleTime.timescale);
                }
            }
        }
        else
        {
            // audio

            if (asserWriter.status > AVAssetWriterStatusWriting)
            {
                NSLog(@"Warning: writer status is %ld", (long)asserWriter.status);

                if (asserWriter.status == AVAssetWriterStatusFailed)
                {
                    NSLog(@"Error: %@", asserWriter.error);
                    return;
                }
            }

            if ([audioWriterInput isReadyForMoreMediaData])
            {
                // writer buffer
                if (![audioWriterInput appendSampleBuffer:sampleBuffer])
                {
                    NSLog(@"unable to write audio frame : %lld",lastSampleTime.value);
                }
                else
                {
                    NSLog(@"recorded audio frame time %lld",lastSampleTime.value/lastSampleTime.timescale);
                }
            }
        }
    }
}

Since I wrote and tested every line of this code myself before pasting it here, following the steps above should not produce errors. Still, I urge you to treat the code only as a reference and implement it yourself: AVAssetWriter is full of pitfalls, as the pile of if statements and the grudging assertions above already suggest. It's best to step into those pits yourself; that's how you actually absorb anything, because only reading other people's code will slowly turn you into a poser.

"Truly capable people are not the ones who know everything, but the ones who can quickly get a handle on what they don't know. Nor are they the ones who make the fewest mistakes, but the ones who can quickly spot a mistake and find a way to fix it. Remember: nobody is born knowing everything. The experts you admire are pirates who climbed out of one pit after another and kept every one of them in mind."

The last step is the function called by the record button. When recording ends we also want to save the finished video to the local photo library, which makes testing easier.

  startCapture.selected = !startCapture.selected;
    if (startCapture.selected)
    {
        recording = YES;
        if (recording && asserWriter.status != AVAssetWriterStatusWriting)
        {
            [asserWriter startWriting];
            [asserWriter startSessionAtSourceTime:lastSampleTime];
        }
    }
    else
    {
        recording = NO;
        [asserWriter finishWritingWithCompletionHandler:^{
            ALAssetsLibrary * library = [[ALAssetsLibrary alloc] init];
            if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:videoFileUrl]]) {
                [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:videoFileUrl] completionBlock:^(NSURL *assetURL, NSError *error){

                    dispatch_async(dispatch_get_main_queue(), ^{

                        if (error) {
                            // error
                        }else
                        {
                            // success
                        }

                    });
                }];
            }
        }];
    }
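
One detail the walkthrough doesn't call out: an AVAssetWriter only supports a single writing session, so after finishWritingWithCompletionHandler: has run, the same instance cannot record again; tapping the record button a second time is exactly how you end up with the "status is 2" exception quoted below. A minimal sketch of handling that, assuming the createWriter method shown earlier:

    // Inside the completion handler above, once the save has finished:
    dispatch_async(dispatch_get_main_queue(), ^{
        [self createWriter];   // AVAssetWriter is single-use, so rebuild it for the next recording
    });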

You'll notice I check asserWriter.status over and over. That's not because my coding style is timid; it's because this wretched thing is far too prone to errors like the following:

2016-02-17 15:57:52.443 videoCompresserDemo[1539:776248] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetWriter startWriting] Cannot call method when status is 2' (it may also say 3, or fail in some other way). For reference, status 2 is AVAssetWriterStatusCompleted and 3 is AVAssetWriterStatusFailed; once the writer has reached either state, startWriting can no longer be called on it.

With that, we have successfully used AVAssetWriter to write the live-captured audio/video streams to a file. At this point you may feel that, compared with method 2 from the previous post, this approach is a long walk for a short drink. Careful readers will also notice an error when switching cameras; it comes from the orientation adjustment we make on the video output. Remember the previous post's problem of the front camera being rotated 90° left and the back camera 90° right? The root of that problem is this:

/*!

 @property videoOrientation
 @abstract
    Indicates whether the video flowing through the connection should be rotated
    to a given orientation.
 @discussion
    This property is only applicable to AVCaptureConnection instances involving video.  If -isVideoOrientationSupported returns YES, videoOrientation may be set to rotate the video buffers being consumed by the connection's output.  Note that setting videoOrientation does not necessarily result in a physical rotation of video buffers.  For instance, a video connection to an AVCaptureMovieFileOutput handles orientation using a QuickTime track matrix.  In the AVCaptureStillImageOutput, orientation is handled using Exif tags.

*/

This approach only applies when you use movieFileOutput, and even then some players ignore it, because the pixels themselves are never rotated: we merely tell the output which orientation the recorded footage should follow. On top of that, the method doesn't always work (frankly, it rarely works well). First, add this check:

 if ([av isVideoOrientationSupported])   // av is the AVCaptureConnection feeding the video output
 {
      av.videoOrientation = orientation;
 }

If you think a step further, this check digs a hole for us: some connections support it and some don't, so what do you do about the ones that don't? And if there is a workaround, do you now also have to check whether the orientation even needs fixing? It feels like a tug-of-war between code elegance and code efficiency. My view: don't use this system property at all; instead, correct the orientation of every output frame yourself. At least, that's what GPUImage does, unless you know better.

In fact, if you don't need to apply special real-time effects while recording, there is no need to make things this complicated: method 2 from the previous post, movieFileOutput, is a perfectly good recording solution. So if movieFileOutput is that good, why bother learning the painful AVAssetWriter at all?

Generating a video file from an array of images:

The title of this series is about video writing, so writing captured camera frames into a file is only part of the picture. Next, let's set the camera aside: how about generating a video from a sequence of images?

Create a new ViewController, name it whatever you like, design whatever layout you want, prepare an array plus a set of image assets, and we'll build a video slideshow. Add the following member variables.

    NSArray * imageArr;

    AVAssetWriter * videoWriter;
    AVAssetWriterInput * writerInput;
    AVAssetWriterInputPixelBufferAdaptor * adaptor;

    NSString * fileUrl;

Initialize the images:

-(instancetype)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
    if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil])
    {
        imageArr = @[[UIImage imageNamed:@"welcome1"],
                     [UIImage imageNamed:@"welcome2"],
                     [UIImage imageNamed:@"welcome3"],
                     [UIImage imageNamed:@"welcome4"],
                     [UIImage imageNamed:@"welcome5"],
                     [UIImage imageNamed:@"welcome6"]];
    }
    return self;
}

step1: Initialize the asset writer

Writing the video file still uses an asset writer, initialized much as before. But where we previously used videoDataOutput as the data source, our source is now a series of images, so we need an adaptor that can append images to the writer input.

AVAssetWriterInputPixelBufferAdaptor is a class I find rather funny, mostly because the name is so long it sounds like a power adapter.

-(void)createMovieWriter
{
    fileUrl = [NSHomeDirectory() stringByAppendingString:@"/Documents/001.mov"];
    unlink([fileUrl UTF8String]);

    NSError * err = nil;
    videoWriter = [[AVAssetWriter alloc]initWithURL:[NSURL fileURLWithPath:fileUrl] fileType:AVFileTypeQuickTimeMovie error:&err];

    NSParameterAssert(videoWriter);

    if (err) 
    {
        NSLog(@"videoWriterFailed");
    }

    NSDictionary * videoSettings = @{AVVideoCodecKey:AVVideoCodecH264,
                                     AVVideoWidthKey:[NSNumber numberWithInt:640],
                                     AVVideoHeightKey:[NSNumber numberWithInt:640]};

    writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

    adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:nil];

    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);

    if ([videoWriter canAddInput:writerInput]) 
    {
        [videoWriter addInput:writerInput];
    }
}

The job of AVAssetWriterInputPixelBufferAdaptor is to append CVPixelBufferRef frame images to the video. It takes one configuration parameter, pixelBufferAttributes; the documentation says:

Pixel buffer attributes keys for the pixel buffer pool are defined in <CoreVideo/CVPixelBuffer.h>. To specify the pixel format type, the pixelBufferAttributes dictionary should contain a value for kCVPixelBufferPixelFormatTypeKey.  For example, use [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] for 8-bit-per-channel BGRA. See the discussion under appendPixelBuffer:withPresentationTime: for advice on choosing a pixel format.
Clients that do not need a pixel buffer pool for allocating buffers should set sourcePixelBufferAttributes to nil.
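
For completeness, if you did want the adaptor to manage a pixel buffer pool for you, the attributes dictionary would look roughly like this (a sketch based on the documentation above; the code in this post simply passes nil):

    NSDictionary * pixelBufferAttributes = @{(__bridge id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA),   // 8-bit-per-channel BGRA, as the docs suggest
                                             (__bridge id)kCVPixelBufferWidthKey:@640,
                                             (__bridge id)kCVPixelBufferHeightKey:@640};

    adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                               sourcePixelBufferAttributes:pixelBufferAttributes];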

This part has a fairly steep learning curve, so just get a feel for it for now. I'm collecting material, and when we get to filters I will walk through image processing in detail. Back to AVAssetWriterInputPixelBufferAdaptor: it has one key method,

- (BOOL)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime;

This method is where the magic happens: it appends a CVPixelBufferRef to the video at the given presentationTime. Which brings us to a closer look at CMTime.

step1-1:CMTime

CMTime structs are non-opaque mutable structs representing times (either timestamps or durations).
CMTime is represented as a rational number, with a numerator (an int64_t value), and a denominator (an int32_t timescale). Conceptually, the timescale specifies the fraction of a second each unit in the numerator occupies. Thus if the timescale is 4, each unit represents a quarter of a second; if the timescale is 10, each unit represents a tenth of a second, and so on. In addition to a simple time value, a CMTime can represent non-numeric values: +infinity, -infinity, and indefinite. A flag indicates whether the time has been rounded at some point.
CMTimes contain an epoch number, which is usually set to 0, but can be used to distinguish unrelated timelines: for example, it could be incremented each time through a presentation loop, to differentiate between time N in loop 0 from time N in loop 1.
You can convert CMTimes to and from immutable CFDictionaries (see CFDictionaryRef) using CMTimeCopyAsDictionary and CMTimeMakeFromDictionary, for use in annotations and various Core Foundation containers.

You've probably seen old movie film: a very long reel carrying one frame after another. When the frames are shown in rapid succession they form continuous motion. So how do we describe movie time precisely?

typedef struct { 
    CMTimeValue value;
    CMTimeScale timescale;
    CMTimeFlags flags;
    CMTimeEpoch epoch; 
} CMTime;

Apple defined this struct to represent movie time. That is the data structure behind CMTime, and the API documents it in detail.

value

The value of the CMTime. // which frame we are on
value/timescale = seconds.
timescale

The timescale of the CMTime. // how many units (frames) make up one second
value/timescale = seconds.
flags

// status flags
A bitfield representing the flags set for the CMTime.
For example, kCMTimeFlags_Valid. See CMTime Flags for possible values.
epoch

The epoch of the CMTime.
You use the epoch to differentiate between equal timestamps that are actually different because of looping, multi-item sequencing, and so on.
The epoch is used during comparison: greater epochs happen after lesser ones. Addition or subtraction is only possible within a single epoch, however, since the epoch length may be unknown or variable.
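
To make the value/timescale relationship concrete, here is a tiny standalone example (plain Core Media calls, nothing specific to this project):

    CMTime t = CMTimeMake(90, 60);            // "frame" 90 at a timescale of 60, i.e. 90/60 = 1.5 seconds
    Float64 seconds = CMTimeGetSeconds(t);    // 1.5
    NSLog(@"%lld / %d = %f seconds", t.value, t.timescale, seconds);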

Now that you understand CMTime, you can determine the time that each inserted frame corresponds to. So what is CVPixelBufferRef? For now you don't need a deep understanding; it's enough to know this:

A Core Video pixel buffer is an image buffer that holds pixels in main memory. Applications generating frames, compressing or decompressing video, or using Core Image can all make use of Core Video pixel buffers.

In iOS video processing, CVPixelBuffer plays much the same role that CGImageRef does for still images. Also:

/*!
    @typedef CVPixelBufferRef
    @abstract   Based on the image buffer type. The pixel buffer implements the memory storage for an image buffer.
*/

typedef CVImageBufferRef CVPixelBufferRef;

A CVPixelBuffer is simply the frame image we want to insert (not a rigorous statement, but an easy one to grasp).

step2:startAppendImage

- (BOOL)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime;

We know this method writes an image into the video, but where does that image come from? On iOS the familiar image type is UIImage, so how does a UIImage become the CVPixelBuffer we need?

First define a method that produces a CVPixelBuffer, using a CGImageRef as the input:

-(CVPixelBufferRef)imageToPixelBuffer:(CGImageRef)image

Start by declaring a CVPixelBuffer:

    CVPixelBufferRef pixelBuffer = NULL;

Creating the CVPixelBuffer is a little involved:

CVReturn CVPixelBufferCreate ( CFAllocatorRef allocator, size_t width, size_t height, OSType pixelFormatType, CFDictionaryRef pixelBufferAttributes, CVPixelBufferRef _Nullable *pixelBufferOut );

The return value is a CVReturn enum that tells us whether creation succeeded; if it failed, it tells us how. The possible values are:

/*
     kCVReturnSuccess                         = 0,
     kCVReturnFirst                           = -6660,

     kCVReturnError                           = kCVReturnFirst,
     kCVReturnInvalidArgument                 = -6661,
     kCVReturnAllocationFailed                = -6662,
     kCVReturnUnsupported                     = -6663,

     // DisplayLink related errors
     kCVReturnInvalidDisplay                  = -6670,
     kCVReturnDisplayLinkAlreadyRunning       = -6671,
     kCVReturnDisplayLinkNotRunning           = -6672,
     kCVReturnDisplayLinkCallbacksNotSet      = -6673,

     // Buffer related errors
     kCVReturnInvalidPixelFormat              = -6680,
     kCVReturnInvalidSize                     = -6681,
     kCVReturnInvalidPixelBufferAttributes    = -6682,
     kCVReturnPixelBufferNotOpenGLCompatible  = -6683,
     kCVReturnPixelBufferNotMetalCompatible   = -6684,

     // Buffer Pool related errors
     kCVReturnWouldExceedAllocationThreshold  = -6689,
     kCVReturnPoolAllocationFailed            = -6690,
     kCVReturnInvalidPoolAttributes           = -6691,

     kCVReturnLast                            = -6699
*/

The list of errors looks long, but once you start doing this kind of development yourself you will gradually run into them one by one.
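
When one of them does bite you, logging the raw code makes it easy to match against the list above. A minimal sketch, using the status variable returned by the CVPixelBufferCreate call we are about to write:

    if (status != kCVReturnSuccess)
    {
        NSLog(@"CVPixelBufferCreate failed, CVReturn = %d", (int)status);   // e.g. -6661 is kCVReturnInvalidArgument
        return NULL;
    }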

CVPixelBufferCreate takes six parameters, and the API explains them reasonably well:

allocator

The allocator to use to create the pixel buffer. Pass NULL to specify the default allocator.

width

Width of the pixel buffer, in pixels.

height

Height of the pixel buffer, in pixels.

pixelFormatType

The pixel format identified by its respective four-character code (type OSType).

pixelBufferAttributes

A dictionary with additional attributes for a pixel buffer. This parameter is optional. See Pixel Buffer Attribute Keys for more details.

pixelBufferOut

On output, the newly created pixel buffer. Ownership follows The Create Rule.

So the creation looks like this:

 NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
 int width = 640;
 int height = 640;
 CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,height,kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,&pixelBuffer);

The pixel format is a rather complicated parameter:

 kCVPixelFormatType_1Monochrome    = 0x00000001, /* 1 bit indexed */
  kCVPixelFormatType_2Indexed       = 0x00000002, /* 2 bit indexed */
  kCVPixelFormatType_4Indexed       = 0x00000004, /* 4 bit indexed */
  kCVPixelFormatType_8Indexed       = 0x00000008, /* 8 bit indexed */
  kCVPixelFormatType_1IndexedGray_WhiteIsZero = 0x00000021, /* 1 bit indexed gray, white is zero */
  kCVPixelFormatType_2IndexedGray_WhiteIsZero = 0x00000022, /* 2 bit indexed gray, white is zero */
  kCVPixelFormatType_4IndexedGray_WhiteIsZero = 0x00000024, /* 4 bit indexed gray, white is zero */
  kCVPixelFormatType_8IndexedGray_WhiteIsZero = 0x00000028, /* 8 bit indexed gray, white is zero */
  kCVPixelFormatType_16BE555        = 0x00000010, /* 16 bit BE RGB 555 */
  kCVPixelFormatType_16LE555        = 'L555',     /* 16 bit LE RGB 555 */
  kCVPixelFormatType_16LE5551       = '5551',     /* 16 bit LE RGB 5551 */
  kCVPixelFormatType_16BE565        = 'B565',     /* 16 bit BE RGB 565 */
  kCVPixelFormatType_16LE565        = 'L565',     /* 16 bit LE RGB 565 */
  kCVPixelFormatType_24RGB          = 0x00000018, /* 24 bit RGB */
  kCVPixelFormatType_24BGR          = '24BG',     /* 24 bit BGR */
  kCVPixelFormatType_32ARGB         = 0x00000020, /* 32 bit ARGB */
  kCVPixelFormatType_32BGRA         = 'BGRA',     /* 32 bit BGRA */
  kCVPixelFormatType_32ABGR         = 'ABGR',     /* 32 bit ABGR */
  kCVPixelFormatType_32RGBA         = 'RGBA',     /* 32 bit RGBA */
  kCVPixelFormatType_64ARGB         = 'b64a',     /* 64 bit ARGB, 16-bit big-endian samples */
  kCVPixelFormatType_48RGB          = 'b48r',     /* 48 bit RGB, 16-bit big-endian samples */
  kCVPixelFormatType_32AlphaGray    = 'b32a',     /* 32 bit AlphaGray, 16-bit big-endian samples, black is zero */
  kCVPixelFormatType_16Gray         = 'b16g',     /* 16 bit Grayscale, 16-bit big-endian samples, black is zero */
  kCVPixelFormatType_30RGB          = 'R10k',     /* 30 bit RGB, 10-bit big-endian samples, 2 unused padding bits (at least significant end). */
  kCVPixelFormatType_422YpCbCr8     = '2vuy',     /* Component Y'CbCr 8-bit 4:2:2, ordered Cb Y'0 Cr Y'1 */
  kCVPixelFormatType_4444YpCbCrA8   = 'v408',     /* Component Y'CbCrA 8-bit 4:4:4:4, ordered Cb Y' Cr A */
  kCVPixelFormatType_4444YpCbCrA8R  = 'r408',     /* Component Y'CbCrA 8-bit 4:4:4:4, rendering format. full range alpha, zero biased YUV, ordered A Y' Cb Cr */
  kCVPixelFormatType_4444AYpCbCr8   = 'y408',     /* Component Y'CbCrA 8-bit 4:4:4:4, ordered A Y' Cb Cr, full range alpha, video range Y'CbCr. */
  kCVPixelFormatType_4444AYpCbCr16  = 'y416',     /* Component Y'CbCrA 16-bit 4:4:4:4, ordered A Y' Cb Cr, full range alpha, video range Y'CbCr, 16-bit little-endian samples. */
  kCVPixelFormatType_444YpCbCr8     = 'v308',     /* Component Y'CbCr 8-bit 4:4:4 */
  kCVPixelFormatType_422YpCbCr16    = 'v216',     /* Component Y'CbCr 10,12,14,16-bit 4:2:2 */
  kCVPixelFormatType_422YpCbCr10    = 'v210',     /* Component Y'CbCr 10-bit 4:2:2 */
  kCVPixelFormatType_444YpCbCr10    = 'v410',     /* Component Y'CbCr 10-bit 4:4:4 */
  kCVPixelFormatType_420YpCbCr8Planar = 'y420',   /* Planar Component Y'CbCr 8-bit 4:2:0.  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrPlanar struct */
  kCVPixelFormatType_420YpCbCr8PlanarFullRange    = 'f420',   /* Planar Component Y'CbCr 8-bit 4:2:0, full range.  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrPlanar struct */
  kCVPixelFormatType_422YpCbCr_4A_8BiPlanar = 'a2vy', /* First plane: Video-range Component Y'CbCr 8-bit 4:2:2, ordered Cb Y'0 Cr Y'1; second plane: alpha 8-bit 0-255 */
  kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
  kCVPixelFormatType_420YpCbCr8BiPlanarFullRange  = '420f', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma=[0,255] chroma=[1,255]).  baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */ 
  kCVPixelFormatType_422YpCbCr8_yuvs = 'yuvs',     /* Component Y'CbCr 8-bit 4:2:2, ordered Y'0 Cb Y'1 Cr */
  kCVPixelFormatType_422YpCbCr8FullRange = 'yuvf', /* Component Y'CbCr 8-bit 4:2:2, full range, ordered Y'0 Cb Y'1 Cr */
  kCVPixelFormatType_OneComponent8  = 'L008',     /* 8 bit one component, black is zero */
  kCVPixelFormatType_TwoComponent8  = '2C08',     /* 8 bit two component, black is zero */
  kCVPixelFormatType_OneComponent16Half  = 'L00h',     /* 16 bit one component IEEE half-precision float, 16-bit little-endian samples */
  kCVPixelFormatType_OneComponent32Float = 'L00f',     /* 32 bit one component IEEE float, 32-bit little-endian samples */
  kCVPixelFormatType_TwoComponent16Half  = '2C0h',     /* 16 bit two component IEEE half-precision float, 16-bit little-endian samples */
  kCVPixelFormatType_TwoComponent32Float = '2C0f',     /* 32 bit two component IEEE float, 32-bit little-endian samples */
  kCVPixelFormatType_64RGBAHalf          = 'RGhA',     /* 64 bit RGBA IEEE half-precision float, 16-bit little-endian samples */
  kCVPixelFormatType_128RGBAFloat        = 'RGfA',     /* 128 bit RGBA IEEE float, 32-bit little-endian samples */

These color formats are very important; we will cover them in detail in the filter posts. If you're curious, look them up now, or simply try substituting some of them here and see what happens (you can get black-and-white output, for instance).

After creating the buffer, always check the returned status. Notably, this is one place where an assertion really is the right choice: if creation truly failed, there is no point in continuing.

    NSParameterAssert(status == kCVReturnSuccess && pixelBuffer != NULL);

Now the CVPixelBuffer exists, but so far it is just a block of memory; we still have to give it content:

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef context = CGBitmapContextCreate(pxdata, width,height, 8, 4*width, rgbColorSpace,kCGImageAlphaNoneSkipFirst);

    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));

    CGContextDrawImage(context, CGRectMake(0, 0,CGImageGetWidth(image),CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

With that, the content has been written into the memory we allocated; return the pixelBuffer and we're done. If the code above is unclear, I will write a separate Core Graphics tutorial later.
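
For reference, assembling the snippets of this step into a single helper gives roughly the following. It is only a restatement of the code above plus the return statement, with __bridge casts added to the dictionary keys so it compiles cleanly under ARC:

-(CVPixelBufferRef)imageToPixelBuffer:(CGImageRef)image
{
    CVPixelBufferRef pixelBuffer = NULL;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], (__bridge id)kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], (__bridge id)kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    int width = 640;
    int height = 640;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pixelBuffer);
    NSParameterAssert(status == kCVReturnSuccess && pixelBuffer != NULL);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, 4 * width, rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    // A 0-radian rotation is a no-op, but this is the spot where you would rotate frames if you needed to.
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));

    // Draw the source image into the pixel buffer's backing memory.
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return pixelBuffer;   // the caller owns this buffer and must release it (appendNewFrame: below calls CFRelease)
}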

Now that we know how to turn a UIImage into a pixel buffer, we can start appending. I prepared 6 images and want a 60-second, 60 fps video that switches to a new image every 10 seconds. There is nothing tricky about the code, but do watch the memory: composing video is memory-hungry. This loop walks through 3,600 frames' worth of image data, and if you manage it carelessly you'll watch the simulator balloon to 1 GB of RAM, or the app simply crash on a real device.

-(void)createMovieFileWithImageSequence
{
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    for (int i = 0; i < 60 * 60; i ++)
    {
        @autoreleasepool {
            CGImageRef inputImage = [[imageArr objectAtIndex:i/600]CGImage];

            if (writerInput.readyForMoreMediaData)
            {
                [self appendNewFrame:inputImage frame:i];
            }
            else
            {
                i--;
            }
        }
    }

    [writerInput markAsFinished];
    [videoWriter finishWritingWithCompletionHandler:^{

        ALAssetsLibrary * library = [[ALAssetsLibrary alloc] init];
        if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:fileUrl]])
        {
            [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:fileUrl] completionBlock:^(NSURL *assetURL, NSError *error){
                dispatch_async(dispatch_get_main_queue(), ^{

                    if (error)
                    {
                        // error
                    }else
                    {
                        // success
                        UIAlertView * aleart = [[UIAlertView alloc]initWithTitle:@"saved" message:nil delegate:nil cancelButtonTitle:@"ok" otherButtonTitles:nil, nil];
                        [aleart show];
                    }
                });
            }];
        }
    }];
}
-(void)appendNewFrame:(CGImageRef)inputImage frame:(int)frame
{
    NSLog(@"frameTime::::%d",frame);

    CVPixelBufferRef pixelBuffer = [self imageToPixelBuffer:inputImage];
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(frame, 60)];
    CFRelease(pixelBuffer);
}

We have now merged an image sequence into a complete video. If your mind is already racing: yes, if we can get at a video's frame data, and we can compose frame data into a video ourselves, then we can also process each frame we obtain and write the result into a new video file. But that exciting filter material is not the next post; before filters I want the pipeline to flow end to end, so next time I will first show how to play video and streaming media (HLS) on iOS.
