Most video rendering on iOS today is done with OpenGL ES, but if all you need is rendering, without extra processing such as beauty filters, you can instead use AVSampleBufferDisplayLayer, introduced in iOS 8.0. Apple's documentation says: "The AVSampleBufferDisplayLayer class is a subclass of CALayer that displays compressed or uncompressed video frames." In other words, AVSampleBufferDisplayLayer can render already-decoded video frames, and it can also accept undecoded frames directly, taking care of both decoding and rendering itself.
Since I had already implemented hardware decoding of H.264 video with the relevant VideoToolbox APIs before adopting AVSampleBufferDisplayLayer, I use the layer here purely for rendering, i.e. I hand it decoded pixel buffers.
I chose UIImageView as the rendering view (the reason for not using a plain UIView is explained later), and I also did not override UIView's layerClass method to make AVSampleBufferDisplayLayer the view's backing layer (the reason for this is also explained later).
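For reference, the backing-layer approach that I deliberately avoided would look roughly like the following sketch (the class name PlayerView is hypothetical); the drawback of binding the layer to the view this way is explained later:
@interface PlayerView : UIView
@end

@implementation PlayerView
// NOT used here: making AVSampleBufferDisplayLayer the view's backing layer.
// The layer could then not be torn down and rebuilt without recreating the view.
+ (Class)layerClass{
    return [AVSampleBufferDisplayLayer class];
}
@end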
Concretely: first, create the AVSampleBufferDisplayLayer and add it as a sublayer of the current view:
self.sampleBufferDisplayLayer = [[AVSampleBufferDisplayLayer alloc] init];
self.sampleBufferDisplayLayer.frame = self.bounds;
self.sampleBufferDisplayLayer.position = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
self.sampleBufferDisplayLayer.videoGravity = AVLayerVideoGravityResizeAspect;
self.sampleBufferDisplayLayer.opaque = YES;
[self.layer addSublayer:self.sampleBufferDisplayLayer];
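Note that the sublayer's frame is set only once here, so it will not follow later size changes of the view (rotation, resizing). A minimal sketch of keeping it in sync, assuming the code above lives in a UIView subclass, using the same CATransaction pattern that appears later in setupSampleBufferDisplayLayer:
// Keep the display layer sized with the view; disable implicit animations
// so the resize is not visibly animated.
- (void)layoutSubviews
{
    [super layoutSubviews];
    [CATransaction begin];
    [CATransaction setDisableActions:YES];
    self.sampleBufferDisplayLayer.frame = self.bounds;
    [CATransaction commit];
}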
Next, wrap the received pixel buffer into a CMSampleBuffer and set its timing information:
// Wrap the pixelBuffer into a CMSampleBuffer and hand it to the displayLayer
- (void)dispatchPixelBuffer:(CVPixelBufferRef) pixelBuffer
{
    if (!pixelBuffer){
        return;
    }
    // Keep the most recent pixel buffer alive (used later for the background snapshot)
    @synchronized(self) {
        if (self.previousPixelBuffer){
            CFRelease(self.previousPixelBuffer);
            self.previousPixelBuffer = nil;
        }
        self.previousPixelBuffer = CFRetain(pixelBuffer);
    }
    // Do not set any specific timing info
    CMSampleTimingInfo timing = {kCMTimeInvalid, kCMTimeInvalid, kCMTimeInvalid};
    // Get the video format description
    CMVideoFormatDescriptionRef videoInfo = NULL;
    OSStatus result = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
    NSParameterAssert(result == 0 && videoInfo != NULL);
    CMSampleBufferRef sampleBuffer = NULL;
    result = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &timing, &sampleBuffer);
    NSParameterAssert(result == 0 && sampleBuffer != NULL);
    // Release the pixelBuffer here; the ownership convention assumed is that the
    // decoder callback retained it before handing it to this method
    CFRelease(pixelBuffer);
    CFRelease(videoInfo);
    // Mark the sample to be displayed immediately, bypassing timestamp scheduling
    CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
    CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
    CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);
    [self enqueueSampleBuffer:sampleBuffer toLayer:self.sampleBufferDisplayLayer];
    CFRelease(sampleBuffer);
}
Here we leave the timing info unset and set kCMSampleAttachmentKey_DisplayImmediately to true because the layer only has to render, not decode, so there is no need to derive a decode time from the dts or a presentation time from the pts.
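For comparison, if we were feeding the layer compressed frames and letting it decode them, the timing info would have to be filled in from the stream; a rough sketch, where pts, dts and the 90 kHz timebase are hypothetical values taken from a demuxer:
// Hypothetical timestamps from a demuxer, expressed in a 90 kHz timebase.
CMSampleTimingInfo timing;
timing.duration = CMTimeMake(3000, 90000);             // one frame at ~30 fps
timing.presentationTimeStamp = CMTimeMake(pts, 90000); // render time from pts
timing.decodeTimeStamp = CMTimeMake(dts, 90000);       // decode time from dts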
Finally, hand the data to the AVSampleBufferDisplayLayer for rendering:
<p class="p1"><pre name="code" class="objc">- (void)enqueueSampleBuffer:(CMSampleBufferRef) sampleBuffer toLayer:(AVSampleBufferDisplayLayer*) layer
{
if (sampleBuffer){
CFRetain(sampleBuffer);
[layer enqueueSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
if (layer.status == AVQueuedSampleBufferRenderingStatusFailed){
NSLog(@"ERROR: %@", layer.error);
if (-11847 == layer.error.code){
[self rebuildSampleBufferDisplayLayer];
}
}else{
// NSLog(@"STATUS: %i", (int)layer.status);
}
}else{
NSLog(@"ignore null samplebuffer");
}
}
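If frames can arrive faster than the layer consumes them, the enqueue above can additionally be guarded with the layer's readiness flag; a minimal sketch (dropping the frame when the layer is not ready is just one possible policy):
if (layer.isReadyForMoreMediaData){
    [layer enqueueSampleBuffer:sampleBuffer];
}else{
    // Back-pressure: the layer's internal queue is full, drop this frame
    NSLog(@"layer not ready, dropping frame");
}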
After running into the failure above (the layer goes into AVQueuedSampleBufferRenderingStatusFailed with error -11847 when the app is sent to the background), I recalled a similar situation with VideoToolbox decoding: background events invalidate the VTDecompressionSession, which then has to be torn down and recreated. So when the AVSampleBufferDisplayLayer becomes invalid, I likewise tear down the current layer and build a new one. This is also the teaser from earlier: if AVSampleBufferDisplayLayer were the view's backing layer, there would be no way to discard just the layer without touching the view, which is why adding it as a sublayer is more convenient. The teardown-and-rebuild flow is as follows:
- (void)rebuildSampleBufferDisplayLayer{
    @synchronized(self) {
        [self teardownSampleBufferDisplayLayer];
        [self setupSampleBufferDisplayLayer];
    }
}

- (void)teardownSampleBufferDisplayLayer
{
    if (self.sampleBufferDisplayLayer){
        [self.sampleBufferDisplayLayer stopRequestingMediaData];
        [self.sampleBufferDisplayLayer removeFromSuperlayer];
        self.sampleBufferDisplayLayer = nil;
    }
}

- (void)setupSampleBufferDisplayLayer{
    if (!self.sampleBufferDisplayLayer){
        self.sampleBufferDisplayLayer = [[AVSampleBufferDisplayLayer alloc] init];
        self.sampleBufferDisplayLayer.frame = self.bounds;
        self.sampleBufferDisplayLayer.position = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
        self.sampleBufferDisplayLayer.videoGravity = AVLayerVideoGravityResizeAspect;
        self.sampleBufferDisplayLayer.opaque = YES;
        [self.layer addSublayer:self.sampleBufferDisplayLayer];
    }else{
        [CATransaction begin];
        [CATransaction setDisableActions:YES];
        self.sampleBufferDisplayLayer.frame = self.bounds;
        self.sampleBufferDisplayLayer.position = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
        [CATransaction commit];
    }
    [self addObserver];
}
Of course, we also need to observe the background/foreground events:
- (void)addObserver{
    if (!hasAddObserver){
        NSNotificationCenter * notificationCenter = [NSNotificationCenter defaultCenter];
        [notificationCenter addObserver: self selector:@selector(didResignActive) name:UIApplicationWillResignActiveNotification object:nil];
        [notificationCenter addObserver: self selector:@selector(didBecomeActive) name:UIApplicationDidBecomeActiveNotification object:nil];
        hasAddObserver = YES;
    }
}
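The didBecomeActive handler registered above is not shown in this article; a plausible sketch follows. Note that in the flow described here the rebuild is actually triggered from enqueueSampleBuffer when error -11847 is detected, so proactively rebuilding on foregrounding is an assumption:
- (void)didBecomeActive{
    NSLog(@"become active");
    // Assumption: proactively rebuild the layer that was invalidated in the background
    [self rebuildSampleBufferDisplayLayer];
}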
At this point the basic problems are solved and the video renders normally. One slightly annoying issue remains: when the app is sent to the background and then brought back, the AVSampleBufferDisplayLayer has already become invalid, so the rendering view shows a black screen for one to two seconds, until the layer is rebuilt and starts rendering again. How do we avoid the black screen during that window? This is where the UIImageView mentioned earlier comes in:
First, keep each incoming pixel buffer alive until the next one arrives, as done in the @synchronized block of the function below:
- (void)dispatchPixelBuffer:(CVPixelBufferRef) pixelBuffer
{
    if (!pixelBuffer){
        return;
    }
    @synchronized(self) {
        if (self.previousPixelBuffer){
            CFRelease(self.previousPixelBuffer);
            self.previousPixelBuffer = nil;
        }
        self.previousPixelBuffer = CFRetain(pixelBuffer);
    }
    ........... rest omitted
}
Then, when needed, convert the retained pixel buffer into a UIImage:
- (UIImage*)getUIImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    UIImage *uiImage = nil;
    if (pixelBuffer){
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        uiImage = [UIImage imageWithCIImage:ciImage];
        // Draw into a bitmap context to force the CIImage to actually render
        UIGraphicsBeginImageContext(self.bounds.size);
        [uiImage drawInRect:self.bounds];
        uiImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return uiImage;
}
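As an aside, imageWithCIImage: produces an image whose rendering is deferred, which is why the code above draws it into a bitmap context first; an alternative sketch (the method name is hypothetical) renders eagerly through a CIContext:
- (UIImage*)imageFromPixelBufferViaCIContext:(CVPixelBufferRef)pixelBuffer
{
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // Render the CIImage into a CGImage eagerly instead of deferring the work
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return uiImage;
}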
- (void)didResignActive{
    NSLog(@"resign active");
    [self setupPlayerBackgroundImage];
}

- (void) setupPlayerBackgroundImage{
    if (self.isVideoHWDecoderEnable){
        @synchronized(self) {
            if (self.previousPixelBuffer){
                // Setting self.image works because this view is a UIImageView
                self.image = [self getUIImageFromPixelBuffer:self.previousPixelBuffer];
                CFRelease(self.previousPixelBuffer);
                self.previousPixelBuffer = nil;
            }
        }
    }
}
This way, after returning from the background, before the layer has been rebuilt, what you see is the UIImageView's image instead of a black screen, and that image is the last frame rendered before the app went to the background.
As for the black-screen period caused by rebuilding the invalidated AVSampleBufferDisplayLayer: through testing I found that if the rebuild, i.e. the line
[[AVSampleBufferDisplayLayer alloc] init]
is executed right after the app returns from the background to the foreground, it takes a very long time, close to two seconds, whereas running the same line during normal foreground playback takes only a dozen or so milliseconds. After asking other iOS developers, the likely reason the former is so slow is that at that moment the system prioritizes restoring the app's entire UI, and other operations are delayed.