Most video rendering on iOS today is done with OpenGL ES, but if all you need is rendering, without extra effects such as beautification filters, you can instead use AVSampleBufferDisplayLayer, introduced in iOS 8.0. The official documentation describes it as follows: "The AVSampleBufferDisplayLayer class is a subclass of CALayer that displays compressed or uncompressed video frames." In other words, AVSampleBufferDisplayLayer can render already-decoded video frames, or you can hand it undecoded video frames directly and it will handle both decoding and display.
Since I had already implemented hardware decoding of H.264 video with the relevant VideoToolbox APIs before adopting AVSampleBufferDisplayLayer, here I use AVSampleBufferDisplayLayer only for rendering, i.e., I feed it the decoded pixelBuffer.
I chose UIImageView as the rendering view (the reason for not using UIView directly is explained later), and I also did not override UIView's layerClass method to make AVSampleBufferDisplayLayer the view's backing layer (the reason for that is also explained later); a sketch of that unused alternative is shown below.
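For reference only, overriding layerClass, the approach deliberately not taken here, would typically look like the following sketch (a hypothetical UIView subclass for illustration, not code used in this post):
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Hypothetical view whose backing layer itself is an AVSampleBufferDisplayLayer.
// This is the alternative that this post avoids.
@interface SampleBufferBackedView : UIView
@end

@implementation SampleBufferBackedView
+ (Class)layerClass
{
    return [AVSampleBufferDisplayLayer class];
}
@end
With that alternative, the view's own layer would be the AVSampleBufferDisplayLayer; in this post the layer is instead added as an ordinary sublayer of a UIImageView.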
Concretely: first, create the AVSampleBufferDisplayLayer and add it as a sublayer of the current view:
self.sampleBufferDisplayLayer = [[AVSampleBufferDisplayLayer alloc] init];
self.sampleBufferDisplayLayer.frame = self.bounds;
self.sampleBufferDisplayLayer.position = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
self.sampleBufferDisplayLayer.videoGravity = AVLayerVideoGravityResizeAspect;
self.sampleBufferDisplayLayer.opaque = YES;
[self.layer addSublayer:self.sampleBufferDisplayLayer];
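The setup code above assumes a sampleBufferDisplayLayer property on the view, and the dispatch method below also uses a previousPixelBuffer reference. A minimal sketch of those declarations, assuming a UIImageView subclass (the class name VideoRenderView is my own placeholder, not from the original code):
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

@interface VideoRenderView : UIImageView
// The AVSampleBufferDisplayLayer added as a sublayer in the setup code above.
@property (nonatomic, strong) AVSampleBufferDisplayLayer *sampleBufferDisplayLayer;
// Most recent pixel buffer; retained/released manually via CFRetain/CFRelease,
// so it is declared assign rather than strong.
@property (nonatomic, assign) CVPixelBufferRef previousPixelBuffer;
@end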
Next, wrap the resulting pixelBuffer into a CMSampleBuffer and set its timing information:
// Wrap the pixelBuffer into a CMSampleBuffer and hand it to the displayLayer
- (void)dispatchPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    if (!pixelBuffer) {
        return;
    }
    @synchronized(self) {
        // Keep a reference to the most recent pixel buffer, releasing the previous one.
        if (self.previousPixelBuffer) {
            CFRelease(self.previousPixelBuffer);
            self.previousPixelBuffer = nil;
        }
        self.previousPixelBuffer = (CVPixelBufferRef)CFRetain(pixelBuffer);
    }
    // Do not set any specific timing information
    CMSampleTimingInfo timing = {kCMTimeInvalid, kCMTimeInvalid, kCMTimeInvalid};
    // Create the video format description from the pixel buffer
    CMVideoFormatDescriptionRef videoInfo = NULL;
    OSStatus result = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
    NSParameterAssert(result == 0 && videoInfo != NULL);

    CMSampleBufferRef sampleBuffer = NULL;
    result = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, videoInfo, &timing, &sampleBuffer);
    NSParameterAssert(result == 0 && sampleBuffer != NULL);

    // The caller hands over ownership of pixelBuffer, so balance it here.
    CFRelease(pixelBuffer);
    CFRelease(videoInfo);
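    // --- The excerpt is cut off above; the continuation below is my own minimal sketch ---
    // --- of the likely remaining step: enqueue the sample buffer on the display layer. ---
    if (sampleBuffer) {
        // Since the timing info is invalid, mark the frame to be displayed immediately on arrival.
        CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
        CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
        CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

        if ([self.sampleBufferDisplayLayer isReadyForMoreMediaData]) {
            [self.sampleBufferDisplayLayer enqueueSampleBuffer:sampleBuffer];
        }
        CFRelease(sampleBuffer);
    }
}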