iOS does not provide any direct public access to the hardware decode engine, because hardware is always used to decode H.264 video on iOS. Therefore, session 513 gives you all the information you need to do frame-by-frame decoding on iOS. In short, per that session:
Generate individual network abstraction layer units (NALUs) from your H.264 elementary stream. There is plenty of information online on how this is done. VCL NALUs (IDR and non-IDR) contain your video data and are what you feed into the decoder.
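For illustration, here is a minimal sketch of splitting an Annex B stream on start codes (both the 3-byte 00 00 01 and 4-byte 00 00 00 01 forms); the function names are my own, not from session 513, and the scan relies on H.264 emulation prevention guaranteeing that a start code never appears inside a NALU payload:

```objc
#import <Foundation/Foundation.h>

// Hypothetical helper: walk an Annex B buffer and hand back each NALU's
// payload (start code excluded).
typedef void (^NALUHandler)(const uint8_t *nalu, size_t length);

static void EnumerateNALUs(const uint8_t *stream, size_t size, NALUHandler handler) {
    size_t i = 0, naluStart = SIZE_MAX;
    while (i + 3 <= size) {
        // Look for 00 00 01; a preceding 00 makes it a 4-byte start code.
        if (stream[i] == 0 && stream[i + 1] == 0 && stream[i + 2] == 1) {
            size_t codeStart = (i > 0 && stream[i - 1] == 0) ? i - 1 : i;
            if (naluStart != SIZE_MAX) {
                handler(stream + naluStart, codeStart - naluStart); // previous NALU ends here
            }
            i += 3;
            naluStart = i; // payload begins right after the start code
        } else {
            i++;
        }
    }
    if (naluStart != SIZE_MAX && naluStart < size) {
        handler(stream + naluStart, size - naluStart); // last NALU runs to the end
    }
}
```

The low five bits of each payload's first byte give the NAL unit type (`nalu[0] & 0x1F`): 7 is SPS, 8 is PPS, 5 is an IDR slice, 1 is a non-IDR slice.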
Re-package those NALUs according to the "AVCC" format, removing NALU start codes and replacing them with a 4-byte NALU length header.
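A sketch of that repackaging for a single NALU whose start code has already been stripped; the helper name is hypothetical, and the byte swap is discussed further below:

```objc
#import <Foundation/Foundation.h>

// Hypothetical helper: prefix one start-code-stripped NALU with the 4-byte,
// big-endian length header that the AVCC format requires.
static NSData *AVCCDataForNALU(const uint8_t *nalu, size_t length) {
    uint32_t bigEndianLength = CFSwapInt32HostToBig((uint32_t)length);
    NSMutableData *avcc = [NSMutableData dataWithCapacity:sizeof(bigEndianLength) + length];
    [avcc appendBytes:&bigEndianLength length:sizeof(bigEndianLength)];
    [avcc appendBytes:nalu length:length];
    return avcc;
}
```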
Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs via CMVideoFormatDescriptionCreateFromH264ParameterSets().
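A sketch of that call, assuming `sps`/`pps` point at the raw parameter set payloads with start codes already stripped:

```objc
#import <CoreMedia/CoreMedia.h>

// Build the format description from raw SPS/PPS payloads (start codes
// stripped). The NALUnitHeaderLength of 4 matches the AVCC length field.
static CMVideoFormatDescriptionRef CreateFormatDescription(const uint8_t *sps, size_t spsSize,
                                                           const uint8_t *pps, size_t ppsSize) {
    const uint8_t *parameterSets[2] = { sps, pps };
    const size_t parameterSetSizes[2] = { spsSize, ppsSize };
    CMVideoFormatDescriptionRef formatDescription = NULL;
    OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
        kCFAllocatorDefault,
        2, parameterSets, parameterSetSizes,
        4, // 4-byte NALU length headers in the sample data
        &formatDescription);
    return (status == noErr) ? formatDescription : NULL; // caller must CFRelease()
}
```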
Package NALU frames as CMSampleBuffers per session 513.
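A sketch of the packaging step, assuming the AVCC bytes for one frame are already assembled; timing information is omitted here, which is workable for decode-only pipelines:

```objc
#import <CoreMedia/CoreMedia.h>

// Wrap AVCC frame bytes in a CMSampleBuffer. 'avccBytes' must outlive the
// buffer, because kCFAllocatorNull means "use in place, don't copy or free".
static CMSampleBufferRef CreateSampleBuffer(void *avccBytes, size_t avccLength,
                                            CMVideoFormatDescriptionRef formatDescription) {
    CMBlockBufferRef blockBuffer = NULL;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(
        kCFAllocatorDefault,
        avccBytes, avccLength,      // use the caller's memory in place
        kCFAllocatorNull,           // block allocator: do not free the memory
        NULL, 0, avccLength, 0,
        &blockBuffer);
    if (status != kCMBlockBufferNoErr) return NULL;

    CMSampleBufferRef sampleBuffer = NULL;
    const size_t sampleSizes[] = { avccLength };
    status = CMSampleBufferCreate(
        kCFAllocatorDefault, blockBuffer,
        true, NULL, NULL,           // data is ready; no make-ready callback
        formatDescription,
        1,                          // one sample
        0, NULL,                    // no timing info in this sketch
        1, sampleSizes,
        &sampleBuffer);
    CFRelease(blockBuffer);         // the sample buffer retains it
    return (status == noErr) ? sampleBuffer : NULL;
}
```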
Create a VTDecompressionSessionRef and feed the sample buffers to VTDecompressionSessionDecodeFrame().
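A sketch of session creation and decoding; the callback name and its empty body are placeholders for your own rendering path:

```objc
#import <Foundation/Foundation.h>
#import <VideoToolbox/VideoToolbox.h>

// Receives one decoded CVImageBufferRef per frame (or an error).
static void DidDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon,
                          OSStatus status, VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime presentationTimeStamp, CMTime presentationDuration) {
    if (status != noErr || imageBuffer == NULL) {
        NSLog(@"Decode failed: %d", (int)status);
        return;
    }
    // Hand the pixel buffer to your rendering code here.
}

static VTDecompressionSessionRef CreateDecompressionSession(CMVideoFormatDescriptionRef formatDescription) {
    VTDecompressionOutputCallbackRecord callback = { DidDecompress, NULL };
    VTDecompressionSessionRef session = NULL;
    OSStatus status = VTDecompressionSessionCreate(
        kCFAllocatorDefault, formatDescription,
        NULL,   // decoder specification: let VideoToolbox choose
        NULL,   // destination image buffer attributes: defaults
        &callback, &session);
    return (status == noErr) ? session : NULL;
}

// Then, per frame (flags of 0 decode synchronously):
//   VTDecompressionSessionDecodeFrame(session, sampleBuffer, 0, NULL, NULL);
```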
Alternatively, use AVSampleBufferDisplayLayer, whose -enqueueSampleBuffer: method obviates the need to create your own decoder.
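A sketch of that alternative; marking the buffer with kCMSampleAttachmentKey_DisplayImmediately is one common way to display buffers that carry no timing information (an assumption of this sketch, not a requirement from the session):

```objc
#import <AVFoundation/AVFoundation.h>

// Display an AVCC sample buffer without running your own
// VTDecompressionSession; the layer decodes and renders internally.
static void EnqueueForDisplay(AVSampleBufferDisplayLayer *layer, CMSampleBufferRef sampleBuffer) {
    // With no timing info on the buffer, mark it for immediate display.
    CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true);
    CFMutableDictionaryRef attachment =
        (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
    CFDictionarySetValue(attachment, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

    if (layer.readyForMoreMediaData) {
        [layer enqueueSampleBuffer:sampleBuffer];
    }
}
```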
This works as of iOS 8. Note that the 4-byte NALU length header is in big-endian format, so if you have a UInt32 value it must be byte-swapped before copying to the CMBlockBuffer (use CFSwapInt32).
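For example, when patching the length header directly into an existing CMBlockBuffer (overwriting the bytes where the start code sat), the swap looks like this; the wrapper function is hypothetical:

```objc
#import <CoreMedia/CoreMedia.h>

// Overwrite the 4 bytes where a start code sat with the NALU length,
// in big-endian order as the AVCC length header requires.
static OSStatus WriteAVCCLengthHeader(CMBlockBufferRef blockBuffer, size_t offset, uint32_t naluLength) {
    uint32_t bigEndianLength = CFSwapInt32HostToBig(naluLength);
    return CMBlockBufferReplaceDataBytes(&bigEndianLength, blockBuffer, offset, sizeof(bigEndianLength));
}
```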