AVAssetWriterInput H.264 Passthrough to QuickTime (.mov) - Passing in SPS/PPS to create avcc atom?
I have a stream of H.264/AVC NALs consisting of types 1 (P frame), 5 (I frame), 7 (SPS), and 8 (PPS). I want to write them into a .mov file without re-encoding. I'm attempting to use AVAssetWriter to do this. The documentation for AVAssetWriterInput states:
Passing nil for outputSettings instructs the input to pass through appended samples, doing no processing before they are written to the output file. This is useful if, for example, you are appending buffers that are already in a desirable compressed format. However, passthrough is currently supported only when writing to QuickTime Movie files (i.e. the AVAssetWriter was initialized with AVFileTypeQuickTimeMovie). For other file types, you must specify non-nil output settings.
I'm trying to create CMSampleBuffers out of these NALs and append them to the asset writer input, but I am unable to input the data in a way that yields a well-formed .mov file, and I can't find any clue anywhere on how to do this.
The best result I've gotten so far was passing in the NALs in Annex B byte stream format (in the order 7 8 5 1 1 1....repeating) and playing the result in VLC. Because of this, I know the NALs contain valid data, but because the .mov file did not have an avcC atom and the mdat atom was filled with an Annex B byte stream, QuickTime will not play the video.
Now I'm trying to pass in the NALs with a 4-byte length prefix (as specified by the lengthSizeMinusOne field) instead of the Annex B start codes, which is how they're supposed to be packed into the mdat atom, as far as I know.
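For illustration, the repacking from Annex B start codes to 4-byte big-endian length prefixes can be sketched in plain C. This is a minimal sketch under stated assumptions, not the asker's actual code: the helper names are mine, the input is assumed well formed, and emulation-prevention bytes need no handling because they never form a start-code pattern.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Find the next Annex B start code (00 00 01 or 00 00 00 01) at or after
 * `from`; returns its offset (or `len` if none) and its size via *scSize. */
static size_t next_start_code(const uint8_t *p, size_t len, size_t from, size_t *scSize)
{
    for (size_t i = from; i + 3 <= len; i++) {
        if (p[i] == 0 && p[i + 1] == 0) {
            if (i + 4 <= len && p[i + 2] == 0 && p[i + 3] == 1) { *scSize = 4; return i; }
            if (p[i + 2] == 1) { *scSize = 3; return i; }
        }
    }
    return len;
}

/* Replace each start code with a 4-byte big-endian NAL length -- the layout
 * the mdat atom expects when lengthSizeMinusOne == 3. Returns a malloc'd
 * buffer the caller must free; assumes well-formed input. */
uint8_t *annexb_to_avcc(const uint8_t *in, size_t inLen, size_t *outLen)
{
    uint8_t *out = malloc(inLen * 2 + 8); /* generous bound; fine for a sketch */
    size_t o = 0, sc = 0, nextSc = 0;
    size_t pos = next_start_code(in, inLen, 0, &sc);
    while (pos < inLen) {
        size_t nalStart = pos + sc;
        size_t nalEnd = next_start_code(in, inLen, nalStart, &nextSc);
        uint32_t nalLen = (uint32_t)(nalEnd - nalStart);
        out[o++] = (uint8_t)(nalLen >> 24);   /* big-endian 4-byte length */
        out[o++] = (uint8_t)(nalLen >> 16);
        out[o++] = (uint8_t)(nalLen >> 8);
        out[o++] = (uint8_t)(nalLen);
        memcpy(out + o, in + nalStart, nalLen);
        o += nalLen;
        pos = nalEnd;
        sc  = nextSc;
    }
    *outLen = o;
    return out;
}
```

A 3-byte start code grows into a 4-byte length, so the output can be slightly larger than the input, hence the generous allocation.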
I am at a loss for how to get the asset writer to write an avcC atom. Every sample I append just gets shoved into the mdat atom.
Does anyone know how I can pass raw H.264 data into an AVAssetWriterInput configured for pass through (nil outputSettings) and have it generate a properly formed QuickTime file?
I submitted a TSI (Technical Support Incident) with Apple and found the answer. I hope this saves someone time in the future.
Each CMSampleBuffer has an associated CMFormatDescription, which describes the data in the sample buffer.
The function prototype for creating the format description is as follows:
OSStatus CMVideoFormatDescriptionCreate(
    CFAllocatorRef allocator,
    CMVideoCodecType codecType,
    int32_t width,
    int32_t height,
    CFDictionaryRef extensions,
    CMVideoFormatDescriptionRef *outDesc
);
I learned, from the Apple technician, that I can use the extensions argument to pass in a dictionary containing the avcC atom data.
The extensions dictionary should be of the following form:
[kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms ---> ["avcC" ---> <avcC Data>]]
The []'s represent dictionaries. This dictionary can potentially be used to pass in data for arbitrary atoms aside from avcC.
Here is the code I used to create the extensions dictionary that I pass into CMVideoFormatDescriptionCreate:
// The "avcC" key maps to the raw avcC atom payload (the AVCDecoderConfigurationRecord).
const CFStringRef avcCKey = CFSTR("avcC");
const CFDataRef avcCValue = CFDataCreate(kCFAllocatorDefault, [_avccData bytes], [_avccData length]);
const void *atomDictKeys[] = { avcCKey };
const void *atomDictValues[] = { avcCValue };
// Use the CFType callbacks so the dictionaries retain their keys and values.
CFDictionaryRef atomsDict = CFDictionaryCreate(kCFAllocatorDefault, atomDictKeys, atomDictValues, 1,
                                               &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
const void *extensionDictKeys[] = { kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms };
const void *extensionDictValues[] = { atomsDict };
CFDictionaryRef extensionDict = CFDictionaryCreate(kCFAllocatorDefault, extensionDictKeys, extensionDictValues, 1,
                                                   &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
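The `_avccData` above is assumed to already hold a valid AVCDecoderConfigurationRecord. If you only have the raw SPS and PPS NAL units (without start codes), that record can be assembled by hand. The sketch below is mine, not from the TSI answer: `build_avcc` is a hypothetical helper, and it assumes exactly one SPS, one PPS, and 4-byte NAL length prefixes (lengthSizeMinusOne = 3).

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Build an AVCDecoderConfigurationRecord (the payload of the avcC box)
 * from one SPS and one PPS NAL unit. `out` must have room for at least
 * 11 + spsLen + ppsLen bytes. Returns the number of bytes written. */
size_t build_avcc(const uint8_t *sps, size_t spsLen,
                  const uint8_t *pps, size_t ppsLen,
                  uint8_t *out)
{
    size_t o = 0;
    out[o++] = 1;                    /* configurationVersion */
    out[o++] = sps[1];               /* AVCProfileIndication (copied from SPS) */
    out[o++] = sps[2];               /* profile_compatibility */
    out[o++] = sps[3];               /* AVCLevelIndication */
    out[o++] = 0xFC | 3;             /* reserved bits + lengthSizeMinusOne = 3 */
    out[o++] = 0xE0 | 1;             /* reserved bits + numOfSequenceParameterSets = 1 */
    out[o++] = (uint8_t)(spsLen >> 8);   /* 16-bit big-endian SPS length */
    out[o++] = (uint8_t)spsLen;
    memcpy(out + o, sps, spsLen); o += spsLen;
    out[o++] = 1;                    /* numOfPictureParameterSets = 1 */
    out[o++] = (uint8_t)(ppsLen >> 8);   /* 16-bit big-endian PPS length */
    out[o++] = (uint8_t)ppsLen;
    memcpy(out + o, pps, ppsLen); o += ppsLen;
    return o;
}
```

The resulting bytes are exactly what goes into the `avcCValue` CFData above.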
// Create the videoFile.m4v AVAssetWriter.
AVAssetWriter *videoFileWriter = [[AVAssetWriter alloc] initWithURL:destinationURL fileType:AVFileTypeQuickTimeMovie error:&error];
NSParameterAssert(videoFileWriter);
if (error) {
NSLog(@"AVAssetWriter initWithURL failed with error= %@", [error localizedDescription]);
}
// Create the video file settings dictionary.
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:1280], AVVideoWidthKey,
                               [NSNumber numberWithInt:720], AVVideoHeightKey, nil];
// Perform video settings check.
if ([videoFileWriter canApplyOutputSettings:videoSettings forMediaType:AVMediaTypeVideo]) {
NSLog(@"videoFileWriter can apply videoSettings...");
}
// Create the input to the videoFileWriter AVAssetWriter.
AVAssetWriterInput *videoFileWriterInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
videoFileWriterInput.expectsMediaDataInRealTime = YES;
NSParameterAssert(videoFileWriterInput);
NSParameterAssert([videoFileWriter canAddInput:videoFileWriterInput]);
// Connect the videoFileWriterInput to the videoFileWriter.
if ([videoFileWriter canAddInput:videoFileWriterInput]) {
[videoFileWriter addInput:videoFileWriterInput];
}
// Get the contents of videoFile.264 (using current Mac OSX methods).
NSData *sourceData = [NSData dataWithContentsOfURL:sourceURL];
const char *videoFileData = [sourceData bytes];
size_t sourceDataLength = [sourceData length];
NSLog(@"The value of 'sourceDataLength' is: %ld", sourceDataLength);
// Set up to create the videoSampleBuffer.
int32_t videoWidth = 1280;
int32_t videoHeight = 720;
CMBlockBufferRef videoBlockBuffer = NULL;
CMFormatDescriptionRef videoFormat = NULL;
CMSampleBufferRef videoSampleBuffer = NULL;
CMItemCount numberOfSampleTimeEntries = 1;
CMItemCount numberOfSamples = 1;
// More set up to create the videoSampleBuffer.
CMVideoFormatDescriptionCreate(kCFAllocatorDefault, kCMVideoCodecType_H264, videoWidth, videoHeight, NULL, &videoFormat);
result = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, 150000, kCFAllocatorDefault, NULL, 0, 150000, kCMBlockBufferAssureMemoryNowFlag,
&videoBlockBuffer);
NSLog(@"After 'CMBlockBufferCreateWithMemoryBlock', 'result' is: %d", result);
// The CMBlockBufferReplaceDataBytes method is supposed to write videoFile.264 data bytes into the videoSampleBuffer.
result = CMBlockBufferReplaceDataBytes(videoFileData, videoBlockBuffer, 0, 150000);
NSLog(@"After 'CMBlockBufferReplaceDataBytes', 'result' is: %d", result);
// Initialize all three CMSampleTimingInfo fields; a zero-initialized CMTime is invalid.
CMSampleTimingInfo videoSampleTimingInformation = { CMTimeMake(1, 30), kCMTimeZero, kCMTimeInvalid };
result = CMSampleBufferCreate(kCFAllocatorDefault, videoBlockBuffer, TRUE, NULL, NULL, videoFormat, numberOfSamples, numberOfSampleTimeEntries,
&videoSampleTimingInformation, 0, NULL, &videoSampleBuffer);
NSLog(@"After 'CMSampleBufferCreate', 'result' is: %d", result);
// Set the videoSampleBuffer to ready (is this needed?).
result = CMSampleBufferMakeDataReady(videoSampleBuffer);
NSLog(@"After 'CMSampleBufferMakeDataReady', 'result' is: %d", result);
// Start writing...
if ([videoFileWriter startWriting]) {
[videoFileWriter startSessionAtSourceTime:kCMTimeZero];
}
// Start the first while loop (DEBUG)...