AV Foundation provides low-level functionality for working directly with media samples. The two important classes here are AVAssetReader and AVAssetWriter: AVAssetReader reads media samples from an AVAsset, while AVAssetWriter encodes media and writes it into a container file. Below is a simple walkthrough of using them.
Initializing AVAssetReader
-(void)configAssetReader
{
    NSURL *videoUrl = [NSURL fileURLWithPath:[self resoucePath]];
    _asset = [AVAsset assetWithURL:videoUrl];
    //Grab the first video track of the asset
    AVAssetTrack *track = [[_asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    _assetReader = [[AVAssetReader alloc] initWithAsset:_asset error:nil];
    //Ask for the decoded sample data in the BGRA pixel format
    NSDictionary *setting = @{(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA)};
    //Create the output that reads sample data from the track
    _assetOutPut = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:setting];
    //Attach the output to the reader
    [_assetReader addOutput:_assetOutPut];
    //Start the reading process
    [_assetReader startReading];
}
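Note that the code above passes nil for the error parameter and ignores the BOOL returned by startReading. A minimal sketch of surfacing reader failures instead (assuming the same _asset ivar as above):

NSError *readerError = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:_asset error:&readerError];
if (!reader || ![reader startReading]) {
    //Either creating the reader or starting it failed; the error explains why
    NSLog(@"Reader setup failed: %@", readerError ?: reader.error);
}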
Initializing AVAssetWriter
-(void)configWriteInput
{
    NSString *storePath = nil;
    NSString *path = [self resoucePath];
    NSRange range = [path rangeOfString:@"/" options:NSBackwardsSearch];
    if (range.location != NSNotFound) {
        NSString *pathRoot = [path substringToIndex:range.location];
        storePath = [pathRoot stringByAppendingPathComponent:@"copy.mp4"];
    }
    if (storePath) {
        _assetWrite = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:storePath] fileType:AVFileTypeQuickTimeMovie error:nil];
        //Specify the codec, pixel dimensions and compression properties
        NSDictionary *setting = @{
            AVVideoCodecKey:AVVideoCodecH264,
            AVVideoWidthKey:@1280,
            AVVideoHeightKey:@720,
            AVVideoCompressionPropertiesKey:@{
                AVVideoMaxKeyFrameIntervalKey:@1,
                AVVideoAverageBitRateKey:@10500000,
                AVVideoProfileLevelKey:AVVideoProfileLevelH264Main31
            }
        };
        //Create the writer input with the media type and output settings above
        _assetInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:setting];
        //Attach the input to the writer
        [_assetWrite addInput:_assetInput];
        [_assetWrite startWriting];
    }
}
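The 1280x720 dimensions above are hardcoded. As a sketch (assuming the video track obtained in configAssetReader is also available here, e.g. stored in an ivar), the output dimensions could be derived from the source track instead:

//Derive the output dimensions from the source track's natural size
CGSize naturalSize = track.naturalSize;
NSDictionary *sizedSetting = @{
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: @(naturalSize.width),
    AVVideoHeightKey: @(naturalSize.height)
};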
Writing the samples we read into the _assetInput writer input
-(void)assertReadToAssetInput
{
    dispatch_queue_t queue = dispatch_queue_create("com.writequeue", DISPATCH_QUEUE_CONCURRENT);
    if (_assetInput) {
        __block NSInteger count = 0;
        __block BOOL isComplete = NO;
        //Start the writing session and set the source time of the first sample
        [_assetWrite startSessionAtSourceTime:kCMTimeZero];
        [_assetInput requestMediaDataWhenReadyOnQueue:queue usingBlock:^{
            if (!isComplete && _assetInput.readyForMoreMediaData)
            {
                //Copy the next sample buffer from the reader output
                CMSampleBufferRef buffer = [_assetOutPut copyNextSampleBuffer];
                if (buffer) {
                    [_assetInput appendSampleBuffer:buffer];
                    count++;
                    //Display the 2000th frame
                    if (count == 2000) {
                        //Convert the CMSampleBuffer into a CGImageRef;
                        //for reference code see http://www.jianshu.com/p/3d5ccbde0de1
                        CGImageRef imgref = [UIImage imageFromSampleBufferRef:buffer];
                        UIImage *img = [UIImage imageWithCGImage:imgref];
                        dispatch_sync(dispatch_get_main_queue(), ^{
                            _imageView.image = img;
                        });
                    }
                    //copyNextSampleBuffer returns a retained buffer, so release it once appended
                    CFRelease(buffer);
                }
                else
                {
                    isComplete = YES;
                }
                if (isComplete)
                {
                    //Mark the input as finished, then close the writing session
                    [_assetInput markAsFinished];
                    [_assetWrite finishWritingWithCompletionHandler:^{
                        AVAssetWriterStatus status = self.assetWrite.status;
                        if (status == AVAssetWriterStatusCompleted) {
                            NSLog(@"finished");
                        }
                        else
                        {
                            NSLog(@"failure");
                        }
                    }];
                }
            }
        }];
    }
}
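The original post does not show the call site; presumably the three methods above are invoked in order, for example:

//Configure the reader, configure the writer, then pump samples across
[self configAssetReader];
[self configWriteInput];
[self assertReadToAssetInput];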
Running this produces a copy.mp4 video file. Playing it back reveals video but no audio, because we only read the video samples and wrote those; the audio samples were never read. An AVAsset generally corresponds to a container format that holds many kinds of data: audio, video, subtitles, and so on.
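To also carry the audio across, a second reader output and writer input could be added before startReading and startWriting are called. This is a sketch of my own (not from the original post), using nil output settings so the audio samples are passed through without re-encoding:

AVAssetTrack *audioTrack = [[_asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
if (audioTrack) {
    //nil settings mean pass-through: the stored audio samples are copied as-is
    AVAssetReaderTrackOutput *audioOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:audioTrack outputSettings:nil];
    [_assetReader addOutput:audioOutput];
    AVAssetWriterInput *audioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:nil];
    [_assetWrite addInput:audioInput];
    //The audio samples would then be pumped from audioOutput to audioInput with the same
    //requestMediaDataWhenReadyOnQueue: pattern used above for video
}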
Written by jiangamh (Jianshu author)
Original link: http://www.jianshu.com/p/aeb441816a7d
Copyright belongs to the author. For reprints, please contact the author for authorization and credit them as a "Jianshu author".