AVFoundation Programming Guide - Editing

Editing

The AVFoundation framework provides a feature-rich set of classes to facilitate the editing of audiovisual assets. At the heart of AVFoundation's editing API are compositions. A composition is simply a collection of tracks of media from one or more different media assets. The AVMutableComposition class provides an interface for inserting and removing tracks, as well as managing their temporal orderings. Figure 3-1 shows how a new composition is pieced together from a combination of existing assets to form a new asset. If all you want to do is merge multiple assets together sequentially into a single file, that is as much detail as you need. If you want to perform any custom audio or video processing on the tracks in your composition, you need to incorporate an audio mix or a video composition, respectively.

Figure 3-1  AVMutableComposition assembles assets together


Using the AVMutableAudioMix class, you can perform custom audio processing on the audio tracks in your composition, as shown in Figure 3-2. Currently, you can specify a maximum volume or set a volume ramp for an audio track.

Figure 3-2  AVMutableAudioMix performs audio mixing


You can use the AVMutableVideoComposition class to work directly with the video tracks in your composition for the purposes of editing, as shown in Figure 3-3. With a single video composition, you can specify the desired render size and scale, as well as the frame duration, for the output video. Through a video composition's instructions (represented by the AVMutableVideoCompositionInstruction class), you can modify the background color of your video and apply layer instructions. These layer instructions (represented by the AVMutableVideoCompositionLayerInstruction class) can be used to apply transforms, transform ramps, opacity, and opacity ramps to the video tracks within your composition. The video composition class also gives you the ability to introduce effects from the Core Animation framework into your video using the animationTool property.

Figure 3-3  AVMutableVideoComposition

To combine your composition with an audio mix and a video composition, you use an AVAssetExportSession object, as shown in Figure 3-4. You initialize the export session with your composition and then simply assign your audio mix and video composition to the audioMix and videoComposition properties, respectively.
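As a minimal sketch of this wiring, assuming a mutableComposition, mutableAudioMix, and mutableVideoComposition have been configured along the lines of the sections that follow, the assignment looks like this:

```objectivec
// A hedged sketch: the three mutable* objects are assumed to be configured
// elsewhere, as shown later in this chapter.
AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
session.audioMix = mutableAudioMix;                 // custom audio processing
session.videoComposition = mutableVideoComposition; // custom video processing
```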

Figure 3-4  Use AVAssetExportSession to combine media elements into an output file


Creating a Composition

To create your own composition, you use the AVMutableComposition class. To add media data to your composition, you must add one or more composition tracks, represented by the AVMutableCompositionTrack class. The simplest case is creating a mutable composition with one video track and one audio track:


AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

Options for Initializing a Composition Track

When adding new tracks to a composition, you must provide both a media type and a track ID. Although audio and video are the most commonly used media types, you can specify other media types as well, such as AVMediaTypeSubtitle or AVMediaTypeText.
Every track associated with some audiovisual data has a unique identifier referred to as a track ID. If you specify kCMPersistentTrackID_Invalid as the preferred track ID, a unique identifier is automatically generated for you and associated with the track.
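For illustration, a subtitle track could be added with an explicit track ID as follows. The literal ID and the surrounding mutableComposition are assumptions for this sketch, not part of the original example; passing kCMPersistentTrackID_Invalid is usually preferable.

```objectivec
// Hypothetical example: add a subtitle track with an explicit track ID.
// Passing kCMPersistentTrackID_Invalid instead would auto-generate the ID.
AVMutableCompositionTrack *subtitleTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeSubtitle preferredTrackID:1];
NSLog(@"Subtitle track ID: %d", subtitleTrack.trackID);
```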

Adding Audiovisual Data to a Composition

Once you have a composition with one or more tracks, you can begin adding your media data to the appropriate tracks. To add media data to a composition track, you need access to the AVAsset object where the media data is located. You can use the mutable composition track interface to place multiple tracks with the same underlying media type together on the same composition track. The following example illustrates how to add two different video asset tracks in sequence to the same composition track:

// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];

Retrieving Compatible Composition Tracks

Where possible, you should have only one composition track for each media type. This unification of compatible asset tracks leads to a minimal amount of resource usage. When presenting media data serially, you should place any media data of the same type on the same composition track. You can query a mutable composition to find out whether there are any composition tracks compatible with your desired asset track:
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
    // Implementation continues.
}

Note: Placing multiple video segments on the same composition track can potentially lead to dropping frames at the transitions between video segments, especially on embedded devices. Choosing the number of composition tracks for your video segments depends entirely on the design of your app and its intended platform.
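One common mitigation, shown here as a hedged sketch rather than something from the original guide, is an "A/B roll" layout that alternates segments across two composition video tracks so that each transition falls between tracks rather than within one. The clips array and track names below are assumptions:

```objectivec
// Hypothetical A/B roll sketch: clips is an NSArray<AVAssetTrack *> you provide.
AVMutableCompositionTrack *trackA = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *trackB = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime cursor = kCMTimeZero;
NSUInteger i = 0;
for (AVAssetTrack *clip in clips) {
    // Even-numbered clips go on track A, odd-numbered clips on track B.
    AVMutableCompositionTrack *target = (i++ % 2 == 0) ? trackA : trackB;
    [target insertTimeRange:CMTimeRangeMake(kCMTimeZero, clip.timeRange.duration) ofTrack:clip atTime:cursor error:nil];
    cursor = CMTimeAdd(cursor, clip.timeRange.duration);
}
```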

Generating a Volume Ramp

A single AVMutableAudioMix object can perform custom audio processing on all of the audio tracks in your composition individually. You create an audio mix using the audioMix class method, and you use instances of the AVMutableAudioMixInputParameters class to associate the audio mix with specific tracks within your composition. An audio mix can be used to vary the volume of an audio track. The following example displays how to set a volume ramp on a specific audio track to slowly fade the audio out over the duration of the composition:

AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object. 
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition. 
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix. 
mutableAudioMix.inputParameters = @[mixParameters];

Performing Custom Video Processing

As with an audio mix, you only need one AVMutableVideoComposition object to perform all of your custom video processing on your composition's video tracks. Using a video composition, you can directly set the appropriate render size, scale, and frame rate for your composition's video tracks. For a detailed example of setting appropriate values for these properties, see Setting the Render Size and Frame Duration.

Changing the Composition’s Background Color

All video compositions must also have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You use the AVMutableVideoCompositionInstruction class to create your own video composition instructions. Using video composition instructions, you can modify the composition's background color, specify whether post processing is needed, and apply layer instructions.

The following example illustrates how to create a video composition instruction that changes the background color to red for the entire composition:

AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];


Applying Opacity Ramps

Video composition instructions can also be used to apply video composition layer instructions. An AVMutableVideoCompositionLayerInstruction object can apply transforms, transform ramps, opacity, and opacity ramps to a certain video track within a composition. The order of the layer instructions in a video composition instruction's layerInstructions array determines how video frames from the source tracks should be layered and composed for the duration of that composition instruction. The following code fragment shows how to set an opacity ramp that slowly fades out the first video in a composition before transitioning to the second video:

AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];

Incorporating Core Animation Effects

A video composition can add the power of Core Animation to your composition through the animationTool property. Through this animation tool, you can accomplish tasks such as watermarking video and adding titles or animating overlays. Core Animation can be used in two different ways with video compositions: you can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames in your composition. The following code displays the latter option by adding a watermark to the center of the video:

CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

Putting It All Together: Combining Multiple Assets and Saving the Result to the Camera Roll

This brief code example illustrates how you can combine two video asset tracks and an audio asset track to create a single video file. It shows how to:

1. Create an AVMutableComposition object and add multiple AVMutableCompositionTrack objects
2. Add time ranges of AVAssetTrack objects to compatible composition tracks
3. Check the preferredTransform property of a video asset track to determine the video's orientation
4. Use AVMutableVideoCompositionLayerInstruction objects to apply transforms to the video tracks within a composition
5. Set appropriate values for the renderSize and frameDuration properties of a video composition
6. Use a composition in conjunction with a video composition when exporting to a video file
7. Save a video file to the Camera Roll

Note: To focus on the most relevant code, this example omits several aspects of a complete app, such as memory management and error handling. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.

Creating the Composition

You piece together tracks from separate assets using an AVMutableComposition object. Create the composition and add one audio track and one video track.

AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

Adding the Assets

An empty composition does you no good. Add the two video asset tracks and the audio asset track to the composition.

AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
Note: This assumes that you have two assets that each contain at least one video track, and a third asset that contains at least one audio track. The videos can be retrieved from the Camera Roll, and the audio track can be retrieved from the music library or from the videos themselves.

Checking the Video Orientations

Once the audio and video tracks have been added to the composition, you need to ensure that the orientations of both video tracks are correct. By default, all video tracks are assumed to be in landscape mode. If your video track was recorded in portrait mode, the video will not be oriented properly when it is exported. Likewise, if you try to combine a video shot in portrait mode with a video shot in landscape mode, the export session will fail to complete.

BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
    isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
    isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
    UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
    [incompatibleVideoOrientationAlert show];
    return;
}

Applying the Video Composition Layer Instructions

Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each one and add these layer instructions to the video composition.

AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];

All AVAssetTrack objects have a preferredTransform property that contains the orientation information for that asset track. This transform is applied whenever the asset track is displayed onscreen. In the previous code, the layer instruction's transform is set to the asset track's transform so that the video in the new composition displays properly once you adjust its render size.

Setting the Render Size and Frame Duration

To complete the video orientation fix, you must adjust the renderSize property accordingly. You should also pick a suitable value for the frameDuration property, such as 1/30th of a second (that is, 30 frames per second). By default, the renderScale property is set to 1.0, which is appropriate for this composition.

CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
    // Invert the width and height for the video tracks to ensure that they display properly.
    naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
    naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
}
else {
    // If the videos weren't shot in portrait mode, we can just use their natural sizes.
    naturalSizeFirst = firstVideoAssetTrack.naturalSize;
    naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
    renderWidth = naturalSizeFirst.width;
}
else {
    renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
    renderHeight = naturalSizeFirst.height;
}
else {
    renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);

Exporting the Composition and Saving it to the Camera Roll

The final step in this process is to export the entire composition into a single video file and save that file to the Camera Roll. You use an AVAssetExportSession object to create the new video file and pass it your desired URL for the output file. You can then use the ALAssetsLibrary class to save the resulting video file to the Camera Roll.

// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
    kDateFormatter = [[NSDateFormatter alloc] init];
    kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
    kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        if (exporter.status == AVAssetExportSessionStatusCompleted) {
            ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
            if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
                [assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
            }
        }
    });
}];


