AVFoundation Programming Guide (Official Documentation Translation, Part 1): About AVFoundation

New blog:
Full version - AVFoundation Programming Guide

By chapter:
– Chapter 1: About AVFoundation
– Chapter 2: Using Assets
– Chapter 3: Playback
– Chapter 4: Editing
– Chapter 5: Still and Video Media Capture
– Chapter 6: Export
– Chapter 7: Time and Media Representations

Copyright notice: This is the blogger's original translation. Please credit the source when reposting.

Original Apple documentation - click here

About AVFoundation

AVFoundation is one of several frameworks that you can use to play and create time-based audiovisual media. It provides an Objective-C interface you use to work on a detailed level with time-based audiovisual data. For example, you can use it to examine, create, edit, or reencode media files. You can also get input streams from devices and manipulate video during realtime capture and playback. Figure I-1 shows the architecture on iOS.


Figure I-1  AVFoundation stack on iOS

Figure I-2 shows the corresponding media architecture on OS X.


Figure I-2  AVFoundation stack on OS X

You should typically use the highest-level abstraction available that allows you to perform the tasks you want.

  • If you simply want to play movies, use the AVKit framework.

  • On iOS, to record video when you need only minimal control over format, use the UIKit framework (UIImagePickerController).

Note, however, that some of the primitive data structures that you use in AV Foundation—including time-related data structures and opaque objects to carry and describe media data—are declared in the Core Media framework.

At a Glance

There are two facets to the AVFoundation framework—APIs related to video and APIs related just to audio. The older audio-related classes provide easy ways to deal with audio. They are described in the Multimedia Programming Guide, not in this document.

You can also configure the audio behavior of your application using AVAudioSession; this is described in Audio Session Programming Guide.

Representing and Using Media with AVFoundation

The primary class that the AV Foundation framework uses to represent media is AVAsset. The design of the framework is largely guided by this representation. Understanding its structure will help you to understand how the framework works. An AVAsset instance is an aggregated representation of a collection of one or more pieces of media data (audio and video tracks). It provides information about the collection as a whole, such as its title, duration, natural presentation size, and so on. AVAsset is not tied to a particular data format. AVAsset is the superclass of other classes used to create asset instances from media at a URL (see Using Assets) and to create new compositions (see Editing).

Each of the individual pieces of media data in the asset is of a uniform type and called a track. In a typical simple case, one track represents the audio component, and another represents the video component; in a complex composition, however, there may be multiple overlapping tracks of audio and video. Assets may also have metadata.

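As a minimal sketch of how tracks are inspected in code (the file path below is a placeholder; in production you would load the `tracks` key asynchronously before reading it):

```objc
#import <AVFoundation/AVFoundation.h>

// Placeholder URL; substitute a real media file.
NSURL *url = [NSURL fileURLWithPath:@"/path/to/movie.mov"];
AVAsset *asset = [AVAsset assetWithURL:url];

// Each track is of a uniform media type,
// e.g. AVMediaTypeVideo or AVMediaTypeAudio.
for (AVAssetTrack *track in asset.tracks) {
    NSLog(@"Track ID %d: media type %@", (int)track.trackID, track.mediaType);
}
```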

A vital concept in AV Foundation is that initializing an asset or a track does not necessarily mean that it is ready for use. It may require some time to calculate even the duration of an item (an MP3 file, for example, may not contain summary information). Rather than blocking the current thread while a value is being calculated, you ask for values and get an answer back asynchronously through a callback that you define using a block.

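A sketch of this asynchronous pattern, assuming `asset` is an existing AVAsset instance:

```objc
// Request the duration without blocking the current thread;
// the block is invoked once the value has been calculated.
[asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"duration" error:&error];
    if (status == AVKeyValueStatusLoaded) {
        NSLog(@"Duration: %.2f seconds", CMTimeGetSeconds(asset.duration));
    }
}];
```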

Relevant Chapters: Using Assets, Time and Media Representations

Playback

AVFoundation allows you to manage the playback of asset in sophisticated ways. To support this, it separates the presentation state of an asset from the asset itself. This allows you to, for example, play two different segments of the same asset at the same time rendered at different resolutions. The presentation state for an asset is managed by a player item object; the presentation state for each track within an asset is managed by a player item track object. Using the player item and player item tracks you can, for example, set the size at which the visual portion of the item is presented by the player, set the audio mix parameters and video composition settings to be applied during playback, or disable components of the asset during playback.

You play player items using a player object, and direct the output of a player to the Core Animation layer. You can use a player queue to schedule playback of a collection of player items in sequence.

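A sketch of this arrangement, assuming `asset` is an existing AVAsset and `view` is the view that should display the video:

```objc
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

// Direct the player's output to a Core Animation layer.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = view.bounds;
[view.layer addSublayer:playerLayer];
[player play];
```

For sequential playback of several items, an AVQueuePlayer can be created from an array of player items with `queuePlayerWithItems:`.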

Relevant Chapter: Playback

Reading, Writing, and Reencoding Assets

AVFoundation allows you to create new representations of an asset in several ways. You can simply reencode an existing asset, or—in iOS 4.1 and later—you can perform operations on the contents of an asset and save the result as a new asset.

You use an export session to reencode an existing asset into a format defined by one of a small number of commonly-used presets. If you need more control over the transformation, in iOS 4.1 and later you can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you can, for example, choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process.

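A sketch of exporting with one of the built-in presets, assuming `asset` is an existing AVAsset and `outputURL` is a writable file URL:

```objc
AVAssetExportSession *session =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetMediumQuality];
session.outputURL = outputURL;
session.outputFileType = AVFileTypeQuickTimeMovie;

[session exportAsynchronouslyWithCompletionHandler:^{
    if (session.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Export finished");
    } else if (session.status == AVAssetExportSessionStatusFailed) {
        NSLog(@"Export failed: %@", session.error);
    }
}];
```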

To produce a visual representation of the waveform, you use an asset reader to read the audio track of an asset.

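A sketch of reading decoded audio samples, assuming `asset` has at least one audio track:

```objc
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetTrack *audioTrack =
    [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

// Decode to linear PCM so the raw samples can be examined for a waveform.
NSDictionary *settings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM) };
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                               outputSettings:settings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
    // Examine the PCM data here to build the waveform representation.
    CFRelease(sampleBuffer);
}
```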

Relevant Chapter: Using Assets

Thumbnails

To create thumbnail images of video presentations, you initialize an instance of AVAssetImageGenerator using the asset from which you want to generate thumbnails. AVAssetImageGenerator uses the default enabled video tracks to generate images.

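A sketch of generating a single thumbnail, assuming `asset` is an existing AVAsset with a video track (the use of UIImage makes this iOS-specific):

```objc
AVAssetImageGenerator *generator =
    [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];

// Request a frame near the 10-second mark;
// the actual time of the returned frame may differ slightly.
CMTime requestedTime = CMTimeMakeWithSeconds(10.0, 600);
CMTime actualTime;
NSError *error = nil;
CGImageRef image = [generator copyCGImageAtTime:requestedTime
                                     actualTime:&actualTime
                                          error:&error];
if (image != NULL) {
    UIImage *thumbnail = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    // Use `thumbnail` ...
}
```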

Relevant Chapter: Using Assets

Editing

AVFoundation uses compositions to create new assets from existing pieces of media (typically, one or more video and audio tracks). You use a mutable composition to add and remove tracks, and adjust their temporal orderings. You can also set the relative volumes and ramping of audio tracks; and set the opacity, and opacity ramps, of video tracks. A composition is an assemblage of pieces of media held in memory. When you export a composition using an export session, it’s collapsed to a file.

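A sketch of building a composition, assuming `asset` is an existing AVAsset with a video track:

```objc
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

AVAssetTrack *sourceVideoTrack =
    [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

// Insert the first five seconds of the source video
// at the start of the composition.
NSError *error = nil;
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero,
                                                       CMTimeMakeWithSeconds(5.0, 600))
                               ofTrack:sourceVideoTrack
                                atTime:kCMTimeZero
                                 error:&error];
```

The composition is held in memory until it is collapsed to a file, for example with an export session.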

You can also create an asset from media such as sample buffers or still images using an asset writer.

Relevant Chapter: Editing

Still and Video Media Capture

Recording input from cameras and microphones is managed by a capture session. A capture session coordinates the flow of data from input devices to outputs such as a movie file. You can configure multiple inputs and outputs for a single session, even when the session is running. You send messages to the session to start and stop data flow.

In addition, you can use an instance of a preview layer to show the user what a camera is recording.

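A sketch of a capture session with a movie-file output and a preview layer, assuming `view` is the view that should show the preview:

```objc
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Attach the default camera as an input.
AVCaptureDevice *camera =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *cameraInput =
    [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if ([session canAddInput:cameraInput]) {
    [session addInput:cameraInput];
}

// Attach a movie-file output.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:movieOutput]) {
    [session addOutput:movieOutput];
}

// Show the user what the camera is recording.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = view.bounds;
[view.layer addSublayer:previewLayer];

[session startRunning];   // start the data flow
```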

Relevant Chapter: Still and Video Media Capture

Concurrent Programming with AVFoundation

Callbacks from AVFoundation—invocations of blocks, key-value observers, and notification handlers—are not guaranteed to be made on any particular thread or queue. Instead, AVFoundation invokes these handlers on threads or queues on which it performs its internal tasks.

There are two general guidelines as far as notifications and threading:

  • UI related notifications occur on the main thread.
  • Classes or methods that require you create and/or specify a queue will return notifications on that queue.

Beyond those two guidelines (and there are exceptions, which are noted in the reference documentation) you should not assume that a notification will be returned on any specific thread.

If you’re writing a multithreaded application, you can use the NSThread method isMainThread or [[NSThread currentThread] isEqual:<#A stored thread reference#>] to test whether the invocation thread is a thread you expect to perform your work on. You can redirect messages to appropriate threads using methods such as performSelectorOnMainThread:withObject:waitUntilDone: and performSelector:onThread:withObject:waitUntilDone:modes:. You could also use dispatch_async to “bounce” to your blocks on an appropriate queue, either the main queue for UI tasks or a queue you have up for concurrent operations. For more about concurrent operations, see Concurrency Programming Guide; for more about blocks, see Blocks Programming Topics. The AVCam-iOS: Using AVFoundation to Capture Images and Movies sample code is considered the primary example for all AVFoundation functionality and can be consulted for examples of thread and queue usage with AVFoundation.

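A sketch of "bouncing" a handler onto the main queue, assuming `playerItem` is an existing AVPlayerItem:

```objc
[[NSNotificationCenter defaultCenter]
    addObserverForName:AVPlayerItemDidPlayToEndTimeNotification
                object:playerItem
                 queue:nil   // nil: the handler may run on an arbitrary thread
            usingBlock:^(NSNotification *note) {
        dispatch_async(dispatch_get_main_queue(), ^{
            // Update the UI here, on the main thread.
        });
    }];
```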

Prerequisites

AVFoundation is an advanced Cocoa framework. To use it effectively, you must have:

  • A solid understanding of fundamental Cocoa development tools and techniques
  • A basic grasp of blocks
  • A basic understanding of key-value coding and key-value observing
  • For playback, a basic understanding of Core Animation (see Core Animation Programming Guide or, for basic playback, the AVKit Framework Reference).

See Also

There are several AVFoundation examples, including two that are key to understanding and implementing camera capture functionality:

  • AVCam-iOS: Using AVFoundation to Capture Images and Movies is the canonical sample code for implementing any program that uses the camera functionality. It is a complete sample, well documented, and covers the majority of the functionality showing the best practices.
  • AVCamManual: Extending AVCam to Use Manual Capture API is the companion application to AVCam. It implements Camera functionality using the manual camera controls. It is also a complete example, well documented, and should be considered the canonical example for creating camera applications that take advantage of manual controls.
  • RosyWriter is an example that demonstrates real time frame processing and in particular how to apply filters to video content. This is a very common developer requirement and this example covers that functionality.
  • AVLocationPlayer: Using AVFoundation Metadata Reading APIs demonstrates using the metadata APIs.
