iOS Audio Units Explained

Translated article, posted June 16, 2017, 09:45:12

About Audio Unit Hosting

iOS provides audio processing plug-ins that support mixing, equalization, format conversion, and realtime input/output for recording, playback, offline rendering, and live conversation such as for VoIP (Voice over Internet Protocol). You can dynamically load and use—that is, host—these powerful and flexible plug-ins, known as audio units, from your iOS application.

Audio units usually do their work in the context of an enclosing object called an audio processing graph, as shown in the figure. In this example, your app sends audio to the first audio units in the graph by way of one or more callback functions and exercises individual control over each audio unit. The output of the I/O unit—the last audio unit in this or any audio processing graph—connects directly to the output hardware.


At a Glance

Because audio units constitute the lowest programming layer in the iOS audio stack, to use them effectively requires deeper understanding than you need for other iOS audio technologies. Unless you require realtime playback of synthesized sounds, low-latency I/O (input and output), or specific audio unit features, look first at the Media Player, AV Foundation, OpenAL, or Audio Toolbox frameworks. These higher-level technologies employ audio units on your behalf and provide important additional features, as described in Multimedia Programming Guide.

Audio Units Provide Fast, Modular Audio Processing

The two greatest advantages of using audio units directly are:

  • Excellent responsiveness. Because you have access to a realtime priority thread in an audio unit render callback function, your audio code is as close as possible to the metal. Synthetic musical instruments and realtime simultaneous voice I/O benefit the most from using audio units directly.

  • Dynamic reconfiguration. The audio processing graph API, built around the AUGraph opaque type, lets you dynamically assemble, reconfigure, and rearrange complex audio processing chains in a thread-safe manner, all while processing audio. This is the only audio API in iOS offering this capability.

An audio unit’s life cycle proceeds as follows:

  1. At runtime, obtain a reference to the dynamically-linkable library that defines an audio unit you want to use.

  2. Instantiate the audio unit.

  3. Configure the audio unit as required for its type and to accommodate the intent of your app.

  4. Initialize the audio unit to prepare it to handle audio.

  5. Start audio flow.

  6. Control the audio unit.

  7. When finished, deallocate the audio unit.
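
In terms of the C-level AudioToolbox API, the life cycle above maps roughly onto the following sketch, which uses the Remote I/O unit as an example. Error handling and the configuration details of step 3 are omitted, and the sketch compiles only against the iOS SDK:

```c
#include <AudioToolbox/AudioToolbox.h>

// Step 1: describe and locate the dynamically linkable component.
AudioComponentDescription desc = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AudioComponent comp = AudioComponentFindNext(NULL, &desc);

// Step 2: instantiate the audio unit.
AudioUnit ioUnit;
AudioComponentInstanceNew(comp, &ioUnit);

// Step 3: configure — set stream formats, enable I/O, and attach render
// callbacks via AudioUnitSetProperty (details depend on your design pattern).

// Step 4: initialize, preparing the unit to handle audio.
AudioUnitInitialize(ioUnit);

// Step 5: start audio flow.
AudioOutputUnitStart(ioUnit);

// Step 6: control the unit while it runs, e.g. with AudioUnitSetParameter.

// Step 7: when finished, stop and deallocate.
AudioOutputUnitStop(ioUnit);
AudioUnitUninitialize(ioUnit);
AudioComponentInstanceDispose(ioUnit);
```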

Audio units provide highly useful individual features such as stereo panning, mixing, volume control, and audio level metering. Hosting audio units lets you add such features to your app. To reap these benefits, however, you must gain facility with a set of fundamental concepts including audio data stream formats, render callback functions, and audio unit architecture.
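
Several of these features come down to simple per-sample loops running inside a render cycle. As a portable illustration only (the function name is hypothetical, not an Audio Unit API), volume control combined with level metering might reduce to:

```c
#include <math.h>
#include <stddef.h>

/* Apply a linear gain to a buffer of Float32 samples and return the
 * resulting peak level — conceptually what a mixer unit's volume
 * control and level metering do each render cycle. */
float apply_gain_and_meter(float *samples, size_t count, float gain) {
    float peak = 0.0f;
    for (size_t i = 0; i < count; i++) {
        samples[i] *= gain;                 /* volume control */
        float mag = fabsf(samples[i]);      /* metering: track peak */
        if (mag > peak) peak = mag;
    }
    return peak;
}
```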

Relevant Chapter: Audio Unit Hosting Fundamentals

Choosing a Design Pattern and Constructing Your App

An audio unit hosting design pattern provides a flexible blueprint to customize for the specifics of your app. Each pattern indicates:

  • How to configure the I/O unit. I/O units have two independent elements, one that accepts audio from the input hardware, one that sends audio to the output hardware. Each design pattern indicates which element or elements you should enable.

  • Where, within the audio processing graph, you must specify audio data stream formats. You must correctly specify formats to support audio flow.

  • Where to establish audio unit connections and where to attach your render callback functions. An audio unit connection is a formal construct that propagates a stream format from an output of one audio unit to an input of another audio unit. A render callback lets you feed audio into a graph or manipulate audio at the individual sample level within a graph.

No matter which design pattern you choose, the steps for constructing an audio unit hosting app are basically the same:

  1. Configure your application audio session to ensure your app works correctly in the context of the system and device hardware.

  2. Construct an audio processing graph. This multistep process makes use of everything you learned in Audio Unit Hosting Fundamentals.

  3. Provide a user interface for controlling the graph’s audio units.

Become familiar with these steps so you can apply them to your own projects.

Relevant Chapter: Constructing Audio Unit Apps

Get the Most Out of Each Audio Unit

Most of this document teaches you that all iOS audio units share important, common attributes. These attributes include, for example, the need for your app to specify and load the audio unit at runtime, and then to correctly specify its audio stream formats.

At the same time, each audio unit has certain unique features and requirements, ranging from the correct audio sample data type to use, to required configuration for correct behavior. Understand the usage details and specific capabilities of each audio unit so you know, for example, when to use the 3D Mixer unit and when to instead use the Multichannel Mixer.

Relevant Chapter: Using Specific Audio Units

How to Use This Document

If you prefer to begin with a hands-on introduction to audio unit hosting in iOS, download one of the sample apps available in the iOS Dev Center, such as Audio Mixer (MixerHost). Come back to this document to answer questions you may have and to learn more.

If you want a solid conceptual grounding before starting your project, read Audio Unit Hosting Fundamentals first. This chapter explains the concepts behind the APIs. Continue with Constructing Audio Unit Apps to learn about picking a design pattern for your project and the workflow for building your app.

If you have some experience with audio units and just want the specifics for a given type, you can start with Using Specific Audio Units.


Before reading this document, it’s a good idea to read the section A Little About Digital Audio and Linear PCM in Core Audio Overview. Also, review Core Audio Glossary for terms you may not already be familiar with. To check if your audio needs might be met by a higher-level technology, review Using Audio in Multimedia Programming Guide.

See Also

For essential reference documentation on building an audio unit hosting app, consult the Audio Unit Framework Reference in the iOS Developer Library.
