An iOS tone generator (an introduction to AudioUnits)

In this post, I present a tiny iOS app that generates a continuous tone at a frequency determined by a slider. It's a small sample app intended to show the simplest way to send audio data you generate yourself to the speaker.

Introduction

I've previously written posts on playing audio from MP3/AAC files and streams. Those posts used the AudioQueue API to play audio. The AudioQueue interface can take audio data, even in compressed formats, and play it through the device's output. That decoding capability is the key strength of the AudioQueue API, and it is the only way to take advantage of the hardware decoding on iOS devices.

However, if you're generating your own audio (and it is therefore already decompressed, linear PCM), you don't have to use AudioQueue. You can still play audio in this format through AudioQueue, but the reality is that you'll have more control and be able to do much more if you use the lowest-level audio API on iOS: the AudioUnit.

Sample application: ToneGenerator

The sample application for this post is really simple: a slider to control the frequency and a button to toggle playback.

The tone is generated continuously while you adjust the frequency with the slider, so you can play it like a slide whistle for hours of neighbour-annoying fun.

You can download ToneGenerator, the complete sample project used in this post, here: ToneGenerator.zip (25kb)

Audio Units

AudioUnits are the lowest level of sound generation on iOS and the lowest commonly used hardware abstraction layer on the Mac. When asked, they generate raw audio samples and place them into output buffers. That is their entire function.

An AudioUnit generates these samples in its render function, which is invoked on a dedicated audio thread. Because that thread has real-time deadlines, the render function should avoid anything that can block, which is one reason the code below reads the view controller's instance variables directly rather than through Objective-C accessors.

To keep the code focussed on the basics of AudioUnits, I'll show how to create a single AudioUnit and use it on its own to output sound. Ordinarily, though, AudioUnits are used as part of an AUGraph, which chains several units together to feed in sound data, mix, add effects such as reverb, and so on. Perhaps I'll show more of that in a future post, but for now the aim is to keep things simple.
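
For a sense of what the AUGraph route involves, here is a minimal sketch, written for this post rather than taken from the sample project, that wraps the same output unit in a one-node graph:

#include <AudioToolbox/AudioToolbox.h>

// Sketch only: a one-node AUGraph around the RemoteIO output unit.
// Every call below returns an OSStatus; error handling is omitted here.
AUGraph graph;
AUNode outputNode;
AudioUnit outputUnit;
AudioComponentDescription outputDescription = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
NewAUGraph(&graph);
AUGraphAddNode(graph, &outputDescription, &outputNode);
AUGraphOpen(graph);                                     // instantiates the units
AUGraphNodeInfo(graph, outputNode, NULL, &outputUnit);  // fetch the unit for configuration
AUGraphInitialize(graph);
AUGraphStart(graph);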

Generating your own audio samples

Generating audio samples means that we must calculate the value of the audio waveform at each required time location (sample point).

In this sample application, we'll use a 32-bit floating point value for each sample. This is purely for convenience; it's not the best-performing format on iOS, which would be the canonical 8.24 signed fixed-point format (32 bits per sample, with 24 of those bits used for the fractional component). But in this sample, convenience matters more than raw efficiency, so floats are good enough.
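
To make the trade-off concrete, converting one of our floats into the 8.24 format would look like the sketch below; this is illustration only, since the sample project stays in floats throughout:

// Sketch: convert a [-1.0, +1.0] float sample to 8.24 signed fixed-point.
// With 24 fractional bits, 1.0f maps to 1 << 24 (16777216).
static inline SInt32 FloatTo824Sample(Float32 sample)
{
    return (SInt32)(sample * (Float32)(1 << 24));
}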

Sample values can vary between -1.0 and +1.0, but the program limits its samples to a smaller range than that to keep the volume at a reasonable level.

The generated tone is a basic sine wave (a pure tone). The value at each sample point is determined by the following equation:

f(n) = a sin ( θ(n) )

where n is the index of the current sample, a is the amplitude, and the current phase of the waveform, θ(n), is given by:

θ(n) = 2πƒ n / r

where ƒ is the frequency of the tone we want to generate and r is the sample rate of the audio.
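
For example, at ƒ = 440 Hz (concert A) with r = 44100 Hz, the phase advances by 2π × 440 / 44100 ≈ 0.0627 radians per sample, so one full cycle of the sine wave spans roughly 100 samples.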

Implementing this in the AudioUnit's render function gives us a function that looks like this:

OSStatus RenderTone(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
 
{
    // A fixed amplitude is good enough for our purposes
    const double amplitude = 0.25;
 
    // Get the tone parameters out of the view controller
    ToneGeneratorViewController *viewController =
        (ToneGeneratorViewController *)inRefCon;
    double theta = viewController->theta;
    double theta_increment =
        2.0 * M_PI * viewController->frequency / viewController->sampleRate;
 
    // This is a mono tone generator so we only need the first buffer
    const int channel = 0;
    Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
 
    // Generate the samples
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        buffer[frame] = sin(theta) * amplitude;
 
        theta += theta_increment;
        if (theta > 2.0 * M_PI)
        {
            theta -= 2.0 * M_PI;
        }
    }
 
    // Store the updated theta back in the view controller
    viewController->theta = theta;
 
    return noErr;
}
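
Since RenderTone reads the frequency out of the view controller on every pass, the slider only needs to update that instance variable. A hypothetical action method, with illustrative names that may differ from the actual sample project, could be as small as:

// Hypothetical slider action; "frequency" is the ivar read by RenderTone.
- (IBAction)frequencySliderChanged:(UISlider *)slider
{
    frequency = slider.value; // picked up on the next audio render pass
}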

Creating an AudioUnit for outputting audio

AudioUnit is a low-level API, so there are many settings you can configure, and you have to configure a lot of them to get playback working at all. Fortunately, it's all fairly straightforward.

Here's the code to create and configure an output audio unit (toneUnit) for playing 32-bit, single-channel, floating-point, linear PCM sound, generating its audio with the RenderTone function:

// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
 
// Get the default playback output unit
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output");
 
// Create a new unit based on this that we'll use for output
OSErr err = AudioComponentInstanceNew(defaultOutput, &toneUnit);
NSAssert1(toneUnit, @"Error creating unit: %ld", err);
 
// Set our tone rendering function on the unit
AURenderCallbackStruct input;
input.inputProc = RenderTone;
input.inputProcRefCon = self;
err = AudioUnitSetProperty(toneUnit,
    kAudioUnitProperty_SetRenderCallback,
    kAudioUnitScope_Input,
    0,
    &input,
    sizeof(input));
NSAssert1(err == noErr, @"Error setting callback: %ld", err);
 
// Set the format to 32 bit, single channel, floating point, linear PCM
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = sampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags =
    kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mBytesPerPacket = four_bytes_per_float;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = four_bytes_per_float;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
err = AudioUnitSetProperty (toneUnit,
    kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Input,
    0,
    &streamFormat,
    sizeof(AudioStreamBasicDescription));
NSAssert1(err == noErr, @"Error setting stream format: %ld", err);

Start it playing

Once you've created an output AudioUnit, you need to initialize it with AudioUnitInitialize (which verifies that all the parameters are valid), and then you can start it running with AudioOutputUnitStart. Once an AudioUnit is initialized you can't change its parameters any further, so if you need to change parameters again, you'll need to call AudioUnitUninitialize first.
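
In outline, changing a parameter such as the stream format on an existing unit would look like the following sketch; this reuses the streamFormat variable from the creation code above and is not what the sample project actually does:

// Sketch: a format change requires stopping and uninitializing first.
AudioOutputUnitStop(toneUnit);
AudioUnitUninitialize(toneUnit);
streamFormat.mSampleRate = 22050.0; // example new value
AudioUnitSetProperty(toneUnit,
    kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Input,
    0,
    &streamFormat,
    sizeof(streamFormat));
AudioUnitInitialize(toneUnit);
AudioOutputUnitStart(toneUnit);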

The sample program takes the simpler route: toggling playback performs a full teardown and recreation of the AudioUnit, as follows:

- (IBAction)togglePlay:(UIButton *)selectedButton
{
    if (!toneUnit)
    {
        // Create the audio unit as shown above
        [self createToneUnit];
 
        // Finalize parameters on the unit
        OSErr err = AudioUnitInitialize(toneUnit);
        NSAssert1(err == noErr, @"Error initializing unit: %ld", err);
 
        // Start playback
        err = AudioOutputUnitStart(toneUnit);
        NSAssert1(err == noErr, @"Error starting unit: %ld", err);
 
        [selectedButton setTitle:NSLocalizedString(@"Stop", nil) forState:0];
    }
    else
    {
        // Tear it down in reverse
        AudioOutputUnitStop(toneUnit);
        AudioUnitUninitialize(toneUnit);
        AudioComponentInstanceDispose(toneUnit);
        toneUnit = nil;
 
        [selectedButton setTitle:NSLocalizedString(@"Play", nil) forState:0];
    }
}

Conclusion

You can download ToneGenerator, the complete sample project used in this post, here: ToneGenerator.zip (25kb)

The aim of this post was to present a sample iOS application that shows AudioUnits in the simplest way possible. Most of Apple's sample projects are considerably more elaborate, involve complex AUGraphs, or can't be used on iOS because they rely on Mac-only APIs.

However, this sample application is a little atypical because of that simplification. In more complex applications, you'll probably want an AUGraph that chains together an input node fed from a file, possibly mixer and effects nodes, and then an output node.

Original author: Matt Gallagher
Original post: http://cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
