ToneGenerator.cpp
In ToneGenerator, the constructor (or startTone) calls initAudioTrack:
mpAudioTrack = new AudioTrack();
mpAudioTrack->set(..., audioCallback, ...);
This creates the AudioTrack and passes audioCallback into it.
audioCallback calls getSamples on ToneGenerator's WaveGenerator class to generate tone samples, writing them into the AudioTrack::Buffer structure that the AudioTrack passes in:
ToneGenerator *lpToneGen = static_cast<ToneGenerator *>(user);
short *lpOut = buffer->i16;
lpWaveGen->getSamples(lpOut, lGenSmp, lWaveCmd);
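The callback contract above can be sketched in a few lines. This is a minimal, hypothetical stand-in for the real types: `Buffer` mimics `AudioTrack::Buffer`, and the `WaveGenerator` here is a plain sine generator rather than ToneGenerator's actual dual-tone machinery.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for AudioTrack::Buffer: the track hands this to
// the callback, which must fill it with PCM samples.
struct Buffer {
    size_t frameCount;  // frames requested by the track
    int16_t* i16;       // destination for 16-bit samples
};

// Simplified WaveGenerator: a fixed-frequency sine, unlike the real one
// which supports multiple simultaneous tone segments.
class WaveGenerator {
public:
    WaveGenerator(float freqHz, float sampleRate)
        : mPhaseInc(2.0f * 3.14159265f * freqHz / sampleRate) {}
    void getSamples(int16_t* out, size_t count) {
        for (size_t i = 0; i < count; ++i) {
            out[i] = static_cast<int16_t>(32767.0f * std::sin(mPhase));
            mPhase += mPhaseInc;
        }
    }
private:
    float mPhase = 0.0f;
    float mPhaseInc;
};

// Same shape as ToneGenerator::audioCallback: recover the generator from
// the opaque user pointer, then fill the buffer the track passed in.
void audioCallback(Buffer* buffer, void* user) {
    WaveGenerator* gen = static_cast<WaveGenerator*>(user);
    gen->getSamples(buffer->i16, buffer->frameCount);
}
```

The key point is the opaque `user` pointer: the track knows nothing about tones; it only knows it has a callback to call whenever it needs more frames.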
AudioTrack.cpp
The set function checks whether a callback was passed in; if so, it creates an AudioTrackThread:
if (cbf != 0) {
mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
The AudioTrackThread calls AudioTrack::processAudioBuffer(), which invokes the callback to obtain frames:
obtainBuffer(&audioBuffer, 1);
mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
releaseBuffer(&audioBuffer);
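The obtain/fill/release cycle can be modeled with a small sketch. Everything here is illustrative: `MiniTrack`, its internal scratch buffer, and the chunk size are assumptions standing in for AudioTrack's real shared-memory bookkeeping.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical miniature of AudioTrack's callback-driven supply loop.
enum Event { EVENT_MORE_DATA };

struct AudioBuffer {
    size_t frameCount;
    int16_t* i16;
};

class MiniTrack {
public:
    using Callback = std::function<void(Event, void*, AudioBuffer*)>;
    MiniTrack(Callback cbf, void* user, size_t chunkFrames)
        : mCbf(std::move(cbf)), mUserData(user), mChunk(chunkFrames) {}

    // One iteration of what processAudioBuffer does per wakeup:
    // reserve writable space, let the client fill it, publish it.
    void processAudioBuffer() {
        AudioBuffer buffer;
        obtainBuffer(&buffer);
        mCbf(EVENT_MORE_DATA, mUserData, &buffer);
        releaseBuffer(&buffer);
    }

    size_t framesWritten() const { return mWritten; }

private:
    void obtainBuffer(AudioBuffer* b) {
        mScratch.assign(mChunk, 0);   // stand-in for shared memory
        b->frameCount = mChunk;
        b->i16 = mScratch.data();
    }
    void releaseBuffer(AudioBuffer*) { mWritten += mChunk; }

    Callback mCbf;
    void* mUserData;
    size_t mChunk;
    size_t mWritten = 0;
    std::vector<int16_t> mScratch;
};
```

In the real AudioTrack, obtainBuffer may block until the consumer frees up space, which is how the producer is paced to the playback rate.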
obtainBuffer fills in the fields of audioBuffer; the most important one is:
audioBuffer->raw = (int8_t *)cblk->buffer(u);
Here cblk points to mCblk. mCblk is assigned in createTrack, and actually refers to a block of memory that AudioFlinger hands over via IPC:
sp<IAudioTrack> track = audioFlinger->createTrack(...);
sp<IMemory> cblk = track->getCblk();
mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
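The idea behind audio_track_cblk_t can be sketched as one block of memory shared by producer and consumer, holding read/write positions plus the sample area. The field names and fixed size below are illustrative, not the real struct layout.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>
#include <cstring>

// Rough sketch of a shared control block: "user" is the producer (write)
// position, "server" the consumer (read) position, as in the real cblk.
struct MiniCblk {
    std::atomic<uint32_t> user{0};    // producer position, in frames
    std::atomic<uint32_t> server{0};  // consumer position, in frames
    static constexpr uint32_t kFrames = 256;
    int16_t data[kFrames];

    // Equivalent in spirit to cblk->buffer(u): address of frame u.
    int16_t* buffer(uint32_t u) { return &data[u % kFrames]; }
};

// Producer side (AudioTrack): write n frames at the user position.
void produce(MiniCblk* cblk, const int16_t* src, uint32_t n) {
    uint32_t u = cblk->user.load();
    for (uint32_t i = 0; i < n; ++i)
        *cblk->buffer(u + i) = src[i];
    cblk->user.store(u + n);
}

// Consumer side (AudioFlinger): read n frames at the server position.
void consume(MiniCblk* cblk, int16_t* dst, uint32_t n) {
    uint32_t s = cblk->server.load();
    for (uint32_t i = 0; i < n; ++i)
        dst[i] = *cblk->buffer(s + i);
    cblk->server.store(s + n);
}
```

The real cblk additionally carries locking and futex-style wait/wake state so that the two processes can block on each other without busy-waiting; this sketch omits all of that.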
AudioFlinger.cpp
In AudioFlinger::DirectOutputThread's threadLoop:
activeTrack->getNextBuffer(&buffer);
mOutput->write(mMixBuffer, mixBufferSize);
getNextBuffer fetches frames from the cblk:
audio_track_cblk_t* cblk = this->cblk();
buffer->raw = getBuffer(s, framesReq);
For MixerThread, the situation is more involved. When the MixerThread is created, it creates an AudioMixer:
mAudioMixer = new AudioMixer(mFrameCount, mSampleRate);
In MixerThread's threadLoop:
mAudioMixer->process(curBuf);
mOutput->write(curBuf, mixBufferSize);
This enters AudioMixer::process:
mState.hook(&mState, output);
The hook is selected when tracks in the AudioMixer are enabled; taking process__nop as an example:
t.bufferProvider->getNextBuffer(&t.buffer);
t.bufferProvider->releaseBuffer(&t.buffer);
The hook calls AudioFlinger's getNextBuffer to obtain the buffer.
In summary: an application uses the tone generator to play tones, and the tone generator talks to AudioTrack. AudioTrack, as the producer, places audio frames into a block of memory that AudioFlinger handed over via IPC, and AudioFlinger, as the consumer, reads audio frames back out of that same memory.