AudioFlinger Flow Analysis

Reposted from:

"Android In Depth: Audio, Part 2 (AudioFlinger Analysis)", from 阿拉神农's CSDN blog

UML sequence diagram:

AudioFlinger.svg

https://download.csdn.net/download/u012906122/19589074

The sequence diagram is large and cannot be shown in a single image. For the complete diagram, see the AudioFlinger.mdj resource at the link above.

Note: AudioTrack runs in the client process, while AudioFlinger runs in the MediaServer process.

Detailed walkthrough following the UML sequence diagram:

1 main

The birth of AudioFlinger

framework/base/media/mediaserver/Main_mediaServer.cpp

int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ....
    // Instantiate AF (AudioFlinger)
    AudioFlinger::instantiate();
    // Instantiate APS (AudioPolicyService)
    AudioPolicyService::instantiate();
    ....
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

2 AudioFlinger-instantiate

framework/base/lib/audioFlinger/AudioFlinger.cpp

void AudioFlinger::instantiate() {
    // Register the AF instance as a system service
    defaultServiceManager()->addService( 
        String16("media.audio_flinger"), new AudioFlinger());
}

3 new AudioFlinger

AudioFlinger::AudioFlinger()
    : BnAudioFlinger(), mAudioHardware(0), mMasterVolume(1.0f), mMasterMute(false), mNextThreadId(0)
{
    mHardwareStatus = AUDIO_HW_IDLE;
    // Create the HAL object that represents the audio hardware
    mAudioHardware = AudioHardwareInterface::create();
    mHardwareStatus = AUDIO_HW_INIT;
    if (mAudioHardware->initCheck() == NO_ERROR) {
        // Set the system sound mode, etc.; this really just configures the hardware
        setMode(AudioSystem::MODE_NORMAL);
        setMasterVolume(1.0f);
        setMasterMute(false);
    }
}

4 setMode

status_t AudioFlinger::setMode(int mode)
{
    mHardwareStatus = AUDIO_HW_SET_MODE;
    // Set the hardware mode
    status_t ret = mAudioHardware->setMode(mode);
    mHardwareStatus = AUDIO_HW_IDLE;
    return ret;
}

So by the time the Android system has booted, AF has the hardware ready as well.

5 AudioPolicyService-instantiate

framework/base/lib/libaudioflinger/AudioPolicyService.cpp

The creation of APS.

6 new AudioPolicyService

AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService() , mpPolicyManager(NULL)
{
    // These two threads will be discussed later
    mTonePlaybackThread = new AudioCommandThread(String8(""));
    mAudioCommandThread = new AudioCommandThread(String8("ApmCommandThread"));
    #if (defined GENERIC_AUDIO) || (defined AUDIO_POLICY_TEST)
        // Use the generic AudioPolicyManager, passing this (the APS itself) as the argument
        mpPolicyManager = new AudioPolicyManagerBase(this);
    #else
        // Use the vendor-specific AudioPolicyManager supplied by the hardware manufacturer
        mpPolicyManager = createAudioPolicyManager(this);
    #endif
}

7 new AudioPolicyManagerBase

framework/base/lib/audioFlinger/AudioPolicyManagerBase.cpp

AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)
    : mPhoneState(AudioSystem::MODE_NORMAL), mRingerMode(0), mMusicStopTime(0), mLimitRingtoneVolume(false)
{
    // This clientInterface is the APS, passed in as this just above
    mpClientInterface = clientInterface;
    AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor();
    outputDesc->mDevice = (uint32_t)AudioSystem::DEVICE_OUT_SPEAKER;
    // openOutput is in turn delegated to APS's openOutput
    mHardwareOutput = mpClientInterface->openOutput(&outputDesc->mDevice,
                                    &outputDesc->mSamplingRate,
                                    &outputDesc->mFormat,
                                    &outputDesc->mChannels,
                                    &outputDesc->mLatency,
                                    outputDesc->mFlags);
        ...
}

8 mpClientInterface-openOutput

audio_io_handle_t AudioPolicyService::openOutput(uint32_t *pDevices,
                                uint32_t *pSamplingRate,
                                uint32_t *pFormat,
                                uint32_t *pChannels,
                                uint32_t *pLatencyMs,
                                AudioSystem::output_flags flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
        ...
    // After this long detour we end up back in AudioFlinger
    return af->openOutput(pDevices, pSamplingRate, (uint32_t *)pFormat, pChannels, pLatencyMs, flags);
}

9 af = AudioSystem-get_audio_flinger

framework/base/media/libmedia/AudioSystem.cpp
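
Here the blog does not show get_audio_flinger itself. Conceptually it asks the ServiceManager for the "media.audio_flinger" service registered in step 2 and wraps it in a typed proxy. The sketch below illustrates the idea in framework style; the real function also caches the proxy and registers a death notification, which is omitted here:

// Sketch only: how a client obtains the IAudioFlinger proxy.
sp<IAudioFlinger> get_audio_flinger_sketch()
{
    // Look up the service that AudioFlinger::instantiate() registered in step 2.
    sp<IBinder> binder = defaultServiceManager()->getService(String16("media.audio_flinger"));
    // Convert the raw binder into a typed proxy (BpAudioFlinger).
    return interface_cast<IAudioFlinger>(binder);
}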

10 af-openOutput

When APS is constructed it opens an output, and that in turn calls AF's openOutput.

int AudioFlinger::openOutput(uint32_t *pDevices,
                                uint32_t *pSamplingRate,
                                uint32_t *pFormat,
                                uint32_t *pChannels,
                                uint32_t *pLatencyMs,
                                uint32_t flags)
{
    status_t status;
    PlaybackThread *thread = NULL;
    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
    uint32_t samplingRate = pSamplingRate ? *pSamplingRate : 0;
    uint32_t format = pFormat ? *pFormat : 0;
    uint32_t channels = pChannels ? *pChannels : 0;
    uint32_t latency = pLatencyMs ? *pLatencyMs : 0;
    Mutex::Autolock _l(mLock);
    // Ask the audio hardware HAL object to create an AudioStreamOut object
    AudioStreamOut *output = mAudioHardware->openOutputStream(*pDevices,
                                                             (int *)&format,
                                                             &channels,
                                                             &samplingRate,
                                                             &status);
    mHardwareStatus = AUDIO_HW_IDLE;
    if (output != 0) {
        // Create a mixer thread
        thread = new MixerThread(this, output, ++mNextThreadId);
    }
    // Finally: add this thread to the collection that manages the playback threads
    mPlaybackThreads.add(mNextThreadId, thread);
    return mNextThreadId;
}

11 output = mAudioHardware-openOutputStream

12 thread = new MixerThread

MixerThread : public PlaybackThread

This is the thread that does the mixing; note that it derives from PlaybackThread.

AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output, int id)
    : PlaybackThread(audioFlinger, output, id), mAudioMixer(0)
{
    mType = PlaybackThread::MIXER;
    // The mixer object; the two arguments passed in belong to the base class ThreadBase and are both still 0 here
    // This object is quite involved; all of the final mixed data is produced by it
    mAudioMixer = new AudioMixer(mFrameCount, mSampleRate);
}

13 mAudioMixer = new AudioMixer

framework/base/libs/audioflinger/AudioMixer.cpp

AudioMixer::AudioMixer(size_t frameCount, uint32_t sampleRate)
    :   mActiveTrack(0), mTrackNames(0), mSampleRate(sampleRate)
{
    mState.enabledTracks= 0;
    mState.needsChanged = 0;
    mState.frameCount   = frameCount;
    mState.outputTemp   = 0;
    mState.resampleTemp = 0;
    // process__nop is a static member function of this class
    mState.hook = process__nop;
    track_t* t = mState.tracks;
    // Up to 32 tracks can be mixed
    for (int i=0 ; i<32 ; i++) {
        t->needs = 0;
        t->volume[0] = UNITY_GAIN;
        t->volume[1] = UNITY_GAIN;
        t->volumeInc[0] = 0;
        t->volumeInc[1] = 0;
        t->channelCount = 2;
        t->enabled = 0;
        t->format = 16;
        t->buffer.raw = 0;
        t->bufferProvider = 0;
        t->hook = 0;
        t->resampler = 0;
        t->sampleRate = mSampleRate;
        t->in = 0;
        t++;
    }
}

The candidate implementations that hook can point to are listed below (a dispatch sketch follows the list):

process__validate

process__nop

process__genericNoResampling

process__genericResampling

process__OneTrack16BitsStereoNoResampling

process__TwoTracks16BitsStereoNoResampling
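
These routines are selected at run time: process__validate inspects the enabled tracks and installs the most suitable routine into mState.hook, and AudioMixer::process() (step 38) simply calls through that pointer. Below is a minimal standalone sketch of this dispatch pattern; the names mirror the real ones, everything else is simplified:

// Sketch of the hook dispatch pattern used by AudioMixer (heavily simplified).
#include <cstdint>
#include <cstddef>

struct state_t;                                   // forward declaration
typedef void (*process_hook_t)(state_t* state, void* output);

struct state_t {
    uint32_t       enabledTracks;                 // bitmask of enabled tracks
    size_t         frameCount;                    // frames to produce per call
    process_hook_t hook;                          // currently installed routine
};

// Installed when no track is enabled (the real one clears the output buffer).
static void process__nop(state_t* /*state*/, void* /*output*/) { }

// The equivalent of AudioMixer::process() in step 38: jump through the pointer.
void process(state_t* state, void* output) {
    state->hook(state, output);
}

int main() {
    state_t s = { 0, 0, process__nop };           // nothing enabled yet: use the nop hook
    process(&s, 0);                               // dispatches to process__nop
    return 0;
}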

14 new AudioTrack

15 set

Below is the relevant part of AT's (AudioTrack's) set function:

audio_io_handle_t output = AudioSystem::getOutput((AudioSystem::stream_type)streamType,
        sampleRate, format, channels, (AudioSystem::output_flags)flags);
status_t status = createTrack(streamType, sampleRate, format, channelCount,
                              frameCount, flags, sharedBuffer, output);

The audio_io_handle_t type is simply an int.
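
For reference, in the framework headers of this era it is declared along these lines (a sketch; treat the exact header location as an assumption):

typedef int audio_io_handle_t;   // opaque handle identifying an output (or input) stream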

16 output = AudioSystem-getOutput

audio_io_handle_t AudioSystem::getOutput(stream_type stream,
                                    uint32_t samplingRate,
                                    uint32_t format,
                                    uint32_t channels,
                                    output_flags flags)
{
    audio_io_handle_t output = 0;
    if ((flags & AudioSystem::OUTPUT_FLAG_DIRECT) == 0 &&
        ((stream != AudioSystem::VOICE_CALL && stream != AudioSystem::BLUETOOTH_SCO) ||
         channels != AudioSystem::CHANNEL_OUT_MONO ||
         (samplingRate != 8000 && samplingRate != 16000))) {
        Mutex::Autolock _l(gLock);
        // Look up the output for this stream (MUSIC) in the map; on the first call output is bound to be 0
        output = AudioSystem::gStreamOutputMap.valueFor(stream);
        if (output == 0) {
            // Over to AudioPolicyService (APS) again, which does the getOutput
            const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
            output = aps->getOutput(stream, samplingRate, format, channels, flags);
            if ((flags & AudioSystem::OUTPUT_FLAG_DIRECT) == 0) {
                Mutex::Autolock _l(gLock);
                // If an output was obtained, add it to the map maintained by AudioSystem;
                // in other words, cache it so we don't have to bother APS again next time
                AudioSystem::gStreamOutputMap.add(stream, output);
            }
        }
    }
    return output;
}

17 aps = AudioSystem-get_audio_policy_service
18 aps-getOutput

audio_io_handle_t AudioPolicyService::getOutput(AudioSystem::stream_type stream,
                                    uint32_t samplingRate,
                                    uint32_t format,
                                    uint32_t channels,
                                    AudioSystem::output_flags flags)
{
    Mutex::Autolock _l(mLock);
    // APS does no work itself; it hands the job to AudioPolicyManagerBase (mpPolicyManager)
    return mpPolicyManager->getOutput(stream, samplingRate, format, channels, flags);
}

19 mpPolicyManager-getOutput

audio_io_handle_t AudioPolicyManagerBase::getOutput(AudioSystem::stream_type stream,
                                    uint32_t samplingRate,
                                    uint32_t format,
                                    uint32_t channels,
                                    AudioSystem::output_flags flags)
{
    audio_io_handle_t output = 0;
    uint32_t latency = 0;
    // open a non direct output
    // mHardwareOutput was created when AMB (AudioPolicyManagerBase) was constructed
    output = mHardwareOutput; 
    return output;
}

20 output = mHardwareOutput

21 createTrack

const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
// Important: call AF's createTrack to obtain an IAudioTrack object (arguments omitted here)
sp<IAudioTrack> track = audioFlinger->createTrack();
// Obtain the management structure for the shared memory
sp<IMemory> cblk = track->getCblk();

To summarize the creation flow: AT calls AF's createTrack to obtain an IAudioTrack object, and then retrieves the shared-memory object from it.
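
What does AT then do with that IMemory? Roughly the following, a simplified sketch of the relevant lines in AudioTrack (error handling omitted):

// Sketch: mapping the shared memory returned by getCblk() on the AT side.
sp<IMemory> iMem = track->getCblk();
// pointer() returns the address at which the region is mapped into this process;
// the audio_track_cblk_t constructed on the AF side (see step 26) sits at the
// start of the region, with the audio data buffer right after it.
audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMem->pointer());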

22 AudioSystem-get_audio_flinger

23 audioFlinger-createTrack

sp<IAudioTrack> AudioFlinger::createTrack(
        pid_t pid,    // pid of the AT (client) process
        int streamType,// stream type, MUSIC here
        uint32_t sampleRate,// sample rate, e.g. 8000
        int format,    // PCM_16 format
        int channelCount,// 2, i.e. stereo
        int frameCount,    // number of frames the buffer to be created must hold
        uint32_t flags,
        const sp<IMemory>& sharedBuffer,// shared buffer passed in by AT, empty (0) here
        int output,    // the handle for the MUSIC stream type obtained from AudioSystem
        status_t *status)
{
    sp<PlaybackThread::Track> track;
    sp<TrackHandle> trackHandle;
    sp<Client> client;
    wp<Client> wclient;
    status_t lStatus;
    {
        Mutex::Autolock _l(mLock);
        // Use the output handle to look up the corresponding thread
        PlaybackThread *thread = checkPlaybackThread_l(output);
        // Check whether this process is already a client of AF.
        // Since this is a client/server architecture, AF (the server) must keep information
        // about AT (the client) somewhere; it uses the pid as the unique client identifier,
        // and mClients is a map-like data structure.
        wclient = mClients.valueFor(pid);
        if (wclient != NULL) {
            ...
        } else {
            // If this client is not known yet, create a record and add it to the map
            client = new Client(this, pid);
            mClients.add(pid, client);
        }
        // Create a track from the thread object we just looked up
        track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelCount, frameCount, sharedBuffer, &lStatus);
    }
    // The object returned to the AT side is this trackHandle
    trackHandle = new TrackHandle(track);
    return trackHandle;
}

24 thread = checkPlaybackThread_l

This function looks up, by the output value, the corresponding thread among all the playback threads.

AudioFlinger::PlaybackThread *AudioFlinger::checkPlaybackThread_l(int output) const
{
    PlaybackThread *thread = NULL;
    // Seeing something like indexOfKey should immediately suggest a map-like container: given a key, it returns the corresponding value
    if (mPlaybackThreads.indexOfKey(output) >= 0) {
        thread = (PlaybackThread *)mPlaybackThreads.valueFor(output).get();
    }
    return thread;
}

The thread retrieved here is the MixerThread created in step 12.
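
For readers unfamiliar with Android's KeyedVector, the lookup semantics are the same as a plain map; here is a standalone illustration using std::map (the types are hypothetical stand-ins, not the real ones):

#include <map>
#include <string>

// Hypothetical stand-in for the playback thread objects managed by AudioFlinger.
struct PlaybackThreadStub { std::string name; };

std::map<int, PlaybackThreadStub*> gThreads;    // output handle -> thread

PlaybackThreadStub* lookupThread(int output)
{
    auto it = gThreads.find(output);            // ~ mPlaybackThreads.indexOfKey(output)
    return (it != gThreads.end()) ? it->second  // ~ mPlaybackThreads.valueFor(output)
                                  : nullptr;
}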

25 thread-createTrack_l

sp<AudioFlinger::PlaybackThread::Track>  AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        int streamType,
        uint32_t sampleRate,
        int format,
        int channelCount,
        int frameCount,
        const sp<IMemory>& sharedBuffer,
        status_t *status)
{
    sp<Track> track;
    status_t lStatus;
    { 
        // scope for mLock
        Mutex::Autolock _l(mLock);
        // new a Track object
        // Note the sharedBuffer argument: its value is 0 at this point
        track = new Track(this, client, streamType, sampleRate, format,
                channelCount, frameCount, sharedBuffer);
        // Add this track to an array so it can be managed
        mTracks.add(track); 
    }
    lStatus = NO_ERROR;
    return track;
}

A MixerThread keeps an internal array of tracks. No matter how many AudioTracks exist, each one ends up with a corresponding Track object on the AF side, and all of these Track objects are managed by a single thread object (the MixerThread).

26 new Track

Now let's look at new Track; we still have not found where the shared memory is created!

AudioFlinger::PlaybackThread::Track::Track(
            const wp<ThreadBase>& thread,
            const sp<Client>& client,
            int streamType,
            uint32_t sampleRate,
            int format,
            int channelCount,
            int frameCount,
            const sp<IMemory>& sharedBuffer)
    :   TrackBase(thread, client, sampleRate, format, channelCount, frameCount, 0, sharedBuffer),
    mMute(false), mSharedBuffer(sharedBuffer), mName(-1)
{
    // mCblk != NULL? When was it created? We have to look at the base class TrackBase
    if (mCblk != NULL) {
        mVolume[0] = 1.0f;
        mVolume[1] = 1.0f;
        mStreamType = streamType;
        mCblk->frameSize = AudioSystem::isLinearPCM(format) ? channelCount * sizeof(int16_t) : sizeof(int8_t);
    }
}
// Now look at the base class TrackBase
AudioFlinger::ThreadBase::TrackBase::TrackBase(
            const wp<ThreadBase>& thread,
            const sp<Client>& client,
            uint32_t sampleRate,
            int format,
            int channelCount,
            int frameCount,
            uint32_t flags,
            const sp<IMemory>& sharedBuffer)
    :   RefBase(),
        mThread(thread),
        mClient(client),
        mCblk(0),
        mFrameCount(0),
        mState(IDLE),
        mClientTid(-1),
        mFormat(format),
        mFlags(flags & ~SYSTEM_FLAGS_MASK)
{
    size_t size = sizeof(audio_track_cblk_t);
    size_t bufferSize = frameCount*channelCount*sizeof(int16_t);
    if (sharedBuffer == 0) {
       size += bufferSize;
    }
    // Call allocate on the client's heap (the Client object was created in createTrack,
    // see step 23 audioFlinger-createTrack); this allocates a block of shared memory
    mCblkMemory = client->heap()->allocate(size);
    mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
    // This is C++ placement new: the parentheses after new hold a buffer, followed by a class
    // constructor, and the object is constructed inside that buffer. An ordinary new cannot
    // place an object at a caller-chosen address, but placement new can. That is exactly what
    // we need: allocate a block of shared memory, then construct an object inside it, so the
    // object can be shared between the two processes.
    new(mCblk) audio_track_cblk_t();
    mCblk->frameCount = frameCount;
    mCblk->sampleRate = sampleRate;
    mCblk->channels = (uint8_t)channelCount;
}

Here we finally see the shared memory being allocated! With respect to this shared buffer, AT and AF work as follows:

AT, on the shared buffer:
- locks the buffer
- writes data into it
- unlocks the buffer

AF, on the shared buffer:
- locks the buffer
- reads the data and writes it to the hardware
- unlocks the buffer
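
The placement new used above deserves a standalone illustration. The following minimal example (unrelated to the real audio_track_cblk_t) shows the technique: construct an object inside a caller-supplied buffer, with ordinary heap memory standing in for the shared memory region:

// Standalone illustration of placement new.
#include <new>        // placement operator new
#include <cstdlib>

struct Counter {
    int frames;
    Counter() : frames(0) {}
};

int main()
{
    void* shared = std::malloc(sizeof(Counter));   // pretend this is shared memory
    Counter* c = new (shared) Counter();           // construct the object in that buffer
    c->frames = 42;
    c->~Counter();                                 // placement new requires manual destruction
    std::free(shared);
    return 0;
}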

27 new TrackHandle

TrackHandle is the Binder-based Track object that the AT side obtains by calling AF's createTrack.

This TrackHandle is really a cross-process wrapper around PlaybackThread::Track, which does the actual work.

What does that mean? PlaybackThread::Track is the object that really does the work inside AF; to support cross-process access, it is wrapped in a TrackHandle. When AudioTrack invokes TrackHandle's methods, TrackHandle forwards each call to PlaybackThread::Track. You can think of this as a Proxy pattern.

class TrackHandle : public android::BnAudioTrack {
    public:
                            TrackHandle(const sp<PlaybackThread::Track>& track);
        virtual             ~TrackHandle();
        virtual status_t    start();
        virtual void        stop();
        virtual void        flush();
        virtual void        mute(bool);
        virtual void        pause();
        virtual void        setVolume(float left, float right);
        virtual sp<IMemory> getCblk() const;
        sp<PlaybackThread::Track> mTrack;
};

28 return trackHandle
29 track-getCblk

30 start
After AT obtains the IAudioTrack object, it calls the start function.

status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();
}

Again it does no work itself and delegates to mTrack, the Track object obtained from PlaybackThread::createTrack_l.

31 mTrack-start

32 PlaybackThread-Track-start

status_t AudioFlinger::PlaybackThread::Track::start()
{
    status_t status = NO_ERROR;
    sp<ThreadBase> thread = mThread.promote();
    // This thread is the one whose createTrack_l was called earlier, i.e. the MixerThread
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        int state = mState;
        if (mState == PAUSED) {
            mState = TrackBase::RESUMING;
        } else {
            mState = TrackBase::ACTIVE;
        }
        // Add itself to that thread via addTrack_l
        PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
        playbackThread->addTrack_l(this);
    }
    return status;
}

33 addTrack_l

status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
    status_t status = ALREADY_EXISTS;
    // set retry count for buffer fill
    track->mRetryCount = kMaxTrackStartupRetries;
    if (mActiveTracks.indexOf(track) < 0) {
        // Add it to the array of active tracks
        mActiveTracks.add(track);
        status = NO_ERROR;
    }
    // Seeing this broadcast should immediately make you think: somewhere a thread must be waiting on this CV (Condition)
    mWaitWorkCV.broadcast();
    return status;
}

34 mWaitWorkCV.broadcast

start adds a track to the PlaybackThread's active-track list and then signals an event. Since this event is an internal member of the PlaybackThread, and the PlaybackThread has created a thread of its own, could it be that thread that is waiting on the event? Now that there is an active track, that thread should have work to do.

That thread is the MixerThread. Let's look at its thread function, threadLoop.
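
To make the wait/broadcast handshake concrete, here is a minimal standalone sketch of the same pattern using the standard library (the real code uses Android's Mutex/Condition, but the structure is identical: the producer adds work and signals, the consumer sleeps until work arrives):

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

std::mutex              gLock;
std::condition_variable gWaitWorkCV;     // ~ mWaitWorkCV
std::vector<int>        gActiveTracks;   // ~ mActiveTracks

void addTrack(int track)                 // ~ addTrack_l() followed by broadcast()
{
    {
        std::lock_guard<std::mutex> l(gLock);
        gActiveTracks.push_back(track);
    }
    gWaitWorkCV.notify_all();            // ~ mWaitWorkCV.broadcast()
}

void threadLoop()                        // ~ the waiting side of MixerThread::threadLoop()
{
    std::unique_lock<std::mutex> l(gLock);
    // Sleep until there is at least one active track to mix.
    gWaitWorkCV.wait(l, [] { return !gActiveTracks.empty(); });
    std::printf("mixing %zu track(s)\n", gActiveTracks.size());
}

int main()
{
    std::thread mixer(threadLoop);
    addTrack(1);
    mixer.join();
    return 0;
}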

35 threadLoop

bool AudioFlinger::MixerThread::threadLoop()
{
    int16_t* curBuf = mMixBuffer;
    Vector< sp<Track> > tracksToRemove;
    while (!exitPending())
    {
        processConfigEvents();
        // The mixer enters this loop
        mixerStatus = MIXER_IDLE;
        { 
            // scope for mLock
            Mutex::Autolock _l(mLock);
            const SortedVector< wp<Track> >& activeTracks = mActiveTracks;
            // Take the latest array of active tracks each time around the loop
            // The call below prepares the tracks; its return status tells us whether there is data to fetch
            mixerStatus = prepareTracks_l(activeTracks, &tracksToRemove);
        }
        // LIKELY is a GCC branch-prediction hint that helps optimize the generated code; just read it as TRUE
        if (LIKELY(mixerStatus == MIXER_TRACKS_READY)) {
            // mix buffers...
            // Invoke the mixer with the buffer; afterwards curBuf holds the mixed data
            mAudioMixer->process(curBuf);
            sleepTime = 0;
            standbyTime = systemTime() + kStandbyTimeInNsecs;
        }
        // There is data to write to the hardware, so we certainly must not sleep
        if (sleepTime == 0) {
           // Write the mixed data to the output. mOutput is the AudioStreamOut created by the Audio HAL object; we will analyze it later
           int bytesWritten = (int)mOutput->write(curBuf, mixBufferSize);
           mStandby = false;
        } else {
            // No data to write, so sleep for a while
            usleep(sleepTime);
        }
    }
}

The two most important calls in MixerThread's loop are prepareTracks_l and mAudioMixer->process.

36 prepareTracks_l

uint32_t AudioFlinger::MixerThread::prepareTracks_l(const SortedVector< wp<Track> >& activeTracks, Vector< sp<Track> > *tracksToRemove)
{
    uint32_t mixerStatus = MIXER_IDLE;
    // Get the number of active tracks; assuming it is just the AT we created, count = 1
    size_t count = activeTracks.size();
    float masterVolume = mMasterVolume;
    bool  masterMute = mMasterMute;
    for (size_t i=0 ; i<count ; i++) {
        sp<Track> t = activeTracks[i].promote();
        Track* const track = t.get();
        // Get the cross-process shared object constructed with placement new
        audio_track_cblk_t* cblk = track->cblk();
        // Tell the mixer which track is currently active
        mAudioMixer->setActiveTrack(track->name());
        if (cblk->framesReady() && (track->isReady() || track->isStopped()) &&
                !track->isPaused() && !track->isTerminated())
        {
            // compute volume for this track
            int16_t left, right;
            if (track->isMuted() || masterMute || track->isPausing() ||
                mStreamTypes[track->type()].mute) {
                left = right = 0;
                if (track->isPausing()) {
                    track->setPaused();
                }
            } else {
                // Assume the volume set by AT is non-zero (we want to hear something!), so we take the else branch
                // read original volumes with volume control
                // Compute the volume
                float typeVolume = mStreamTypes[track->type()].volume;
                float v = masterVolume * typeVolume;
                float v_clamped = v * cblk->volume[0];
                if (v_clamped > MAX_GAIN) v_clamped = MAX_GAIN;
                left = int16_t(v_clamped);
                v_clamped = v * cblk->volume[1];
                if (v_clamped > MAX_GAIN) v_clamped = MAX_GAIN;
                right = int16_t(v_clamped);
            }
            // Note: here the mixer is given its data source, a track; recall that Track derives from AudioBufferProvider
            mAudioMixer->setBufferProvider(track);
            mAudioMixer->enable(AudioMixer::MIXING);
            int param = AudioMixer::VOLUME;
            // Set the left/right volume and other parameters for this track
            mAudioMixer->setParameter(param, AudioMixer::VOLUME0, left);
            mAudioMixer->setParameter(param, AudioMixer::VOLUME1, right);
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::FORMAT, track->format());
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::CHANNEL_COUNT, track->channelCount());
            mAudioMixer->setParameter(
                AudioMixer::RESAMPLE,
                AudioMixer::SAMPLE_RATE,
                int(cblk->sampleRate));
        } else {
            if (track->isStopped()) {
                track->reset();
            }
            // If this track has stopped, add it to the tracksToRemove queue and disable its mixing in the AudioMixer
            if (track->isTerminated() || track->isStopped() || track->isPaused()) {
                tracksToRemove->add(track);
                mAudioMixer->disable(AudioMixer::MIXING);
            } else {
                mAudioMixer->disable(AudioMixer::MIXING);
            }
        }
    }
    // remove all the tracks that need to be...
    count = tracksToRemove->size();
    return mixerStatus;
}

prepareTracks_l configures the mixer according to the current queue of active tracks.
As you would expect, each track therefore has a corresponding entry inside the mixer (AudioMixer).
37 mAudioMixer-setParameter
38 mAudioMixer-process

void AudioMixer::process(void* output)
{
    // hook: the processing routine installed earlier (see step 13)
    mState.hook(&mState, output);
}

39 process__OneTrack16BitsStereoNoResampling
A single track, 16-bit stereo, no resampling needed.

void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state, void* output)
{
    const int i = 31 - __builtin_clz(state->enabledTracks);
    const track_t& t = state->tracks[i];
    AudioBufferProvider::Buffer& b(t.buffer);
    int32_t* out = static_cast<int32_t*>(output);
    size_t numFrames = state->frameCount;
    const int16_t vl = t.volume[0];
    const int16_t vr = t.volume[1];
    const uint32_t vrl = t.volumeRL;
    while (numFrames) {
        b.frameCount = numFrames;
        // Obtain a buffer
        t.bufferProvider->getNextBuffer(&b);
        int16_t const *in = b.i16;
        size_t outFrames = b.frameCount;
        if (UNLIKELY(...)) {
            // this branch is not taken in our scenario
        } else {
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        }
        numFrames -= b.frameCount;
        // Release the buffer
        t.bufferProvider->releaseBuffer(&b);
    }
}

40 t.bufferProvider-getNextBuffer

Up to this point we still have not seen where the data written by the AT side into the shared memory is actually read, so we have to look at the bufferProvider.
Note that the call goes through the AudioBufferProvider base class, but the actual object is a Track, which derives from AudioBufferProvider; the one used here is the PlaybackThread's Track.
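
For reference, the AudioBufferProvider interface that Track implements looks roughly like this (a sketch from memory of this era's AudioBufferProvider.h; treat the details as approximate):

#include <cstddef>
#include <cstdint>

typedef int32_t status_t;   // as in Android's Errors.h

class AudioBufferProvider
{
public:
    struct Buffer {
        union {
            void*    raw;
            int16_t* i16;   // what the mixer reads as b.i16 in step 39
            int8_t*  i8;
        };
        size_t frameCount;  // in: frames requested, out: frames actually provided
    };

    virtual ~AudioBufferProvider() {}
    // Hand out the next chunk of playable data.
    virtual status_t getNextBuffer(Buffer* buffer) = 0;
    // Give the chunk back once it has been consumed.
    virtual void releaseBuffer(Buffer* buffer) = 0;
};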

status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(AudioBufferProvider::Buffer* buffer)
{
     // Finally, here is the cblk
     audio_track_cblk_t* cblk = this->cblk();
     uint32_t framesReady;
     uint32_t framesReq = buffer->frameCount;
     // Check whether data is ready
     framesReady = cblk->framesReady();
     if (LIKELY(framesReady)) {
        uint32_t s = cblk->server;
        uint32_t bufferEnd = cblk->serverBase + cblk->frameCount;
        bufferEnd = (cblk->loopEnd < bufferEnd) ? cblk->loopEnd : bufferEnd;
        if (framesReq > framesReady) {
            framesReq = framesReady;
        }
        if (s + framesReq > bufferEnd) {
            framesReq = bufferEnd - s;
        }
        // Get the actual data address
        buffer->raw = getBuffer(s, framesReq);
        if (buffer->raw == 0) goto getNextBuffer_exit;
        buffer->frameCount = framesReq;
        return NO_ERROR;
     }
getNextBuffer_exit:
    buffer->raw = 0;
    buffer->frameCount = 0;
    return NOT_ENOUGH_DATA;
}

So this is how the data written by AudioTrack is ultimately consumed!

41 cblk = this-cblk
42 cblk-framesReady
43 buffer-raw = getBuffer
44 t.bufferProvider-releaseBuffer

void AudioFlinger::ThreadBase::TrackBase::releaseBuffer(AudioBufferProvider::Buffer* buffer)
{
    buffer->raw = 0;
    mFrameCount = buffer->frameCount;
    step();    // advance the read (server) position in the control block
    buffer->frameCount = 0;
}
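
releaseBuffer calls step(), which advances the server (read) position in the control block. The control block is essentially ring-buffer bookkeeping shared between the AT writer and the AF reader; the following is a minimal sketch of that idea only, not the real audio_track_cblk_t:

// Minimal sketch of the ring-buffer bookkeeping behind audio_track_cblk_t
// (field names simplified; NOT the real structure).
#include <cstdint>

struct cblk_sketch_t {
    volatile uint32_t user;       // total frames written so far by AudioTrack (client)
    volatile uint32_t server;     // total frames consumed so far by AudioFlinger (server)
    uint32_t          frameCount; // capacity of the shared data buffer, in frames

    // What the mixer asks in step 42: how many frames are ready to read?
    uint32_t framesReady() const { return user - server; }

    // What the writer checks before writing: how much room is left?
    uint32_t framesAvailable() const { return frameCount - framesReady(); }
};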
