Android Audio Code Analysis 10 - The audio_track_cblk_t::framesReady Function

When we looked at AudioTrack's write function, we learned that audio data is ultimately written into the audio_track_cblk_t structure.
That structure is created in AudioFlinger.
So how does AudioFlinger use this data?
That is what we will study today.


When writing data, we called audio_track_cblk_t::framesAvailable_l to check whether there was free space available to write into.
The audio_track_cblk_t class has another function, framesReady; judging by the name, it should tell us how much data is ready.
Presumably, when AudioFlinger consumes audio data, it first calls framesReady to see how many frames have been written, and then uses that data.


*****************************************Source*************************************************
uint32_t audio_track_cblk_t::framesReady()
{
    uint64_t u = this->user;
    uint64_t s = this->server;


    if (flags & CBLK_DIRECTION_MSK) {
        if (u < loopEnd) {
            return u - s;
        } else {
            Mutex::Autolock _l(lock);
            if (loopCount >= 0) {
                return (loopEnd - loopStart)*loopCount + u - s;
            } else {
                return UINT_MAX;
            }
        }
    } else {
        return s - u;
    }
}



**********************************************************************************************
Source path:
frameworks\base\media\libmedia\AudioTrack.cpp


#######################Notes################################

/*Until now we have traced code forward from caller to callee; today we trace backward from callee to caller.
Let's see where framesReady is called.
A search turns up quite a few call sites.
Most of them only use the return value in a condition;
only AudioFlinger::PlaybackThread::Track::getNextBuffer saves the return value.
From the write path we know that framesAvailable's return value is saved and used.
By analogy, on the read path the return value of framesReady should be saved and used as well.
So we arrive at AudioFlinger::PlaybackThread::Track::getNextBuffer:*/
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(AudioBufferProvider::Buffer* buffer)
{
     audio_track_cblk_t* cblk = this->cblk();
     uint32_t framesReady;
     uint32_t framesReq = buffer->frameCount;


     // Check if last stepServer failed, try to step now
     if (mFlags & TrackBase::STEPSERVER_FAILED) {
         if (!step())  goto getNextBuffer_exit;
         LOGV("stepServer recovered");
         mFlags &= ~TrackBase::STEPSERVER_FAILED;
     }


     framesReady = cblk->framesReady();


     if (LIKELY(framesReady)) {
        uint64_t s = cblk->server;
        uint64_t bufferEnd = cblk->serverBase + cblk->frameCount;


        bufferEnd = (cblk->loopEnd < bufferEnd) ? cblk->loopEnd : bufferEnd;
        if (framesReq > framesReady) {
            framesReq = framesReady;
        }
        if (s + framesReq > bufferEnd) {
            framesReq = bufferEnd - s;
        }


         buffer->raw = getBuffer(s, framesReq);
         if (buffer->raw == 0) goto getNextBuffer_exit;
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void* AudioFlinger::ThreadBase::TrackBase::getBuffer(uint32_t offset, uint32_t frames) const {
    audio_track_cblk_t* cblk = this->cblk();
    int8_t *bufferStart = (int8_t *)mBuffer + (offset-cblk->serverBase)*cblk->frameSize;
    int8_t *bufferEnd = bufferStart + frames * cblk->frameSize;


    // Check validity of returned pointer in case the track control block would have been corrupted.
    if (bufferStart < mBuffer || bufferStart > bufferEnd || bufferEnd > mBufferEnd ||
        ((unsigned long)bufferStart & (unsigned long)(cblk->frameSize - 1))) {
        LOGE("TrackBase::getBuffer buffer out of range:\n    start: %p, end %p , mBuffer %p mBufferEnd %p\n    \
                server %lld, serverBase %lld, user %lld, userBase %lld, channelCount %d",
                bufferStart, bufferEnd, mBuffer, mBufferEnd,
                cblk->server, cblk->serverBase, cblk->user, cblk->userBase, cblk->channelCount);
        return 0;
    }


    return bufferStart;
}
// ----------------------------------------------------------------


         buffer->frameCount = framesReq;
        return NO_ERROR;
     }


getNextBuffer_exit:
     buffer->raw = 0;
     buffer->frameCount = 0;
     LOGV("getNextBuffer() no more data for track %d on thread %p", mName, mThread.unsafe_get());
     return NOT_ENOUGH_DATA;
}
// ----------------------------------------------------------------

/*
Now let's see who calls AudioFlinger::PlaybackThread::Track::getNextBuffer.
Within AudioFlinger itself, a search shows only AudioFlinger::DirectOutputThread::threadLoop calling it directly,
but the AudioTrack we are discussing plays music through the mixer, so that is not the path we want.


We also find several call sites in AudioMixer, all of the form:
t.bufferProvider->getNextBuffer(&t.buffer);
So how are bufferProvider and AudioFlinger::PlaybackThread::Track related?


bufferProvider is declared as: AudioBufferProvider*                bufferProvider;
and the inheritance chain is:
class Track : public TrackBase
class TrackBase : public AudioBufferProvider, public RefBase


So bufferProvider ultimately points to an AudioFlinger::PlaybackThread::Track object,
and t.bufferProvider->getNextBuffer(&t.buffer) actually invokes AudioFlinger::PlaybackThread::Track::getNextBuffer.


AudioFlinger::MixerThread::prepareTracks_l calls AudioMixer::setBufferProvider,
and AudioMixer::setBufferProvider is where bufferProvider gets assigned.


prepareTracks_l, in turn, is called from AudioFlinger::MixerThread::threadLoop
and AudioFlinger::DuplicatingThread::threadLoop.


Let's first look at the AudioMixer call sites of AudioFlinger::PlaybackThread::Track::getNextBuffer.
The following AudioMixer functions all call getNextBuffer:
process__nop
process__genericNoResampling
process__genericResampling
process__OneTrack16BitsStereoNoResampling
process__TwoTracks16BitsStereoNoResampling


We will take process__OneTrack16BitsStereoNoResampling as our example.*/
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// one track, 16 bits stereo without resampling is the most common case
void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state)
{
    const int i = 31 - __builtin_clz(state->enabledTracks);
    const track_t& t = state->tracks[i];


    AudioBufferProvider::Buffer& b(t.buffer);


    int32_t* out = t.mainBuffer;
    size_t numFrames = state->frameCount;


    const int16_t vl = t.volume[0];
    const int16_t vr = t.volume[1];
    const uint32_t vrl = t.volumeRL;
    while (numFrames) {
        b.frameCount = numFrames;
        t.bufferProvider->getNextBuffer(&b);
        int16_t const *in = b.i16;


        // in == NULL can happen if the track was flushed just after having
        // been enabled for mixing.
        if (in == NULL || ((unsigned long)in & 3)) {
            memset(out, 0, numFrames*MAX_NUM_CHANNELS*sizeof(int16_t));
            LOGE_IF(((unsigned long)in & 3), "process stereo track: input buffer alignment pb: buffer %p track %d, channels %d, needs %08x",
                    in, i, t.channelCount, t.needs);
            return;
        }
        size_t outFrames = b.frameCount;


        if (UNLIKELY(uint32_t(vl) > UNITY_GAIN || uint32_t(vr) > UNITY_GAIN)) {
            // volume is boosted, so we might need to clamp even though
            // we process only one track.
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                // clamping...
                l = clamp16(l);
                r = clamp16(r);
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        } else {
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        }
        numFrames -= b.frameCount;
        t.bufferProvider->releaseBuffer(&b);
    }
}
// ----------------------------------------------------------------
/*As we can see, the main job of this function is to copy the data from audio_track_cblk_t into the track_t's mainBuffer.
So what is track_t? It is a member of state_t.
And what is state_t? It is the parameter passed into the function.


Two questions to explore now:
1. How does process__OneTrack16BitsStereoNoResampling get called, and where does its parameter come from?
2. How is the data placed in track_t's mainBuffer used afterwards?
*/

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// First, let's see how process__OneTrack16BitsStereoNoResampling gets called.
// AudioMixer::process__validate makes use of process__OneTrack16BitsStereoNoResampling.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioMixer::process__validate(state_t* state)
{
    LOGW_IF(!state->needsChanged,
        "in process__validate() but nothing's invalid");


    uint32_t changed = state->needsChanged;
    state->needsChanged = 0; // clear the validation flag


    // recompute which tracks are enabled / disabled
    uint32_t enabled = 0;
    uint32_t disabled = 0;
    while (changed) {
        const int i = 31 - __builtin_clz(changed);
        const uint32_t mask = 1<<i;
        changed &= ~mask;
        track_t& t = state->tracks[i];
        (t.enabled ? enabled : disabled) |= mask;
    }
    state->enabledTracks &= ~disabled;
    state->enabledTracks |=  enabled;


    // compute everything we need...
    int countActiveTracks = 0;
    int all16BitsStereoNoResample = 1;
    int resampling = 0;
    int volumeRamp = 0;
    uint32_t en = state->enabledTracks;
    while (en) {
        const int i = 31 - __builtin_clz(en);
        en &= ~(1<<i);


        countActiveTracks++;
        track_t& t = state->tracks[i];
        uint32_t n = 0;
        n |= NEEDS_CHANNEL_1 + t.channelCount - 1;
        n |= NEEDS_FORMAT_16;
        n |= t.doesResample() ? NEEDS_RESAMPLE_ENABLED : NEEDS_RESAMPLE_DISABLED;
        if (t.auxLevel != 0 && t.auxBuffer != NULL) {
            n |= NEEDS_AUX_ENABLED;
        }


        if (t.volumeInc[0]|t.volumeInc[1]) {
            volumeRamp = 1;
        } else if (!t.doesResample() && t.volumeRL == 0) {
            n |= NEEDS_MUTE_ENABLED;
        }
        t.needs = n;


        if ((n & NEEDS_MUTE__MASK) == NEEDS_MUTE_ENABLED) {
            t.hook = track__nop;
        } else {
            if ((n & NEEDS_AUX__MASK) == NEEDS_AUX_ENABLED) {
                all16BitsStereoNoResample = 0;
            }
            if ((n & NEEDS_RESAMPLE__MASK) == NEEDS_RESAMPLE_ENABLED) {
                all16BitsStereoNoResample = 0;
                resampling = 1;
                t.hook = track__genericResample;
            } else {
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_1){
                    t.hook = track__16BitsMono;
                    all16BitsStereoNoResample = 0;
                }
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_2){
                    t.hook = track__16BitsStereo;
                }
            }
        }
    }


    // select the processing hooks
    state->hook = process__nop;
    if (countActiveTracks) {
        if (resampling) {
            if (!state->outputTemp) {
                state->outputTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            if (!state->resampleTemp) {
                state->resampleTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            state->hook = process__genericResampling;
        } else {
            if (state->outputTemp) {
                delete [] state->outputTemp;
                state->outputTemp = 0;
            }
            if (state->resampleTemp) {
                delete [] state->resampleTemp;
                state->resampleTemp = 0;
            }
            state->hook = process__genericNoResampling;
            if (all16BitsStereoNoResample && !volumeRamp) {
                if (countActiveTracks == 1) {
                    state->hook = process__OneTrack16BitsStereoNoResampling;
                }
            }
        }
    }


    LOGV("mixer configuration change: %d activeTracks (%08x) "
        "all16BitsStereoNoResample=%d, resampling=%d, volumeRamp=%d",
        countActiveTracks, state->enabledTracks,
        all16BitsStereoNoResample, resampling, volumeRamp);


   state->hook(state);


   // Now that the volume ramp has been done, set optimal state and
   // track hooks for subsequent mixer process
   if (countActiveTracks) {
       int allMuted = 1;
       uint32_t en = state->enabledTracks;
       while (en) {
           const int i = 31 - __builtin_clz(en);
           en &= ~(1<<i);
           track_t& t = state->tracks[i];
           if (!t.doesResample() && t.volumeRL == 0)
           {
               t.needs |= NEEDS_MUTE_ENABLED;
               t.hook = track__nop;
           } else {
               allMuted = 0;
           }
       }
       if (allMuted) {
           state->hook = process__nop;
       } else if (all16BitsStereoNoResample) {
           if (countActiveTracks == 1) {
              state->hook = process__OneTrack16BitsStereoNoResampling;
           }
       }
   }
}
// ----------------------------------------------------------------
/* So process__validate assigns the selected function to state's hook.
state is the parameter passed in; hook is a function pointer.


This raises two more questions:
1. Since hook is a function pointer, when is it invoked?
2. When is process__validate itself called?


The first one is easy: process__validate invokes hook itself, via state->hook(state) near the end.


Now let's see where process__validate is used.
AudioMixer::invalidateState assigns process__validate to mState's hook.
mState is a member variable of AudioMixer.
hook is a function pointer.*/
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 void AudioMixer::invalidateState(uint32_t mask)
 {
    if (mask) {
        mState.needsChanged |= mask;
        mState.hook = process__validate;
    }
 }
// ----------------------------------------------------------------

/*
As before, two more questions arise:
1. Where is mState's hook invoked?
2. Where is invalidateState called?
*/

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
/* The first question first.
AudioMixer::process invokes mState's hook:
void AudioMixer::process()
{
    mState.hook(&mState);
}
So the state parameter of process__OneTrack16BitsStereoNoResampling is this mState:
mState.hook(&mState);
mState.hook = process__validate;
state->hook = process__OneTrack16BitsStereoNoResampling;
state->hook(state);*/
// ----------------------------------------------------------------
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
/* Now the second question: where is invalidateState called?
It is called from quite a few places, but we only care about two of them:
AudioMixer::enable
AudioMixer::setParameter
Both of these are called from AudioFlinger::MixerThread::prepareTracks_l,
and prepareTracks_l is called from AudioFlinger::MixerThread::threadLoop.
*/

// Let's take them one at a time.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

status_t AudioMixer::enable(int name)
{
    switch (name) {
        case MIXING: {
            if (mState.tracks[ mActiveTrack ].enabled != 1) {
                mState.tracks[ mActiveTrack ].enabled = 1;
                LOGV("enable(%d)", mActiveTrack);
                invalidateState(1<<mActiveTrack);
            }
        } break;
        default:
            return NAME_NOT_FOUND;
    }
    return NO_ERROR;
}


// ----------------------------------------------------------------
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setParameter(int target, int name, void *value)
{
    int valueInt = (int)value;
    int32_t *valueBuf = (int32_t *)value;


    switch (target) {
    case TRACK:
        if (name == CHANNEL_COUNT) {
            if ((uint32_t(valueInt) <= MAX_NUM_CHANNELS) && (valueInt)) {
                if (mState.tracks[ mActiveTrack ].channelCount != valueInt) {
                    mState.tracks[ mActiveTrack ].channelCount = valueInt;
                    LOGV("setParameter(TRACK, CHANNEL_COUNT, %d)", valueInt);
                    invalidateState(1<<mActiveTrack);
                }
                return NO_ERROR;
            }
        }
        if (name == MAIN_BUFFER) {
            if (mState.tracks[ mActiveTrack ].mainBuffer != valueBuf) {
                mState.tracks[ mActiveTrack ].mainBuffer = valueBuf;
                LOGV("setParameter(TRACK, MAIN_BUFFER, %p)", valueBuf);
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }
        if (name == AUX_BUFFER) {
            if (mState.tracks[ mActiveTrack ].auxBuffer != valueBuf) {
                mState.tracks[ mActiveTrack ].auxBuffer = valueBuf;
                LOGV("setParameter(TRACK, AUX_BUFFER, %p)", valueBuf);
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }


        break;
    case RESAMPLE:
        if (name == SAMPLE_RATE) {
            if (valueInt > 0) {
                track_t& track = mState.tracks[ mActiveTrack ];
                if (track.setResampler(uint32_t(valueInt), mSampleRate)) {
                    LOGV("setParameter(RESAMPLE, SAMPLE_RATE, %u)",
                            uint32_t(valueInt));
                    invalidateState(1<<mActiveTrack);
                }
                return NO_ERROR;
            }
        }
        break;
    case RAMP_VOLUME:
    case VOLUME:
        if ((uint32_t(name-VOLUME0) < MAX_NUM_CHANNELS)) {
            track_t& track = mState.tracks[ mActiveTrack ];
            if (track.volume[name-VOLUME0] != valueInt) {
                LOGV("setParameter(VOLUME, VOLUME0/1: %04x)", valueInt);
                track.prevVolume[name-VOLUME0] = track.volume[name-VOLUME0] << 16;
                track.volume[name-VOLUME0] = valueInt;
                if (target == VOLUME) {
                    track.prevVolume[name-VOLUME0] = valueInt << 16;
                    track.volumeInc[name-VOLUME0] = 0;
                } else {
                    int32_t d = (valueInt<<16) - track.prevVolume[name-VOLUME0];
                    int32_t volInc = d / int32_t(mState.frameCount);
                    track.volumeInc[name-VOLUME0] = volInc;
                    if (volInc == 0) {
                        track.prevVolume[name-VOLUME0] = valueInt << 16;
                    }
                }
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        } else if (name == AUXLEVEL) {
            track_t& track = mState.tracks[ mActiveTrack ];
            if (track.auxLevel != valueInt) {
                LOGV("setParameter(VOLUME, AUXLEVEL: %04x)", valueInt);
                track.prevAuxLevel = track.auxLevel << 16;
                track.auxLevel = valueInt;
                if (target == VOLUME) {
                    track.prevAuxLevel = valueInt << 16;
                    track.auxInc = 0;
                } else {
                    int32_t d = (valueInt<<16) - track.prevAuxLevel;
                    int32_t volInc = d / int32_t(mState.frameCount);
                    track.auxInc = volInc;
                    if (volInc == 0) {
                        track.prevAuxLevel = valueInt << 16;
                    }
                }
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }
        break;
    }
    return BAD_VALUE;
}
// ----------------------------------------------------------------
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// prepareTracks_l() must be called with ThreadBase::mLock held
uint32_t AudioFlinger::MixerThread::prepareTracks_l(const SortedVector< wp<Track> >& activeTracks, Vector< sp<Track> > *tracksToRemove)
{


    uint32_t mixerStatus = MIXER_IDLE;
    // find out which tracks need to be processed
    size_t count = activeTracks.size();
    size_t mixedTracks = 0;
    size_t tracksWithEffect = 0;


    float masterVolume = mMasterVolume;
    bool  masterMute = mMasterMute;


    if (masterMute) {
        masterVolume = 0;
    }
#ifdef LVMX
    bool tracksConnectedChanged = false;
    bool stateChanged = false;


    int audioOutputType = LifeVibes::getMixerType(mId, mType);
    if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType))
    {
        int activeTypes = 0;
        for (size_t i=0 ; i<count ; i++) {
            sp<Track> t = activeTracks[i].promote();
            if (t == 0) continue;
            Track* const track = t.get();
            int iTracktype=track->type();
            activeTypes |= 1<<track->type();
        }
        LifeVibes::computeVolumes(audioOutputType, activeTypes, tracksConnectedChanged, stateChanged, masterVolume, masterMute);
    }
#endif
    // Delegate master volume control to effect in output mix effect chain if needed
    sp<EffectChain> chain = getEffectChain_l(AudioSystem::SESSION_OUTPUT_MIX);
    if (chain != 0) {
        uint32_t v = (uint32_t)(masterVolume * (1 << 24));
        chain->setVolume_l(&v, &v);
        masterVolume = (float)((v + (1 << 23)) >> 24);
        chain.clear();
    }


    for (size_t i=0 ; i<count ; i++) {
        sp<Track> t = activeTracks[i].promote();
        if (t == 0) continue;


        Track* const track = t.get();
        audio_track_cblk_t* cblk = track->cblk();


        // The first time a track is added we wait
        // for all its buffers to be filled before processing it
        mAudioMixer->setActiveTrack(track->name());
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setActiveTrack(int track)
{
    if (uint32_t(track-TRACK0) >= MAX_NUM_TRACKS) {
        return BAD_VALUE;
    }
    mActiveTrack = track - TRACK0;
    return NO_ERROR;
}
// ----------------------------------------------------------------
        if (cblk->framesReady() && track->isReady() &&
                !track->isPaused() && !track->isTerminated())
        {
            //LOGV("track %d u=%08x, s=%08x [OK] on thread %p", track->name(), cblk->user, cblk->server, this);


            mixedTracks++;


            // track->mainBuffer() != mMixBuffer means there is an effect chain
            // connected to the track
            chain.clear();
            if (track->mainBuffer() != mMixBuffer) {
                chain = getEffectChain_l(track->sessionId());
                // Delegate volume control to effect in track effect chain if needed
                if (chain != 0) {
                    tracksWithEffect++;
                } else {
                    LOGW("prepareTracks_l(): track %08x attached to effect but no chain found on session %d",
                            track->name(), track->sessionId());
                }
            }




            int param = AudioMixer::VOLUME;
            if (track->mFillingUpStatus == Track::FS_FILLED) {
                // no ramp for the first volume setting
                track->mFillingUpStatus = Track::FS_ACTIVE;
                if (track->mState == TrackBase::RESUMING) {
                    track->mState = TrackBase::ACTIVE;
                    param = AudioMixer::RAMP_VOLUME;
                }
            } else if (cblk->server != 0) {
                // If the track is stopped before the first frame was mixed,
                // do not apply ramp
                param = AudioMixer::RAMP_VOLUME;
            }


            // compute volume for this track
            uint32_t vl, vr, va;
            if (track->isMuted() || track->isPausing() ||
                mStreamTypes[track->type()].mute) {
                vl = vr = va = 0;
                if (track->isPausing()) {
                    track->setPaused();
                }
            } else {


                // read original volumes with volume control
                float typeVolume = mStreamTypes[track->type()].volume;
#ifdef LVMX
                bool streamMute=false;
                // read the volume from the LivesVibes audio engine.
                if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType))
                {
                    LifeVibes::getStreamVolumes(audioOutputType, track->type(), &typeVolume, &streamMute);
                    if (streamMute) {
                        typeVolume = 0;
                    }
                }
#endif
                float v = masterVolume * typeVolume;
                vl = (uint32_t)(v * cblk->volume[0]) << 12;
                vr = (uint32_t)(v * cblk->volume[1]) << 12;


                va = (uint32_t)(v * cblk->sendLevel);
            }
            // Delegate volume control to effect in track effect chain if needed
            if (chain != 0 && chain->setVolume_l(&vl, &vr)) {
                // Do not ramp volume if volume is controlled by effect
                param = AudioMixer::VOLUME;
                track->mHasVolumeController = true;
            } else {
                // force no volume ramp when volume controller was just disabled or removed
                // from effect chain to avoid volume spike
                if (track->mHasVolumeController) {
                    param = AudioMixer::VOLUME;
                }
                track->mHasVolumeController = false;
            }


            // Convert volumes from 8.24 to 4.12 format
            int16_t left, right, aux;
            uint32_t v_clamped = (vl + (1 << 11)) >> 12;
            if (v_clamped > MAX_GAIN_INT) v_clamped = MAX_GAIN_INT;
            left = int16_t(v_clamped);
            v_clamped = (vr + (1 << 11)) >> 12;
            if (v_clamped > MAX_GAIN_INT) v_clamped = MAX_GAIN_INT;
            right = int16_t(v_clamped);


            if (va > MAX_GAIN_INT) va = MAX_GAIN_INT;
            aux = int16_t(va);


#ifdef LVMX
            if ( tracksConnectedChanged || stateChanged )
            {
                 // only do the ramp when the volume is changed by the user / application
                 param = AudioMixer::VOLUME;
            }
#endif


            // XXX: these things DON'T need to be done each time
// As noted earlier, getNextBuffer is invoked as: t.bufferProvider->getNextBuffer(&b);
// If our guess is right, this should be where bufferProvider gets assigned.
            mAudioMixer->setBufferProvider(track);
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::setBufferProvider(AudioBufferProvider* buffer)
{
    mState.tracks[ mActiveTrack ].bufferProvider = buffer;
    return NO_ERROR;
}
// ----------------------------------------------------------------
// This is the AudioMixer::enable function discussed above.
            mAudioMixer->enable(AudioMixer::MIXING);


// And this is the AudioMixer::setParameter function discussed above.
            mAudioMixer->setParameter(param, AudioMixer::VOLUME0, (void *)left);
            mAudioMixer->setParameter(param, AudioMixer::VOLUME1, (void *)right);
            mAudioMixer->setParameter(param, AudioMixer::AUXLEVEL, (void *)aux);
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::FORMAT, (void *)track->format());
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::CHANNEL_COUNT, (void *)track->channelCount());
            mAudioMixer->setParameter(
                AudioMixer::RESAMPLE,
                AudioMixer::SAMPLE_RATE,
                (void *)(cblk->sampleRate));
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::MAIN_BUFFER, (void *)track->mainBuffer());
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::AUX_BUFFER, (void *)track->auxBuffer());


            // reset retry count
            track->mRetryCount = kMaxTrackRetries;
            mixerStatus = MIXER_TRACKS_READY;
        } else {
            //LOGV("track %d u=%08x, s=%08x [NOT READY] on thread %p", track->name(), cblk->user, cblk->server, this);
            if (track->isStopped()) {
                track->reset();
            }
            if (track->isTerminated() || track->isStopped() || track->isPaused()) {
                // We have consumed all the buffers of this track.
                // Remove it from the list of active tracks.
                tracksToRemove->add(track);
            } else {
                // No buffers for this track. Give it a few chances to
                // fill a buffer, then remove it from active list.
// If the track has no data, give it a few chances to fill a buffer.
// If it still has no data after mRetryCount tries, remove it from the active list.
                if (--(track->mRetryCount) <= 0) {
                    LOGV("BUFFER TIMEOUT: remove(%d) from active list on thread %p", track->name(), this);
                    tracksToRemove->add(track);
                    // indicate to client process that the track was disabled because of underrun
                    cblk->flags |= CBLK_DISABLED_ON;
                } else if (mixerStatus != MIXER_TRACKS_READY) {
                    mixerStatus = MIXER_TRACKS_ENABLED;
                }
            }
            mAudioMixer->disable(AudioMixer::MIXING);
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioMixer::disable(int name)
{
    switch (name) {
        case MIXING: {
            if (mState.tracks[ mActiveTrack ].enabled != 0) {
                mState.tracks[ mActiveTrack ].enabled = 0;
                LOGV("disable(%d)", mActiveTrack);
                invalidateState(1<<mActiveTrack);
            }
        } break;
        default:
            return NAME_NOT_FOUND;
    }
    return NO_ERROR;
}
// disable is called here as well.
// ----------------------------------------------------------------
        }
    }


// Remove the tracks that need to be removed.
    // remove all the tracks that need to be...
    count = tracksToRemove->size();
    if (UNLIKELY(count)) {
        for (size_t i=0 ; i<count ; i++) {
            const sp<Track>& track = tracksToRemove->itemAt(i);
            mActiveTracks.remove(track);
            if (track->mainBuffer() != mMixBuffer) {
                chain = getEffectChain_l(track->sessionId());
                if (chain != 0) {
                    LOGV("stopping track on chain %p for session Id: %d", chain.get(), track->sessionId());
                    chain->stopTrack();
                }
            }
            if (track->isTerminated()) {
                mTracks.remove(track);
                deleteTrackName_l(track->mName);
            }
        }
    }


    // mix buffer must be cleared if all tracks are connected to an
    // effect chain as in this case the mixer will not write to
    // mix buffer and track effects will accumulate into it
    if (mixedTracks != 0 && mixedTracks == tracksWithEffect) {
        memset(mMixBuffer, 0, mFrameCount * mChannelCount * sizeof(int16_t));
    }


    return mixerStatus;
}
// ----------------------------------------------------------------
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
bool AudioFlinger::MixerThread::threadLoop()
{
    Vector< sp<Track> > tracksToRemove;
    uint32_t mixerStatus = MIXER_IDLE;
    nsecs_t standbyTime = systemTime();
    size_t mixBufferSize = mFrameCount * mFrameSize;
    // FIXME: Relaxed timing because of a certain device that can't meet latency
    // Should be reduced to 2x after the vendor fixes the driver issue
    nsecs_t maxPeriod = seconds(mFrameCount) / mSampleRate * 3;
    nsecs_t lastWarning = 0;
    bool longStandbyExit = false;
    uint32_t activeSleepTime = activeSleepTimeUs();
    uint32_t idleSleepTime = idleSleepTimeUs();
    uint32_t sleepTime = idleSleepTime;
    Vector< sp<EffectChain> > effectChains;


    while (!exitPending())
    {
        processConfigEvents();
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioFlinger::ThreadBase::processConfigEvents()
{
    mLock.lock();
    while(!mConfigEvents.isEmpty()) {
        LOGV("processConfigEvents() remaining events %d", mConfigEvents.size());
        ConfigEvent *configEvent = mConfigEvents[0];
        mConfigEvents.removeAt(0);
        // release mLock before locking AudioFlinger mLock: lock order is always
        // AudioFlinger then ThreadBase to avoid cross deadlock
        mLock.unlock();
        mAudioFlinger->mLock.lock();
        audioConfigChanged_l(configEvent->mEvent, configEvent->mParam);
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// audioConfigChanged_l() must be called with AudioFlinger::mLock held
void AudioFlinger::audioConfigChanged_l(int event, int ioHandle, void *param2)
{
    size_t size = mNotificationClients.size();
    for (size_t i = 0; i < size; i++) {
        mNotificationClients.valueAt(i)->client()->ioConfigChanged(event, ioHandle, param2);
    }
}
// ----------------------------------------------------------------
        mAudioFlinger->mLock.unlock();
        delete configEvent;
        mLock.lock();
    }
    mLock.unlock();
}
// ----------------------------------------------------------------


        mixerStatus = MIXER_IDLE;
        { // scope for mLock


            Mutex::Autolock _l(mLock);


            if (checkForNewParameters_l()) {
                mixBufferSize = mFrameCount * mFrameSize;
                // FIXME: Relaxed timing because of a certain device that can't meet latency
                // Should be reduced to 2x after the vendor fixes the driver issue
                maxPeriod = seconds(mFrameCount) / mSampleRate * 3;
                activeSleepTime = activeSleepTimeUs();
                idleSleepTime = idleSleepTimeUs();
            }


            const SortedVector< wp<Track> >& activeTracks = mActiveTracks;


            // put audio hardware into standby after short delay
            if UNLIKELY((!activeTracks.size() && systemTime() > standbyTime) ||
                        mSuspended) {
                if (!mStandby) {
                    LOGV("Audio hardware entering standby, mixer %p, mSuspended %d\n", this, mSuspended);
                    mOutput->standby();
                    mStandby = true;
                    mBytesWritten = 0;
                }


                if (!activeTracks.size() && mConfigEvents.isEmpty()) {
                    // we're about to wait, flush the binder command buffer
                    IPCThreadState::self()->flushCommands();


                    if (exitPending()) break;


                    // wait until we have something to do...
                    LOGV("MixerThread %p TID %d going to sleep\n", this, gettid());
                    mWaitWorkCV.wait(mLock);
                    LOGV("MixerThread %p TID %d waking up\n", this, gettid());


                    if (mMasterMute == false) {
                        char value[PROPERTY_VALUE_MAX];
                        property_get("ro.audio.silent", value, "0");
                        if (atoi(value)) {
                            LOGD("Silence is golden");
                            setMasterMute(true);
                        }
                    }


                    standbyTime = systemTime() + kStandbyTimeInNsecs;
                    sleepTime = idleSleepTime;
                    continue;
                }
            }


// prepareTracks_l is called here
            mixerStatus = prepareTracks_l(activeTracks, &tracksToRemove);


            // prevent any changes in effect chain list and in each effect chain
            // during mixing and effect process as the audio buffers could be deleted
            // or modified if an effect is created or deleted
            lockEffectChains_l(effectChains);
       }


        if (LIKELY(mixerStatus == MIXER_TRACKS_READY)) {
// This is where the AudioMixer::process function discussed earlier is called
            // mix buffers...
            mAudioMixer->process();
            sleepTime = 0;
            standbyTime = systemTime() + kStandbyTimeInNsecs;
            //TODO: delay standby when effects have a tail
        } else {
            // If no tracks are ready, sleep once for the duration of an output
            // buffer size, then write 0s to the output
            if (sleepTime == 0) {
                if (mixerStatus == MIXER_TRACKS_ENABLED) {
                    sleepTime = activeSleepTime;
                } else {
                    sleepTime = idleSleepTime;
                }
            } else if (mBytesWritten != 0 ||
                       (mixerStatus == MIXER_TRACKS_ENABLED && longStandbyExit)) {
                memset (mMixBuffer, 0, mixBufferSize);
                sleepTime = 0;
                LOGV_IF((mBytesWritten == 0 && (mixerStatus == MIXER_TRACKS_ENABLED && longStandbyExit)), "anticipated start");
            }
            // TODO add standby time extension fct of effect tail
        }


        if (mSuspended) {
            sleepTime = suspendSleepTimeUs();
        }
// Write the data to the hardware
        // sleepTime == 0 means we must write to audio hardware
        if (sleepTime == 0) {
             for (size_t i = 0; i < effectChains.size(); i ++) {
                 effectChains[i]->process_l();
             }
             // enable changes in effect chain
             unlockEffectChains(effectChains);
#ifdef LVMX
            int audioOutputType = LifeVibes::getMixerType(mId, mType);
            if (LifeVibes::audioOutputTypeIsLifeVibes(audioOutputType)) {
               LifeVibes::process(audioOutputType, mMixBuffer, mixBufferSize);
            }
#endif
            mLastWriteTime = systemTime();
            mInWrite = true;
            mBytesWritten += mixBufferSize;


// This is where the actual write happens.
            int bytesWritten = (int)mOutput->write(mMixBuffer, mixBufferSize);
            if (bytesWritten < 0) mBytesWritten -= mixBufferSize;
            mNumWrites++;
            mInWrite = false;
            nsecs_t now = systemTime();
            nsecs_t delta = now - mLastWriteTime;
            if (delta > maxPeriod) {
                mNumDelayedWrites++;
                if ((now - lastWarning) > kWarningThrottle) {
                    LOGW("write blocked for %llu msecs, %d delayed writes, thread %p",
                            ns2ms(delta), mNumDelayedWrites, this);
                    lastWarning = now;
                }
                if (mStandby) {
                    longStandbyExit = true;
                }
            }
            mStandby = false;
        } else {
            // enable changes in effect chain
            unlockEffectChains(effectChains);
            usleep(sleepTime);
        }


        // finally let go of all our tracks, without the lock held
        // since we can't guarantee the destructors won't acquire that
        // same lock.
        tracksToRemove.clear();


        // Effect chains will be actually deleted here if they were removed from
        // mEffectChains list during mixing or effects processing
        effectChains.clear();
    }


    if (!mStandby) {
        mOutput->standby();
    }


    LOGV("MixerThread %p exiting", this);
    return false;
}
// ----------------------------------------------------------------
// ----------------------------------------------------------------


// ----------------------------------------------------------------

/*
Earlier, while reading AudioMixer::process__OneTrack16BitsStereoNoResampling, we left a question open:
how is the data placed into track_t's mainBuffer actually consumed?
The main job of AudioMixer::process__OneTrack16BitsStereoNoResampling is to copy data from audio_track_cblk_t into track_t's mainBuffer.
First, let's look at where mainBuffer comes from.
*/

// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// mainBuffer is a member of track_t.
// AudioMixer::setParameter assigns to mainBuffer.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
        if (name == MAIN_BUFFER) {
            if (mState.tracks[ mActiveTrack ].mainBuffer != valueBuf) {
                mState.tracks[ mActiveTrack ].mainBuffer = valueBuf;
                LOGV("setParameter(TRACK, MAIN_BUFFER, %p)", valueBuf);
                invalidateState(1<<mActiveTrack);
            }
            return NO_ERROR;
        }
// ----------------------------------------------------------------
// AudioFlinger::MixerThread::prepareTracks_l calls AudioMixer::setParameter.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
            mAudioMixer->setParameter(
                AudioMixer::TRACK,
                AudioMixer::MAIN_BUFFER, (void *)track->mainBuffer());

// This track was also used earlier:
// mAudioMixer->setBufferProvider(track);


// Where track comes from:
// Track* const track = t.get();


// Where t comes from:
// sp<Track> t = activeTracks[i].promote();


// activeTracks is a parameter of AudioFlinger::MixerThread::prepareTracks_l.


// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// AudioFlinger::MixerThread::prepareTracks_l is called from AudioFlinger::MixerThread::threadLoop:
mixerStatus = prepareTracks_l(activeTracks, &tracksToRemove);

/*
Where activeTracks comes from:
const SortedVector< wp<Track> >& activeTracks = mActiveTracks;


mActiveTracks is a member variable of PlaybackThread.


AudioFlinger::PlaybackThread::addTrack_l is what adds entries to mActiveTracks. */
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// addTrack_l() must be called with ThreadBase::mLock held
status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
{
    status_t status = ALREADY_EXISTS;


    // set retry count for buffer fill
    track->mRetryCount = kMaxTrackStartupRetries;
    if (mActiveTracks.indexOf(track) < 0) {
        // the track is newly added, make sure it fills up all its
        // buffers before playing. This is to ensure the client will
        // effectively get the latency it requested.
        track->mFillingUpStatus = Track::FS_FILLING;
        track->mResetDone = false;
        mActiveTracks.add(track);
        if (track->mainBuffer() != mMixBuffer) {
            sp<EffectChain> chain = getEffectChain_l(track->sessionId());
            if (chain != 0) {
                LOGV("addTrack_l() starting track on chain %p for session %d", chain.get(), track->sessionId());
                chain->startTrack();
            }
        }


        status = NO_ERROR;
    }


    LOGV("mWaitWorkCV.broadcast");
    mWaitWorkCV.broadcast();


    return status;
}
// ----------------------------------------------------------------


// AudioFlinger::PlaybackThread::Track::start calls AudioFlinger::PlaybackThread::addTrack_l.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::PlaybackThread::Track::start()
{
    status_t status = NO_ERROR;
    LOGV("start(%d), calling thread %d session %d",
            mName, IPCThreadState::self()->getCallingPid(), mSessionId);
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        int state = mState;
        // here the track could be either new, or restarted
        // in both cases "unstop" the track
        if (mState == PAUSED) {
            mState = TrackBase::RESUMING;
            LOGV("PAUSED => RESUMING (%d) on thread %p", mName, this);
        } else {
            mState = TrackBase::ACTIVE;
            LOGV("? => ACTIVE (%d) on thread %p", mName, this);
        }


        if (!isOutputTrack() && state != ACTIVE && state != RESUMING) {
            thread->mLock.unlock();
            status = AudioSystem::startOutput(thread->id(),
                                              (AudioSystem::stream_type)mStreamType,
                                              mSessionId);
            thread->mLock.lock();
        }
        if (status == NO_ERROR) {
            PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
            playbackThread->addTrack_l(this);
        } else {
            mState = state;
        }
    } else {
        status = BAD_VALUE;
    }
    return status;
}
// ----------------------------------------------------------------


// AudioFlinger::TrackHandle::start calls AudioFlinger::PlaybackThread::Track::start.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();
}
// ----------------------------------------------------------------


// Digging further up the call chain from here gets awkward, so let's jump to the top and drill down instead.

/*
From the Java test code we know that after write() has written the data, play() is normally called.
For example, in testSetStereoVolumeMid:
        track.write(data, 0, data.length);
        track.play();
*/
// The implementation of play():
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    /**
     * Starts playing an AudioTrack.
     * @throws IllegalStateException
     */
    public void play()
    throws IllegalStateException {
        if (mState != STATE_INITIALIZED) {
            throw(new IllegalStateException("play() called on uninitialized AudioTrack."));
        }


        synchronized(mPlayStateLock) {
// This calls through to the native start function.
            native_start();
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
static void
android_media_AudioTrack_start(JNIEnv *env, jobject thiz)
{
// The AudioTrack pointer here was stored into the Java object when the AudioTrack was created
    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
        thiz, javaAudioTrackFields.nativeTrackInJavaObj);
    if (lpTrack == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for start()");
        return;
    }


    lpTrack->start();
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
void AudioTrack::start()
{
    sp<AudioTrackThread> t = mAudioTrackThread;
    status_t status;


    LOGV("start %p", this);
    if (t != 0) {
        if (t->exitPending()) {
            if (t->requestExitAndWait() == WOULD_BLOCK) {
                LOGE("AudioTrack::start called from thread");
                return;
            }
        }
        t->mLock.lock();
     }


    if (android_atomic_or(1, &mActive) == 0) {
        mNewPosition = mCblk->server + mUpdatePeriod;
        mCblk->bufferTimeoutMs = MAX_STARTUP_TIMEOUT_MS;
        mCblk->waitTimeMs = 0;
        mCblk->flags &= ~CBLK_DISABLED_ON;
        if (t != 0) {
           t->run("AudioTrackThread", THREAD_PRIORITY_AUDIO_CLIENT);
        } else {
            setpriority(PRIO_PROCESS, 0, THREAD_PRIORITY_AUDIO_CLIENT);
        }


        if (mCblk->flags & CBLK_INVALID_MSK) {
            LOGW("start() track %p invalidated, creating a new one", this);
            // no need to clear the invalid flag as this cblk will not be used anymore
            // force new track creation
            status = DEAD_OBJECT;
        } else {
// mAudioTrack here was created and assigned in AudioTrack::createTrack:
// sp<IAudioTrack> track = audioFlinger->createTrack()
// mAudioTrack = track;

// AudioFlinger::createTrack returns a TrackHandle object:
// trackHandle = new TrackHandle(track);
// return trackHandle;
// In other words, what is called here is AudioFlinger::TrackHandle::start
            status = mAudioTrack->start();
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::TrackHandle::start() {
// The TrackHandle constructor was passed a track
// That track was created by: track = thread->createTrack_l()
// AudioFlinger::PlaybackThread::createTrack_l returns an AudioFlinger::PlaybackThread::Track object
// So what is called here is AudioFlinger::PlaybackThread::Track::start


// !!!! This connects back to where we started


// In other words, the track object that AudioFlinger::MixerThread::prepareTracks_l passes to AudioMixer::setParameter is exactly the one created when we created the AudioTrack
// Which means AudioMixer::process__OneTrack16BitsStereoNoResampling actually copies the data into that AudioTrack object's main buffer
    return mTrack->start();
}
// ----------------------------------------------------------------
        }
        if (status == DEAD_OBJECT) {
            LOGV("start() dead IAudioTrack: creating a new one");
            status = createTrack(mStreamType, mCblk->sampleRate, mFormat, mChannelCount,
                                 mFrameCount, mFlags, mSharedBuffer, getOutput(), false);
            if (status == NO_ERROR) {
                status = mAudioTrack->start();
                if (status == NO_ERROR) {
                    mNewPosition = mCblk->server + mUpdatePeriod;
                }
            }
        }
        if (status != NO_ERROR) {
            LOGV("start() failed");
            android_atomic_and(~1, &mActive);
            if (t != 0) {
                t->requestExit();
            } else {
                setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_NORMAL);
            }
        }
    }


    if (t != 0) {
        t->mLock.unlock();
    }
}
// ----------------------------------------------------------------
}
// ----------------------------------------------------------------
            mPlayState = PLAYSTATE_PLAYING;
        }
    }
// ----------------------------------------------------------------
// ----------------------------------------------------------------


// ----------------------------------------------------------------
// ----------------------------------------------------------------

/*
One question still remains.
AudioFlinger::MixerThread::threadLoop writes to the hardware with the following code:
int bytesWritten = (int)mOutput->write(mMixBuffer, mixBufferSize);
mMixBuffer is a member variable of MixerThread.


AudioMixer::process__OneTrack16BitsStereoNoResampling, however, copies the data into the AudioTrack object's mainBuffer.
How are these two connected?


The following chain appears to tie them together:
1. After thread->createTrack_l succeeds, AudioFlinger::createTrack conditionally calls AudioFlinger::moveEffectChain_l. */
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
        track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelCount, frameCount, sharedBuffer, lSessionId, &lStatus);


        // move effect chain to this output thread if an effect on same session was waiting
        // for a track to be created
        if (lStatus == NO_ERROR && effectThread != NULL) {
            Mutex::Autolock _dl(thread->mLock);
            Mutex::Autolock _sl(effectThread->mLock);
            moveEffectChain_l(lSessionId, effectThread, thread, true);
        }
// ----------------------------------------------------------------


// 2. AudioFlinger::moveEffectChain_l calls AudioFlinger::PlaybackThread::removeEffectChain_l.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    // remove chain first. This is useful only if reconfiguring effect chain on same output thread,
    // so that a new chain is created with correct parameters when first effect is added. This is
    // otherwise unecessary as removeEffect_l() will remove the chain when last effect is
    // removed.
    srcThread->removeEffectChain_l(chain);
// ----------------------------------------------------------------


// 3. AudioFlinger::PlaybackThread::removeEffectChain_l sets the PlaybackThread's mMixBuffer as the AudioTrack object's main buffer.
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
size_t AudioFlinger::PlaybackThread::removeEffectChain_l(const sp<EffectChain>& chain)
{
    int session = chain->sessionId();


    LOGV("removeEffectChain_l() %p from thread %p for session %d", chain.get(), this, session);


    for (size_t i = 0; i < mEffectChains.size(); i++) {
        if (chain == mEffectChains[i]) {
            mEffectChains.removeAt(i);
            // detach all tracks with same session ID from this chain
            for (size_t i = 0; i < mTracks.size(); ++i) {
                sp<Track> track = mTracks[i];
                if (session == track->sessionId()) {
                    track->setMainBuffer(mMixBuffer);
                }
            }
            break;
        }
    }
    return mEffectChains.size();
}
// ----------------------------------------------------------------


/* In AudioFlinger::openOutput, after the HAL's openOutputStream successfully opens an output, a MixerThread object is created
and added to the member variable mPlaybackThreads:
            thread = new MixerThread(this, output, id, *pDevices);
       mPlaybackThreads.add(id, thread);*/


###########################################################


&&&&&&&&&&&&&&&&&&&&&&&Summary&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
MixerThread's threadLoop function first checks whether data has already been written into audio_track_cblk_t through the AudioTrack object.
If it has, the data is copied into the AudioTrack object's main buffer.
When the AudioTrack object was created, its main buffer was already associated with the PlaybackThread's mix buffer.
MixerThread's threadLoop then calls the HAL's write function to push the data to the hardware.
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
