An Analysis of the AudioFlinger Implementation

This analysis is based on the Android 1.6 source code. Although many classes and functions have changed substantially as Android has evolved, the underlying principles remain the same.
Before reading this article, please read the companion analysis of the AudioTrack implementation.

1. Service Startup

The AudioFlinger service resides in the system_server process. The entry function of system_server is as follows:
[system_main.cpp]

int main(int argc, const char* const argv[])
{
    ...
    system_init();
}

The main function primarily calls system_init to initialize the system services:
[system_init.cpp]

extern "C" status_t system_init()
{
    ...
    if (!proc->supportsProcesses()) {
        // Start the AudioFlinger
        AudioFlinger::instantiate();

        // Start the media playback service
        MediaPlayerService::instantiate();

        // Start the camera service
        CameraService::instantiate();
    }
    ...
    if (proc->supportsProcesses()) {
        LOGI("System server: entering thread pool.\n");
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
        LOGI("System server: exiting thread pool.\n");
    }
    return NO_ERROR;
}

It starts the AudioFlinger service by calling AudioFlinger::instantiate(), and finally blocks the current process in the Binder thread pool.

void AudioFlinger::instantiate() {
    defaultServiceManager()->addService(
            String16("media.audio_flinger"), new AudioFlinger());
}

The AudioFlinger Constructor

When analyzing a class, start with its constructor:

AudioFlinger::AudioFlinger()
    : BnAudioFlinger(),
        mAudioHardware(0), mA2dpAudioInterface(0), mA2dpEnabled(false), mNotifyA2dpChange(false),
        mForcedSpeakerCount(0), mA2dpDisableCount(0), mA2dpSuppressed(false), mForcedRoute(0),
        mRouteRestoreTime(0), mMusicMuteSaved(false)
{
    mHardwareStatus = AUDIO_HW_IDLE;
    // Create the interface object for talking to the audio hardware
    mAudioHardware = AudioHardwareInterface::create();
    mHardwareStatus = AUDIO_HW_INIT;
    if (mAudioHardware->initCheck() == NO_ERROR) {
        // open 16-bit output stream for s/w mixer
        mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
        status_t status;
        // Open the audio output stream
        AudioStreamOut *hwOutput = mAudioHardware->openOutputStream(AudioSystem::PCM_16_BIT, 0, 0, &status);
        mHardwareStatus = AUDIO_HW_IDLE;
        // Create the mixer thread
        if (hwOutput) {
            mHardwareMixerThread = new MixerThread(this, hwOutput, AudioSystem::AUDIO_OUTPUT_HARDWARE);
        } else {
            LOGE("Failed to initialize hardware output stream, status: %d", status);
        }
        ...
        setMasterVolume(1.0f);
        setMasterMute(false);

        // Start the audio record thread
        mAudioRecordThread = new AudioRecordThread(mAudioHardware, this);
        if (mAudioRecordThread != 0) {
            mAudioRecordThread->run("AudioRecordThread", PRIORITY_URGENT_AUDIO);            
        }
     } else {
        LOGE("Couldn't even initialize the stubbed audio hardware!");
    }
}

The constructor mainly does two things:

  • creates and initializes the interface to the audio hardware;
  • creates the mixer thread and the record thread.

The service then blocks, waiting for requests from clients.

2. Creating a Track

When we analyzed the initialization of the C++ AudioTrack, we saw the following code:

sp<IAudioTrack> track = audioFlinger->createTrack(getpid(),
            streamType, sampleRate, format, channelCount, frameCount, flags, sharedBuffer, &status);

The track object is crucial: every subsequent AudioTrack operation depends on it. So how is it created? See the code below:

sp<IAudioTrack> AudioFlinger::createTrack(
        pid_t pid,
        int streamType,
        uint32_t sampleRate,
        int format,
        int channelCount,
        int frameCount,
        uint32_t flags,
        const sp<IMemory>& sharedBuffer,
        status_t *status)
{
    ...
    {
        Mutex::Autolock _l(mLock);
        // Each process has a unique Client object, stored in mClients keyed by pid
        wclient = mClients.valueFor(pid);
        if (wclient != NULL) {
            client = wclient.promote();
        } else {
            client = new Client(this, pid);
            mClients.add(pid, client);
        }
        ...
        // Create the track via the mixer thread
        {
            track = mHardwareMixerThread->createTrack_l(client, streamType, sampleRate, format,
                    channelCount, frameCount, sharedBuffer, &lStatus);            
        }
    }
    // Track itself cannot communicate over Binder; the Track object is wrapped in a TrackHandle before being returned to the client
    if (lStatus == NO_ERROR) {
        trackHandle = new TrackHandle(track);
    } else {
        track.clear();
    }

Exit:
    if(status) {
        *status = lStatus;
    }
    return trackHandle;
}
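The per-pid Client bookkeeping at the top of createTrack (look up, promote the weak reference, otherwise create and add) can be modeled with standard C++ smart pointers. This is a hypothetical sketch, using std::weak_ptr in place of Android's wp<> and promote():

```cpp
#include <cassert>
#include <map>
#include <memory>

// Stand-in for AudioFlinger::Client: one per client process.
struct Client {
    int pid;
    explicit Client(int p) : pid(p) {}
};

// Model of the mClients map: weak references keyed by pid, so a Client
// is shared by all tracks of one process and reclaimed when the last
// strong reference goes away.
class ClientCache {
public:
    std::shared_ptr<Client> clientForPid(int pid) {
        std::shared_ptr<Client> client = mClients[pid].lock();  // promote()
        if (!client) {
            client = std::make_shared<Client>(pid);
            mClients[pid] = client;
        }
        return client;
    }
private:
    std::map<int, std::weak_ptr<Client>> mClients;
};
```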

Creating the Track is delegated to the mixer thread; MixerThread's createTrack_l function is invoked:

sp<AudioFlinger::MixerThread::Track>  AudioFlinger::MixerThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        int streamType,
        uint32_t sampleRate,
        int format,
        int channelCount,
        int frameCount,
        const sp<IMemory>& sharedBuffer,
        status_t *status)
{
    sp<Track> track;
    status_t lStatus;

    ...
    track = new Track(this, client, streamType, sampleRate, format,
            channelCount, frameCount, sharedBuffer);
    if (track->getCblk() == NULL) {
        lStatus = NO_MEMORY;
        goto Exit;
    }
    mTracks.add(track);
    lStatus = NO_ERROR;
    ...
    return track;
}

That is the complete process of creating a Track; all subsequent interaction between AudioTrack and AudioFlinger goes through it.
The Track constructor is as follows:

AudioFlinger::MixerThread::Track::Track(
            const sp<MixerThread>& mixerThread,
            const sp<Client>& client,
            int streamType,
            uint32_t sampleRate,
            int format,
            int channelCount,
            int frameCount,
            const sp<IMemory>& sharedBuffer)
    :   TrackBase(mixerThread, client, sampleRate, format, channelCount, frameCount, 0, sharedBuffer)
{
    mVolume[0] = 1.0f;
    mVolume[1] = 1.0f;
    mMute = false;
    mSharedBuffer = sharedBuffer;
    mStreamType = streamType;
}

The constructor of the base class TrackBase is as follows:

AudioFlinger::MixerThread::TrackBase::TrackBase(
            const sp<MixerThread>& mixerThread,
            const sp<Client>& client,
            uint32_t sampleRate,
            int format,
            int channelCount,
            int frameCount,
            uint32_t flags,
            const sp<IMemory>& sharedBuffer)
    :   RefBase(),
        mMixerThread(mixerThread),
        mClient(client),
        mFrameCount(0),
        mState(IDLE),
        mClientTid(-1),
        mFormat(format),
        mFlags(flags & ~SYSTEM_FLAGS_MASK)
{
    ...
    // cblk is short for "control block"; it synchronizes subsequent
    // accesses to the shared memory across processes.
    // Note that this size covers only the member variables of
    // audio_track_cblk_t, not its member functions.
    size_t size = sizeof(audio_track_cblk_t);
    // Size of the audio data buffer, matching AudioTrack's getMinBufferSize
    size_t bufferSize = frameCount*channelCount*sizeof(int16_t);
    // The allocation must hold the control block plus the audio data
    if (sharedBuffer == 0) {
        size += bufferSize;
    }

   if (client != NULL) {
        // Allocate memory through the client; each Client corresponds to one client process
        mCblkMemory = client->heap()->allocate(size);
        if (mCblkMemory != 0) {
            mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
            if (mCblk) { // construct the shared structure in-place.
                new(mCblk) audio_track_cblk_t();
                // clear all buffers
                mCblk->frameCount = frameCount;
                mCblk->sampleRate = sampleRate;
                mCblk->channels = (uint8_t)channelCount;
                if (sharedBuffer == 0) {
                    mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                    memset(mBuffer, 0, frameCount*channelCount*sizeof(int16_t));
                    // Force the underrun condition
                    mCblk->flowControlFlag = 1;
                } else {
                    mBuffer = sharedBuffer->pointer();
                }
                mBufferEnd = (uint8_t *)mBuffer + bufferSize;
            }
        } else {
            LOGE("not enough memory for AudioTrack size=%u", size);
            client->heap()->dump("AudioTrack");
            return;
        }
   } else {
       mCblk = (audio_track_cblk_t *)(new uint8_t[size]);
       if (mCblk) { // construct the shared structure in-place.
           new(mCblk) audio_track_cblk_t();
           // clear all buffers
           mCblk->frameCount = frameCount;
           mCblk->sampleRate = sampleRate;
           mCblk->channels = (uint8_t)channelCount;
           mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
           memset(mBuffer, 0, frameCount*channelCount*sizeof(int16_t));
           // Force underrun condition to avoid false underrun callback until first data is
           // written to buffer
           mCblk->flowControlFlag = 1;
           mBufferEnd = (uint8_t *)mBuffer + bufferSize;
       }
   }
}
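The layout built above, a single allocation holding the control block followed by the audio buffer, with the control block constructed in place via placement new, can be sketched in isolation. ControlBlock, TrackMemory, and allocateTrack below are illustrative stand-ins, not framework types:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <new>

// Simplified stand-in for audio_track_cblk_t: only the fields needed
// to show the layout trick.
struct ControlBlock {
    uint32_t frameCount;
    uint32_t sampleRate;
    uint8_t  channels;
    ControlBlock() : frameCount(0), sampleRate(0), channels(0) {}
};

// Carve one raw allocation into [ControlBlock | audio data], the same
// layout TrackBase builds inside the shared memory region.
struct TrackMemory {
    uint8_t*      raw;
    ControlBlock* cblk;
    int16_t*      buffer;      // first audio frame
    size_t        bufferSize;  // bytes of audio data
};

TrackMemory allocateTrack(uint32_t frameCount, uint8_t channels) {
    size_t bufferSize = frameCount * channels * sizeof(int16_t);
    size_t total = sizeof(ControlBlock) + bufferSize;
    uint8_t* raw = new uint8_t[total];
    // Construct the control block in place at the head of the region.
    ControlBlock* cblk = new (raw) ControlBlock();
    cblk->frameCount = frameCount;
    cblk->channels = channels;
    // The audio data starts immediately after the control block.
    int16_t* buffer = reinterpret_cast<int16_t*>(raw + sizeof(ControlBlock));
    std::memset(buffer, 0, bufferSize);
    return TrackMemory{raw, cblk, buffer, bufferSize};
}
```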

The TrackBase constructor allocates shared memory through the client and holds a reference to it in mCblkMemory. It then constructs the control-block structure audio_track_cblk_t in place at the head of the shared memory and initializes its members. We will defer the details of the memory allocator and first describe the members of audio_track_cblk_t:

struct audio_track_cblk_t
{
                Mutex       lock;
                Condition   cv;    // these two sync primitives are initialized as process-shared
    volatile    uint32_t    user;    // current write position
    volatile    uint32_t    server;    // current read position
                uint32_t    userBase;
                uint32_t    serverBase;
    void*       buffers;    // start address of the data buffer
    uint32_t    frameCount;    // size of the data buffer, in frames
    // Cache line boundary
    uint32_t    loopStart;
    uint32_t    loopEnd;
    int         loopCount;
    volatile    union {
                    uint16_t    volume[2];
                    uint32_t    volumeLR;
                };
                uint32_t    sampleRate;
                uint8_t     channels;
                uint8_t     flowControlFlag; // underrun (out) or overrun (in) indication
                uint8_t     out;        // out equals 1 for AudioTrack and 0 for AudioRecord
                uint8_t     forceReady;
                uint16_t    bufferTimeoutMs; // Maximum cumulated timeout before restarting audioflinger
                uint16_t    waitTimeMs;      // Cumulated wait time
                // Padding ensuring that data buffer starts on a cache line boundary (32 bytes).
                // See AudioFlinger::TrackBase constructor
                int32_t     Padding[1];
                // Cache line boundary
                
                            audio_track_cblk_t();
                uint32_t    stepUser(uint32_t frameCount);  // advance the write position
                bool        stepServer(uint32_t frameCount);  // advance the read position
                void*       buffer(uint32_t offset) const;  // return the buffer address for a given offset
                uint32_t    framesAvailable();  // frames available for writing
                uint32_t    framesAvailable_l();
                uint32_t    framesReady();  // frames ready to be read
};

audio_track_cblk_t synchronizes access to the shared memory between AudioTrack and AudioFlinger. AudioTrack, the producer, writes audio data into the shared memory; AudioFlinger, the consumer, takes the data out for processing. The control block tells AudioTrack where in the shared memory to write and AudioFlinger where to read, preventing the two from colliding.
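The user/server bookkeeping can be reduced to a small single-process model: both counters increase monotonically, their difference is the number of readable frames, and the remaining capacity is writable. Note this ignores the real control block's userBase/serverBase wrap handling, process-shared locking, and loop points; names are kept close to the originals for readability:

```cpp
#include <cstdint>

// Minimal model of the cblk bookkeeping: 'user' is the producer's
// (AudioTrack's) write position, 'server' the consumer's
// (AudioFlinger's) read position, both counted in frames.
struct CblkModel {
    uint32_t frameCount;   // capacity of the data buffer, in frames
    uint32_t user = 0;     // total frames written
    uint32_t server = 0;   // total frames consumed

    explicit CblkModel(uint32_t frames) : frameCount(frames) {}

    uint32_t framesReady() const { return user - server; }
    uint32_t framesAvailable() const { return frameCount - framesReady(); }

    // stepUser/stepServer advance the positions after a write/read.
    bool stepUser(uint32_t n) {
        if (n > framesAvailable()) return false;  // would overrun the reader
        user += n;
        return true;
    }
    bool stepServer(uint32_t n) {
        if (n > framesReady()) return false;      // nothing more to read
        server += n;
        return true;
    }
};
```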

3. Starting Playback

As analyzed earlier, when AudioTrack starts playback it executes the following call:

mAudioTrack->start();

mAudioTrack is the client-side proxy of the Track created earlier, so the server-side TrackHandle is invoked:

status_t AudioFlinger::TrackHandle::start() {
    return mTrack->start();
}

TrackHandle in turn follows the proxy design pattern and forwards to the Track object's start function:

status_t AudioFlinger::MixerThread::Track::start()
{
    LOGV("start(%d), calling thread %d for output %d", mName, IPCThreadState::self()->getCallingPid(), mMixerThread->mOutputType);
    Mutex::Autolock _l(mMixerThread->mAudioFlinger->mLock);
    mMixerThread->addTrack_l(this);
    return NO_ERROR;
}

The addTrack_l function:

status_t AudioFlinger::MixerThread::addTrack_l(const sp<Track>& track)
{
    status_t status = ALREADY_EXISTS;

    // here the track could be either new, or restarted
    // in both cases "unstop" the track
    if (track->isPaused()) {
        track->mState = TrackBase::RESUMING;
        LOGV("PAUSED => RESUMING (%d)", track->name());
    } else {
        track->mState = TrackBase::ACTIVE;
        LOGV("? => ACTIVE (%d)", track->name());
    }
    // set retry count for buffer fill
    track->mRetryCount = kMaxTrackStartupRetries;
    if (mActiveTracks.indexOf(track) < 0) {
        // A newly added track: append it to the active track list (mActiveTracks) and set its filling status to FS_FILLING.
        track->mFillingUpStatus = Track::FS_FILLING;
        track->mResetDone = false;
        addActiveTrack_l(track);
        status = NO_ERROR;
    }
    // Wake the mixer thread to start working
    LOGV("mWaitWorkCV.broadcast");
    mAudioFlinger->mWaitWorkCV.broadcast();

    return status;
}

When playback starts, AudioFlinger's corresponding action is to add the Track to the active track list and wake the mixer thread.

4. Playing Audio

The MixerThread is created in AudioFlinger's constructor, so when does the thread actually start running? It starts the first time mHardwareMixerThread is strongly referenced, via onFirstRef:

void AudioFlinger::MixerThread::onFirstRef()
{
    const size_t SIZE = 256;
    char buffer[SIZE];

    snprintf(buffer, SIZE, "Mixer Thread for output %d", mOutputType);

    run(buffer, ANDROID_PRIORITY_URGENT_AUDIO);
}

The thread's work function is then invoked:

bool AudioFlinger::MixerThread::threadLoop()
{
    unsigned long sleepTime = kBufferRecoveryInUsecs;
    int16_t* curBuf = mMixBuffer;
    Vector< sp<Track> > tracksToRemove;
    size_t enabledTracks = 0;
    nsecs_t standbyTime = systemTime();   
    size_t mixBufferSize = mFrameCount*mChannelCount*sizeof(int16_t);
    nsecs_t maxPeriod = seconds(mFrameCount) / mSampleRate * 2;

    do {
        enabledTracks = 0;
        { // scope for the AudioFlinger::mLock
            Mutex::Autolock _l(mAudioFlinger->mLock);
            // The list of active tracks, kept as a sorted vector
            const SortedVector< wp<Track> >& activeTracks = mActiveTracks;

            // With no active track for a while, enter standby; the delay is given by kStandbyTimeInNsecs, 3s by default
            if UNLIKELY(!activeTracks.size() && systemTime() > standbyTime) {
                // wait until we have something to do...
                LOGV("Audio hardware entering standby, output %d\n", mOutputType);
                if (!mStandby) {
                    mOutput->standby();
                    mStandby = true;
                }
                
                if (mOutputType == AudioSystem::AUDIO_OUTPUT_HARDWARE) {
                    mAudioFlinger->handleForcedSpeakerRoute(FORCE_ROUTE_RESTORE);
                }                
                // we're about to wait, flush the binder command buffer
                IPCThreadState::self()->flushCommands();
                // Block the thread
                mAudioFlinger->mWaitWorkCV.wait(mAudioFlinger->mLock);
                LOGV("Audio hardware exiting standby, output %d\n", mOutputType);
                
                if (mMasterMute == false) {
                    char value[PROPERTY_VALUE_MAX];
                    property_get("ro.audio.silent", value, "0");
                    if (atoi(value)) {
                        LOGD("Silence is golden");
                        setMasterMute(true);
                    }                    
                }
                // Update the next standby deadline
                standbyTime = systemTime() + kStandbyTimeInNsecs;
                continue;
            }

            // Forced route to speaker is handled by hardware mixer thread
            if (mOutputType == AudioSystem::AUDIO_OUTPUT_HARDWARE) {
                mAudioFlinger->handleForcedSpeakerRoute(CHECK_ROUTE_RESTORE_TIME);
            }

            // Find the tracks that need processing
            size_t count = activeTracks.size();
            for (size_t i=0 ; i<count ; i++) {
                sp<Track> t = activeTracks[i].promote();
                if (t == 0) continue;

                Track* const track = t.get();
                audio_track_cblk_t* cblk = track->cblk();
                // Tell the mixer which track it is currently operating on
                mAudioMixer->setActiveTrack(track->name());
                // If the track has audio data pending, play it
                if (cblk->framesReady() && (track->isReady() || track->isStopped()) &&
                        !track->isPaused())
                {
                    //LOGV("track %d u=%08x, s=%08x [OK]", track->name(), cblk->user, cblk->server);
                    // compute volume for this track
                    int16_t left, right;
                    if (track->isMuted() || mMasterMute || track->isPausing()) {
                        left = right = 0;
                        if (track->isPausing()) {
                            LOGV("paused(%d)", track->name());
                            track->setPaused();
                        }
                    } else {
                        float typeVolume = mStreamTypes[track->type()].volume;
                        float v = mMasterVolume * typeVolume;
                        float v_clamped = v * cblk->volume[0];
                        if (v_clamped > MAX_GAIN) v_clamped = MAX_GAIN;
                        left = int16_t(v_clamped);
                        v_clamped = v * cblk->volume[1];
                        if (v_clamped > MAX_GAIN) v_clamped = MAX_GAIN;
                        right = int16_t(v_clamped);
                    }
                    // Use the track as the mixer's data source and enable mixing
                    // XXX: these things DON'T need to be done each time
                    mAudioMixer->setBufferProvider(track);
                    mAudioMixer->enable(AudioMixer::MIXING);

                    int param;
                    if ( track->mFillingUpStatus == Track::FS_FILLED) {
                        // no ramp for the first volume setting
                        track->mFillingUpStatus = Track::FS_ACTIVE;
                        if (track->mState == TrackBase::RESUMING) {
                            track->mState = TrackBase::ACTIVE;
                            param = AudioMixer::RAMP_VOLUME;
                        } else {
                            param = AudioMixer::VOLUME;
                        }
                    } else {
                        param = AudioMixer::RAMP_VOLUME;
                    }
                    mAudioMixer->setParameter(param, AudioMixer::VOLUME0, left);
                    mAudioMixer->setParameter(param, AudioMixer::VOLUME1, right);
                    mAudioMixer->setParameter(
                        AudioMixer::TRACK,
                        AudioMixer::FORMAT, track->format());
                    mAudioMixer->setParameter(
                        AudioMixer::TRACK,
                        AudioMixer::CHANNEL_COUNT, track->channelCount());
                    mAudioMixer->setParameter(
                        AudioMixer::RESAMPLE,
                        AudioMixer::SAMPLE_RATE,
                        int(cblk->sampleRate));

                    // reset retry count
                    track->mRetryCount = kMaxTrackRetries;
                    enabledTracks++;
                } else {
                    // Playback stopped, or the track has no audio data to process
                    //LOGV("track %d u=%08x, s=%08x [NOT READY]", track->name(), cblk->user, cblk->server);
                    if (track->isStopped()) {
                        track->reset();
                    }
                    if (track->isTerminated() || track->isStopped() || track->isPaused()) {
                        // We have consumed all the buffers of this track.
                        // Remove it from the list of active tracks.
                        LOGV("remove(%d) from active list", track->name());
                        tracksToRemove.add(track);
                    } else {
                        // No buffers for this track. Give it a few chances to
                        // fill a buffer, then remove it from active list.
                        if (--(track->mRetryCount) <= 0) {
                            LOGV("BUFFER TIMEOUT: remove(%d) from active list", track->name());
                            tracksToRemove.add(track);
                        }
                    }
                    // LOGV("disable(%d)", track->name());
                    mAudioMixer->disable(AudioMixer::MIXING);
                }
            }

            // Remove all tracks that have stopped playing
            count = tracksToRemove.size();
            if (UNLIKELY(count)) {
                for (size_t i=0 ; i<count ; i++) {
                    const sp<Track>& track = tracksToRemove[i];
                    removeActiveTrack_l(track);
                    if (track->isTerminated()) {
                        mTracks.remove(track);
                        deleteTrackName_l(track->mName);
                    }
                }
            }
       }
        
        if (LIKELY(enabledTracks)) {
            // Mix
            mAudioMixer->process(curBuf);
            // output audio to hardware
            mLastWriteTime = systemTime();
            mInWrite = true;
            // Write the mixed audio to the hardware
            mOutput->write(curBuf, mixBufferSize);
            mNumWrites++;
            mInWrite = false;
            mStandby = false;
            nsecs_t temp = systemTime();
            standbyTime = temp + kStandbyTimeInNsecs;
            nsecs_t delta = temp - mLastWriteTime;
            if (delta > maxPeriod) {
                LOGW("write blocked for %llu msecs", ns2ms(delta));
                mNumDelayedWrites++;
            }
            sleepTime = kBufferRecoveryInUsecs;
        } else {
            // There was nothing to mix this round, which means all
            // active tracks were late. Sleep a little bit to give
            // them another chance. If we're too late, the audio
            // hardware will zero-fill for us.
            //LOGV("no buffers - usleep(%lu)", sleepTime);
            usleep(sleepTime);
            if (sleepTime < kMaxBufferRecoveryInUsecs) {
                sleepTime += kBufferRecoveryInUsecs;
            }
        }

        // finally let go of all our tracks, without the lock held
        // since we can't guarantee the destructors won't acquire that
        // same lock.
        tracksToRemove.clear();
    } while (true);

    return false;
}

The mixer thread is the heart of audio playback. It contains a lot of code, but its main jobs are:

  • blocking until there is an active track;
  • invoking AudioMixer to mix the audio;
  • invoking AudioStreamOut to play the result.

Most of the real work is delegated to the mixer object. So where is the mixer created? Its initialization can be found in the MixerThread constructor:

mAudioMixer = new AudioMixer(mFrameCount, output->sampleRate());

Let us now look at how the mixer works.

The Mixer

The mixer's job is to blend multiple audio streams into one. Its inputs are Tracks, which are handed to the mixer via setBufferProvider:

status_t AudioMixer::setBufferProvider(AudioBufferProvider* buffer)
{
    mState.tracks[ mActiveTrack ].bufferProvider = buffer;
    return NO_ERROR;
}

Note that setBufferProvider's parameter type is AudioBufferProvider while the argument passed is a Track, which suggests that Track inherits from AudioBufferProvider. That is indeed the case.

class AudioBufferProvider
{
public:
    struct Buffer {
        union {
            void*       raw;
            short*      i16;
            int8_t*     i8;
        };
        size_t frameCount;
    };

    virtual ~AudioBufferProvider() {}
    
    virtual status_t getNextBuffer(Buffer* buffer) = 0;
    virtual void releaseBuffer(Buffer* buffer) = 0;
};
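To make the contract concrete, here is a toy provider that serves mono 16-bit frames out of a std::vector the way Track serves them out of shared memory: getNextBuffer shrinks the request to what is ready, and releaseBuffer consumes what was used. The interface is simplified (a plain int status instead of status_t) and VectorProvider is a hypothetical class, not a framework one:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified Buffer/provider pair mirroring AudioBufferProvider.
struct Buffer {
    int16_t* i16 = nullptr;
    size_t frameCount = 0;
};

class AudioBufferProvider {
public:
    virtual ~AudioBufferProvider() {}
    virtual int getNextBuffer(Buffer* buffer) = 0;   // 0 on success
    virtual void releaseBuffer(Buffer* buffer) = 0;
};

// Toy implementation: one int16_t per frame (mono).
class VectorProvider : public AudioBufferProvider {
public:
    explicit VectorProvider(std::vector<int16_t> samples)
        : mSamples(std::move(samples)) {}

    int getNextBuffer(Buffer* buffer) override {
        size_t ready = mSamples.size() - mPos;        // frames still unread
        if (ready == 0) {
            buffer->i16 = nullptr;
            buffer->frameCount = 0;
            return -1;                                // NOT_ENOUGH_DATA
        }
        if (buffer->frameCount > ready) buffer->frameCount = ready;
        buffer->i16 = &mSamples[mPos];
        return 0;
    }

    void releaseBuffer(Buffer* buffer) override {
        mPos += buffer->frameCount;                   // consume what was used
        buffer->i16 = nullptr;
        buffer->frameCount = 0;
    }

private:
    std::vector<int16_t> mSamples;
    size_t mPos = 0;
};
```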

From its definition, AudioBufferProvider is a supplier of Buffer objects. Track implements getNextBuffer as follows:

status_t AudioFlinger::MixerThread::Track::getNextBuffer(AudioBufferProvider::Buffer* buffer)
{
     audio_track_cblk_t* cblk = this->cblk();
     uint32_t framesReady;
     uint32_t framesReq = buffer->frameCount;

     // Check if last stepServer failed, try to step now
     if (mFlags & TrackBase::STEPSERVER_FAILED) {
         if (!step())  goto getNextBuffer_exit;
         LOGV("stepServer recovered");
         mFlags &= ~TrackBase::STEPSERVER_FAILED;
     }

     framesReady = cblk->framesReady();

     if (LIKELY(framesReady)) {
        uint32_t s = cblk->server;
        uint32_t bufferEnd = cblk->serverBase + cblk->frameCount;

        bufferEnd = (cblk->loopEnd < bufferEnd) ? cblk->loopEnd : bufferEnd;
        if (framesReq > framesReady) {
            framesReq = framesReady;
        }
        if (s + framesReq > bufferEnd) {
            framesReq = bufferEnd - s;
        }

         buffer->raw = getBuffer(s, framesReq);
         if (buffer->raw == 0) goto getNextBuffer_exit;

         buffer->frameCount = framesReq;
        return NO_ERROR;
     }

getNextBuffer_exit:
     buffer->raw = 0;
     buffer->frameCount = 0;
     return NOT_ENOUGH_DATA;
}

cblk() returns the Track's mCblk member, which, as analyzed above, points to the control block at the head of the shared memory. getNextBuffer simply obtains a readable chunk of that shared memory, which holds the audio data. When the audio data gets written into the shared memory will be analyzed in detail later.
Enabling the mixer:

status_t AudioMixer::enable(int name)
{
    switch (name) {
        case MIXING: {
            if (mState.tracks[ mActiveTrack ].enabled != 1) {
                mState.tracks[ mActiveTrack ].enabled = 1;
                LOGV("enable(%d)", mActiveTrack);
                invalidateState(1<<mActiveTrack);
            }
        } break;
        default:
            return NAME_NOT_FOUND;
    }
    return NO_ERROR;
}

The invalidateState function marks the state dirty and installs process__validate as the processing hook:

void AudioMixer::invalidateState(uint32_t mask)
{
    if (mask) {
        mState.needsChanged |= mask;
        mState.hook = process__validate;
    }
}

The mixer processes audio data through process:

void AudioMixer::process(void* output)
{
    mState.hook(&mState, output);
}

So the hook process__validate is invoked; based on each Track's audio parameters it selects and calls different hook functions:

void AudioMixer::process__validate(state_t* state, void* output)
{
    LOGW_IF(!state->needsChanged,
        "in process__validate() but nothing's invalid");


    uint32_t changed = state->needsChanged;
    state->needsChanged = 0; // clear the validation flag


    // recompute which tracks are enabled / disabled
    uint32_t enabled = 0;
    uint32_t disabled = 0;
    while (changed) {
        const int i = 31 - __builtin_clz(changed);
        const uint32_t mask = 1<<i;
        changed &= ~mask;
        track_t& t = state->tracks[i];
        (t.enabled ? enabled : disabled) |= mask;
    }
    state->enabledTracks &= ~disabled;
    state->enabledTracks |=  enabled;


    // compute everything we need...
    int countActiveTracks = 0;
    int all16BitsStereoNoResample = 1;
    int resampling = 0;
    int volumeRamp = 0;
    uint32_t en = state->enabledTracks;
    // Iterate over all enabled tracks
    while (en) {
        const int i = 31 - __builtin_clz(en);
        en &= ~(1<<i);

        countActiveTracks++;
        track_t& t = state->tracks[i];
        uint32_t n = 0;
        n |= NEEDS_CHANNEL_1 + t.channelCount - 1;
        n |= NEEDS_FORMAT_16;
        n |= t.doesResample() ? NEEDS_RESAMPLE_ENABLED : NEEDS_RESAMPLE_DISABLED;
       
        if (t.volumeInc[0]|t.volumeInc[1]) {
            volumeRamp = 1;
        } else if (!t.doesResample() && t.volumeRL == 0) {
            n |= NEEDS_MUTE_ENABLED;
        }
        t.needs = n;
        // Select the per-track processing hook
        if ((n & NEEDS_MUTE__MASK) == NEEDS_MUTE_ENABLED) {
            t.hook = track__nop;
        } else {
            if ((n & NEEDS_RESAMPLE__MASK) == NEEDS_RESAMPLE_ENABLED) {
                all16BitsStereoNoResample = 0;
                resampling = 1;
                t.hook = track__genericResample;
            } else {
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_1){
                    t.hook = track__16BitsMono;
                    all16BitsStereoNoResample = 0;
                }
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_2){
                    t.hook = track__16BitsStereo;
                }
            }
        }
    }

    // Select the mixer-level hook
    state->hook = process__nop;
    if (countActiveTracks) {
        if (resampling) {
            if (!state->outputTemp) {
                state->outputTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            if (!state->resampleTemp) {
                state->resampleTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            state->hook = process__genericResampling;
        } else {
            if (state->outputTemp) {
                delete [] state->outputTemp;
                state->outputTemp = 0;
            }
            if (state->resampleTemp) {
                delete [] state->resampleTemp;
                state->resampleTemp = 0;
            }
            state->hook = process__genericNoResampling;
            if (all16BitsStereoNoResample && !volumeRamp) {
                if (countActiveTracks == 1) {
                    state->hook = process__OneTrack16BitsStereoNoResampling;
                }
            }
        }
    }

    LOGV("mixer configuration change: %d activeTracks (%08x) "
        "all16BitsStereoNoResample=%d, resampling=%d, volumeRamp=%d",
        countActiveTracks, state->enabledTracks,
        all16BitsStereoNoResample, resampling, volumeRamp);

   // Invoke the mixer hook
   state->hook(state, output);

   // Now that the volume ramp has been done, set optimal state and
   // track hooks for subsequent mixer process
   if (countActiveTracks) {
       int allMuted = 1;
       uint32_t en = state->enabledTracks;
       while (en) {
           const int i = 31 - __builtin_clz(en);
           en &= ~(1<<i);
           track_t& t = state->tracks[i];
           if (!t.doesResample() && t.volumeRL == 0)
           {
               t.needs |= NEEDS_MUTE_ENABLED;
               t.hook = track__nop;
           } else {
               allMuted = 0;
           }
       }
       if (allMuted) {
           state->hook = process__nop;
       } else if (!resampling && all16BitsStereoNoResample) {
           if (countActiveTracks == 1) {
              state->hook = process__OneTrack16BitsStereoNoResampling;
           }
       }
   }
}

Taking a single stereo track that needs no resampling as an example, the selected hook is process__OneTrack16BitsStereoNoResampling. Let us analyze this mix-processing function:

void AudioMixer::process__OneTrack16BitsStereoNoResampling(state_t* state, void* output)
{
    const int i = 31 - __builtin_clz(state->enabledTracks);
    const track_t& t = state->tracks[i];

    AudioBufferProvider::Buffer& b(t.buffer);
   
    int32_t* out = static_cast<int32_t*>(output);
    // Number of frames the hardware buffer can hold
    size_t numFrames = state->frameCount;
  
    const int16_t vl = t.volume[0];
    const int16_t vr = t.volume[1];
    const uint32_t vrl = t.volumeRL;
    while (numFrames) {
        b.frameCount = numFrames;
        // Fetch that many audio frames from the shared memory
        t.bufferProvider->getNextBuffer(&b);
        int16_t const *in = b.i16;

        // in == NULL can happen if the track was flushed just after having
        // been enabled for mixing.
        if (in == NULL || ((unsigned long)in & 3)) {
            memset(out, 0, numFrames*MAX_NUM_CHANNELS*sizeof(int16_t));
            LOGE_IF(((unsigned long)in & 3), "process stereo track: input buffer alignment pb: buffer %p track %d, channels %d, needs %08x",
                    in, i, t.channelCount, t.needs);
            return;
        }
        // Number of frames actually obtained from the shared memory
        size_t outFrames = b.frameCount;
        // A single track needs no mixing; only the gain has to be applied
        if (UNLIKELY(uint32_t(vl) > UNITY_GAIN || uint32_t(vr) > UNITY_GAIN)) {
            // volume is boosted, so we might need to clamp even though
            // we process only one track.
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                // clamping...
                l = clamp16(l);
                r = clamp16(r);
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        } else {
            do {
                uint32_t rl = *reinterpret_cast<uint32_t const *>(in);
                in += 2;
                int32_t l = mulRL(1, rl, vrl) >> 12;
                int32_t r = mulRL(0, rl, vrl) >> 12;
                *out++ = (r<<16) | (l & 0xFFFF);
            } while (--outFrames);
        }
        // Each pass must fill the hardware buffer once;
        // compute how many more frames are needed to fill it
        numFrames -= b.frameCount;
        // Release the shared memory chunk back to the track
        t.bufferProvider->releaseBuffer(&b);
    }
}
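The volume arithmetic above uses 4.12 fixed point: UNITY_GAIN is 0x1000, so (sample * vol) >> 12 scales by vol/4096 and clamp16 saturates the result back into int16_t range. mulRL processes both packed channels in one multiply on ARM; the sketch below scales one channel at a time for clarity, and its clamp16 is a plain saturate rather than the bit-twiddling version in AOSP:

```cpp
#include <cstdint>

// 4.12 fixed-point unity gain, as in AudioMixer.
static const int32_t UNITY_GAIN = 0x1000;

// Saturate a 32-bit intermediate back to the int16_t range.
int16_t clamp16(int32_t sample) {
    if (sample > 32767)  return 32767;
    if (sample < -32768) return -32768;
    return static_cast<int16_t>(sample);
}

// Apply a 4.12 fixed-point gain to one sample of one channel.
int16_t applyGain(int16_t sample, int32_t vol) {
    return clamp16((static_cast<int32_t>(sample) * vol) >> 12);
}
```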

When multiple Tracks play simultaneously, the generic mixing hook is called instead:

void AudioMixer::process__genericResampling(state_t* state, void* output)
{
    int32_t* const outTemp = state->outputTemp;
    const size_t size = sizeof(int32_t) * MAX_NUM_CHANNELS * state->frameCount;
    memset(outTemp, 0, size);


    int32_t* out = static_cast<int32_t*>(output);
    size_t numFrames = state->frameCount;


    uint32_t en = state->enabledTracks;
    while (en) {
        const int i = 31 - __builtin_clz(en);
        en &= ~(1<<i);
        track_t& t = state->tracks[i];

        // Resample the tracks that require resampling
        if ((t.needs & NEEDS_RESAMPLE__MASK) == NEEDS_RESAMPLE_ENABLED) {
            (t.hook)(&t, outTemp, numFrames, state->resampleTemp);
        } else {

            size_t outFrames = numFrames;
           
            while (outFrames) {
                t.buffer.frameCount = outFrames;
                t.bufferProvider->getNextBuffer(&t.buffer);
                t.in = t.buffer.raw;
                // t.in == NULL can happen if the track was flushed just after having
                // been enabled for mixing.
                if (t.in == NULL) break;
                // Mix by calling this track's hook function
                (t.hook)(&t, outTemp + (numFrames-outFrames)*MAX_NUM_CHANNELS, t.buffer.frameCount, state->resampleTemp);
                outFrames -= t.buffer.frameCount;
                t.bufferProvider->releaseBuffer(&t.buffer);
            }
        }
    }

    ditherAndClamp(out, outTemp, numFrames);
}
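The loop over active tracks peels the highest set bit off the `enabledTracks` mask with `__builtin_clz`, visiting each enabled track exactly once from the highest index down. The bit-walk can be isolated into a small helper (illustrative only, not part of AudioMixer):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Return the track indices encoded in an enabledTracks bitmask, in the
// order the mixer visits them: `31 - __builtin_clz(en)` gives the index
// of the highest set bit, which is then cleared before the next round.
std::vector<int> enabledTrackIndices(uint32_t en) {
    std::vector<int> indices;
    while (en) {
        const int i = 31 - __builtin_clz(en);  // highest set bit
        en &= ~(1u << i);                      // clear it
        indices.push_back(i);
    }
    return indices;
}
```

Each track mixes (accumulates) into the 32-bit `outTemp` buffer; only after all tracks are summed does `ditherAndClamp` bring the result back down to 16-bit PCM, so intermediate sums cannot overflow.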

The resampling and mixing hooks themselves are not analyzed in detail here; what we care about is the mixed output, i.e. the `output` parameter. The mixed buffer is then played back with:

mOutput->write(curBuf, mixBufferSize);

This call normally goes down through the HAL layer, handing the audio data to the codec, which drives the speaker.

Driving the speaker

Playback happens through mOutput's write function, so when is mOutput initialized? Going back to the AudioFlinger constructor, there is this code:

mAudioHardware = AudioHardwareInterface::create();
AudioStreamOut *hwOutput = mAudioHardware->openOutputStream(AudioSystem::PCM_16_BIT, 0, 0, &status);

First, look at the create function:

AudioHardwareInterface* AudioHardwareInterface::create()
{
    AudioHardwareInterface* hw = 0;
    char value[PROPERTY_VALUE_MAX];

#ifdef GENERIC_AUDIO
    hw = new AudioHardwareGeneric();
#else
    // if running in emulation - use the emulator driver
    if (property_get("ro.kernel.qemu", value, 0)) {
        LOGD("Running in emulation - using generic audio driver");
        hw = new AudioHardwareGeneric();
    }
    else {
        LOGV("Creating Vendor Specific AudioHardware");
        hw = createAudioHardware();
    }
#endif
    if (hw->initCheck() != NO_ERROR) {
        LOGW("Using stubbed audio hardware. No sound will be produced.");
        delete hw;
        hw = new AudioHardwareStub();
    }
    return hw;
}

Depending on the hardware configuration, a different hardware interface object is instantiated. Taking the emulator as an example, an AudioHardwareGeneric object is created:

AudioHardwareGeneric::AudioHardwareGeneric()
    : mOutput(0), mInput(0),  mFd(-1), mMicMute(false)
{
    mFd = ::open(kAudioDeviceName, O_RDWR);
}

The constructor opens the audio driver.
Next, look at the openOutputStream function:

AudioStreamOut* AudioHardwareGeneric::openOutputStream(
        int format, int channelCount, uint32_t sampleRate, status_t *status)
{
    AutoMutex lock(mLock);
    // only one output stream allowed
    if (mOutput) {
        if (status) {
            *status = INVALID_OPERATION;
        }
        return 0;
    }
    // create new output stream
    AudioStreamOutGeneric* out = new AudioStreamOutGeneric();
    status_t lStatus = out->set(this, mFd, format, channelCount, sampleRate);
    if (status) {
        *status = lStatus;
    }
    if (lStatus == NO_ERROR) {
        mOutput = out;
    } else {
        delete out;
    }
    return mOutput;
}

Here we can see that mOutput is initialized to an AudioStreamOutGeneric object, so audio data is written into the hardware buffer through AudioStreamOutGeneric's write function:

ssize_t AudioStreamOutGeneric::write(const void* buffer, size_t bytes)
{
    Mutex::Autolock _l(mLock);
    return ssize_t(::write(mFd, buffer, bytes));
}

This is the end of the audio data's journey: write pushes it into the kernel, and the audio driver then delivers it to the hardware codec.
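Note that the generic stream issues a single `::write` and does not handle short writes or signal interruption. A more defensive variant would loop until the whole buffer is consumed; this is a sketch of that idiom, not the actual HAL code:

```cpp
#include <cassert>
#include <cerrno>
#include <cstddef>
#include <fcntl.h>
#include <unistd.h>

// Write the whole PCM buffer to the driver fd, retrying on short writes
// and EINTR. Returns the total bytes written, or -1 on a real error.
ssize_t writeAll(int fd, const void* buffer, size_t bytes) {
    const char* p = static_cast<const char*>(buffer);
    size_t remaining = bytes;
    while (remaining > 0) {
        ssize_t n = ::write(fd, p, remaining);
        if (n < 0) {
            if (errno == EINTR) continue;  // interrupted by a signal: retry
            return -1;                     // genuine I/O error
        }
        p += n;
        remaining -= static_cast<size_t>(n);
    }
    return static_cast<ssize_t>(bytes);
}
```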

三、How audio data is produced

The playback analysis above assumed that the shared memory already held readable audio data. So when is that data written into the shared memory? The C++ AudioTrack constructor obtains the shared memory:

sp<IMemory> cblk = track->getCblk();

The server-side getCblk is invoked:

sp<IMemory> AudioFlinger::MixerThread::TrackBase::getCblk() const
{
    return mCblkMemory;
}

As analyzed earlier, mCblkMemory points to the start of the shared memory allocated for the track. The C++ AudioTrack's write function writes data into this shared memory, after first obtaining a writable block.
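The control block at the head of that shared memory keeps a producer counter (frames written by the client) and a consumer counter (frames read by the mixer), and "obtaining a writable block" means checking the difference between them. A simplified single-process model of that accounting, with field names loosely modeled on `audio_track_cblk_t` (the real control block adds locking, wrap handling, and cross-process wakeups):

```cpp
#include <cassert>
#include <cstdint>

// Simplified control-block model: `user` counts frames written by the
// AudioTrack client, `server` counts frames consumed by the mixer.
// Unsigned subtraction keeps the math correct across counter wraps.
struct SimpleCblk {
    uint32_t user   = 0;   // total frames written by the client
    uint32_t server = 0;   // total frames consumed by the mixer
    uint32_t frameCount;   // ring-buffer capacity, in frames

    explicit SimpleCblk(uint32_t frames) : frameCount(frames) {}

    uint32_t framesAvailableToWrite() const { return frameCount - (user - server); }
    uint32_t framesReadyToMix() const { return user - server; }

    void produce(uint32_t frames) { user += frames; }    // client side (write)
    void consume(uint32_t frames) { server += frames; }  // mixer side (getNextBuffer/releaseBuffer)
};
```

When `framesAvailableToWrite()` is zero the client blocks in obtainBuffer until the mixer consumes more frames, which is the back-pressure mechanism between the two processes.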

五、Stopping playback

When the user stops playback, the Track on the AudioFlinger side has its stop function called accordingly.

void AudioFlinger::MixerThread::Track::stop()
{
    LOGV("stop(%d), calling thread %d for output %d", mName, IPCThreadState::self()->getCallingPid(), mMixerThread->mOutputType);
    Mutex::Autolock _l(mMixerThread->mAudioFlinger->mLock);
    if (mState > STOPPED) {
        mState = STOPPED;
        // If the track is not active (PAUSED and buffers full), flush buffers
        if (mMixerThread->mActiveTracks.indexOf(this) < 0) {
            reset();
        }
        LOGV("(> STOPPED) => STOPPED (%d)", mName);
    }
}

The stop function merely sets the state to STOPPED; the actual stopping happens in the threadLoop function:

bool AudioFlinger::MixerThread::threadLoop()
{
    ... 
               } else {
                    //LOGV("track %d u=%08x, s=%08x [NOT READY]", track->name(), cblk->user, cblk->server);
                    if (track->isStopped()) {
                        track->reset();
                    }
                    if (track->isTerminated() || track->isStopped() || track->isPaused()) {
                        // We have consumed all the buffers of this track.
                        // Remove it from the list of active tracks.
                        LOGV("remove(%d) from active list", track->name());
                        tracksToRemove.add(track);
                    } else {
                        // No buffers for this track. Give it a few chances to
                        // fill a buffer, then remove it from active list.
                        if (--(track->mRetryCount) <= 0) {
                            LOGV("BUFFER TIMEOUT: remove(%d) from active list", track->name());
                            tracksToRemove.add(track);
                        }
                    }
                    // LOGV("disable(%d)", track->name());
                    mAudioMixer->disable(AudioMixer::MIXING);
                }
            }

            // remove all the tracks that need to be...
            count = tracksToRemove.size();
            if (UNLIKELY(count)) {
                for (size_t i=0 ; i<count ; i++) {
                    const sp<Track>& track = tracksToRemove[i];
                    removeActiveTrack_l(track);
                    if (track->isTerminated()) {
                        mTracks.remove(track);
                        deleteTrackName_l(track->mName);
                    }
                }
            }
        }
        ...
}

Stopping playback thus amounts to removing the Track from the active-track list. The Track itself is not destroyed yet; that only happens once the C++ AudioTrack destructor runs:
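Before removal, the "NOT READY" branch above also gives a starving track a few mix cycles to refill its buffers: it decrements `mRetryCount` every pass and only schedules removal when the count is exhausted. A sketch of that timeout policy (`kMaxRetries` is illustrative, not the real constant):

```cpp
#include <cassert>

// Number of consecutive empty mix cycles a track is allowed before it is
// dropped from the active list; illustrative value.
constexpr int kMaxRetries = 3;

struct TrackState {
    int retryCount = kMaxRetries;
};

// Mirrors `if (--(track->mRetryCount) <= 0) tracksToRemove.add(track);`
// in threadLoop: returns true once the track has run out of chances.
bool starvedOut(TrackState& t) {
    return --t.retryCount <= 0;
}
```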

AudioTrack::~AudioTrack()
{
    LOGV_IF(mSharedBuffer != 0, "Destructor sharedBuffer: %p", mSharedBuffer->pointer());

    if (mStatus == NO_ERROR) {
        // Make sure that callback function exits in the case where
        // it is looping on buffer full condition in obtainBuffer().
        // Otherwise the callback thread will never exit.
        stop();
        if (mAudioTrackThread != 0) {
            mAudioTrackThread->requestExitAndWait();
            mAudioTrackThread.clear();
        }
        mAudioTrack.clear();
        IPCThreadState::self()->flushCommands();
    }
}

mAudioTrack references the server-side TrackHandle, so the TrackHandle destructor is invoked:

AudioFlinger::TrackHandle::~TrackHandle() {
    mTrack->destroy();
}

This causes the Track's destroy function to be called:

void AudioFlinger::MixerThread::Track::destroy()
{
    sp<Track> keep(this);
    { // scope for AudioFlinger::mLock
        Mutex::Autolock _l(mMixerThread->mAudioFlinger->mLock);
        mMixerThread->destroyTrack_l(this);
    }
}

which delegates to the mixer thread's destroyTrack_l function:

void AudioFlinger::MixerThread::destroyTrack_l(const sp<Track>& track)
{
    track->mState = TrackBase::TERMINATED;
    if (mActiveTracks.indexOf(track) < 0) {
        LOGV("remove track (%d) and delete from mixer", track->name());
        mTracks.remove(track);
        deleteTrackName_l(track->name());
    }
}

Here the track is removed from mTracks, and the Track's lifecycle ends. Its destructors are then invoked:

AudioFlinger::MixerThread::TrackBase::~TrackBase()
{
    if (mCblk) {
        mCblk->~audio_track_cblk_t();   // destroy our shared-structure.        
    }
    mCblkMemory.clear();            // and free the shared memory
    mClient.clear();
}
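The explicit `mCblk->~audio_track_cblk_t()` call is the placement-delete idiom: the control block was constructed with placement new at the head of the track's shared memory, so it must be destroyed explicitly before the memory itself is released (here by clearing mCblkMemory). A minimal illustration of the idiom, with a hypothetical stand-in type:

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Hypothetical control-block type standing in for audio_track_cblk_t.
struct Cblk {
    int flags;
    Cblk() : flags(0) {}
    ~Cblk() {}  // would tear down mutexes/condvars in the real class
};

// Construct a Cblk inside caller-provided raw memory, as AudioFlinger
// does at the head of each track's shared memory region.
Cblk* constructInPlace(void* raw) {
    return new (raw) Cblk();  // placement new: no allocation, just construction
}

// Destroy it explicitly; the memory is released separately by its owner,
// just as ~TrackBase() runs the destructor and then clears mCblkMemory.
void destroyInPlace(Cblk* cblk) {
    cblk->~Cblk();
}
```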

The base-class destructor frees the shared memory; then the subclass destructor runs:

AudioFlinger::MixerThread::Track::~Track()
{
    wp<Track> weak(this); // never create a strong ref from the dtor
    Mutex::Autolock _l(mMixerThread->mAudioFlinger->mLock);
    mState = TERMINATED;
}

which updates the state to TERMINATED.

This completes the analysis of the audio playback call path.
