The basic framework of Android audio effects

I'm writing this mainly to help myself remember and summarize.

Let's start with how an audio effect is created.

AudioEffect::AudioEffect -> AudioEffect::set -> AudioFlinger::createEffect -> AudioFlinger::ThreadBase::createEffect_l

Creation basically follows the path above. So how does the effect actually take effect?
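To make the layering explicit, the delegation above can be sketched as a toy call chain. This is not AOSP code: every function body here is a stand-in that just records the hop.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of the creation path: each layer simply delegates to the
// next, recording its name so the delegation order is visible.
struct Trace { std::vector<std::string> hops; };

// Stand-in for ThreadBase::createEffect_l -- the locked worker that
// actually instantiates the effect module on the playback thread.
bool createEffect_l(Trace& t) {
    t.hops.push_back("ThreadBase::createEffect_l");
    return true; // pretend the effect module was created
}

// Stand-in for AudioFlinger::createEffect -- locates the right thread,
// then calls its _l (locked) variant.
bool createEffect(Trace& t) {
    t.hops.push_back("AudioFlinger::createEffect");
    return createEffect_l(t);
}

// Stand-in for AudioEffect::set -- in the real code this is where the
// client crosses into audioserver via binder.
bool set(Trace& t) {
    t.hops.push_back("AudioEffect::set");
    return createEffect(t);
}

// Stand-in for the AudioEffect constructor, which delegates to set().
bool audioEffectCtor(Trace& t) {
    t.hops.push_back("AudioEffect::AudioEffect");
    return set(t);
}
```

In the real code each hop crosses a boundary: the constructor runs in the client process, set() goes over binder into AudioFlinger, and createEffect_l runs with the thread lock held on the matching playback thread.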

An audio effect processes the audio signal; effects exist for both playback and recording, but here I'll focus on playback. The effect's job is to process the data just before it is written to the HAL (or driver). The relevant code:

bool AudioFlinger::PlaybackThread::threadLoop()
{
    .....
        if (mBytesRemaining == 0) {
            mCurrentWriteLength = 0;
            if (mMixerStatus == MIXER_TRACKS_READY) {
                // threadLoop_mix() sets mCurrentWriteLength
                threadLoop_mix();
            } else if ((mMixerStatus != MIXER_DRAIN_TRACK)
                        && (mMixerStatus != MIXER_DRAIN_ALL)) {
                // threadLoop_sleepTime sets sleepTime to 0 if data
                // must be written to HAL
                threadLoop_sleepTime();
                if (sleepTime == 0) {
                    mCurrentWriteLength = mixBufferSize;
                }
            }
            mBytesRemaining = mCurrentWriteLength;
            if (isSuspended()) {
                sleepTime = suspendSleepTimeUs();
                // simulate write to HAL when suspended
                mBytesWritten += mixBufferSize;
                mBytesRemaining = 0;
            }

            // only process effects if we're going to write
            if (sleepTime == 0 && mType != OFFLOAD) {
                for (size_t i = 0; i < effectChains.size(); i ++) {
                    effectChains[i]->process_l();
                }
            }
        }
        // Process effect chains for offloaded thread even if no audio
        // was read from audio track: process only updates effect state
        // and thus does have to be synchronized with audio writes but may have
        // to be called while waiting for async write callback
        if (mType == OFFLOAD) {
            for (size_t i = 0; i < effectChains.size(); i ++) {
                effectChains[i]->process_l();
            }
        }

        // enable changes in effect chain
        unlockEffectChains(effectChains);

        if (!waitingAsyncCallback()) {
            // sleepTime == 0 means we must write to audio hardware
            if (sleepTime == 0) {
                if (mBytesRemaining) {
                    ALOGV("PlaybackThread::threadLoop()>>>>>>>> ready to write data to HAL");
                    ssize_t ret = threadLoop_write();
    ....
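The gating above (process the effect chains only when we are about to write, but always for an offload thread) can be reduced to a small predicate. This is a sketch with hypothetical names, not AOSP code:

```cpp
#include <cassert>

// Toy model of the effect-processing decision in threadLoop().
enum ThreadType { MIXER, DIRECT, OFFLOAD };

// For an OFFLOAD thread the chains are processed unconditionally (the
// call only updates effect state and need not line up with writes);
// for every other thread type they run only when sleepTime == 0,
// i.e. when data is about to be written to the HAL.
bool shouldProcessEffects(ThreadType type, int sleepTimeUs) {
    if (type == OFFLOAD) {
        return true;
    }
    return sleepTimeUs == 0;
}
```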
process_l is the effect processing function. Since data is being processed, there must be an input and an output for that data. These should have been set up when the audio effect was created; let's check the code to confirm.

status_t AudioFlinger::PlaybackThread::addEffectChain_l(const sp<EffectChain>& chain)
{
    int session = chain->sessionId();
    int16_t *buffer = mMixBuffer;
    bool ownsBuffer = false;
    if (session > 0) {
        // Only one effect chain can be present in direct output thread and it uses
        // the mix buffer as input
        if (mType != DIRECT) {
            size_t numSamples = mNormalFrameCount * mChannelCount;
            buffer = new int16_t[numSamples];
            memset(buffer, 0, numSamples * sizeof(int16_t));
            ALOGV("addEffectChain_l() creating new input buffer %p session %d", buffer, session);
            ownsBuffer = true;
        }

        // Attach all tracks with same session ID to this chain.
        for (size_t i = 0; i < mTracks.size(); ++i) {
            sp<Track> track = mTracks[i];
            if (session == track->sessionId()) {
                ALOGV("addEffectChain_l() track->setMainBuffer track %p buffer %p", track.get(),
                        buffer);
                track->setMainBuffer(buffer); /* the track's mainBuffer is re-pointed here to the buffer allocated above, not to mMixBuffer */
                chain->incTrackCnt();
            }
        }

        // indicate all active tracks in the chain
        for (size_t i = 0 ; i < mActiveTracks.size() ; ++i) {
            sp<Track> track = mActiveTracks[i].promote();
            if (track == 0) {
                continue;
            }
            if (session == track->sessionId()) {
                ALOGV("addEffectChain_l() activating track %p on session %d", track.get(), session);
                chain->incActiveTrackCnt();
            }
        }
    }

    chain->setInBuffer(buffer, ownsBuffer); /* these two calls clearly set the chain's input and output buffers */
    chain->setOutBuffer(mMixBuffer);
    ... 
As analyzed before, a track's mainBuffer receives the mixed data. So when an effect chain is attached, the chain's inBuffer holds the mixed data after mixing; the chain then runs its effect processing on that input and, naturally, writes the processed result to mMixBuffer as its output.
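To make the routing concrete, here is a toy model of the path described above (plain C++, not AOSP code; the buffer size and the "halve the volume" effect are made up): the track mixes into the chain's inBuffer (its re-pointed mainBuffer), the effect processes it, and the result lands in mMixBuffer, which is what gets written to the HAL.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Toy stand-ins for the thread mix buffer and the per-session buffer
// that addEffectChain_l() allocates.
int16_t mMixBuffer[4]    = {0, 0, 0, 0}; // chain outBuffer -> written to HAL
int16_t chainInBuffer[4] = {0, 0, 0, 0}; // chain inBuffer == track mainBuffer

// Mixer step: the track's mainBuffer now points at chainInBuffer, so
// the mix lands there instead of in mMixBuffer.
void mixTrack(const int16_t* trackData, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        chainInBuffer[i] += trackData[i];
    }
}

// Effect step: a trivial "halve the volume" effect standing in for
// process_l(); it reads the inBuffer and writes the outBuffer.
void processChain(size_t n) {
    for (size_t i = 0; i < n; ++i) {
        mMixBuffer[i] += chainInBuffer[i] / 2;
    }
}
```

Without an effect chain, mixTrack would write straight into mMixBuffer; the whole trick of addEffectChain_l is interposing chainInBuffer between those two steps.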

It's getting late; I'll continue tomorrow...
