Android Recording Data Transfer

Today let's look at where the recording data in Android comes from.


Let's start from AudioRecord.


The interface in AudioRecord for retrieving recording data is AudioRecord::read.
It first calls obtainBuffer to get the address of the recording data,
then uses memcpy to copy the data out.
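The obtain-then-copy pattern can be sketched as follows. The Region struct and the fixed 4-byte chunk size are stand-ins of mine, not real AudioRecord types; the vector stands in for the shared ring buffer:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical stand-in for what obtainBuffer fills in: a pointer into
// the shared buffer plus the number of bytes available there.
struct Region {
    int8_t* raw;
    size_t  size;
};

// Sketch of the AudioRecord::read pattern: repeatedly obtain a region of
// the shared buffer and memcpy it into the caller's buffer.
size_t readLoop(const std::vector<int8_t>& shared, int8_t* dst, size_t want) {
    size_t done = 0;
    const size_t chunk = 4;  // pretend obtainBuffer hands out 4 bytes at a time
    while (done < want) {
        Region r{const_cast<int8_t*>(shared.data()) + done,
                 std::min(chunk, want - done)};
        std::memcpy(dst + done, r.raw, r.size);  // the copy AudioRecord::read performs
        done += r.size;
    }
    return done;
}
```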


So it looks like the data comes from the obtainBuffer function.


Let's look at AudioRecord::obtainBuffer.
Its main job is to fill in the audioBuffer passed to it.
audioBuffer is of type Buffer*.


The Buffer class:


    class Buffer
    {
    public:
        enum {
            MUTE    = 0x00000001
        };
        uint32_t    flags;
        int         channelCount;
        int         format;
        size_t      frameCount;
        size_t      size;
        union {
            void*       raw;
            short*      i16;
            int8_t*     i8;
        };
    };

The member that holds the data is the following union:
        union {
            void*       raw;
            short*      i16;
            int8_t*     i8;
        };

The code in AudioRecord::obtainBuffer that assigns it is:
audioBuffer->raw         = (int8_t*)cblk->buffer(u);


Where cblk comes from:
    audio_track_cblk_t* cblk = mCblk;

mCblk is assigned in the function AudioRecord::openRecord:
    mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
    mCblk->buffers = (char*)mCblk + sizeof(audio_track_cblk_t); // so mCblk starts with the control-block header, and the data follows right after it

The implementation of audio_track_cblk_t::buffer:
void* audio_track_cblk_t::buffer(uint64_t offset) const
{
    return (int8_t *)this->buffers + (offset - userBase) * this->frameSize;
}
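The pointer arithmetic in buffer() can be reproduced with a minimal model. Field names mirror the source; the concrete values below (a 4-byte frame, i.e. stereo 16-bit) are made up for illustration:

```cpp
#include <cstdint>
#include <cstddef>

// Minimal model of audio_track_cblk_t's addressing: the data area starts
// at `buffers`, and frame `offset` lives (offset - userBase) frames past it.
struct Cblk {
    int8_t*  buffers;    // start of the data area
    uint64_t userBase;   // base that frame offsets are relative to
    size_t   frameSize;  // bytes per frame

    void* buffer(uint64_t offset) const {
        return buffers + (offset - userBase) * frameSize;
    }
};
```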


So the data lives inside the audio_track_cblk_t structure.


Then who writes the data into audio_track_cblk_t?


Note that in AudioRecord::obtainBuffer, before fetching the buffer address, audio_track_cblk_t::framesReady is first called to find out how much data is ready.


Recall that on the playback side, the code that consumes data from audio_track_cblk_t also calls audio_track_cblk_t::framesReady,
while the code that writes data into audio_track_cblk_t calls audio_track_cblk_t::framesAvailable.


Recording must work the same way.
In other words, finding where audio_track_cblk_t::framesAvailable is called will lead us to where data is written into audio_track_cblk_t.
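The counter pair can be modeled with a toy control block. This is my simplification: `user` counts frames produced, `server` frames consumed, and the real audio_track_cblk_t additionally handles wrap-around bases and locking:

```cpp
#include <cstdint>
#include <cstddef>

// Toy model of the producer/consumer counters in audio_track_cblk_t.
struct ToyCblk {
    uint64_t user = 0;    // total frames produced (written by the recorder)
    uint64_t server = 0;  // total frames consumed (read by AudioRecord::read)
    size_t   frameCount;  // ring capacity in frames

    // data waiting for the reader
    size_t framesReady() const     { return size_t(user - server); }
    // room left for the writer
    size_t framesAvailable() const { return frameCount - framesReady(); }

    void produce(size_t n) { user += n; }    // recording side writes
    void consume(size_t n) { server += n; }  // AudioRecord::read side reads
};
```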


On the recording path, audio_track_cblk_t::framesAvailable is called from:
the AudioFlinger::RecordThread::RecordTrack::getNextBuffer function.


The main job of AudioFlinger::RecordThread::RecordTrack::getNextBuffer is to fill in the AudioBufferProvider::Buffer passed to it.


The AudioBufferProvider::Buffer structure:
    struct Buffer {
        union {
            void*       raw;
            short*      i16;
            int8_t*     i8;
        };
        size_t frameCount;
    };

Again, the member that holds the data is the union:
        union {
            void*       raw;
            short*      i16;
            int8_t*     i8;
        };

The code in AudioFlinger::RecordThread::RecordTrack::getNextBuffer that assigns it is:
        buffer->raw = getBuffer(s, framesReq);

The function AudioFlinger::ThreadBase::TrackBase::getBuffer returns an int8_t pointer, bufferStart.
bufferStart is computed as follows:
int8_t *bufferStart = (int8_t *)mBuffer + (offset-cblk->serverBase)*cblk->frameSize;


mBuffer is assigned in the constructor AudioFlinger::ThreadBase::TrackBase::TrackBase:
mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
Where mCblk comes from:
mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer()); or new(mCblk) audio_track_cblk_t();
This was already covered when studying audio playback, so it is not repeated here.
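The header-followed-by-data layout can be sketched like this, with a hypothetical Header standing in for audio_track_cblk_t; one allocation holds the control block and the audio data right behind it, exactly as mBuffer is computed:

```cpp
#include <cstdint>
#include <cstddef>
#include <new>

// Stand-in for audio_track_cblk_t (illustrative fields only).
struct Header { uint32_t user; uint32_t server; };

// One block of memory: header + 64 bytes of audio data.
alignas(Header) int8_t gBlock[sizeof(Header) + 64];

// Construct the control block in place, like new(mCblk) audio_track_cblk_t().
Header* cblk = new (gBlock) Header();

// The data area starts immediately after the header,
// mirroring: mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
int8_t* data = (int8_t*)cblk + sizeof(Header);
```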


Somewhere, AudioFlinger::RecordThread::RecordTrack::getNextBuffer must be called to obtain a buffer, into which the recording data is then written.


Searching for callers of AudioFlinger::RecordThread::RecordTrack::getNextBuffer, the recording-related one is:
the AudioFlinger::RecordThread::threadLoop function.


In AudioFlinger::RecordThread::threadLoop, AudioFlinger::RecordThread::RecordTrack::getNextBuffer is called to obtain the buffer,
which is then used to compute dst:
int8_t *dst = buffer.i8 + (buffer.frameCount - framesOut) * mActiveTrack->mCblk->frameSize;


Two cases now need to be discussed separately:
+++++++++++++++++++++++++++++++++++++++ no-resampling case - start ++++++++++++++++++++++++++++++++++++++++++++++
Data is written to dst in the following two places:
                                    while (framesIn--) {
                                        *dst16++ = *src16;
                                        *dst16++ = *src16++;
                                    }
or:
                                    while (framesIn--) {
                                        *dst16++ = (int16_t)(((int32_t)*src16 + (int32_t)*(src16 + 1)) >> 1);
                                        src16 += 2;
                                    }
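The two conversion loops above can be wrapped into small standalone functions (a sketch; the function names and signatures are mine, not AudioFlinger's). The first duplicates a mono stream into stereo, the second averages a stereo stream down to mono:

```cpp
#include <cstdint>
#include <cstddef>

// Mono -> stereo: write each input sample to both channels.
void monoToStereo(const int16_t* src16, int16_t* dst16, size_t framesIn) {
    while (framesIn--) {
        *dst16++ = *src16;    // left  = mono sample
        *dst16++ = *src16++;  // right = same mono sample
    }
}

// Stereo -> mono: average left and right; >> 1 halves the 32-bit sum
// so the intermediate addition cannot overflow 16 bits.
void stereoToMono(const int16_t* src16, int16_t* dst16, size_t framesIn) {
    while (framesIn--) {
        *dst16++ = (int16_t)(((int32_t)*src16 + (int32_t)*(src16 + 1)) >> 1);
        src16 += 2;
    }
}
```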

So the data source is src; let's see where src comes from.
int8_t *src = (int8_t *)mRsmpInBuffer + mRsmpInIndex * mFrameSize;


mRsmpInBuffer is allocated in the function AudioFlinger::RecordThread::readInputParameters:
mRsmpInBuffer = new int16_t[mFrameCount * mChannelCount];


This only allocates the buffer; to find the data source, we still need to see who writes into it.
The write into mRsmpInBuffer also happens in the AudioFlinger::RecordThread::threadLoop function:
                                mBytesRead = mInput->read(mRsmpInBuffer, mInputBytes);
---------------------------------------- no-resampling case - end -----------------------------------------------
+++++++++++++++++++++++++++++++++++++++ resampling case - start ++++++++++++++++++++++++++++++++++++++++++++++
The place where dst is written:
                        while (framesOut--) {
                            *dst++ = (int16_t)(((int32_t)*src + (int32_t)*(src + 1)) >> 1);
                            src += 2;
                        }

Again the data source is src; let's see where this src comes from.
                        int16_t *src = (int16_t *)mRsmpOutBuffer;

mRsmpOutBuffer is filled by the resampler (traced below); the resampler, in turn, pulls its input from mRsmpInBuffer.
As before, mRsmpInBuffer is allocated in the function AudioFlinger::RecordThread::readInputParameters:
mRsmpInBuffer = new int16_t[mFrameCount * mChannelCount];


This only allocates the buffer; in this case, the write into mRsmpInBuffer happens in the AudioFlinger::RecordThread::getNextBuffer function:
        mBytesRead = mInput->read(mRsmpInBuffer, mInputBytes);

Now let's look at how AudioFlinger::RecordThread::getNextBuffer gets called:


First, AudioFlinger::RecordThread::threadLoop calls AudioResamplerOrder1::resample:
                    mResampler->resample(mRsmpOutBuffer, framesOut, this);
AudioResamplerOrder1::resample calls AudioResamplerOrder1::resampleMono16:
        resampleMono16(out, outFrameCount, provider);
AudioResamplerOrder1::resampleMono16 calls AudioFlinger::RecordThread::getNextBuffer:
            provider->getNextBuffer(&mBuffer);

mResampler is assigned in the function AudioFlinger::RecordThread::readInputParameters:
        mResampler = AudioResampler::create(16, channelCount, mReqSampleRate);

AudioResampler::create returns a different resampler depending on the quality setting:
switch (quality) {
    default:
    case LOW_QUALITY:
        LOGV("Create linear Resampler");
        resampler = new AudioResamplerOrder1(bitDepth, inChannelCount, sampleRate);
        break;
    case MED_QUALITY:
        LOGV("Create cubic Resampler");
        resampler = new AudioResamplerCubic(bitDepth, inChannelCount, sampleRate);
        break;
    case HIGH_QUALITY:
        LOGV("Create sinc Resampler");
        resampler = new AudioResamplerSinc(bitDepth, inChannelCount, sampleRate);
        break;
    }

From the call chain above, mRsmpOutBuffer is ultimately passed as the out parameter to AudioResamplerOrder1::resampleMono16.
The writes into mRsmpOutBuffer thus happen in AudioResamplerOrder1::resampleMono16:


        // handle boundary case
        while (inputIndex == 0) {
            // LOGE("boundary case\n");
            int32_t sample = Interp(mX0L, in[0], phaseFraction);
            out[outputIndex++] += vl * sample;
            out[outputIndex++] += vr * sample;
            Advance(&inputIndex, &phaseFraction, phaseIncrement);
            if (outputIndex == outputSampleCount)
                break;
        }

or:


        while (outputIndex < outputSampleCount && inputIndex < mBuffer.frameCount) {
            int32_t sample = Interp(in[inputIndex-1], in[inputIndex],
                    phaseFraction);
            out[outputIndex++] += vl * sample;
            out[outputIndex++] += vr * sample;
            Advance(&inputIndex, &phaseFraction, phaseIncrement);
        }

Where in comes from:
        int16_t *in = mBuffer.i16;

How mBuffer gets filled:
            provider->getNextBuffer(&mBuffer);

This brings us back to the call to AudioFlinger::RecordThread::getNextBuffer discussed above.


AudioFlinger::RecordThread::getNextBuffer first calls AudioStreamInALSA::read to fetch the data:
        mBytesRead = mInput->read(mRsmpInBuffer, mInputBytes);

It then stores the data address into the AudioBufferProvider::Buffer passed in:
    buffer->raw = mRsmpInBuffer + mRsmpInIndex * channelCount;

Now let's look at how resampling processes the data.


            int32_t sample = Interp(in[inputIndex-1], in[inputIndex],
                    phaseFraction);

Where phaseFraction comes from:
    uint32_t phaseFraction = mPhaseFraction;

phaseFraction is passed as a parameter to the Advance function:
            Advance(&inputIndex, &phaseFraction, phaseIncrement);

Advance updates phaseFraction (and the input index):
    static inline void Advance(size_t* index, uint32_t* frac, uint32_t inc) {
        *frac += inc;
        *index += (size_t)(*frac >> kNumPhaseBits);
        *frac &= kPhaseMask;
    }

The constant definitions:
    // number of bits for phase fraction - 28 bits allows nearly 8x downsampling
    static const int kNumPhaseBits = 28;


    // phase mask for fraction
    static const uint32_t kPhaseMask = (1LU<<kNumPhaseBits)-1;
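A quick check of how this fixed-point phase accumulator behaves: with kNumPhaseBits = 28, a step of 1.5 input frames per output frame is 3 << 27 in this format, so two advances should move the input index by exactly 3 frames. The code below is the Advance logic from the source, made self-contained:

```cpp
#include <cstdint>
#include <cstddef>

// The phase accumulator from above: bits above kNumPhaseBits are the
// integer step, the low 28 bits the fraction.
static const int      kNumPhaseBits = 28;
static const uint32_t kPhaseMask    = (1LU << kNumPhaseBits) - 1;

static inline void Advance(size_t* index, uint32_t* frac, uint32_t inc) {
    *frac += inc;                               // accumulate phase
    *index += (size_t)(*frac >> kNumPhaseBits); // integer carry moves the input index
    *frac &= kPhaseMask;                        // keep only the fraction
}
```

For example, starting from index 0 and fraction 0 with inc = 3 << 27 (i.e. 1.5), the first Advance moves the index to 1 with fraction 0.5, and the second to 3 with fraction 0.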

Now the Interp function:
    static inline int32_t Interp(int32_t x0, int32_t x1, uint32_t f) {
        return x0 + (((x1 - x0) * (int32_t)(f >> kPreInterpShift)) >> kNumInterpBits);
    }

Constant definitions:
    // number of bits used in interpolation multiply - 15 bits avoids overflow
    static const int kNumInterpBits = 15;


    // bits to shift the phase fraction down to avoid overflow
    static const int kPreInterpShift = kNumPhaseBits - kNumInterpBits;
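A quick check of Interp: f is a 28-bit phase fraction, and shifting it down by kPreInterpShift leaves 15 bits so the multiply stays within 32 bits. With f = 1 << 27 (half of the full 28-bit range) the result should land halfway between x0 and x1. The code below is the Interp logic from the source, made self-contained:

```cpp
#include <cstdint>

// Linear interpolation between two samples, fixed-point.
static const int kNumPhaseBits   = 28;
static const int kNumInterpBits  = 15;  // 15 bits avoids overflow in the multiply
static const int kPreInterpShift = kNumPhaseBits - kNumInterpBits;  // 13

static inline int32_t Interp(int32_t x0, int32_t x1, uint32_t f) {
    // x0 + (x1 - x0) * fraction, with the fraction reduced to 15 bits
    return x0 + (((x1 - x0) * (int32_t)(f >> kPreInterpShift)) >> kNumInterpBits);
}
```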

Now vl and vr:
    int32_t vl = mVolume[0];
    int32_t vr = mVolume[1];

mVolume is set in AudioResampler::setVolume:
void AudioResampler::setVolume(int16_t left, int16_t right) {
    // TODO: Implement anti-zipper filter
    mVolume[0] = left;
    mVolume[1] = right;
}


AudioResampler::setVolume is called from AudioFlinger::RecordThread::readInputParameters:
        mResampler->setVolume(AudioMixer::UNITY_GAIN, AudioMixer::UNITY_GAIN);

Constant definition:
    static const uint16_t UNITY_GAIN = 0x1000;
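UNITY_GAIN = 0x1000 treats the gain as a fixed-point value with 12 fractional bits, so 0x1000 means exactly 1.0. The helper below is a hypothetical illustration of mine (not AudioFlinger code) showing how such a gain scales a sample and is undone by a 12-bit shift:

```cpp
#include <cstdint>

// 0x1000 = 4096 = 1.0 when the gain carries 12 fractional bits.
static const uint16_t UNITY_GAIN = 0x1000;

// Apply a 4.12 fixed-point gain to a 16-bit sample: multiply, then shift
// out the 12 fractional bits. At UNITY_GAIN the sample is unchanged.
static inline int32_t applyGain(int16_t sample, uint16_t gain) {
    return ((int32_t)sample * (int32_t)gain) >> 12;
}
```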
---------------------------------------- resampling case - end -----------------------------------------------


So we can see that, whether or not resampling is needed, the data is ultimately obtained by calling AudioStreamInALSA::read.


AudioStreamInALSA::read calls the ALSA library function snd_pcm_mmap_readi or snd_pcm_readi to fetch the data:
        if (mHandle->mmap)
            n = snd_pcm_mmap_readi(mHandle->handle, buffer, frames);
        else
            n = snd_pcm_readi(mHandle->handle, buffer, frames);