Android Audio Subsystem (2) ------ The threadLoop_write Data Write Flow

Hi there! This is Fengzheng's blog.
Feel free to discuss with me.

The previous article, Android Audio Subsystem (1) ------ The openOutput Flow,
covered how an output is opened. So when, and how, does data actually get written to it?

This post uses Android N as the example.

//@Threads.cpp
bool AudioFlinger::PlaybackThread::threadLoop()
{
	//......
	ret = threadLoop_write();
	//......
}

threadLoop itself is fairly complex, so I cover it separately here: Android Audio Subsystem (5) ------ The AudioFlinger Processing Flow

Let's take a quick look at PlaybackThread::threadLoop_write:

//@Threads.cpp
ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
     // If an NBAIO sink is present, use it to write the normal mixer's submix
     if (mNormalSink != 0) {
         ssize_t framesWritten = mNormalSink->write((char *)mSinkBuffer + offset, count);
     // otherwise use the HAL / AudioStreamOut directly
     } else {
         // Direct output and offload threads
         // FIXME We should have an implementation of timestamps for direct output threads.
         // They are used e.g for multichannel PCM playback over HDMI.
         bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);
     }
}

As the comments indicate, if mNormalSink has been assigned, mNormalSink->write is called; otherwise mOutput->write is called.
So there are two cases:
1. mNormalSink has been assigned
2. Direct output and offload

Let's look at the mNormalSink case first.

1. mNormalSink is assigned

Under normal circumstances, in the mixer scenario mNormalSink is always assigned.

//@Threads.h
class PlaybackThread : public ThreadBase {
private:
    // The HAL output sink is treated as non-blocking, but current implementation is blocking
    sp<NBAIO_Sink>          mOutputSink;
    // If a fast mixer is present, the blocking pipe sink, otherwise clear
    sp<NBAIO_Sink>          mPipeSink;
    // The current sink for the normal mixer to write it's (sub)mix, mOutputSink or mPipeSink
    sp<NBAIO_Sink>          mNormalSink;
}

//@NBAIO.h
class NBAIO_Sink : public NBAIO_Port {
    virtual ssize_t write(const void *buffer, size_t count) = 0;
}
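To make the dispatch concrete, here is a minimal, self-contained sketch of the NBAIO_Sink idea. All class names below are my own stand-ins, not the real AOSP classes: the caller (like threadLoop_write) only ever calls the abstract write(), and the concrete sink decides where the data actually goes.

```cpp
#include <cassert>
#include <cstddef>
#include <sys/types.h>
#include <vector>

// Stand-in for NBAIO_Sink: an abstract sink with a pure virtual write().
struct Sink {
    virtual ~Sink() {}
    virtual ssize_t write(const void *buffer, size_t count) = 0;
};

// Stand-in for AudioStreamOutSink: "writes" straight toward the HAL.
struct OutputSink : Sink {
    std::vector<char> hal;  // pretend HAL-side buffer
    ssize_t write(const void *buffer, size_t count) override {
        const char *p = static_cast<const char *>(buffer);
        hal.insert(hal.end(), p, p + count);
        return static_cast<ssize_t>(count);
    }
};

// Stand-in for the MonoPipe sink: buffers data for a FastMixer to drain.
struct PipeSink : Sink {
    std::vector<char> pipe;  // pretend pipe contents
    ssize_t write(const void *buffer, size_t count) override {
        const char *p = static_cast<const char *>(buffer);
        pipe.insert(pipe.end(), p, p + count);
        return static_cast<ssize_t>(count);
    }
};

// Mirrors the dispatch in threadLoop_write: the caller never knows
// which concrete sink the pointer refers to.
ssize_t writeThroughSink(Sink *normalSink, const void *buf, size_t count) {
    return normalSink->write(buf, count);
}
```

The point of this sketch is just that the concrete type behind mNormalSink is decided once (at construction time), and the write path afterwards is a plain virtual call.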

mNormalSink is a pointer of type NBAIO_Sink, and NBAIO_Sink::write is a pure virtual function. To find the actual write implementation, we first need to see what mNormalSink is assigned to.

Searching the code for mNormalSink shows that it is assigned in the MixerThread constructor (MixerThread inherits from PlaybackThread):

//@Threads.cpp
static const enum {
    FastMixer_Never,    // never initialize or use: for debugging only
    FastMixer_Always,   // always initialize and use, even if not needed: for debugging only
                        // normal mixer multiplier is 1
    FastMixer_Static,   // initialize if needed, then use all the time if initialized,
                        // multiplier is calculated based on min & max normal mixer buffer size
    FastMixer_Dynamic,  // initialize if needed, then use dynamically depending on track load,
                        // multiplier is calculated based on min & max normal mixer buffer size
} kUseFastMixer = FastMixer_Static;

AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
        audio_io_handle_t id, audio_devices_t device, bool systemReady, type_t type)
    :   PlaybackThread(audioFlinger, output, id, device, type, systemReady),// PlaybackThread is constructed here
        // mAudioMixer below
        // mFastMixer below
        mFastMixerFutex(0),
        mMasterMono(false)
        // mOutputSink below
        // mPipeSink below
        // mNormalSink below
{
    mAudioMixer = new AudioMixer(mNormalFrameCount, mSampleRate);
    mOutputSink = new AudioStreamOutSink(output->stream);
    // initialize fast mixer depending on configuration
    bool initFastMixer;
    switch (kUseFastMixer) {//kUseFastMixer = FastMixer_Static
    case FastMixer_Never:
        initFastMixer = false;
        break;
    case FastMixer_Always:
        initFastMixer = true;
        break;
    case FastMixer_Static:
    case FastMixer_Dynamic:
        initFastMixer = mFrameCount < mNormalFrameCount;
        break;
    }
    
    MonoPipe *monoPipe = new MonoPipe(mNormalFrameCount * 4, format, true /*writeCanBlock*/);
    mPipeSink = monoPipe;
    
    // create fast mixer and configure it initially with just one fast track for our submix
    mFastMixer = new FastMixer();
    // start the fast mixer
    mFastMixer->run("FastMixer", PRIORITY_URGENT_AUDIO);

    switch (kUseFastMixer) {//kUseFastMixer = FastMixer_Static
    case FastMixer_Never:
    case FastMixer_Dynamic:
        mNormalSink = mOutputSink;
        break;
    case FastMixer_Always:
        mNormalSink = mPipeSink;
        break;
    case FastMixer_Static:
        mNormalSink = initFastMixer ? mPipeSink : mOutputSink;
        break;
    }
}

FastMixer shows up here as well, but it is not the focus of this article, so I will set it aside for now.
To make things harder, the mNormalSink assignment again splits into two cases. By default kUseFastMixer = FastMixer_Static, and initFastMixer = mFrameCount < mNormalFrameCount.

So there are two cases to discuss:
1. mNormalSink = mOutputSink;
2. mNormalSink = mPipeSink;
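The sink-selection logic can be condensed into a small standalone function. This is only a sketch with made-up names that mirrors the switch in the MixerThread constructor; under FastMixer_Static, the pipe sink is chosen only when the HAL period (mFrameCount) is smaller than the normal mixer period (mNormalFrameCount), i.e. when a fast mixer is actually needed.

```cpp
#include <cassert>
#include <cstddef>

// Made-up names mirroring kUseFastMixer and the two candidate sinks.
enum FastMixerPolicy { Never, Always, Static, Dynamic };
enum SinkKind { OutputSinkKind, PipeSinkKind };

// Condensed version of the switch in the MixerThread constructor.
SinkKind chooseNormalSink(FastMixerPolicy policy,
                          size_t frameCount, size_t normalFrameCount) {
    bool initFastMixer = frameCount < normalFrameCount;
    switch (policy) {
    case Never:
    case Dynamic:
        return OutputSinkKind;   // mNormalSink = mOutputSink
    case Always:
        return PipeSinkKind;     // mNormalSink = mPipeSink
    case Static:
    default:
        return initFastMixer ? PipeSinkKind : OutputSinkKind;
    }
}
```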

1.1 mNormalSink = mOutputSink

Let's see where mOutputSink comes from: mOutputSink = new AudioStreamOutSink(output->stream);

//@AudioStreamOutSink.h
class AudioStreamOutSink : public NBAIO_Sink {
    sp<StreamOutHalInterface> mStream;
}

//@AudioStreamOutSink.cpp
AudioStreamOutSink::AudioStreamOutSink(sp<StreamOutHalInterface> stream) :
        NBAIO_Sink(),
        mStream(stream),
        mStreamBufferSizeBytes(0)
{
    ALOG_ASSERT(stream != 0);
}

Here mStream is initialized from the stream parameter, i.e. the output->stream that was passed in.
In other words, when mNormalSink = mOutputSink, the mNormalSink->write in PlaybackThread::threadLoop_write is AudioStreamOutSink::write:

//@AudioStreamOutSink.cpp
ssize_t AudioStreamOutSink::write(const void *buffer, size_t count)
{
    ssize_t ret = mStream->write(mStream, buffer, count * mFrameSize);
    if (ret > 0) {
        ret /= mFrameSize;
        mFramesWritten += ret;
    } else {
        // FIXME verify HAL implementations are returning the correct error codes e.g. WOULD_BLOCK
    }
    return ret;
}
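The unit conversion in this function is worth spelling out: the NBAIO layer counts in frames, while the HAL stream counts in bytes, so the sink multiplies by the frame size on the way down and divides on the way up. A minimal sketch, where halWriteBytes is a made-up stand-in for mStream->write:

```cpp
#include <cassert>
#include <cstddef>
#include <sys/types.h>

// Sketch of the frames<->bytes conversion in AudioStreamOutSink::write.
// halWriteBytes stands in for the HAL write and is an assumption.
ssize_t sinkWrite(size_t frames, size_t frameSize,
                  ssize_t (*halWriteBytes)(size_t bytes)) {
    ssize_t ret = halWriteBytes(frames * frameSize);  // bytes accepted by HAL
    if (ret > 0) {
        ret /= static_cast<ssize_t>(frameSize);       // convert back to frames
    }
    return ret;
}

// A toy HAL that accepts everything it is given.
ssize_t acceptAll(size_t bytes) { return static_cast<ssize_t>(bytes); }
```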

So where does this mStream->write end up? From the type alone, StreamOutHalInterface is clearly HAL-related.
As we just saw, the AudioStreamOutSink constructor initializes mStream from the output->stream parameter, so let's trace where output->stream comes from:
output is passed into the MixerThread constructor, so where is MixerThread new'ed?

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());
            
    if (status == NO_ERROR) {
        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
        } else {
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);// here!!!
        }
        mPlaybackThreads.add(*output, thread);
    }
}

The earlier article touched on the open flow: Android Audio Subsystem (1) ------ The openOutput Flow
Here, &outputStream, the argument passed to outHwDev->openOutputStream, is handed to new MixerThread:

//@AudioHwDevice.cpp
status_t AudioHwDevice::openOutputStream(
        AudioStreamOut **ppStreamOut,
        audio_io_handle_t handle,
        audio_devices_t devices,
        audio_output_flags_t flags,
        struct audio_config *config,
        const char *address)
{
	// create the AudioStreamOut audio output stream
    AudioStreamOut *outputStream = new AudioStreamOut(this, flags);
    *ppStreamOut = outputStream;// assigned here, i.e. through &outputStream
}

So outputStream is an output stream, which means the mStream->write inside AudioStreamOutSink::write is AudioStreamOut::write:

//@AudioStreamOut.h
class AudioStreamOut {
public:
    audio_stream_out_t *stream;
}

//@AudioStreamOut.cpp
ssize_t AudioStreamOut::write(const void *buffer, size_t numBytes)
{
    ALOG_ASSERT(stream != NULL);
    ssize_t bytesWritten = stream->write(stream, buffer, numBytes);
    if (bytesWritten > 0 && mHalFrameSize > 0) {
        mFramesWritten += bytesWritten / mHalFrameSize;
    }
    return bytesWritten;
}

Here we can clearly see stream->write. stream is a member of the AudioStreamOut class, so where is it assigned?

status_t AudioStreamOut::open(
        audio_io_handle_t handle,
        audio_devices_t devices,
        struct audio_config *config,
        const char *address)
{
    audio_stream_out_t *outStream;

    int status = hwDev()->open_output_stream(
            hwDev(),
            handle,
            devices,
            customFlags,
            config,
            &outStream,
            address);

    if (status == NO_ERROR) {
        stream = outStream;
    }
}

There really are a lot of steps in this flow. During open, stream is assigned: stream = outStream. At this point we have reached the HAL layer: adev->hw_device.open_output_stream = adev_open_output_stream. I will not paste the HAL code in detail here.

To sum up: when mNormalSink = mOutputSink, mNormalSink->write eventually reaches the HAL layer's write operation.

1.2 mNormalSink = mPipeSink

So what about the mNormalSink = mPipeSink case? This one is simpler.
In the MixerThread constructor:

MonoPipe *monoPipe = new MonoPipe(mNormalFrameCount * 4, format, true /*writeCanBlock*/);
mPipeSink = monoPipe;

So mNormalSink->write is MonoPipe::write:

ssize_t MonoPipe::write(const void *buffer, size_t count)
{
}

To be honest, I have not fully understood this part yet. Android really is complicated; I will leave it for now.
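Even without the real implementation, the core idea of MonoPipe can still be sketched: it is a single-writer, single-reader FIFO sitting between the normal mixer (which writes) and the FastMixer (which reads). Below is only a toy, non-thread-safe ring buffer to illustrate that concept; the real MonoPipe is a carefully designed non-blocking pipe, and none of these names come from AOSP.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <sys/types.h>
#include <vector>

// Toy single-writer, single-reader byte FIFO (conceptual only).
class TinyPipe {
    std::vector<char> mBuf;
    size_t mRead = 0, mWritten = 0;   // monotonically increasing counters
public:
    explicit TinyPipe(size_t capacity) : mBuf(capacity) {}
    size_t availableToWrite() const { return mBuf.size() - (mWritten - mRead); }
    size_t availableToRead() const { return mWritten - mRead; }

    // Writer side (normal mixer): copy at most the free space, may be partial.
    ssize_t write(const void *src, size_t count) {
        size_t n = std::min(count, availableToWrite());
        const char *p = static_cast<const char *>(src);
        for (size_t i = 0; i < n; i++)
            mBuf[(mWritten + i) % mBuf.size()] = p[i];
        mWritten += n;
        return static_cast<ssize_t>(n);
    }

    // Reader side (what the FastMixer would do): drain bytes out.
    ssize_t read(void *dst, size_t count) {
        size_t n = std::min(count, availableToRead());
        char *p = static_cast<char *>(dst);
        for (size_t i = 0; i < n; i++)
            p[i] = mBuf[(mRead + i) % mBuf.size()];
        mRead += n;
        return static_cast<ssize_t>(n);
    }
};
```

With writeCanBlock, the real MonoPipe additionally sleeps the writer when the pipe is full instead of returning a short count, which throttles the normal mixer to the FastMixer's pace.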

2.Direct output and offload

Now for the Direct output and offload case:
A typical HDMI device takes the Direct output path, so let's analyze that.

//AudioStreamOut	*mOutput;
bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);

Searching this file for where mOutput is initialized, we find:

AudioFlinger::PlaybackThread::PlaybackThread(const sp<AudioFlinger>& audioFlinger,
                                             AudioStreamOut* output,
                                             audio_io_handle_t id,
                                             audio_devices_t device,
                                             type_t type,
                                             bool systemReady)
    :   ThreadBase(audioFlinger, id, device, AUDIO_DEVICE_NONE, type, systemReady),
        //......
        mActiveTracksGeneration(0),
        // mStreamTypes[] initialized in constructor body
        mOutput(output),// initialized right here
        mLastWriteTime(-1), mNumWrites(0), mNumDelayedWrites(0), mInWrite(false),
        mMixerStatus(MIXER_IDLE),
        //......
{
}

In the PlaybackThread constructor, mOutput is initialized from output, a constructor parameter. So where is that created?
Usually in the constructor of a playback thread instance, such as OffloadThread, DirectOutputThread, or MixerThread. MixerThread is the most common and was shown at the start of this article, so let's analyze with MixerThread:

AudioFlinger::MixerThread::MixerThread(const sp<AudioFlinger>& audioFlinger, AudioStreamOut* output,
        audio_io_handle_t id, audio_devices_t device, bool systemReady, type_t type)
    :   PlaybackThread(audioFlinger, output, id, device, type, systemReady),// right here
        // mAudioMixer below
        // mFastMixer below
        mFastMixerFutex(0),
        mMasterMono(false)
        // mOutputSink below
        // mPipeSink below
        // mNormalSink below
{
}

Tracing further: the output in PlaybackThread(audioFlinger, output, id, device, type, systemReady), i.e. the output parameter of MixerThread, is passed in from where?

//AudioFlinger.cpp
sp<AudioFlinger::ThreadBase> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
	// outputStream is initialized here
    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());
            
    if (status == NO_ERROR) {

        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
        } else {
        	// create the MixerThread
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
        }
        mPlaybackThreads.add(*output, thread);
        return thread;
    }
}

We are back to the familiar openOutput_l function: Android Audio Subsystem (1) ------ The openOutput Flow

So mOutput->write is AudioStreamOut::write, and that write pushes the data down to the lower layers.

However…

There is actually no need for all that tracing: mOutput is a member of class PlaybackThread, declared with type AudioStreamOut:

class PlaybackThread : public ThreadBase {
	AudioStreamOut                  *mOutput;
}

So we can also tell directly that mOutput->write is AudioStreamOut::write. (Note that the implementation below comes from a device-specific copy of AudioStreamOut, which is why it differs from the frameworks version shown earlier:)

ssize_t AudioStreamOut::write(const void* buffer, size_t bytes)
{
    AudioOutputList::iterator I;
    bool checkDMAStart = false;
    bool hasActiveOutputs = false;
    {
        Mutex::Autolock _l(mRoutingLock);
        for (I = mPhysOutputs.begin(); I != mPhysOutputs.end(); ++I) {
            if (AudioOutput::PRIMED == (*I)->getState())
                checkDMAStart = true;

            if ((*I)->getState() == AudioOutput::ACTIVE)
                hasActiveOutputs = true;
        }
    }
    if (checkDMAStart) {
        int64_t junk;
        getNextWriteTimestamp_internal(&junk);
    }

    // We always call processOneChunk on the outputs, as it is the
    // tick for their state machines.
    {
        Mutex::Autolock _l(mRoutingLock);
        for (I = mPhysOutputs.begin(); I != mPhysOutputs.end(); ++I) {
            (*I)->processOneChunk((uint8_t *)buffer, bytes, hasActiveOutputs, mInputFormat);
        }

        // If we don't actually have any physical outputs to write to, just sleep
        // for the proper amount of time in order to simulate the throttle that writing
        // to the hardware would impose.
        uint32_t framesWritten = bytes / mInputFrameSize;
        finishedWriteOp(framesWritten, (0 == mPhysOutputs.size()));
    }
}

Because this is direct output, the function first checks whether DMA has started, then whether there are any active outputs.
After that, (*I)->processOneChunk is called to process the data:

void AudioOutput::processOneChunk(const uint8_t* data, size_t len,
                                  bool hasActiveOutputs, audio_format_t format) {
        doPCMWrite(data, len, format);// write the PCM data
}

In the end this also calls down to pcm_write, and the whole path is connected.
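Putting the whole trace together, the write path is essentially a chain of thin wrappers. The sketch below records each hop with made-up forwarding functions; the real code obviously does far more at each layer, and the names are only labels for the stations we traced.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <sys/types.h>
#include <vector>

// Records which layer each write passed through, top to bottom.
std::vector<std::string> gTrace;

ssize_t pcmWrite(const void *, size_t bytes) {               // tinyalsa-level write
    gTrace.push_back("pcm_write");
    return static_cast<ssize_t>(bytes);
}
ssize_t halStreamWrite(const void *buf, size_t bytes) {      // HAL stream->write
    gTrace.push_back("hal_stream_write");
    return pcmWrite(buf, bytes);
}
ssize_t audioStreamOutWrite(const void *buf, size_t bytes) { // AudioStreamOut::write
    gTrace.push_back("AudioStreamOut::write");
    return halStreamWrite(buf, bytes);
}
ssize_t threadLoopWrite(const void *buf, size_t bytes) {     // PlaybackThread side
    gTrace.push_back("threadLoop_write");
    return audioStreamOutWrite(buf, bytes);
}
```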
