A Brief Read of the Android Audio Source (Part 1) ------ AudioTrack Creation and start

      Audio playback is a basic capability of every Android device, so I have put together an article on the playback flow around AudioTrack and AudioFlinger. We will trace the flow starting from AudioTrack creation (createTrack) and playback (start).
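Before diving into the framework internals, here is a minimal sketch of the client-side calls that kick this flow off, using the native C++ AudioTrack API (the Java android.media.AudioTrack goes through the same native path). This is illustrative only: the constructor arguments shown are a simplified subset of the real signature, and error handling is mostly omitted.

#include <media/AudioTrack.h>

using namespace android;

// Minimal sketch, not production code: push a buffer of 16-bit stereo PCM
// through an AudioTrack. Constructing the AudioTrack triggers the
// createTrack() path, and start() triggers the Track::start() path traced below.
void playPcm(const void *pcm, size_t bytes) {
    sp<AudioTrack> track = new AudioTrack(
            AUDIO_STREAM_MUSIC,              // streamType
            48000,                           // sampleRate
            AUDIO_FORMAT_PCM_16_BIT,         // format
            AUDIO_CHANNEL_OUT_STEREO);       // channelMask (other params left at defaults)
    if (track->initCheck() != NO_ERROR) {    // server-side track creation failed
        return;
    }
    track->start();                          // ends up in Track::start()
    track->write(pcm, bytes);                // data goes into the shared ring buffer
    track->stop();
}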

       1. Let's start with the output. An output can be understood as an audio path at the HAL layer; typical outputs include primary out, lowlatency out, offload, and direct_pcm.

Outputs are created when AudioPolicyManager is initialized, as shown below. mHwModules holds the audio hardware modules, such as primary / a2dp / usb / r_submix / dp; mOutputProfiles holds the outputs under each module, e.g. the primary module contains primary out / lowlatency out / offload / direct_pcm.

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
    for (size_t i = 0; i < mHwModules.size(); i++) {
        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++) {
            const sp<IOProfile> outProfile = mHwModules[i]->mOutputProfiles[j];
            status_t status = mpClientInterface->openOutput(outProfile->getModuleHandle(),
                                                            &output,
                                                            &config,
                                                            &outputDesc->mDevice,
                                                            address,
                                                            &outputDesc->mLatency,
                                                            outputDesc->mFlags);
        }
    }
}

This code calls openOutput to open the audio path and get back an output id; the implementation lives in AudioFlinger, as shown below:

sp<AudioFlinger::ThreadBase> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{    
    if (*output == AUDIO_IO_HANDLE_NONE) {
        *output = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);    // generate the output id
    } else {
        // Audio Policy does not currently request a specific output handle.
        // If this is ever needed, see openInput_l() for example code.
        ALOGE("openOutput_l requested output handle %d is not AUDIO_IO_HANDLE_NONE", *output);
        return 0;
    }

    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(   // open the audio path at the HAL layer
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());

    sp<PlaybackThread> thread;
    if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
        thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
        ALOGV("openOutput_l() created offload output: ID %d thread %p",
              *output, thread.get());
    } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
            || !isValidPcmSinkFormat(config->format)
            || !isValidPcmSinkChannelMask(config->channel_mask)) {
        thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
        ALOGV("openOutput_l() created direct output: ID %d thread %p",
              *output, thread.get());
    } else {
        thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
        ALOGV("openOutput_l() created mixer output: ID %d thread %p",
              *output, thread.get());
    }
    mPlaybackThreads.add(*output, thread);     // bind the thread to the output id
    return thread;
}

 First, an output id is obtained via nextUniqueId; then openOutputStream is called to open the HAL audio path, and a playback thread is created: OffloadThread / DirectOutputThread / MixerThread, all of which inherit from PlaybackThread. Finally the output id and the thread are added to the mPlaybackThreads collection, which binds them together (when an AudioTrack is created later, the playback thread is looked up by its output id).
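For reference, the "binding" is just a keyed-vector lookup. A simplified view of the relevant AudioFlinger members (names taken from AOSP; details can differ between Android versions):

// Inside AudioFlinger: playback threads are stored keyed by their output id.
DefaultKeyedVector<audio_io_handle_t, sp<PlaybackThread>> mPlaybackThreads;

// checkPlaybackThread_l() must be called with AudioFlinger::mLock held.
AudioFlinger::PlaybackThread *AudioFlinger::checkPlaybackThread_l(audio_io_handle_t output) const
{
    return mPlaybackThreads.valueFor(output).get();   // look the thread up by output id
}

This is exactly the lookup that createTrack performs below when it calls checkPlaybackThread_l(output).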

      2. Next, let's trace the flow starting from AudioTrack's createTrack:

createTrack_l creates a Track that the application writes its data into; the final implementation of createTrack_l lives in AudioFlinger. Let's look at the code:

path: frameworks/av/media/libaudioclient/AudioTrack.cpp
status = AudioSystem::getOutputForAttr(attr, &output,  // obtain the output id
                                       mSessionId, &streamType, mClientUid,
                                       &config,
                                       mFlags, mSelectedDeviceId, &mPortId);

sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                  mSampleRate,
                                                  mFormat,
                                                  mChannelMask,
                                                  &temp,
                                                  &flags,
                                                  mSharedBuffer,
                                                  output /* ...remaining arguments omitted */);

getOutputForAttr is called to find a suitable output for playback, and then AudioFlinger::createTrack is invoked:

sp<IAudioTrack> AudioFlinger::createTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *frameCount,
        audio_output_flags_t *flags,
        const sp<IMemory>& sharedBuffer,
        audio_io_handle_t output,
        pid_t pid,
        pid_t tid,
        audio_session_t *sessionId,
        int clientUid,
        status_t *status,
        audio_port_handle_t portId)
{
    PlaybackThread *thread = checkPlaybackThread_l(output);  // look up the playback thread by output id
    track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelMask, frameCount, sharedBuffer, lSessionId, flags, tid,
                clientUid, &lStatus, portId);  // create the track

    trackHandle = new TrackHandle(track);  // wrap the track in a Binder object so it can be passed across processes
    return trackHandle;
}

checkPlaybackThread_l is called to look up the playback thread by output id, and then that playback thread's createTrack_l is called to obtain the track. Let's look at createTrack_l inside the playback thread:

sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *pFrameCount,
        const sp<IMemory>& sharedBuffer,
        audio_session_t sessionId,
        audio_output_flags_t *flags,
        pid_t tid,
        uid_t uid,
        status_t *status,
        audio_port_handle_t portId)
{
    track = new Track(this, client, streamType, sampleRate, format,
                      channelMask, frameCount, NULL, sharedBuffer,
                      sessionId, uid, *flags, TrackBase::TYPE_DEFAULT, portId);

    mTracks.add(track);
    return track;
}

What new Track actually does here is take parameters such as streamType / sampleRate / format and allocate the ring buffer that AudioTrack writes into and AudioFlinger reads from, as sketched below.
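Concretely, the shared buffer comes from the TrackBase constructor (Track's base class). The following is a heavily condensed sketch of that allocation, with names taken from AOSP Tracks.cpp; the exact code differs between versions and error paths are omitted:

// Condensed sketch of TrackBase's shared-memory setup.
size_t size = sizeof(audio_track_cblk_t) + bufferSize;        // control block + PCM buffer
mCblkMemory = client->heap()->allocate(size);                  // shared memory from the client's MemoryDealer
mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
new (mCblk) audio_track_cblk_t();                              // placement-new the control block
mBuffer = (char *)mCblk + sizeof(audio_track_cblk_t);          // data area follows the control block

The client-side AudioTrack advances the write position in the control block (through a ClientProxy), while the playback thread advances the read position (through a ServerProxy); that is what turns this shared memory into a ring buffer that works across the Binder boundary.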

     3. Now let's look at what the lower layers do when start is called on the track:

Since the server-side Track was created by AudioFlinger, start ultimately ends up in the start method in Tracks.cpp:

status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
                                                    audio_session_t triggerSession __unused)
{
    PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
    status = playbackThread->addTrack_l(this);
}

Here the PlaybackThread that created this track is obtained first, and then addTrack_l adds the track object to the mActiveTracks list. Keep mActiveTracks in mind for now; AudioFlinger relies on it later when processing the tracks that need to be played, as the sketch below shows.
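A condensed sketch of that hand-off, with names from AOSP Threads.cpp (the control flow is heavily simplified here):

// addTrack_l(): mark the track active and wake up the playback thread.
status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track> &track)
{
    if (mActiveTracks.indexOf(track) < 0) {
        mActiveTracks.add(track);   // track is now a candidate for mixing
    }
    broadcast_l();                  // wake threadLoop() so it notices the new track
    return NO_ERROR;
}

From then on, each cycle of threadLoop() calls prepareTracks_l(), which walks mActiveTracks, pulls data from each ready track's ring buffer, mixes it, and writes the result to the HAL output stream.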
