Android AudioFlinger & AudioPolicy Startup Explained

AudioPolicyService and AudioFlinger are both modules that run inside the audioserver process. Their entry point is the main() function in /frameworks/av/media/audioserver/main_audioserver.cpp, and the process is started by init after it parses audioserver.rc. What main() does is fairly simple: it creates the AudioPolicyService and AudioFlinger objects and registers both of them as Binder services with ServiceManager.

1. Starting the audioserver process

Google has split the monolithic init.rc so that each module now manages its own rc file instead of having init handle everything centrally. audioserver.rc is the startup file for the audio system; its contents are shown below:

# Started as a service; this is the path of the binary
service audioserver /system/bin/audioserver
    # Core class: started early during boot
    class core
    # Runs as the audioserver user
    user audioserver
    # media gid needed for /dev/fm (radio) and for /data/misc/media (tee)
    group audio camera drmrpc inet media mediadrm net_bt net_bt_admin net_bw_acct wakelock
    capabilities BLOCK_SUSPEND
    ioprio rt 4
    take_profiles ProcessCapabilityHigh HighPerformance
    onrestart restart audio-hal
    onrestart restart audio-hal-aidl
    onrestart restart audio-effect-hal-aidl
    onrestart restart audio-hal-4-0-msd
    onrestart restart audio_proxy_service

on property:vts.native_server.on=1
    stop audioserver
on property:vts.native_server.on=0
    start audioserver

...


Next, look at audioserver's entry point, the main() function:

int main(int argc __unused, char **argv)
{
    // Cap audioserver's memory usage
    limitProcessMemory(
        "audio.maxmem", /* "ro.audio.maxmem", property that defines limit */
        (size_t)512 * (1 << 20), /* SIZE_MAX, upper limit in bytes */
        20 /* upper limit as percentage of physical RAM */);

    signal(SIGPIPE, SIG_IGN);

    bool doLog = (bool) property_get_bool("ro.test_harness", 0);

    pid_t childPid;
    // Optionally fork a child process for log capturing; not the focus of this analysis
    if (doLog && (childPid = fork()) != 0) {
        ......
    } else {
        android::hardware::configureRpcThreadpool(4, false /*callerWillJoin*/);
        ProcessState::self()->startThreadPool();
        // Create AudioFlinger
        const auto af = sp<AudioFlinger>::make();
        const auto afAdapter = sp<AudioFlingerServerAdapter>::make(af);
        // Create AudioPolicyService
        const auto aps = sp<AudioPolicyService>::make();
        // Add AudioFlinger and AudioPolicyService to ServiceManager
        sp<IServiceManager> sm = defaultServiceManager();
        sm->addService(String16(IAudioFlinger::DEFAULT_SERVICE_NAME), afAdapter,
                false, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT);
        // AudioPolicyService is registered under its own name ("media.audio_policy")
        sm->addService(String16(AudioPolicyService::getServiceName()), aps,
                false, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT);
       .......
       // If the HAL's MMAP policy allows it, start AAudioService
        std::vector<AudioMMapPolicyInfo> policyinfos;
        status_t status = af->getMMapPolicyInfos(AudioMMapPolicyType::DEFAULT, &policyinfos);
        AAudioService::instantiate();
    }
}
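
For completeness, once registration has finished any native client can look these two services up again by name. Below is a minimal, hedged sketch of such a lookup (framework code normally goes through the AudioSystem helpers instead of querying ServiceManager directly; "media.audio_flinger" is what IAudioFlinger::DEFAULT_SERVICE_NAME resolves to, and "media.audio_policy" is the AudioPolicyService name):

// check_audioserver.cpp -- illustrative sketch, not framework code.
#include <binder/IBinder.h>
#include <binder/IServiceManager.h>
#include <utils/String16.h>
#include <cstdio>

using namespace android;

int main() {
    sp<IServiceManager> sm = defaultServiceManager();
    // Look up the two services that main_audioserver.cpp registered above.
    sp<IBinder> af  = sm->getService(String16("media.audio_flinger"));
    sp<IBinder> aps = sm->getService(String16("media.audio_policy"));
    printf("audio_flinger: %s, audio_policy: %s\n",
           af  != nullptr ? "registered" : "missing",
           aps != nullptr ? "registered" : "missing");
    return 0;
}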

2. AudioFlinger initialization

In audioserver, sp<AudioFlinger>::make() invokes the AudioFlinger constructor:

AudioFlinger::AudioFlinger()...
{
    ...
    mDevicesFactoryHal = DevicesFactoryHalInterface::create();
    mEffectsFactoryHal = audioflinger::EffectConfiguration::getEffectsFactoryHal();
    ...
}

The AudioFlinger constructor only initializes its members. The devices-factory callback is registered in onFirstRef(), which runs when the first strong reference to the object is taken:

void AudioFlinger::onFirstRef()
{
    mDevicesFactoryHalCallback = new DevicesFactoryHalCallbackImpl;
    mDevicesFactoryHal->setCallbackOnce(mDevicesFactoryHalCallback);
}

3. AudioPolicyService initialization

AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService(),
      mAudioPolicyManager(NULL),
      mAudioPolicyClient(NULL),
      mPhoneState(AUDIO_MODE_INVALID),
      mCaptureStateNotifier(false), 
      mCreateAudioPolicyManager(createAudioPolicyManager),
      mDestroyAudioPolicyManager(destroyAudioPolicyManager),
      mUsecaseValidator(media::createUsecaseValidator()) {
    setMinSchedulerPolicy(SCHED_NORMAL, ANDROID_PRIORITY_AUDIO);
}

The AudioPolicyService constructor only performs simple member initialization; the real setup happens in onFirstRef():

void AudioPolicyService::onFirstRef()
{
    ...
    //start audio commands thread
    mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
    //start output activity command thread
    mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);

    mAudioPolicyClient = new AudioPolicyClient(this);

    // Load libaudiopolicymanagercustom.so if a vendor provides one,
    // otherwise fall back to the default createAudioPolicyManager
    loadAudioPolicyManager();
    mAudioPolicyManager = mCreateAudioPolicyManager(mAudioPolicyClient);
}
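
The two AudioCommandThread instances created above implement a command-queue pattern: binder threads post commands (volume changes, tone start/stop, output teardown, ...) and return immediately, while the dedicated thread executes them asynchronously. The following self-contained sketch shows only the core of that pattern; it is not the real AudioCommandThread, which additionally supports delayed commands, command merging and synchronous replies:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Simplified command-queue worker, illustrating the AudioCommandThread idea:
// callers enqueue work and return immediately; one thread drains the queue.
class CommandThread {
public:
    CommandThread() : mThread([this] { loop(); }) {}
    ~CommandThread() {
        { std::lock_guard<std::mutex> l(mLock); mExit = true; }
        mCond.notify_one();
        mThread.join();
    }
    void post(std::function<void()> cmd) {
        { std::lock_guard<std::mutex> l(mLock); mQueue.push_back(std::move(cmd)); }
        mCond.notify_one();
    }
private:
    void loop() {
        std::unique_lock<std::mutex> l(mLock);
        while (!mExit) {
            if (mQueue.empty()) { mCond.wait(l); continue; }
            auto cmd = std::move(mQueue.front());
            mQueue.pop_front();
            l.unlock();
            cmd();   // run outside the lock, like processAudioCommands()
            l.lock();
        }
    }
    std::mutex mLock;
    std::condition_variable mCond;
    std::deque<std::function<void()>> mQueue;
    bool mExit = false;
    std::thread mThread;   // declared last so the other members exist before it starts
};

A caller would use it the way binder threads use mAudioCommandThread: post([]{ /* apply a volume change */ }); returns right away while the command runs on the worker thread.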

AudioPolicyService::onFirstRef() ends up calling createAudioPolicyManager():

extern "C" AudioPolicyInterface* createAudioPolicyManager(
        AudioPolicyClientInterface *clientInterface)
{
    auto config = AudioPolicyConfig::loadFromApmXmlConfigWithFallback();
    AudioPolicyManager *apm = new AudioPolicyManager(
            config,
            loadApmEngineLibraryAndCreateEngine(config->getEngineLibraryNameSuffix()),
            clientInterface);
    status_t status = apm->initialize();
    if (status != NO_ERROR) {
        delete apm;
        apm = nullptr;
    }
    return apm;
}

createAudioPolicyManager() does two main things: it loads the XML configuration files, and it creates the AudioPolicyManager and initializes it.

3.1 Parsing the XML configuration files

  1. Parsing audio_policy_configuration.xml

loadFromApmXmlConfigWithFallback() uses Serializer.cpp to parse audio_policy_configuration.xml and stores the parsed configuration in the AudioPolicyConfig object held by AudioPolicyManager. The call flow is as follows:

AudioPolicyConfig::loadFromApmXmlConfigWithFallback(const std::string& xmlFilePath)
    const std::string filePath = audio_get_audio_policy_config_file();
    config = sp<AudioPolicyConfig>::make();
    config->loadFromXml(filePath, false);
        deserializeAudioPolicyFile(filePath.c_str(), this);
            PolicySerializer serializer;
            serializer.deserialize(fileName, config);
                xmlNodePtr root = xmlDocGetRootElement(doc.get());
                ModuleTraits::Collection modules;
                deserializeCollection<ModuleTraits>(root, &modules, config);
                config->setHwModules(modules);
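
To make the deserialization step more concrete, here is a small stand-alone libxml2 sketch that walks an audio_policy_configuration.xml and lists its <module> elements, which is roughly what deserializeCollection does for ModuleTraits. It is an illustration only: the file path and the XML_PARSE_NOBLANKS option are assumptions, and the real PolicySerializer builds full HwModule/IOProfile/DeviceDescriptor objects rather than printing names.

// dump_modules.cpp -- simplified illustration of the serializer's traversal.
#include <libxml/parser.h>
#include <libxml/tree.h>
#include <cstdio>

int main() {
    const char* path = "/vendor/etc/audio_policy_configuration.xml"; // typical location
    xmlDocPtr doc = xmlReadFile(path, nullptr, XML_PARSE_NOBLANKS);
    if (doc == nullptr) { fprintf(stderr, "cannot parse %s\n", path); return 1; }

    xmlNodePtr root = xmlDocGetRootElement(doc); // <audioPolicyConfiguration>
    if (root == nullptr) { xmlFreeDoc(doc); return 1; }
    for (xmlNodePtr n = root->children; n != nullptr; n = n->next) {
        if (xmlStrcmp(n->name, BAD_CAST "modules") != 0) continue;
        for (xmlNodePtr m = n->children; m != nullptr; m = m->next) {
            if (xmlStrcmp(m->name, BAD_CAST "module") != 0) continue;
            xmlChar* name = xmlGetProp(m, BAD_CAST "name"); // e.g. "primary", "usb", "r_submix"
            printf("module: %s\n", name != nullptr ? (const char*)name : "(unnamed)");
            xmlFree(name);
        }
    }
    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}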

The nodes defined in audio_policy_configuration.xml map onto AudioPolicyManager's data structures as follows (a simplified structural sketch follows the list):

  • The <module> node maps to the HwModule class.

  • The <mixPort> node maps to the IOProfile class; output streams use the OutputProfile subclass and input streams the InputProfile subclass.

  • The <devicePort> node maps to the DeviceDescriptor class.

  • The <profile> node maps to the AudioProfile class, which is defined under the libaudiofoundation directory. It describes the configuration supported by a mixPort or devicePort, such as the supported sampling rates, channel counts and formats.

  • The <route> node maps to the AudioRoute class. It describes how an OutputProfile is connected to a DeviceDescriptor.

  • The <defaultOutputDevice> node names the default physical output device, which on a phone is usually the speaker. It is stored in the mDefaultOutputDevice member of the AudioPolicyConfig object.

  • The <attachedDevices> node lists the devices that are available right after boot, such as the speaker and the built-in mic. Devices like wired headsets and Bluetooth peripherals are not available at boot time; they only become usable once AudioService detects their connection and notifies AudioPolicyManager. These devices are stored in the mOutputDevices and mInputDevices members of the AudioPolicyConfig object, which are referenced by AudioPolicyManager's mOutputDevicesAll and mInputDevicesAll members, so they end up in those two variables as well.
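
Putting this mapping together, the parsed configuration is essentially a tree: each module owns its mix ports, device ports and routes, plus the attached-devices and default-output-device information. The sketch below is a deliberately simplified, hypothetical model of that ownership structure, not the real classes (those are HwModule, IOProfile, DeviceDescriptor, AudioProfile, AudioRoute and AudioPolicyConfig as listed above):

#include <string>
#include <vector>

// Hypothetical, stripped-down model of what AudioPolicyConfig holds after parsing.
struct Profile {                 // <profile>: formats/rates/channels a port supports
    std::string format;          // e.g. "AUDIO_FORMAT_PCM_16_BIT"
    std::vector<int> samplingRates;
    std::vector<std::string> channelMasks;
};
struct MixPort {                 // <mixPort> -> IOProfile (OutputProfile/InputProfile)
    std::string name;
    std::string role;            // "source" for outputs, "sink" for inputs
    std::vector<Profile> profiles;
};
struct DevicePort {              // <devicePort> -> DeviceDescriptor
    std::string tagName;         // e.g. "Speaker"
    std::string type;            // e.g. "AUDIO_DEVICE_OUT_SPEAKER"
    std::vector<Profile> profiles;
};
struct Route {                   // <route> -> AudioRoute: sources feed one sink
    std::string sink;
    std::vector<std::string> sources;
};
struct Module {                  // <module> -> HwModule ("primary", "usb", ...)
    std::string name;
    std::vector<std::string> attachedDevices;  // devices usable right after boot
    std::string defaultOutputDevice;           // usually "Speaker" on a phone
    std::vector<MixPort> mixPorts;
    std::vector<DevicePort> devicePorts;
    std::vector<Route> routes;
};
struct PolicyConfig {            // -> AudioPolicyConfig (mHwModules, mDefaultOutputDevice, ...)
    std::vector<Module> modules;
};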

  2. Parsing audio_policy_engine_configuration.xml

Before the AudioPolicyManager constructor is invoked, the Engine object is created and initialized by loadApmEngineLibraryAndCreateEngine(); during this step the engine (whose base class is EngineBase) parses audio_policy_engine_configuration.xml:

loadApmEngineLibraryAndCreateEngine
    // load the engine shared library
    auto engLib = EngineLibrary::load(librarySuffix);
    auto engine = engLib->createEngineUsingXmlConfig(configXmlFilePath);
        auto instance = createEngine();
        instance->loadFromXmlConfigWithFallback(xmlFilePath);
            EngineBase::loadAudioPolicyEngineConfig(xmlFilePath);

Once these configuration files have been parsed, their data is stored in two members of EngineBase: the mProductStrategies collection and the mVolumeGroups collection.

The data structures involved in these configuration files are AudioAttributes, StreamType, ProductStrategy, VolumeGroup, deviceCategory and VolumeCurve. Their roles and relationships are:

  • <AudioAttributes>: describes an audio use case, such as music playback or a phone call.

  • <StreamType>: the "streamType" attribute of this node is the legacy way of identifying an audio use case. A StreamType also describes a scenario, for example AUDIO_STREAM_VOICE_CALL or AUDIO_STREAM_MUSIC.

  • <ProductStrategy>: represents a device selection strategy, i.e. whether, under the current strategy, playback should go to the speaker or to a headset.

  • <VolumeGroup>: represents the volume type associated with a StreamType. A VolumeGroup defines the minimum and maximum volume index for that type, as well as the volume curves to use on different device categories.

  • <deviceCategory>: represents a category of hardware devices. Each category uses a different volume curve for each StreamType scenario.

  • <VolumeCurve>: represents one volume curve, used per device category. For example, the point 33,-2800 means that at 33% of the volume range a gain of -28 dB is applied (see the interpolation sketch below).
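
A point such as 33,-2800 is one sample of a piecewise-linear curve from volume index (expressed as a percentage of the index range) to gain in millibels; for indices that fall between two points the framework interpolates linearly. The sketch below shows that interpolation in isolation; the curve values are illustrative only, and this is not the actual VolumeCurve::volIndexToDb() implementation:

#include <algorithm>
#include <cstdio>
#include <vector>

struct CurvePoint { int indexPct; int attenuationMb; };  // e.g. {33, -2800} -> -28 dB at 33%

// Linearly interpolate the attenuation (in dB) for a volume index percentage.
float volPctToDb(const std::vector<CurvePoint>& curve, int pct) {
    pct = std::max(curve.front().indexPct, std::min(curve.back().indexPct, pct));
    for (size_t i = 1; i < curve.size(); ++i) {
        if (pct <= curve[i].indexPct) {
            const CurvePoint& a = curve[i - 1];
            const CurvePoint& b = curve[i];
            float t = float(pct - a.indexPct) / float(b.indexPct - a.indexPct);
            return (a.attenuationMb + t * (b.attenuationMb - a.attenuationMb)) / 100.0f;
        }
    }
    return curve.back().attenuationMb / 100.0f;
}

int main() {
    // Shape of a typical media volume curve; the values are made up for illustration.
    std::vector<CurvePoint> curve = {{1, -5800}, {20, -4000}, {60, -1700}, {100, 0}};
    printf("33%% -> %.1f dB, 80%% -> %.1f dB\n",
           volPctToDb(curve, 33), volPctToDb(curve, 80));
    return 0;
}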

3.2 AudioPolicyManager initialization

The AudioPolicyManager constructor again only initializes members; the interesting work happens in initialize():

status_t AudioPolicyManager::initialize() {
    mEngine->setObserver(this);
    status_t status = mEngine->initCheck();
    // Pick the default device for each strategy and cache the selection
    mEngine->initializeDeviceSelectionCache();
    mCommunnicationStrategy = mEngine->getProductStrategyForAttributes(
            mEngine->getAttributesForStreamType(AUDIO_STREAM_VOICE_CALL));
    onNewAudioModulesAvailableInt(nullptr /* newDevices */);
    updateDevicesAndOutputs();
}

initialize() then calls onNewAudioModulesAvailableInt(). Based on the parsed configuration, this function asks AudioFlinger to load each audio module and to open the corresponding PlaybackThreads.

void AudioPolicyManager::onNewAudioModulesAvailableInt(DeviceVector *newDevices)
{
    for (const auto& hwModule : mHwModulesAll) {
        if (std::find(mHwModules.begin(), mHwModules.end(), hwModule) != mHwModules.end()) {
            continue;
        }
        // Load the HAL module. mpClientInterface is the AudioPolicyClient passed into the constructor; its implementation is in AudioPolicyClientImpl.cpp
        hwModule->setHandle(mpClientInterface->loadHwModule(hwModule->getName()));
        mHwModules.push_back(hwModule);

        for (const auto& outProfile : hwModule->getOutputProfiles()) {
            const DeviceVector &supportedDevices = outProfile->getSupportedDevices();
            DeviceVector availProfileDevices = supportedDevices.filter(mConfig->getOutputDevices());
            sp<DeviceDescriptor> supportedDevice = 0;
            if (supportedDevices.contains(mDefaultOutputDevice)) {
                supportedDevice = mDefaultOutputDevice;
            } else {
                if (availProfileDevices.isEmpty()) {
                    continue;
                }
                supportedDevice = availProfileDevices.itemAt(0);
            }
            if (!mOutputDevicesAll.contains(supportedDevice)) {
                continue;
            }
            sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile, mpClientInterface);
            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
            status_t status = outputDesc->open(nullptr, DeviceVector(supportedDevice),
                                               AUDIO_STREAM_DEFAULT,
                                               AUDIO_OUTPUT_FLAG_NONE, &output);

            for (const auto &device : availProfileDevices) {
                // give a valid ID to an attached device once confirmed it is reachable
                if (!device->isAttached()) {
                    device->attach(hwModule);
                    mAvailableOutputDevices.add(device);
                    device->setEncapsulationInfoFromHal(mpClientInterface);
                    if (newDevices) newDevices->add(device);
                    // Record this device at the head of the engine's media-devices list, which tracks usable wired and Bluetooth peripherals
                    setEngineDeviceConnectionState(device, AUDIO_POLICY_DEVICE_STATE_AVAILABLE);
                }
            }

            if (mPrimaryOutput == 0 && (outProfile->getFlags() & AUDIO_OUTPUT_FLAG_PRIMARY) != 0) {
                mPrimaryOutput = outputDesc;
            }
            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
                outputDesc->close();
            } else {
                addOutput(output, outputDesc);
                setOutputDevices(outputDesc,DeviceVector(supportedDevice),true,0,NULL);
            }
        }

        for (const auto& inProfile : hwModule->getInputProfiles()) {
            const DeviceVector &supportedDevices = inProfile->getSupportedDevices();
            DeviceVector availProfileDevices = supportedDevices.filter(mConfig->getInputDevices());

            sp<AudioInputDescriptor> inputDesc =  new AudioInputDescriptor(inProfile, mpClientInterface);
            audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
            status_t status = inputDesc->open(nullptr,
                                              availProfileDevices.itemAt(0),
                                              AUDIO_SOURCE_MIC,
                                              AUDIO_INPUT_FLAG_NONE,
                                              &input);

            for (const auto &device : availProfileDevices) {
                // give a valid ID to an attached device once confirmed it is reachable
                if (!device->isAttached()) {
                    device->attach(hwModule);
                    device->importAudioPortAndPickAudioProfile(inProfile, true);
                    mAvailableInputDevices.add(device);
                    if (newDevices) newDevices->add(device);
                    setEngineDeviceConnectionState(device, AUDIO_POLICY_DEVICE_STATE_AVAILABLE);
                }
            }
            inputDesc->close();
        }
    }
    //Check if spatializer outputs can be closed until used
    std::vector<audio_io_handle_t> outputsClosed;
    for(size_t i = 0; i < mOutputs.size(); i++) {
        sp<SwAudioOutputDescriptor> desc = mOutputs.valueAt(i);
        if((desc->mFlags & AUDIO_OUTPUT_FLAG_SPATIALIZER) != 0 &&
                !isOutputOnlyAvailableRouteToSomeDevice(desc)) {
            outputsClosed.push_back(desc->mIoHandle);
            desc->close();
        }
    }
    for(auto output : outputsClosed){
        removeOutput(output);
    }
}
  1. Opening the device node and creating an AudioHwDevice

mpClientInterface->loadHwModule() ends up calling AudioFlinger::loadHwModule_l():

audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }

    sp<DeviceHalInterface> dev;
    int rc = mDevicesFactoryHal->openDevice(name, &dev);
    if (rc) {
        ALOGI("loadHwModule() error %d loading module %s ", rc, name);
        return 0;
     }

     mHardwareStatus = AUDIO_HW_INIT;
     rc = dev->init_check(dev);
     mHardwareStatus = AUDIO_HW_IDLE;
     if (rc) {
         ALOGI("loadHwModule() init check error %d for module %s ", rc, name);
         return AUDIO_MODULE_HANDLE_NONE;
     }

     // Check and cache this HAL's level of support for master mute and master
     // volume.  If this is the first HAL opened, and it supports the get
     // methods, use the initial values provided by the HAL as the current
     // master mute and volume settings.

     AudioHwDevice::Flags flags = static_cast<AudioHwDevice::Flags>(0);
     {  // scope for auto-lock pattern
         AutoMutex lock(mHardwareLock); 
         if (0 == mAudioHwDevs.size()) {
             mHardwareStatus = AUDIO_HW_GET_MASTER_VOLUME;
             float mv;
             if (OK == dev->get_master_volume(&mv)) {
                 mMasterVolume = mv;
             }

             mHardwareStatus = AUDIO_HW_GET_MASTER_MUTE;
             bool mm;
             if (OK == dev->get_master_mute(&mm)) {
                 mMasterMute = mm;
             }
         }

         mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
         if (OK == dev->set_master_volume(mMasterVolume)) {
             flags = static_cast<AudioHwDevice::Flags>(flags |
                     AudioHwDevice::AHWD_CAN_SET_MASTER_VOLUME);
         }

         mHardwareStatus = AUDIO_HW_SET_MASTER_MUTE;
         if (OK == dev->set_master_mute(mMasterMute)) {
             flags = static_cast<AudioHwDevice::Flags>(flags |
                     AudioHwDevice::AHWD_CAN_SET_MASTER_MUTE);
         }

         mHardwareStatus = AUDIO_HW_IDLE;
     }

     audio_module_handle_t handle = nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE);
     AudioHwDevice *audioDevice = new AudioHwDevice(handle, name, dev, flags);
     if (strcmp(name, AUDIO_HARDWARE_MODULE_ID_PRIMARY) == 0) {
         mPrimaryHardwareDev = audioDevice;
         mHardwareStatus = AUDIO_HW_SET_MODE;
         mPrimaryHardwareDev->hwDevice()->setMode(mMode);
         mHardwareStatus = AUDIO_HW_IDLE;
     }
     mAudioHwDevs.add(handle, audioDevice);

     ALOGI("loadHwModule() Loaded %s audio interface from %s (%s) handle %d",
           name, dev->common.module->name, dev->common.module->id, handle);

     return handle;
}
  2. Creating the PlaybackThread and opening the StreamOut
sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile, mpClientInterface);
status_t status = outputDesc->open(nullptr, DeviceVector(supportedDevice),
            AUDIO_STREAM_DEFAULT, AUDIO_OUTPUT_FLAG_NONE, &output);
    status_t status = mClientInterface->openOutput(mProfile->getModuleHandle(), output, &halConfig,
                &mixerConfig, device, &mLatency, mFlags);
        sp<ThreadBase> thread = openOutput_l(module, &output, &halConfig, &mixerConfig,
                        deviceType, address, flags);
            // outHwDev is the AudioHwDevice created in loadHwModule_l() for this module
            AudioStreamOut *outputStream = NULL;
            outHwDev->openOutputStream(&outputStream, *output, deviceType, flags,
                            halConfig, address.string());
                AudioStreamOut *outputStream = new AudioStreamOut(this, flags);
                outputStream->open(handle, deviceType, config, address);
                    sp<StreamOutHalInterface> outStream;
                    hwDev->openOutputStream(handle, deviceType, customFlags, 
                                    config, address, &outStream);
            ...
            sp<PlaybackThread> thread;
            thread = new MixerThread(this, outputStream, *output, mSystemReady);
            mPlaybackThreads.add(*output, thread);
        if((flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) == 0)
        PlaybackThread *playbackthread = (PlaybackThread *)thread.get();
        playbackthread->ioConfigChanged(AUDIO_OUTPUT_OPENED);
  3. Setting the playback devices supported by the current output
uint32_t AudioPolicyManager::setOutputDevices(const sp<SwAudioOutputDescriptor>& outputDesc,
        const DeviceVector &devices, bool force, int delayMs,
        audio_patch_handle_t *patchHandle, bool requiresMuteCheck, bool requiresVolumeCheck)
{
    DeviceVector filteredDevices = outputDesc->filterSupportedDevices(devices);
    DeviceVector prevDevices = outputDesc->devices();
    DeviceVector availPrevDevices = mAvailableOutputDevices.filter(prevDevices);

    if (!filteredDevices.isEmpty()) {
        outputDesc->setDevices(filteredDevices);
    }

    bool outputRouted = outputDesc->isRouted();
    if (!devices.isEmpty() && filteredDevices.isEmpty() && !availPrevDevices.isEmpty()) {
        outputDesc->setDevices(prevDevices);
        return muteWaitMs;
    }
    ...
    applyStreamVolumes(outputDesc, filteredDevices.types(), delayMs);
}

applyStreamVolumes() then applies the stream volumes for the newly selected devices.

3.3 AudioPolicyManager::updateDevicesAndOutputs()

AudioPolicyManager.cpp->updateDevicesAndOutputs()
    |-->Engine.cpp->updateDeviceSelectionCache()//Iterate over every ProductStrategy, work out which devices each should currently use, and cache the result in the Engine::mDevicesForStrategies collection.
        |-->Engine.cpp->getDevicesForProductStrategy()

            |-->Engine.cpp->remapStrategyFromContext()//Remap the strategy according to context. For example, during a call or a VoIP session, STRATEGY_MEDIA and STRATEGY_SONIFICATION are remapped to use STRATEGY_PHONE's playback device, so music played during a call is routed to the same device as the call, e.g. the earpiece.

            |-->Engine.cpp->filterOutputDevicesForStrategy()//Filter out some otherwise-available devices, e.g. drop A2DP devices during a call.

            |-->Engine.cpp->getPreferredAvailableDevicesForProductStrategy()//Look up the preferred available devices set by the framework-level AudioService.
                |-->EngineBase.cpp->getDevicesForRoleAndStrategy()//Find all devices mapped to this strategy in the EngineBase::ProductStrategyDevicesRoleMap collection; the data in this collection is set by AudioService.

            |-->Engine.cpp->getDevicesForStrategyInt()//Only called when AudioService has not set preferred devices for this ProductStrategy.


The key function here is Engine::getDevicesForProductStrategy(). It defines how a suitable playback device is chosen for each scenario. The overall policy is:

  • First check whether the framework-level AudioService has set preferred devices for the current ProductStrategy. If it has, use them directly. These preferred devices are stored in the EngineBase::ProductStrategyDevicesRoleMap collection.

  • If the upper layer has not set any preferred devices, Engine::getDevicesForStrategyInt() performs the strategy-based device selection.

  • If Engine::getDevicesForStrategyInt() cannot find a suitable device either, the defaultOutputDevice is used (see the sketch after this list).
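
That three-level fallback can be summarized in a few lines of pseudo-C++. This is a deliberately simplified illustration of the decision order with hypothetical types, not the real Engine::getDevicesForProductStrategy():

#include <string>
#include <vector>

// Hypothetical stand-in for DeviceVector.
using Devices = std::vector<std::string>;

// Illustrative only: the three-level fallback described above.
Devices devicesForStrategy(const Devices& preferredFromAudioService,   // role map set by AudioService
                           const Devices& strategyRuleResult,          // getDevicesForStrategyInt()
                           const std::string& defaultOutputDevice) {   // <defaultOutputDevice>
    if (!preferredFromAudioService.empty()) {
        return preferredFromAudioService;   // 1) preferred devices set by AudioService win
    }
    if (!strategyRuleResult.empty()) {
        return strategyRuleResult;          // 2) otherwise apply the engine's strategy rules
    }
    return {defaultOutputDevice};           // 3) last resort: the default output device
}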
