How the audio output device is decided

1. Since we are analyzing audio output devices, we first need to know which audio output devices the current phone supports.

adb shell dumpsys media.audio_policy > /home/jon/audio_policy.txt

We focus on the following fields:

- Available output devices:
  Device 1:
  - id:  1
  - tag name: Earpiece
  - type: AUDIO_DEVICE_OUT_EARPIECE                       
  - Profiles:
      Profile 0:
          - format: AUDIO_FORMAT_PCM_16_BIT
          - sampling rates:48000
          - channel masks:0x0010
  Device 2:
  - id:  2
  - tag name: Speaker
  - type: AUDIO_DEVICE_OUT_SPEAKER                        
  - Profiles:
      Profile 0:
          - format: AUDIO_FORMAT_PCM_16_BIT
          - sampling rates:48000
          - channel masks:0x0003
  Device 3:
  - id:  6
  - tag name: Telephony Tx
  - type: AUDIO_DEVICE_OUT_TELEPHONY_TX                   
  - Profiles:
      Profile 0:
          - format: AUDIO_FORMAT_PCM_16_BIT
          - sampling rates:8000, 16000
          - channel masks:0x0001, 0x0003

From this we can see that the audio output devices supported by this device are: Earpiece, Speaker (loudspeaker), and Telephony Tx (used to carry audio routed to the phone's radio).
Next, let's look at how this output information is produced:

frameworks\av\media\audioserver\main_audioserver.cpp

AudioPolicyService::instantiate();

frameworks\av\services\audiopolicy\service\AudioPolicyService.cpp

void AudioPolicyService::onFirstRef()
{
	...
	    mAudioPolicyClient = new AudioPolicyClient(this);
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
    ...
}

frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.cpp

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
	...
	    mVolumeCurves = new VolumeCurvesCollection();
    AudioPolicyConfig config(mHwModules, mAvailableOutputDevices, mAvailableInputDevices,
                             mDefaultOutputDevice, speakerDrcEnabled,
                             static_cast<VolumeCurvesCollection *>(mVolumeCurves));
    PolicySerializer serializer;
    if (serializer.deserialize(AUDIO_POLICY_XML_CONFIG_FILE, config) != NO_ERROR) 
    ...

}

As shown above, this information comes from parsing the AUDIO_POLICY_XML_CONFIG_FILE configuration file (/system/etc/audio_policy_configuration.xml).
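
Each device in the dump corresponds to a devicePort entry inside a module element of that file. The exact content is vendor-specific; a fragment of the primary module might look roughly like this (illustrative values only, not taken from the dump above):

<attachedDevices>
    <item>Speaker</item>
    <item>Earpiece</item>
</attachedDevices>
<defaultOutputDevice>Speaker</defaultOutputDevice>
<devicePorts>
    <devicePort tagName="Speaker" type="AUDIO_DEVICE_OUT_SPEAKER" role="sink">
        <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
                 samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
    </devicePort>
    <devicePort tagName="Earpiece" type="AUDIO_DEVICE_OUT_EARPIECE" role="sink">
        <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
                 samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_MONO"/>
    </devicePort>
</devicePorts>
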
2. Selecting the output device type (for example Speaker (USB or primary interface) or headset (USB or primary interface))
We know that whichever type of audio is played (music, ringtone, phone call, and so on), an AudioTrack instance is eventually created on the native side. Here we take Music as the example (refer to my native playback example; a minimal sketch follows the call chain below):

AudioTrack::AudioTrack
	 AudioTrack::set
	 	AudioTrack::createTrack_l
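
A minimal native playback sketch that ends up in this call chain might look like the following. This is illustrative only: it assumes an AOSP N-era libmedia build environment, the function name playSomeMusic is made up, and error handling is mostly skipped.

#include <media/AudioTrack.h>   // native AudioTrack (frameworks/av)
#include <system/audio.h>

using namespace android;

void playSomeMusic(const void *pcm, size_t bytes) {
    // Constructing the track with AUDIO_STREAM_MUSIC is what later maps to
    // AUDIO_USAGE_MEDIA and STRATEGY_MEDIA inside getOutputForAttr().
    sp<AudioTrack> track = new AudioTrack(
            AUDIO_STREAM_MUSIC,          // stream type -> attributes -> strategy
            44100,                       // sample rate
            AUDIO_FORMAT_PCM_16_BIT,     // format
            AUDIO_CHANNEL_OUT_STEREO,    // channel mask
            0,                           // frameCount: 0 lets AudioTrack pick a default
            AUDIO_OUTPUT_FLAG_NONE);     // output flags, later used by selectOutput()

    if (track->initCheck() != NO_ERROR) return;  // set()/createTrack_l() failed

    track->start();
    track->write(pcm, bytes);            // blocking write of PCM data
    track->stop();
}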

Let's analyze this method:

status_t AudioTrack::createTrack_l()
{
	//This is where we connect to AudioFlinger: AudioSystem's static member function get_audio_flinger returns the AudioFlinger proxy
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    audio_io_handle_t output;
    audio_stream_type_t streamType = mStreamType;
    //Here streamType is the Music type, so the attribute pointer attr ends up being NULL
    audio_attributes_t *attr = (mStreamType == AUDIO_STREAM_DEFAULT) ? &mAttributes : NULL;

	status_t status;
	//Use the attributes and the stream type to obtain the output for the current track
    status = AudioSystem::getOutputForAttr(attr, &output,
                                           mSessionId, &streamType, mClientUid,
                                           mSampleRate, mFormat, mChannelMask,
                                           mFlags, mSelectedDeviceId, mOffloadInfo);	

	...
	//The next 4 calls query the selected output for its latency, frame counts and sample rate
	status = AudioSystem::getLatency(output, &mAfLatency);
	status = AudioSystem::getFrameCount(output, &mAfFrameCount);
	status = AudioSystem::getFrameCountHAL(output, &afFrameCountHAL);
	status = AudioSystem::getSamplingRate(output, &mAfSampleRate);
	...
	 /*Here the creation of the track is handed over to AudioFlinger; AudioFlinger will eventually
	 create the track used for playback inside a playback thread*/
     sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      mFormat,
                                                      mChannelMask,
                                                      &temp,
                                                      &trackFlags,
                                                      mSharedBuffer,
                                                      output,
                                                      mClientPid,
                                                      tid,
                                                      &mSessionId,
                                                      mClientUid,
                                                      &status);                    
	...
}

OK, back to the main theme of this article: how the output device is selected. Let's analyze getOutputForAttr.

frameworks\av\media\libmedia\AudioSystem.cpp

	status_t AudioSystem::getOutputForAttr(const audio_attributes_t *attr,
                                            audio_io_handle_t *output,
                                            audio_session_t session,
                                            audio_stream_type_t *stream,
                                            uid_t uid,
                                            uint32_t samplingRate,
                                            audio_format_t format,
                                            audio_channel_mask_t channelMask,
                                            audio_output_flags_t flags,
                                            audio_port_handle_t selectedDeviceId,
                                            const audio_offload_info_t *offloadInfo)
    {
        const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
        if (aps == 0) return NO_INIT;
        return aps->getOutputForAttr(attr, output, session, stream, uid,
                                     samplingRate, format, channelMask,
                                     flags, selectedDeviceId, offloadInfo);
    }

frameworks\av\services\audiopolicy\service\AudioPolicyInterfaceImpl.cpp

status_t AudioPolicyService::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              audio_port_handle_t selectedDeviceId,
                                              const audio_offload_info_t *offloadInfo)
{
    if (mAudioPolicyManager == NULL) {
        return NO_INIT;
    }
    ALOGV("getOutput()");
    Mutex::Autolock _l(mLock);

    const uid_t callingUid = IPCThreadState::self()->getCallingUid();
    if (!isTrustedCallingUid(callingUid) || uid == (uid_t)-1) {
        ALOGW_IF(uid != (uid_t)-1 && uid != callingUid,
                "%s uid %d tried to pass itself off as %d", __FUNCTION__, callingUid, uid);
        uid = callingUid;
    }
    return mAudioPolicyManager->getOutputForAttr(attr, output, session, stream, uid, samplingRate,
                                    format, channelMask, flags, selectedDeviceId, offloadInfo);
}

frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.cpp

status_t AudioPolicyManager::getOutputForAttr(...)
{
	audio_attributes_t attributes;
	//For our current music track the attr passed in is NULL, as analyzed above
	if (attr != NULL) {
		attributes = *attr;
	} else {
		/*		For the current music case, attributes is filled in as follows:
		        attr->content_type = AUDIO_CONTENT_TYPE_MUSIC;
       			attr->usage = AUDIO_USAGE_MEDIA;
		*/
		stream_type_to_audio_attributes(*stream, &attributes);
	}
	...
	//Get the routing strategy for this track from the audio attributes
    routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
    //Get the output device (type) for this track from the routing strategy
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
    //Select the output path from the device, the output flags, the format, etc.; a detailed analysis of this function is at the end of this article
    *output = getOutputForDevice(device, session, *stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
	...
}

So the two functions we need to dig into next are getStrategyForAttr and getDeviceForStrategy.
First the audio attributes obtained above are mapped to a routing strategy, and then the strategy is used to pick the output device type.

uint32_t AudioPolicyManager::getStrategyForAttr(const audio_attributes_t *attr) {
    // flags to strategy mapping
    if ((attr->flags & AUDIO_FLAG_BEACON) == AUDIO_FLAG_BEACON) {
        return (uint32_t) STRATEGY_TRANSMITTED_THROUGH_SPEAKER;
    }
    if ((attr->flags & AUDIO_FLAG_AUDIBILITY_ENFORCED) == AUDIO_FLAG_AUDIBILITY_ENFORCED) {
        return (uint32_t) STRATEGY_ENFORCED_AUDIBLE;
    }
    // usage to strategy mapping
    //For our usage (AUDIO_USAGE_MEDIA) this returns STRATEGY_MEDIA
    return static_cast<uint32_t>(mEngine->getStrategyForUsage(attr->usage));
}

As shown above, getStrategyForAttr returns the strategy STRATEGY_MEDIA; we then use this strategy to select the output device.
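
For reference, the usage-to-strategy mapping done by mEngine->getStrategyForUsage lives in the default policy engine. The following is a simplified paraphrase showing only a few branches, not the full switch in Engine.cpp:

routing_strategy Engine::getStrategyForUsage(audio_usage_t usage)
{
    // Simplified paraphrase: the real switch covers every audio_usage_t value.
    switch (usage) {
    case AUDIO_USAGE_MEDIA:
    case AUDIO_USAGE_GAME:
        return STRATEGY_MEDIA;            // our case: AUDIO_USAGE_MEDIA
    case AUDIO_USAGE_VOICE_COMMUNICATION:
        return STRATEGY_PHONE;
    case AUDIO_USAGE_ALARM:
    case AUDIO_USAGE_NOTIFICATION_TELEPHONY_RINGTONE:
        return STRATEGY_SONIFICATION;
    case AUDIO_USAGE_UNKNOWN:
    default:
        return STRATEGY_MEDIA;
    }
}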

frameworks\av\services\audiopolicy\enginedefault\src\Engine.cpp

audio_devices_t Engine::getDeviceForStrategy(routing_strategy strategy) const
{
    DeviceVector availableOutputDevices = mApmObserver->getAvailableOutputDevices();
    DeviceVector availableInputDevices = mApmObserver->getAvailableInputDevices();

    const SwAudioOutputCollection &outputs = mApmObserver->getOutputs();

    return getDeviceForStrategyInt(strategy, availableOutputDevices,
                                   availableInputDevices, outputs);
}

Continuing:

audio_devices_t Engine::getDeviceForStrategyInt(routing_strategy strategy,
                                                DeviceVector availableOutputDevices,
                                                DeviceVector availableInputDevices,
                                                const SwAudioOutputCollection &outputs) const
{
	uint32_t availableOutputDevicesType = availableOutputDevices.types();
	...
	switch (strategy) {
		...
		case STRATEGY_MEDIA: {
		        uint32_t device2 = AUDIO_DEVICE_NONE;
		        ...
		          //device2 was not selected above, and setForceUse has not set FORCE_NO_BT_A2DP
		          //(setBluetoothA2dpOn(false) sets the AudioSystem.FOR_MEDIA / AudioSystem.FORCE_NO_BT_A2DP flags)
		        if ((device2 == AUDIO_DEVICE_NONE) &&
		                (mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] != AUDIO_POLICY_FORCE_NO_BT_A2DP) &&
		                (outputs.getA2dpOutput() != 0)) {
		          //Here is the first priority: if A2DP is available, pick it and then go straight to the special case marked with **** below
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP;
		            if (device2 == AUDIO_DEVICE_NONE) {
		                device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES;
		            }
		            if (device2 == AUDIO_DEVICE_NONE) {
		                device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER;
		            }
		        }
		        //Now it is the AudioSystem.FOR_MEDIA / AudioSystem.FORCE_SPEAKER case: the speaker wins
		        if ((device2 == AUDIO_DEVICE_NONE) &&
		            (mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] == AUDIO_POLICY_FORCE_SPEAKER)) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_SPEAKER;
		        }  
		        //From here on, devices are picked in priority order. The rule is:
		        //as soon as one is selected we stop and only combine it with the special co-existing devices below
		        if (device2 == AUDIO_DEVICE_NONE) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;
		        }
		        if (device2 == AUDIO_DEVICE_NONE) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_LINE;
		        }
		        if (device2 == AUDIO_DEVICE_NONE) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_WIRED_HEADSET;
		        }
		        if (device2 == AUDIO_DEVICE_NONE) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_USB_HEADSET;
		        }
		        if (device2 == AUDIO_DEVICE_NONE) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_USB_ACCESSORY;
		        }
		        if (device2 == AUDIO_DEVICE_NONE) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_USB_DEVICE;
		        }
		        if (device2 == AUDIO_DEVICE_NONE) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET;
		        }
		        if ((device2 == AUDIO_DEVICE_NONE) && (strategy != STRATEGY_SONIFICATION)) {
		            // no sonification on aux digital (e.g. HDMI)
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_AUX_DIGITAL;
		        }
		        if ((device2 == AUDIO_DEVICE_NONE) &&
		                (mForceUse[AUDIO_POLICY_FORCE_FOR_DOCK] == AUDIO_POLICY_FORCE_ANALOG_DOCK)) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET;
		        }
		        if (device2 == AUDIO_DEVICE_NONE) {
		            device2 = availableOutputDevicesType & AUDIO_DEVICE_OUT_SPEAKER;
		        }
		  ***********************************************************************************
		        //Special case
		        int device3 = AUDIO_DEVICE_NONE;
		        if (strategy == STRATEGY_MEDIA) {
		          //If ARC, SPDIF or AUX_LINE is available, put it into device3
		            // ARC, SPDIF and AUX_LINE can co-exist with others.
		            device3 = availableOutputDevicesType & AUDIO_DEVICE_OUT_HDMI_ARC;
		            device3 |= (availableOutputDevicesType & AUDIO_DEVICE_OUT_SPDIF);
		            device3 |= (availableOutputDevicesType & AUDIO_DEVICE_OUT_AUX_LINE);
		        }
		        //Let device2 co-exist with ARC/SPDIF/AUX_LINE
		        device2 |= device3;
		        // Normally, device is still AUDIO_DEVICE_NONE at this point
		        device |= device2;
		        // If hdmi system audio mode is on, remove speaker out of output list.
		        if ((strategy == STRATEGY_MEDIA) &&
		            (mForceUse[AUDIO_POLICY_FORCE_FOR_HDMI_SYSTEM_AUDIO] ==
		                AUDIO_POLICY_FORCE_HDMI_SYSTEM_AUDIO_ENFORCED)) {
		            device &= ~AUDIO_DEVICE_OUT_SPEAKER;
		        }
		        } break;
...
}

What needs explaining here is where availableOutputDevicesType comes from. At the beginning of this function we have:

uint32_t availableOutputDevicesType = availableOutputDevices.types();

So we trace availableOutputDevices back to getDeviceForStrategy:

audio_devices_t Engine::getDeviceForStrategy(routing_strategy strategy) const
{
	    DeviceVector availableOutputDevices = mApmObserver->getAvailableOutputDevices();
    	DeviceVector availableInputDevices = mApmObserver->getAvailableInputDevices();
}

getAvailableOutputDevices here is simply:

virtual const DeviceVector &getAvailableOutputDevices() const
{
	return mAvailableOutputDevices;
}

And mAvailableOutputDevices is exactly what we obtained in step 1 of this article.
3. Deciding which concrete device of which interface type (USB, primary, Bluetooth) is selected

a. First, determine which types of audio interfaces the current device supports

As with the device types earlier, this also comes from parsing the AUDIO_POLICY_XML_CONFIG_FILE (/system/etc/audio_policy_configuration.xml) configuration file. Note that this xml file also includes a2dp_audio_policy_configuration.xml, usb_audio_policy_configuration.xml and r_submix_audio_policy_configuration.xml; these three included files each contribute a hardware module, so together with primary there are four modules in total:

<module name="primary" halVersion="2.0">
<module name="a2dp" halVersion="2.0">
<module name="usb" halVersion="2.0">
<module name="r_submix" halVersion="2.0">

We can also dump the media.audio_policy service to see which modules the current system supports:

adb shell dumpsys media.audio_policy > /home/jon/audio_policy.txt

Focus on the following fields:

HW Modules dump:
- HW Module 1:
  - name: primary
...
- HW Module 2:
  - name: a2dp
...
- HW Module 3:
  - name: usb
...
- HW Module 4:
  - name: submix

As shown above, the dump contains the concrete configuration of each module and its devices. Let's take a first look at the dump data for one HW module:

- HW Module 1:
  - name: primary      //the concrete audio hardware interface type
  - handle: 10
  - version: 2.0
  - outputs:        //since the app layer has different streams, the configuration also has multiple outputs
    output 0:    
    - name: primary output            //maps to a specific stream use case
    - Profiles:
        Profile 0:   //a stream can use several audio formats, so there may be multiple profiles
            - format: AUDIO_FORMAT_PCM_16_BIT
            - sampling rates:48000
            - channel masks:0x0003
    - flags: 0x0006
    - Supported devices: //and of course the same stream can be output on several devices (headset, Speaker, etc.)
      Device 1:
      - id:  1
      - tag name: Earpiece
      - type: AUDIO_DEVICE_OUT_EARPIECE                       
      Device 2:
      - id:  2
      - tag name: Speaker
      - type: AUDIO_DEVICE_OUT_SPEAKER          
     ...

As shown above, each hardware interface type is split into multiple outputs according to the stream use case; in the end we need to pick, from these outputs, the one that satisfies our request, and then pick the final device from that output.
b. Next, how the audio output paths are loaded into the system, and how a default output device is set for each path

Let's look at the code to see how the system stores all of these outputs:

frameworks\av\services\audiopolicy\managerdefault\AudioPolicyManager.cpp

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
		mpClientInterface = clientInterface;
		...
		//mHwModules.size() is the information we obtained above, here it is 4
	   	for (size_t i = 0; i < mHwModules.size(); i++) {
	   		//load each audio hardware library so it can be called later
	        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->getName());

			//each OutputProfile corresponds to one output; outProfile->getTagName().string() is that output's name
	        for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
	        {
	        	//get the configuration of each audio output path
	            const sp<IOProfile> outProfile = mHwModules[i]->mOutputProfiles[j];
				ALOGE("Jon,outProfile name = %s\n",outProfile->getTagName().string());
	            //if the current output path has no supported device, skip this path
	            if (!outProfile->hasSupportedDevices()) {
	                ALOGW("Output profile contains no device on module %s", mHwModules[i]->getName());
	                continue;
	            }
	            //if the current output path supports TTS, note it
	            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_TTS) != 0) {
	                mTtsOutputAvailable = true;
	            }
				//direct streams (no mixing, e.g. HDMI output) do not get a playback thread created here, so they are not added to the system's default outputs either
	            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
	                continue;
	            }
	            //get which output devices this audio path supports (Speaker, headset, etc.)
	            audio_devices_t profileType = outProfile->getSupportedDevicesType();
	            /*
	             *  1. First note that the default output device is the Speaker.
	             *  2. So here we check whether this path's supported devices include the Speaker; if so, profileType is set to Speaker.
	             *  3. If the path does not include the Speaker, pick the first usable device from its supported devices in order (headset, Speaker, earpiece, ...).
	             */
	            if ((profileType & mDefaultOutputDevice->type()) != AUDIO_DEVICE_NONE) {
	                profileType = mDefaultOutputDevice->type();
	            } else {
	                // chose first device present in profile's SupportedDevices also part of
	                // outputDeviceTypes
	                profileType = outProfile->getSupportedDeviceForType(outputDeviceTypes);
	            }
	            //if this audio path has no usable output device, drop the path
	            if ((profileType & outputDeviceTypes) == 0) {
	                continue;
	            }	            
				//create an outputDesc descriptor from outProfile (this path's configuration) and mpClientInterface
	            sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile,
	                                                                                 mpClientInterface);
	            //get all output devices supported by this path
	            const DeviceVector &supportedDevices = outProfile->getSupportedDevices();
				//from all of those devices, find the list matching the selected type (Speaker by default)
	            const DeviceVector &devicesForType = supportedDevices.getDevicesFromType(profileType);
	            //if the list is not empty, take the Address field of its first element, otherwise use an empty string.
	            //BTW, this field is usually empty
	            String8 address = devicesForType.size() > 0 ? devicesForType.itemAt(0)->mAddress
	                    : String8("");	                                                                                 
				...
				//set the default output device of this path, usually the speaker
				outputDesc->mDevice = profileType;
	            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
	            //a note on output (it is used later): openOutput assigns it a globally unique value, and that value is associated with the handle of a PlaybackThread.
	            //open the output using this path and the selected device
	            status_t status = mpClientInterface->openOutput(outProfile->getModuleHandle(),
	                                                            &output,
	                                                            &config,
	                                                            &outputDesc->mDevice,
	                                                            address,
	                                                            &outputDesc->mLatency,
	                                                            outputDesc->mFlags);
	
	            if (status != NO_ERROR) {
	                ALOGW("Cannot open output stream for device %08x on hw module %s",
	                      outputDesc->mDevice,
	                      mHwModules[i]->getName());
	            } else {
	                outputDesc->mSamplingRate = config.sample_rate;
	                outputDesc->mChannelMask = config.channel_mask;
	                outputDesc->mFormat = config.format;
	
	                for (size_t k = 0; k  < supportedDevices.size(); k++) {
	                    ssize_t index = mAvailableOutputDevices.indexOf(supportedDevices[k]);
	                    // give a valid ID to an attached device once confirmed it is reachable
	                    if (index >= 0 && !mAvailableOutputDevices[index]->isAttached()) {
	                        mAvailableOutputDevices[index]->attach(mHwModules[i]);
	                    }
	                }
	                if (mPrimaryOutput == 0 &&
	                        outProfile->getFlags() & AUDIO_OUTPUT_FLAG_PRIMARY) {
	                    mPrimaryOutput = outputDesc;
	                }
	                //add the unique handle and the output descriptor to the global mOutputs; later we pick a suitable path from mOutputs based on the device and the stream type.
	                addOutput(output, outputDesc);
	                //set the default output device for this path
	                setOutputDevice(outputDesc,
	                                outputDesc->mDevice,
	                                true,
	                                0,
	                                NULL,
	                                address.string());
	            }
	        }
        ...
    }
}

Let's first analyze mpClientInterface->openOutput, focusing on how the two fields output and outputDesc->mDevice get their values.

frameworks\av\services\audiopolicy\service\AudioPolicyClientImpl.cpp

status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
                                                           audio_io_handle_t *output,
                                                           audio_config_t *config,
                                                           audio_devices_t *devices,
                                                           const String8& address,
                                                           uint32_t *latencyMs,
                                                           audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return PERMISSION_DENIED;
    }
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}

frameworks\av\services\audioflinger\AudioFlinger.cpp

sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
	//the audio HAL libraries were already loaded in AudioPolicyManager; here we simply look up the outHwDev pointer
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
	if (*output == AUDIO_IO_HANDLE_NONE) {
		//generate a globally unique output handle for each audio path
        *output = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);
    } else {
    }
    //call openOutputStream
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());    
    if (status == NO_ERROR) {
		
        PlaybackThread *thread;
        //create the matching PlaybackThread depending on the output flags and format
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT) || !isValidPcmSinkFormat(config->format) || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
        } else {
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
        }
        mPlaybackThreads.add(*output, thread);
        return thread;
    }
}

openOutputStream here calls down into the hardware HAL; taking primary as the example:

hardware\libhardware\modules\audio\audio_hw.c

adev->device.open_output_stream = adev_open_output_stream;

adev_open_output_stream sets up the function pointers used to operate on the audio hardware.
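
Concretely, adev_open_output_stream allocates a stream object and fills in its function-pointer table. The following is a rough paraphrase of the stub HAL in this file (most members omitted; stub_stream_out and the out_* helpers are the stub's own names, and real vendor HALs differ):

static int adev_open_output_stream(struct audio_hw_device *dev,
                                   audio_io_handle_t handle,
                                   audio_devices_t devices,
                                   audio_output_flags_t flags,
                                   struct audio_config *config,
                                   struct audio_stream_out **stream_out,
                                   const char *address)
{
    // Allocate the HAL-side stream object (stub_stream_out wraps audio_stream_out).
    struct stub_stream_out *out = calloc(1, sizeof(struct stub_stream_out));
    if (!out) return -ENOMEM;

    // Hook up the operations that AudioFlinger's playback thread will call.
    out->stream.common.get_sample_rate = out_get_sample_rate;
    out->stream.common.get_buffer_size = out_get_buffer_size;
    out->stream.common.standby         = out_standby;
    out->stream.get_latency            = out_get_latency;
    out->stream.write                  = out_write;   // PCM from the MixerThread lands here

    *stream_out = &out->stream;
    return 0;
}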

c. In step b above a default output device was set for every audio output path. Earlier, during track creation, one of the steps was getOutputForAttr; we have covered how the strategy is obtained and how the device is chosen, but one function is still left: getOutputForDevice. Based on the device we pass down, the format and the requested audio policy flags, it selects a matching path from all the output paths the system has stored.

audio_io_handle_t AudioPolicyManager::getOutputForDevice(
        audio_devices_t device,
        audio_session_t session __unused,
        audio_stream_type_t stream,
        uint32_t samplingRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        audio_output_flags_t flags,
        const audio_offload_info_t *offloadInfo)
{
		...
		//coming from AudioTrack, this is basically always a linear PCM stream
	    if (audio_is_linear_pcm(format)) {
        // get which output is suitable for the specified stream. The actual
        // routing change will happen when startOutput() will be called
        //this step selects, from the system's saved mOutputs, the list of output paths that include device; note this is still a list, because more than one path may include the device
        SortedVector<audio_io_handle_t> outputs = getOutputsForDevice(device, mOutputs);

        // at this stage we should ignore the DIRECT flag as no direct output could be found earlier
        //strip DIRECT from the requested policy flags
        flags = (audio_output_flags_t)(flags & ~AUDIO_OUTPUT_FLAG_DIRECT);
        //then, based on the audio format and the policy flags, pick the most suitable path from the list of paths that include the device
        output = selectOutput(outputs, flags, format);
    }
}
/*
 *  Parameters:
 *  outputs: the list of output paths that include the target device
 *  flags: the policy flags requested by the application
 *  format: the format requested by the application
*/
audio_io_handle_t AudioPolicyManager::selectOutput(const SortedVector<audio_io_handle_t>& outputs,
                                                       audio_output_flags_t flags,
                                                       audio_format_t format)
{
    // select one output among several that provide a path to a particular device or set of
    // devices (the list was previously build by getOutputsForDevice()).
    // The priority is as follows:
    // 1: the output with the highest number of requested policy flags
    // 2: the output with the bit depth the closest to the requested one
    // 3: the primary output
    // 4: the first output in the list
	
	//if the list of matching output paths is empty, there is unfortunately no path to offer; return 0
    if (outputs.size() == 0) {
        return 0;
    }
    //if only one path matches, there is nothing to choose; just return it
    if (outputs.size() == 1) {
        return outputs[0];
    }

    int maxCommonFlags = 0;
    audio_io_handle_t outputForFlags = 0;
    audio_io_handle_t outputForPrimary = 0;
    audio_io_handle_t outputForFormat = 0;
    audio_format_t bestFormat = AUDIO_FORMAT_INVALID;
    audio_format_t bestFormatForFlags = AUDIO_FORMAT_INVALID;

    for (size_t i = 0; i < outputs.size(); i++) {
    	//go through the matching output paths one by one
        sp<SwAudioOutputDescriptor> outputDesc = mOutputs.valueFor(outputs[i]);
        if (!outputDesc->isDuplicated()) {
            // if a valid format is specified, skip output if not compatible
            if (format != AUDIO_FORMAT_INVALID) {
            	//first check whether this output is a direct (unmixed, e.g. HDMI) output; if so, also check that the format matches, and if it does not, drop this path
                if (outputDesc->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) {
                    if (!audio_formats_match(format, outputDesc->mFormat)) {
                        continue;
                    }
                } else if (!audio_is_linear_pcm(format)) { //then check whether the requested format is PCM; if not, drop this path
                    continue;
                }
        		//then, among all matching paths, remember the one whose format best matches the requested format
                if (AudioPort::isBetterFormatMatch(
                        outputDesc->mFormat, bestFormat, format)) {
                    outputForFormat = outputs[i];
                    bestFormat = outputDesc->mFormat;
                }
            }
            
			//prefer the path whose policy flags share as many bits as possible with the requested flags
            int commonFlags = popcount(outputDesc->mProfile->getFlags() & flags);
            if (commonFlags >= maxCommonFlags) {
                if (commonFlags == maxCommonFlags) {
                    if (AudioPort::isBetterFormatMatch(
                            outputDesc->mFormat, bestFormatForFlags, format)) {
                        outputForFlags = outputs[i];
                        bestFormatForFlags = outputDesc->mFormat;
                    }
                } else {
                    outputForFlags = outputs[i];
                    maxCommonFlags = commonFlags;
                    bestFormatForFlags = outputDesc->mFormat;
                }
                ALOGV("selectOutput() commonFlags for output %d, %04x", outputs[i], commonFlags);
            }
            //check whether this path is the primary output; if so, remember the first one found (used as a fallback)
            if (outputDesc->mProfile->getFlags() & AUDIO_OUTPUT_FLAG_PRIMARY) {
                outputForPrimary = outputs[i];
            }
        }
    }

	
    if (outputForFlags != 0) {
        return outputForFlags;
    }
    if (outputForFormat != 0) {
        return outputForFormat;
    }
    if (outputForPrimary != 0) {
        return outputForPrimary;
    }

    return outputs[0];
}

From this we can draw the following conclusions about how the output path is selected (a small worked example follows this list):
1. Prefer the path that matches the most of the requested policy flags (e.g. deep buffer | fast | direct | primary);
2. Failing that, pick the path whose format best matches the requested format (e.g. AUDIO_FORMAT_PCM_16_BIT);
3. Failing that, pick the primary output path (e.g. the one that handles ringtones);
4. If none of the above applies, any choice is as good as another, so just return the first path in the list.
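
To make rule 1 concrete, here is a small illustrative calculation. The two outputs and their flag sets are hypothetical, not taken from any particular device:

#include <system/audio.h>

int pickByFlags() {
    // The app asked for a low-latency track.
    audio_output_flags_t requested =
            (audio_output_flags_t)(AUDIO_OUTPUT_FLAG_FAST | AUDIO_OUTPUT_FLAG_RAW);

    // Two hypothetical candidate outputs that both reach the selected device:
    audio_output_flags_t deepBufferOutput = AUDIO_OUTPUT_FLAG_DEEP_BUFFER;
    audio_output_flags_t fastOutput =
            (audio_output_flags_t)(AUDIO_OUTPUT_FLAG_PRIMARY |
                                   AUDIO_OUTPUT_FLAG_FAST |
                                   AUDIO_OUTPUT_FLAG_RAW);

    // selectOutput() counts how many of the requested flag bits each profile shares:
    int deepBufferScore = __builtin_popcount(deepBufferOutput & requested); // 0
    int fastScore       = __builtin_popcount(fastOutput & requested);       // 2

    // 2 > 0, so the fast output wins by rule 1.
    return fastScore > deepBufferScore;
}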

The second parameter of selectOutput, flags, deserves a closer look; we can go straight to the definition of this type:

typedef enum {
    AUDIO_OUTPUT_FLAG_NONE = 0x0,       // no attributes
    AUDIO_OUTPUT_FLAG_DIRECT = 0x1,     // this output directly connects a track
                                        // to one output stream: no software mixer
    AUDIO_OUTPUT_FLAG_PRIMARY = 0x2,    // this output is the primary output of
                                        // the device. It is unique and must be
                                        // present. It is opened by default and
                                        // receives routing, audio mode and volume
                                        // controls related to voice calls.
    AUDIO_OUTPUT_FLAG_FAST = 0x4,       // output supports "fast tracks",
                                        // defined elsewhere
    AUDIO_OUTPUT_FLAG_DEEP_BUFFER = 0x8, // use deep audio buffers
    AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD = 0x10,  // offload playback of compressed
                                                // streams to hardware codec
    AUDIO_OUTPUT_FLAG_NON_BLOCKING = 0x20, // use non-blocking write
    AUDIO_OUTPUT_FLAG_HW_AV_SYNC = 0x40,   // output uses a hardware A/V synchronization source
    AUDIO_OUTPUT_FLAG_TTS = 0x80,          // output for streams transmitted through speaker
                                           // at a sample rate high enough to accommodate
                                           // lower-range ultrasonic playback
    AUDIO_OUTPUT_FLAG_RAW = 0x100,         // minimize signal processing
    AUDIO_OUTPUT_FLAG_SYNC = 0x200,        // synchronize I/O streams

    AUDIO_OUTPUT_FLAG_IEC958_NONAUDIO = 0x400, // Audio stream contains compressed audio in
                                               // SPDIF data bursts, not PCM.
    AUDIO_OUTPUT_FLAG_VOIP_RX = 0x800,  // use this flag in combination with DIRECT to
                                         // start voip over voice path.
    AUDIO_OUTPUT_FLAG_COMPRESS_PASSTHROUGH = 0x1000, // flag for HDMI compressed passthrough
    AUDIO_OUTPUT_FLAG_DIRECT_PCM = 0x2000, // flag for Direct PCM
} audio_output_flags_t;

These represent the various audio output flags.

Finally, we also need a way to check which output flags each of the system's many output paths supports.

adb shell dumpsys media.audio_policy > /home/jon/audio_policy.txt

We again look at the flags field of each output in the "HW Modules dump" section (for example "- flags: 0x0006" under the primary output shown earlier); these bits map to the audio_output_flags_t values listed above.
