audiopolicy

A phone itself has an earpiece and a loudspeaker as audio outputs, and may have a bottom (dual) mic, a top mic, and a rear mic as audio inputs.

A phone may also be connected to wired headsets, multiple Bluetooth headsets, multiple Wi-Fi audio peripherals, or to car head units, VR devices, screen-casting devices, and so on.

AudioPolicy provides a central place for managing audio inputs and outputs, along with a few other responsibilities.
Before looking at audiopolicy itself, it helps to first understand audio focus.

I. Audio Focus

1. Why is there an audio focus mechanism?
An Android system has all kinds of media apps installed. Without an effective and reasonable set of rules, every app would do its own thing and the output of different players and apps could end up mixed together. The audio focus mechanism states that at any given moment only one app can hold the audio focus, and only the focus holder should play sound. Before an app gains focus, all other apps must be notified that they have lost it.

Key classes and variables in the Android 10 audio focus arbitration code:

    Name               | Kind            | Description
    AudioFocusInfo     | class           | Describes the attributes of a focus requester
    FocusEntry         | inner class     | Wraps an AudioFocusInfo together with its Context
    sInteractionMatrix | 2D array        | Arbitrates the result of a focus request
    mFocusHolders      | member field    | HashMap of the current focus holders
    mFocusLosers       | member field    | HashMap of requests that temporarily lost focus and are waiting to regain it
    losers             | local variable  | ArrayList of requests that lose focus but whose loss type is not yet decided
    blocked            | local variable  | ArrayList of entries in mFocusLosers that the current requester may preempt
    permanentlyLost    | local variable  | ArrayList of requests that permanently lose focus
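To make the table concrete, below is a heavily simplified, illustrative sketch of the arbitration idea: the matrix is indexed by the context of a current focus holder and the context of the new requester, and the cell decides whether the request is rejected, wins exclusively, or may play concurrently. The constants, contexts, and matrix contents are made up for illustration only; the real logic lives in the car audio focus code.

public class FocusArbitrationSketch {
    // Possible outcomes of arbitrating one holder against one new requester.
    static final int INTERACTION_REJECT = 0;     // new request is denied
    static final int INTERACTION_EXCLUSIVE = 1;  // holder loses focus to the new request
    static final int INTERACTION_CONCURRENT = 2; // both may play at the same time

    // Indexed as sInteractionMatrix[holderContext][requesterContext].
    // Contexts here: 0 = MUSIC, 1 = NAVIGATION, 2 = CALL (illustrative values).
    static final int[][] sInteractionMatrix = {
        /* holder MUSIC */ { INTERACTION_EXCLUSIVE,  INTERACTION_CONCURRENT, INTERACTION_EXCLUSIVE },
        /* holder NAVI  */ { INTERACTION_CONCURRENT, INTERACTION_EXCLUSIVE,  INTERACTION_EXCLUSIVE },
        /* holder CALL  */ { INTERACTION_REJECT,     INTERACTION_CONCURRENT, INTERACTION_EXCLUSIVE },
    };

    /** Decide what a single current holder must do when a new request arrives. */
    static int evaluateRequest(int holderContext, int requesterContext) {
        return sInteractionMatrix[holderContext][requesterContext];
    }
}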

2. Using audio focus

// Request audio focus (legacy three-argument API)
AudioManager mAudioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
mAudioManager.requestAudioFocus(cl, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);

requestAudioFocus takes three parameters:
The first, OnAudioFocusChangeListener, is a listener through which the app is told when it gains or loses focus.
The second, streamType, is the stream type used for playback once focus is granted; it does not influence the focus mechanism itself, since all stream types obey the same single focus.
The third, durationHint, indicates how long the focus is expected to be held.

There are four GAIN values:

    AUDIOFOCUS_GAIN: gain focus for an indefinite duration
    AUDIOFOCUS_GAIN_TRANSIENT: gain focus temporarily
    AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK: gain focus temporarily; other apps may keep playing at a lowered (ducked) volume
    AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE: gain focus temporarily and exclusively; nothing else should play, not even ducked

and three LOSS values:

    AUDIOFOCUS_LOSS: focus lost for an indefinite duration
    AUDIOFOCUS_LOSS_TRANSIENT: focus lost temporarily
    AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK: focus lost temporarily, but playback may continue at a lowered (ducked) volume instead of going silent
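The three-argument requestAudioFocus shown above is the legacy API. On Android 8.0 (API 26) and later the same request is normally expressed through an AudioFocusRequest; a minimal sketch (the helper class and method names here are illustrative):

import android.content.Context;
import android.media.AudioAttributes;
import android.media.AudioFocusRequest;
import android.media.AudioManager;

public class FocusHelper {
    private final AudioManager mAudioManager;
    private AudioFocusRequest mFocusRequest;

    public FocusHelper(Context context) {
        mAudioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    }

    /** Request focus for music playback (API 26+). */
    public boolean requestFocus(AudioManager.OnAudioFocusChangeListener listener) {
        mFocusRequest = new AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN)
                .setAudioAttributes(new AudioAttributes.Builder()
                        .setUsage(AudioAttributes.USAGE_MEDIA)
                        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                        .build())
                .setOnAudioFocusChangeListener(listener)
                .build();
        return mAudioManager.requestAudioFocus(mFocusRequest)
                == AudioManager.AUDIOFOCUS_REQUEST_GRANTED;
    }

    /** Give up focus when playback stops. */
    public void abandonFocus() {
        if (mFocusRequest != null) {
            mAudioManager.abandonAudioFocusRequest(mFocusRequest);
        }
    }
}

Note that abandonAudioFocusRequest must be passed the same AudioFocusRequest object that was used to request focus.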
 

Here is an implementation of OnAudioFocusChangeListener:

OnAudioFocusChangeListener cl = new OnAudioFocusChangeListener() {

    @Override
    public void onAudioFocusChange(int focusChange) {
        switch (focusChange) {
        case AudioManager.AUDIOFOCUS_LOSS:
            // Focus lost for an indefinite time: stop playback and release resources.
            // Depending on the app's logic, the focus itself is sometimes abandoned here too.
            mAudioManager.abandonAudioFocus(cl);
            break;
        case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
            // Focus lost temporarily: pause playback, but there is no need to release
            // resources because focus will probably be regained soon.
            break;
        case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
            // Focus lost temporarily, but playback may continue alongside the new
            // focus owner at a reduced (ducked) volume.
            break;
        case AudioManager.AUDIOFOCUS_GAIN:
            // Audio focus gained: playback may (re)start.
            break;
        }
    }
};

3. Flow analysis: requesting audio focus
The app calls AudioManager to request focus. The overloaded methods validate the parameters, register the listener, and then talk to the system service AudioService over Binder. Note that the OnAudioFocusChangeListener itself is not sent to AudioService; instead, mAudioFocusDispatcher is passed as the bridge for the cross-process callback.

requestAudioFocus(OnAudioFocusChangeListener l, int streamType, int durationHint)
    .......
    registerAudioFocusListener(l);
    .......
    IAudioService service = getService();
    try {
        status = service.requestAudioFocus(requestAttributes, durationHint, mICallBack,
                mAudioFocusDispatcher, getIdForAudioFocusListener(l),
                getContext().getOpPackageName() /* package name */, flags,
                ap != null ? ap.cb() : null);
    } catch (RemoteException e) {
        throw e.rethrowFromSystemServer();
    }

The design of IAudioFocusDispatcher is very simple: the message received from AudioService is handed, via a Handler, to another thread for processing; from the code, that is the thread which originally requested the focus.

private final IAudioFocusDispatcher mAudioFocusDispatcher = new IAudioFocusDispatcher.Stub() {

        public void dispatchAudioFocusChange(int focusChange, String id) {
            final Message m = mServiceEventHandlerDelegate.getHandler().obtainMessage(
                    MSSG_FOCUS_CHANGE/*what*/, focusChange/*arg1*/, 0/*arg2 ignored*/, id/*obj*/);
            mServiceEventHandlerDelegate.getHandler().sendMessage(m);
        }

    };

AudioService is a system service running in the system_server process. Its focus-control logic maintains a stack, Stack<FocusRequester> mFocusStack, which is the key to focus management.
Requesting focus boils down to the following steps:
a. Check whether the top of the stack is currently occupied by the Phone app; if the phone is in use, focusGrantDelayed = true.
b. Before pushing, check whether the stack already contains an entry for this client; if so, remove it.
c. If focusGrantDelayed = true, the grant is delayed: a FocusRequester instance is still inserted into the stack, but not at the top; it is placed at lastLockedFocusOwnerIndex, i.e. just below the phone-call entry. If focusGrantDelayed = false, there is no need to delay: a FocusRequester instance is likewise created, the other entries in the stack are first notified that they lost focus, the new entry is pushed onto the top, and finally the requester is notified that it has gained focus.

boolean focusGrantDelayed = false;
if (!canReassignAudioFocus()) {    // checks whether focus is currently locked by a phone call
    if ((flags & AudioManager.AUDIOFOCUS_FLAG_DELAY_OK) == 0) {
        return AudioManager.AUDIOFOCUS_REQUEST_FAILED;
    } else {
        // request has AUDIOFOCUS_FLAG_DELAY_OK: focus can't be
        // granted right now, so the requester will be inserted in the focus stack
        // to receive focus later
        focusGrantDelayed = true;
    }
}

// focus requester might already be somewhere below in the stack, remove it
// (this removes any entry with the same clientId from the stack)
removeFocusStackEntry(clientId, false /* signal */, false /*notifyFocusFollowers*/);

// create a new FocusRequester instance, ready to be pushed onto the stack
final FocusRequester nfr = new FocusRequester(aa, focusChangeHint, flags, fd, cb,
        clientId, afdh, callingPackageName, Binder.getCallingUid(), this);

if (focusGrantDelayed) {
    // focusGrantDelayed being true implies we can't reassign focus right now,
    // which implies the focus stack is not empty (the delayed path)
    final int requestResult = pushBelowLockedFocusOwners(nfr);
    if (requestResult != AudioManager.AUDIOFOCUS_REQUEST_FAILED) {
        notifyExtPolicyFocusGrant_syncAf(nfr.toAudioFocusInfo(), requestResult);
    }
    return requestResult;
} else {
    // propagate the focus change through the stack (the non-delayed path)
    if (!mFocusStack.empty()) {
        propagateFocusLossFromGain_syncAf(focusChangeHint);
    }

    // push focus requester at the top of the audio focus stack
    mFocusStack.push(nfr);
}
notifyExtPolicyFocusGrant_syncAf(nfr.toAudioFocusInfo(),
        AudioManager.AUDIOFOCUS_REQUEST_GRANTED);

4. Flow analysis: releasing audio focus
There are two cases when audio focus is released:
a. If the releasing client is at the top of the stack, then after it is removed, the client now at the top of the stack must be notified that it has gained audio focus.
b. If the releasing client is not at the top of the stack, its entry is simply removed and the current focus ownership does not change.

 private void removeFocusStackEntry(String clientToRemove, boolean signal,
            boolean notifyFocusFollowers) {
        // is the current top of the focus stack abandoning focus? (because of request, not death)
        if (!mFocusStack.empty() && mFocusStack.peek().hasSameClient(clientToRemove)) {
            // the client abandoning focus is at the top of the stack
            //Log.i(TAG, "   removeFocusStackEntry() removing top of stack");
            FocusRequester fr = mFocusStack.pop();
            fr.release();
            if (notifyFocusFollowers) {
                final AudioFocusInfo afi = fr.toAudioFocusInfo();
                afi.clearLossReceived();
                notifyExtPolicyFocusLoss_syncAf(afi, false);
            }
            if (signal) {
                // notify the new top of the stack it gained focus
                notifyTopOfAudioFocusStack();
            }
        } else {
            // the client abandoning focus is not at the top of the stack
            // focus is abandoned by a client that's not at the top of the stack,
            // no need to update focus.
            // (using an iterator on the stack so we can safely remove an entry after having
            //  evaluated it, traversal order doesn't matter here)
            Iterator<FocusRequester> stackIterator = mFocusStack.iterator();
            while(stackIterator.hasNext()) {
                FocusRequester fr = stackIterator.next();
                if(fr.hasSameClient(clientToRemove)) {
                    Log.i(TAG, "AudioFocus  removeFocusStackEntry(): removing entry for "
                            + clientToRemove);
                    stackIterator.remove();
                    fr.release();
                }
            }
        }
    }

II. Audio Strategy

The audio stream types are defined in audio-base.h:

typedef enum {
    AUDIO_STREAM_DEFAULT = -1, // (-1)
    AUDIO_STREAM_MIN = 0,
    AUDIO_STREAM_VOICE_CALL = 0,
    AUDIO_STREAM_SYSTEM = 1,
    AUDIO_STREAM_RING = 2,
    AUDIO_STREAM_MUSIC = 3,
    AUDIO_STREAM_ALARM = 4,
    AUDIO_STREAM_NOTIFICATION = 5,
    AUDIO_STREAM_BLUETOOTH_SCO = 6,
    AUDIO_STREAM_ENFORCED_AUDIBLE = 7,
    AUDIO_STREAM_DTMF = 8,
    AUDIO_STREAM_TTS = 9,
    AUDIO_STREAM_ACCESSIBILITY = 10,
#ifndef AUDIO_NO_SYSTEM_DECLARATIONS
    /** For dynamic policy output mixes. Only used by the audio policy */
    AUDIO_STREAM_REROUTING = 11,
    /** For audio flinger tracks volume. Only used by the audioflinger */
    AUDIO_STREAM_PATCH = 12,
#endif // AUDIO_NO_SYSTEM_DECLARATIONS
} audio_stream_type_t;

The stream type is now only used to identify how volume is handled; the audio attributes (AudioAttributes) together with the stream type determine the AudioStrategy.

(In earlier versions, each AudioStream mapped directly to an AudioStrategy, and the AudioStrategy selected the audio output device.)
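As an illustration of how an application ends up in a given strategy, the sketch below (using the public SDK API) builds an AudioTrack from AudioAttributes rather than a raw stream type; the policy engine then maps USAGE_MEDIA to STRATEGY_MEDIA, as shown in getStrategyForUsage below.

import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioTrack;

public class AttributesExample {
    static AudioTrack buildMusicTrack() {
        // Describe the sound instead of picking a stream type; the policy engine
        // maps USAGE_MEDIA to STRATEGY_MEDIA (and to the STREAM_MUSIC volume group).
        AudioAttributes attributes = new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                .build();

        return new AudioTrack.Builder()
                .setAudioAttributes(attributes)
                .setAudioFormat(new AudioFormat.Builder()
                        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                        .setSampleRate(48000)
                        .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                        .build())
                .setBufferSizeInBytes(AudioTrack.getMinBufferSize(48000,
                        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT))
                .build();
    }
}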

routing_strategy Engine::getStrategyForUsage(audio_usage_t usage)
{
    // usage to strategy mapping
    switch (usage) {
    case AUDIO_USAGE_ASSISTANCE_ACCESSIBILITY:
        return STRATEGY_ACCESSIBILITY;

    case AUDIO_USAGE_MEDIA:
    case AUDIO_USAGE_GAME:
    case AUDIO_USAGE_ASSISTANT:
    case AUDIO_USAGE_ASSISTANCE_NAVIGATION_GUIDANCE:
    case AUDIO_USAGE_ASSISTANCE_SONIFICATION:
        return STRATEGY_MEDIA;

    case AUDIO_USAGE_VOICE_COMMUNICATION:
        return STRATEGY_PHONE;

    case AUDIO_USAGE_VOICE_COMMUNICATION_SIGNALLING:
        return STRATEGY_DTMF;

    case AUDIO_USAGE_ALARM:
    case AUDIO_USAGE_NOTIFICATION_TELEPHONY_RINGTONE:
        return STRATEGY_SONIFICATION;

    case AUDIO_USAGE_NOTIFICATION:
    case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_REQUEST:
    case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_INSTANT:
    case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_DELAYED:
    case AUDIO_USAGE_NOTIFICATION_EVENT:
        return STRATEGY_SONIFICATION_RESPECTFUL;

    case AUDIO_USAGE_UNKNOWN:
    default:
        return STRATEGY_MEDIA;
    }
}

A strategy selects the output device(s) for a given kind of audio according to a set of rules; for example, music is routed to the headset, while an alarm is played through both the speaker and the headset.

audio_devices_t Engine::getDeviceForStrategyInt(routing_strategy strategy,
                                                DeviceVector availableOutputDevices,
                                                DeviceVector availableInputDevices,
                                                const SwAudioOutputCollection &outputs,
                                                uint32_t outputDeviceTypesToIgnore) const
{
    uint32_t device = AUDIO_DEVICE_NONE;
    uint32_t availableOutputDevicesType =
            availableOutputDevices.types() & ~outputDeviceTypesToIgnore;

    switch (strategy) {

    case STRATEGY_TRANSMITTED_THROUGH_SPEAKER:
        device = availableOutputDevicesType & AUDIO_DEVICE_OUT_SPEAKER;
        break;

    case STRATEGY_SONIFICATION_RESPECTFUL:
        if (isInCall() || outputs.isStreamActiveLocally(AUDIO_STREAM_VOICE_CALL)) {
            device = getDeviceForStrategyInt(
                    STRATEGY_SONIFICATION, availableOutputDevices, availableInputDevices, outputs,
                    outputDeviceTypesToIgnore);
        } else {
            bool media_active_locally =
                    outputs.isStreamActiveLocally(
                            AUDIO_STREAM_MUSIC, SONIFICATION_RESPECTFUL_AFTER_MUSIC_DELAY)
                    || outputs.isStreamActiveLocally(
                            AUDIO_STREAM_ACCESSIBILITY, SONIFICATION_RESPECTFUL_AFTER_MUSIC_DELAY);
            // routing is same as media without the "remote" device
            device = getDeviceForStrategyInt(STRATEGY_MEDIA,
                    availableOutputDevices,
                    availableInputDevices, outputs,
                    AUDIO_DEVICE_OUT_REMOTE_SUBMIX | outputDeviceTypesToIgnore);
            // if no media is playing on the device, check for mandatory use of "safe" speaker
            // when media would have played on speaker, and the safe speaker path is available
            if (!media_active_locally
                    && (device & AUDIO_DEVICE_OUT_SPEAKER)
                    && (availableOutputDevicesType & AUDIO_DEVICE_OUT_SPEAKER_SAFE)) {
                device |= AUDIO_DEVICE_OUT_SPEAKER_SAFE;
                device &= ~AUDIO_DEVICE_OUT_SPEAKER;
            }
        }
        break;

    case STRATEGY_DTMF:
        if (!isInCall()) {
            // when off call, DTMF strategy follows the same rules as MEDIA strategy
            device = getDeviceForStrategyInt(
                    STRATEGY_MEDIA, availableOutputDevices, availableInputDevices, outputs,
                    outputDeviceTypesToIgnore);
            break;
        }
        // when in call, DTMF and PHONE strategies follow the same rules
        // FALL THROUGH

    case STRATEGY_PHONE:
        // Force use of only devices on primary output if:
        // - in call AND
        //   - cannot route from voice call RX OR
        //   - audio HAL version is < 3.0 and TX device is on the primary HW module
        if (getPhoneState() == AUDIO_MODE_IN_CALL) {
            audio_devices_t txDevice = getDeviceForInputSource(AUDIO_SOURCE_VOICE_COMMUNICATION);
            sp<AudioOutputDescriptor> primaryOutput = outputs.getPrimaryOutput();
            audio_devices_t availPrimaryInputDevices =
                    availableInputDevices.getDevicesFromHwModule(primaryOutput->getModuleHandle());

            // TODO: getPrimaryOutput return only devices from first module in
            // audio_policy_configuration.xml, hearing aid is not there, but it's
            // a primary device
            // FIXME: this is not the right way of solving this problem
            audio_devices_t availPrimaryOutputDevices =
                    (primaryOutput->supportedDevices() | AUDIO_DEVICE_OUT_HEARING_AID) &
                    availableOutputDevices.types();

            if (((availableInputDevices.types() &
                    AUDIO_DEVICE_IN_TELEPHONY_RX & ~AUDIO_DEVICE_BIT_IN) == 0) ||
                    (((txDevice & availPrimaryInputDevices & ~AUDIO_DEVICE_BIT_IN) != 0) &&
                        (primaryOutput->getAudioPort()->getModuleVersionMajor() < 3))) {
                availableOutputDevicesType = availPrimaryOutputDevices;
            }
        }
        ......

adb shell dumpsys media.audio_policy

This dump can be used to analyze the audio_policy state, for example:

/vendor/etc/audio/audio_policy_configuration.xml
- Available output devices:
  Device 1:
  - id:  2
  - tag name: Earpiece
  - type: AUDIO_DEVICE_OUT_EARPIECE                       
  - Profiles:
      Profile 0:
          - format: AUDIO_FORMAT_PCM_16_BIT
          - sampling rates:48000
          - channel masks:0x0010

- Available input devices:
  Device 1:
  - id: 18
  - tag name: Built-In Mic
  - type: AUDIO_DEVICE_IN_BUILTIN_MIC                     
  - address: bottom                          
  - Profiles:
      Profile 0:
          - format: AUDIO_FORMAT_PCM_16_BIT
          - sampling rates:8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000
          - channel masks:0x000c, 0x0010, 0x0030
 

Policy Engine dump:
  Product Strategies dump:
    -STRATEGY_PHONE (id: 15)
      Selected Device: {type:AUDIO_DEVICE_OUT_EARPIECE, @:}
       Group: 1 stream: AUDIO_STREAM_VOICE_CALL
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_VOICE_COMMUNICATION Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 8 stream: AUDIO_STREAM_BLUETOOTH_SCO
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_UNKNOWN Source: AUDIO_SOURCE_DEFAULT Flags: 0x4 Tags:  }

    -STRATEGY_SONIFICATION (id: 16)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 3 stream: AUDIO_STREAM_RING
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_NOTIFICATION_TELEPHONY_RINGTONE Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 6 stream: AUDIO_STREAM_ALARM
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_ALARM Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }

    -STRATEGY_ENFORCED_AUDIBLE (id: 17)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 9 stream: AUDIO_STREAM_ENFORCED_AUDIBLE
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_UNKNOWN Source: AUDIO_SOURCE_DEFAULT Flags: 0x1 Tags:  }

    -STRATEGY_ACCESSIBILITY (id: 18)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 12 stream: AUDIO_STREAM_ACCESSIBILITY
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_ASSISTANCE_ACCESSIBILITY Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }

    -STRATEGY_SONIFICATION_RESPECTFUL (id: 19)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 7 stream: AUDIO_STREAM_NOTIFICATION
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_NOTIFICATION Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 7 stream: AUDIO_STREAM_NOTIFICATION
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_NOTIFICATION_COMMUNICATION_REQUEST Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 7 stream: AUDIO_STREAM_NOTIFICATION
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_NOTIFICATION_COMMUNICATION_INSTANT Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 7 stream: AUDIO_STREAM_NOTIFICATION
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_NOTIFICATION_COMMUNICATION_DELAYED Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 7 stream: AUDIO_STREAM_NOTIFICATION
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_NOTIFICATION_EVENT Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }

    -STRATEGY_MEDIA (id: 20)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 4 stream: AUDIO_STREAM_MUSIC
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_MEDIA Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 4 stream: AUDIO_STREAM_MUSIC
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_GAME Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 4 stream: AUDIO_STREAM_MUSIC
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_ASSISTANT Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 4 stream: AUDIO_STREAM_MUSIC
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_ASSISTANCE_NAVIGATION_GUIDANCE Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 4 stream: AUDIO_STREAM_MUSIC
        Attributes: { Any }
       Group: 2 stream: AUDIO_STREAM_SYSTEM
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_ASSISTANCE_SONIFICATION Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }
       Group: 5 stream: AUDIO_STREAM_VOICEASSIST
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_VOICEASSIST Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }

    -STRATEGY_DTMF (id: 21)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 10 stream: AUDIO_STREAM_DTMF
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_VOICE_COMMUNICATION_SIGNALLING Source: AUDIO_SOURCE_DEFAULT Flags: 0x0 Tags:  }

    -STRATEGY_TRANSMITTED_THROUGH_SPEAKER (id: 22)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 11 stream: AUDIO_STREAM_TTS
        Attributes: { Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_UNKNOWN Source: AUDIO_SOURCE_DEFAULT Flags: 0x8 Tags:  }

    -STRATEGY_REROUTING (id: 23)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 13 stream: AUDIO_STREAM_REROUTING
        Attributes: { Any }

    -STRATEGY_PATCH (id: 24)
      Selected Device: {type:AUDIO_DEVICE_OUT_SPEAKER, @:}
       Group: 14 stream: AUDIO_STREAM_PATCH
        Attributes: { Any }
 

 

III. Audio Service Startup Flow

The audio server lives in frameworks/av/media/audioserver/main_audioserver.cpp, where the two major components, AudioFlinger and AudioPolicyService, are started. Once this startup flow completes, the system audio services are up and standing by; when an app needs to play something, the services eventually pick the appropriate hardware to output the sound.

The audiopolicy startup sequence is outlined in the following steps:

1 AudioFlinger and AudioPolicyService are both Binder services. They are started in the same process, so although their interaction looks like Binder IPC on the surface, underneath it amounts to direct in-process pointer calls.

The instantiate() method publishes the service to ServiceManager.

//frameworks/av/media/audioserver/main_audioserver.cpp

int main(int argc __unused, char **argv)
{
    .....
    AudioFlinger::instantiate();
    AudioPolicyService::instantiate();

2 AudioPolicyManager is an independent functional module inside the AudioPolicyService process. Vendors may provide their own implementation (as long as it follows the AOSP interface definitions), shipping it as libaudiopolicymanager.so, which AudioPolicyService loads and calls. The configuration file audio_policy_configuration.xml describes the audio devices, streams, and routes; AudioPolicyManager parses and stores this information.

An Android device has many audio devices: earpiece, microphone, speaker, Bluetooth headset, and so on. Android exposes the AudioPolicyManager module so that device makers can manage them. In this module, audio_policy_configuration.xml describes how many modules the device has, which devices and streams each module contains, and how those devices and streams are connected. Each module also has its own HAL handling logic: for example the primary and usb modules target the built-in audio devices and USB-attached audio devices respectively, and their HAL and even kernel implementations differ, so loadHwModule ends up loading different things for each module.
 

AudioPolicyService and AudioPolicyManager each hold a pointer to the other.

void AudioPolicyService::onFirstRef()
{
    {
        Mutex::Autolock _l(mLock);

        ......
        /** Wrap the service itself in a client object and hand it to the Manager.
         *  This hides the Service's internal logic, exposes only the interfaces the
         *  Manager needs, and keeps the coupling low. */
        mAudioPolicyClient = new AudioPolicyClient(this);
        // createAudioPolicyManager() creates the Manager
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
    }
    ......
}
 

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface, bool /*forTesting*/)
    :
    ........
    mpClientInterface(clientInterface),   // the AudioPolicyClient created by AudioPolicyService
    ........
{
    loadConfig();     // parse the audio configuration file
    initialize();     // initialize module/device information: load modules, open devices, etc.;
                      // this runs all the way down through the HAL to the kernel layer
}

/// taking loadHwModule as an example ///

mpClientInterface->loadHwModule(hwModule->getName())

// hwModule is one of the modules from the configuration file; its name is a string
// such as "primary", "a2dp" or "usb"

audio_module_handle_t AudioPolicyService::AudioPolicyClient::loadHwModule(const char *name)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return AUDIO_MODULE_HANDLE_NONE;
    }

    return af->loadHwModule(name);     // the actual loading is delegated to AudioFlinger
}

audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    // check the mAudioHwDevs cache to see whether the module is already open;
    // the key is a unique handle and the value is the AudioHwDevice
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }

    sp<DeviceHalInterface> dev;
    // mDevicesFactoryHal holds the audio HIDL client (see the section on how
    // AudioFlinger connects to the HAL); through it we talk to the HIDL server side
    int rc = mDevicesFactoryHal->openDevice(name, &dev);
    if (rc) {
        ALOGE("loadHwModule() error %d loading module %s", rc, name);
        return AUDIO_MODULE_HANDLE_NONE;
    }
    ......
    // on success, cache the newly opened device
    audio_module_handle_t handle = (audio_module_handle_t) nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE);
    mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));

    ALOGI("loadHwModule() Loaded %s audio interface, handle %d", name, handle);

    return handle;

}

status_t DevicesFactoryHalHybrid::openDevice(const char *name, sp<DeviceHalInterface> *device) {
    // talk to the HAL through HIDL
    if (mHidlFactory != 0 && strcmp(AUDIO_HARDWARE_MODULE_ID_A2DP, name) != 0 &&
        strcmp(AUDIO_HARDWARE_MODULE_ID_HEARING_AID, name) != 0) {
        return mHidlFactory->openDevice(name, device);
    }
    // local (in-process) path: internally uses the legacy hw_get_module_by_class to reach the HAL
    return mLocalFactory->openDevice(name, device);
}

status_t DevicesFactoryHalHidl::openDevice(const char *name, sp<DeviceHalInterface> *device) {
    if (mDeviceFactories.empty()) return NO_INIT;
    status_t status;
    // convert the name into a type the HIDL server understands: IDevicesFactory::Device
    // (PRIMARY, A2DP, USB, SUBMIX, STUB)
    auto hidlId = idFromHal(name, &status);
    if (status != OK) return status;
    Result retval = Result::NOT_INITIALIZED;
    // iterate over every cached HIDL client factory (set up when AudioFlinger connects to the HAL, see below)
    for (const auto& factory : mDeviceFactories) {
        // IPC call into the HIDL server's openDevice
        Return<void> ret = factory->openDevice(
                hidlId,
                [&](Result r, const sp<IDevice>& result) {
                    retval = r;
                    if (retval == Result::OK) {
                        *device = new DeviceHalHidl(result);
                    }
                });
        .....
    }
    ALOGW("The specified device name is not recognized: \"%s\"", name);
    return BAD_VALUE;
}
 

AudioFlinger, acting as the client of the audio HAL, makes a cross-process HIDL call to the server-side openDevice method; the parameter is the module name.

The HIDL server-side implementation lives in /hardware/interfaces/audio/core/all-versions/default/DeviceFactory.cpp:

Return<void> DevicesFactory::openDevice(IDevicesFactory::Device device, openDevice_cb _hidl_cb) {
    // open a different device depending on the requested name
    switch (device) {
        case IDevicesFactory::Device::PRIMARY:
            return openDevice<PrimaryDevice>(AUDIO_HARDWARE_MODULE_ID_PRIMARY, _hidl_cb);
        case IDevicesFactory::Device::A2DP:
            return openDevice(AUDIO_HARDWARE_MODULE_ID_A2DP, _hidl_cb);
        .......
    }
    _hidl_cb(Result::INVALID_ARGUMENTS, nullptr);
    return Void();
}

// the template instantiates different device shims; for the primary module a PrimaryDevice is created
template <class DeviceShim, class Callback>
Return<void> DevicesFactory::openDevice(const char* moduleName, Callback _hidl_cb) {
    audio_hw_device_t* halDevice;
    Result retval(Result::INVALID_ARGUMENTS);
    sp<DeviceShim> result;
    // load the HAL device for this module
    int halStatus = loadAudioInterface(moduleName, &halDevice);
    if (halStatus == OK) {
        result = new DeviceShim(halDevice);
        retval = Result::OK;
    } else if (halStatus == -EINVAL) {
        retval = Result::NOT_INITIALIZED;
    }
    _hidl_cb(retval, result);
    return Void();
}

// static
int DevicesFactory::loadAudioInterface(const char* if_name, audio_hw_device_t** dev) {
    const hw_module_t* mod;
    int rc;
    // load the audio HAL module whose class is AUDIO_HARDWARE_MODULE_ID and whose name is if_name
    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    if (rc) {
        ALOGE("%s couldn't load audio hw module %s.%s (%s)", __func__, AUDIO_HARDWARE_MODULE_ID,
              if_name, strerror(-rc));
        goto out;
    }
    // open the module loaded in the previous step
    rc = audio_hw_device_open(mod, dev);
    if (rc) {
        ALOGE("%s couldn't open audio hw device in %s.%s (%s)", __func__, AUDIO_HARDWARE_MODULE_ID,
              if_name, strerror(-rc));
        goto out;
    }
    if ((*dev)->common.version < AUDIO_DEVICE_API_VERSION_MIN) {
        ALOGE("%s wrong audio hw device version %04x", __func__, (*dev)->common.version);
        rc = -EINVAL;
        audio_hw_device_close(*dev);
        goto out;
    }
    return OK;

out:
    *dev = NULL;
    return rc;
}

3 AudioPolicyService faces the application layer: queries and device selection go through it to the AudioPolicyManager module. Once the output device has been decided, the work is handed over to AudioFlinger, which talks to the HAL below.

The audio policy configuration file audio_policy_configuration.xml includes two further XML files:

    ......
    <!-- Volume section -->

    <xi:include href="audio_policy_volumes.xml"/>
    <xi:include href="default_volume_tables.xml"/>
    ......

audio_policy_volumes.xml defines the relationship between stream types, output devices and volume curves.
Different stream types use different curves, and the same stream type uses different curves on different output devices, so the mapping between the three must be spelled out. After serializer.deserialize parses the XML, this mapping appears in the code as the VolumeCurvesCollection → VolumeCurvesForStream → VolumeCurve hierarchy.
 

default_volume_tables.xml contains the actual curve data, e.g. the x/y points of the DEFAULT_MEDIA_VOLUME_CURVE curve.
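To make the curve idea concrete, here is a small illustrative sketch of the index-to-dB conversion. The curve points below are made up; the native VolumeCurve performs essentially this kind of linear interpolation over the points provided by audio_policy_volumes.xml and default_volume_tables.xml.

public class VolumeCurveSketch {
    // Made-up curve points (index percent, gain in millibels), mimicking the shape
    // of a media curve; the real values come from default_volume_tables.xml.
    static final int[][] CURVE = { {1, -4950}, {33, -3350}, {66, -1700}, {100, 0} };

    /** Map a volume index to a gain in dB by interpolating between curve points. */
    static float indexToDb(int index, int minIndex, int maxIndex) {
        int percent = (index - minIndex) * 100 / (maxIndex - minIndex);
        if (percent <= CURVE[0][0]) {
            return CURVE[0][1] / 100.0f;
        }
        for (int i = 1; i < CURVE.length; i++) {
            if (percent <= CURVE[i][0]) {
                float frac = (float) (percent - CURVE[i - 1][0]) / (CURVE[i][0] - CURVE[i - 1][0]);
                float millibels = CURVE[i - 1][1] + frac * (CURVE[i][1] - CURVE[i - 1][1]);
                return millibels / 100.0f;   // millibels -> dB
            }
        }
        return CURVE[CURVE.length - 1][1] / 100.0f;
    }
}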

Each CarAudioDeviceInfo in audio_policy_configuration.xml has a device identifier called its busNumber.

AudioControl.cpp defines the bus number that each audio context (contextNumber) maps to; this can be changed to suit the product:

static int sContextToBusMap[] = {
      -1,     // INVALID
       0,     // MUSIC_CONTEXT
       1,     // NAVIGATION_CONTEXT
       2,     // VOICE_COMMAND_CONTEXT
       3,     // CALL_RING_CONTEXT
       4,     // CALL_CONTEXT
       5,     // ALARM_CONTEXT
       6,     // NOTIFICATION_CONTEXT
       7,     // SYSTEM_SOUND_CONTEXT
  };
This fixes the contextNumber → busNumber → CarAudioDeviceInfo mapping. For example, when the contextNumber is MUSIC_CONTEXT, its bus number is 0, and bus 0 in turn maps to one CarAudioDeviceInfo.

Android further groups the audio contexts in car_volume_groups.xml; each group corresponds to a CarVolumeGroup object, which stores the contextNumber → busNumber → CarAudioDeviceInfo mapping for the contexts in that group.

With that in place, the groupId parameter of setGroupVolume makes it easy to control the hardware volume of the output devices behind a given set of audio contexts, as sketched below.
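For illustration, a minimal sketch of driving a group's hardware volume from a privileged app; this assumes the Android 10 car API and the android.car.permission.CAR_CONTROL_AUDIO_VOLUME system permission.

import android.car.Car;
import android.car.media.CarAudioManager;
import android.content.Context;

public class GroupVolumeSketch {
    /** Raise the volume of one car volume group by a single index step. */
    static void raiseGroupVolume(Context context, int groupId) {
        Car car = Car.createCar(context);
        CarAudioManager carAudioManager =
                (CarAudioManager) car.getCarManager(Car.AUDIO_SERVICE);

        int current = carAudioManager.getGroupVolume(groupId);
        int max = carAudioManager.getGroupMaxVolume(groupId);

        // CarAudioService resolves groupId -> contexts -> bus -> CarAudioDeviceInfo,
        // converts the index to gainInMillibels and pushes it down to the HAL.
        carAudioManager.setGroupVolume(groupId, Math.min(current + 1, max), 0 /* flags */);
    }
}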

When the volume is adjusted, the stream parameter is first used to find the VolumeCurvesForStream object, the device parameter then selects the concrete VolumeCurve, and finally the index parameter together with the curve yields the volume in decibels.


 

4 AudioFlinger interacts with the application layer above it and with the HAL layer below it.

Loading the AudioFlinger module: its constructor instantiates the device factory interface and the effects factory interface, after which the AudioFlinger module is fully created.

Loading the AudioPolicyService module: see the onFirstRef() flow shown earlier.

5 AudioFlinger connects to the HAL

AudioFlinger::AudioFlinger()
    : BnAudioFlinger(),
      ....
{
    ......
    // create the factory used to reach the audio device HAL
    mDevicesFactoryHal = DevicesFactoryHalInterface::create();
    // create the factory used to reach the audio effects HAL
    mEffectsFactoryHal = EffectsFactoryHalInterface::create();
    ......
}

// static
sp<DevicesFactoryHalInterface> DevicesFactoryHalInterface::create() {
    // IDevicesFactory is the HAL's HIDL interface; query which version the HAL supports first,
    // then create the matching HIDL client
    if (hardware::audio::V5_0::IDevicesFactory::getService() != nullptr) {
        return V5_0::createDevicesFactoryHal();
    }
    if (hardware::audio::V4_0::IDevicesFactory::getService() != nullptr) {
        return V4_0::createDevicesFactoryHal();
    }
    if (hardware::audio::V2_0::IDevicesFactory::getService() != nullptr) {
        return V2_0::createDevicesFactoryHal();
    }
    return nullptr;
}

sp<DevicesFactoryHalInterface> createDevicesFactoryHal() {
    return new DevicesFactoryHalHybrid();
}

DevicesFactoryHalHybrid::DevicesFactoryHalHybrid()
        : mLocalFactory(new DevicesFactoryHalLocal()),
        // note the following member
          mHidlFactory(new DevicesFactoryHalHidl()) {
}

IV. Volume Control

There are two ways volume can be controlled. One is the stock Android approach of scaling the PCM data, i.e. adjusting the output amplitude directly: software volume. The other controls the volume in hardware, so-called hardware volume, where the DSP configures the amplifier and similar components. This part mainly covers hardware volume.

In CarAudioService.java, when mUseDynamicRouting is true, setGroupVolume sets the hardware volume; when it is false, the stock Android software volume is used. In CarVolumeGroup.java, the value passed to setGroupVolume and the information from audio_policy_configuration.xml are used to compute the gain gainInMillibels, which is then passed all the way down to the HAL (where audio_hw.c's adev_set_audio_port_config applies the volume curve). There the volume is recomputed against the curve, and finally the interface provided by the BSP is called to set the volume.

Android provides two interfaces for adjusting the volume: adjustStreamVolume and setStreamVolume. adjustStreamVolume takes a direction, while setStreamVolume takes an absolute volume index. In both cases the output device is first found from the stream type, then the volume curve is looked up from the stream type and output device and used to compute the value in dB, and finally the value is applied to the corresponding mixer thread (PlaybackThread) to take effect. Note that adjusting the volume down to 0 is treated as a muteAdjust, which Android handles specially.
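For reference, a minimal sketch of the two app-facing calls mentioned above:

import android.content.Context;
import android.media.AudioManager;

public class StreamVolumeSketch {
    static void tweakMusicVolume(Context context) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

        // Relative adjustment: step the music stream up by one index.
        am.adjustStreamVolume(AudioManager.STREAM_MUSIC,
                AudioManager.ADJUST_RAISE, AudioManager.FLAG_SHOW_UI);

        // Absolute adjustment: set the index directly (0 .. getStreamMaxVolume()).
        int max = am.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
        am.setStreamVolume(AudioManager.STREAM_MUSIC, max / 2, 0 /* flags */);
    }
}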
 

