Recently a product colleague requested a zoned audio-effect feature for the car head unit: voice-control the audio/video apps on the head unit and dynamically adjust the left/right channel balance and the bass.
Approach: AudioTrack is the class commonly used to play audio on Android. When an AudioTrack object is created, the AudioFlinger system service assigns it a session ID (by default session IDs are unique and monotonically increasing); AudioTrack objects sharing the same session ID share the same audio effects. Audio effects include the equalizer, environmental reverb, preset reverb, loudness enhancer, visualizer, and so on; the effect described in this article is essentially the loudness enhancer extended with left/right channel and bass control. To control a third-party app's effects from our own app, we first need that app's pid and the session ID assigned when its AudioTrack was built. The pid is relatively easy to get: when an app starts playing audio it normally requests audio focus, and the Java-layer audio service AudioService can read the caller's pid at that point. The session ID is the hard part: once AudioFlinger assigns it to an AudioTrack, that information lives inside the AF service and is normally inaccessible to applications. Getting it out means adding Binder interfaces and relaying it across processes, step by step, from the audioserver process where AF lives all the way up to our own app process; that is quite a distance to cover!
Background: all Android source in this article is based on Android N. On the native side, the core system services managing the audio stack come in two parts. The first is AudioFlinger (AF), responsible for executing audio policy, mixing audio streams, and loading/accessing the audio hardware interfaces; every Java-layer AudioTrack object (one audio stream) is represented inside AF by a Track object. The second is AudioPolicyService (APS), responsible for making audio policy and loading the audio configuration files (the changes in this article do not touch it). Finally, the core Java-layer audio service is AudioService (AS), which mainly handles per-stream volume and audio focus management. From a process perspective, AudioFlinger and AudioPolicyService run in the audioserver process, not in system_server (audioserver is a separate process brought up at boot); AudioService runs in the system_server process; AudioSystem and AudioTrack run in the user's app process.
Implementation steps:
1. Based on the loudness-enhancer effect (LoudnessEnhancer), add control interfaces for the left/right channels and bass (native and Java layers)
system/media/audio_effects/include/audio_effects/effect_loudnessenhancer.h
...
#define LOUDNESS_ENHANCER_DEFAULT_TARGET_GAIN_MB 0 // mB
// enumerated parameters for DRC effect
// to keep in sync with frameworks/base/media/java/android/media/audiofx/LoudnessEnhancer.java
typedef enum
{
LOUDNESS_ENHANCER_PARAM_TARGET_GAIN_MB = 0,// target gain expressed in mB
/** add left/right channel control **/
LOUDNESS_ENHANCER_PARAM_LEFT_CHANNEL = 1, //left channel volume add by xpzhi
LOUDNESS_ENHANCER_PARAM_RIGHT_CHANNEL = 2, //right channel volume add by xpzhi
} t_level_monitor_params;
#if __cplusplus
} // extern "C"
#endif
...
The header only adds the parameter IDs used in the later switch statements; the real implementation is in the cpp source file.
frameworks/av/media/libeffects/loudness/EffectLoudnessEnhancer.cpp
extern "C" {
// effect_handle_t interface implementation for LE effect
extern const struct effect_interface_s gLEInterface;
// AOSP Loudness Enhancer UUID: fa415329-2034-4bea-b5dc-5b381c8d1e2c
const effect_descriptor_t gLEDescriptor = {
{0xfe3199be, 0xaed0, 0x413f, 0x87bb, {0x11, 0x26, 0x0e, 0xb6, 0x3c, 0xf1}}, // type
{0xfa415329, 0x2034, 0x4bea, 0xb5dc, {0x5b, 0x38, 0x1c, 0x8d, 0x1e, 0x2c}}, // uuid
EFFECT_CONTROL_API_VERSION,
(EFFECT_FLAG_TYPE_INSERT | EFFECT_FLAG_INSERT_FIRST | EFFECT_FLAG_VOLUME_CTRL /*add by xpzhi: take over volume control for the new channel parameters*/),
0, // TODO
1,
"Loudness Enhancer",
"The Android Open Source Project",
};
...
struct LoudnessEnhancerContext {
const struct effect_interface_s *mItfe;
effect_config_t mConfig;
uint8_t mState;
int32_t mTargetGainmB;// target gain in mB
volatile uint32_t mLeftVolume; // add by xpzhi: left channel volume
volatile uint32_t mRightVolume;// add by xpzhi: right channel volume
// in this implementation, there is no coupling between the compression on the left and right
// channels
le_fx::AdaptiveDynamicRangeCompression* mCompressor;
};
...
int LE_init(LoudnessEnhancerContext *pContext)
{
ALOGV("LE_init(%p)", pContext);
pContext->mConfig.inputCfg.accessMode = EFFECT_BUFFER_ACCESS_READ;
pContext->mConfig.inputCfg.channels = AUDIO_CHANNEL_OUT_STEREO;
pContext->mConfig.inputCfg.format = AUDIO_FORMAT_PCM_16_BIT;
pContext->mConfig.inputCfg.samplingRate = 44100;
pContext->mConfig.inputCfg.bufferProvider.getBuffer = NULL;
pContext->mConfig.inputCfg.bufferProvider.releaseBuffer = NULL;
pContext->mConfig.inputCfg.bufferProvider.cookie = NULL;
pContext->mConfig.inputCfg.mask = EFFECT_CONFIG_ALL;
pContext->mConfig.outputCfg.accessMode = EFFECT_BUFFER_ACCESS_ACCUMULATE;
pContext->mConfig.outputCfg.channels = AUDIO_CHANNEL_OUT_STEREO;
pContext->mConfig.outputCfg.format = AUDIO_FORMAT_PCM_16_BIT;
pContext->mConfig.outputCfg.samplingRate = 44100;
pContext->mConfig.outputCfg.bufferProvider.getBuffer = NULL;
pContext->mConfig.outputCfg.bufferProvider.releaseBuffer = NULL;
pContext->mConfig.outputCfg.bufferProvider.cookie = NULL;
pContext->mConfig.outputCfg.mask = EFFECT_CONFIG_ALL;
pContext->mLeftVolume = (0x1000000 >> 4); //add by xpzhi: left channel default (1/16 of unity in 8.24 fixed point)
pContext->mRightVolume = (0x1000000 >> 4); //add by xpzhi: right channel default (1/16 of unity in 8.24 fixed point)
pContext->mTargetGainmB = LOUDNESS_ENHANCER_DEFAULT_TARGET_GAIN_MB;
...
return 0;
}
...
int LE_command(effect_handle_t self, uint32_t cmdCode, uint32_t cmdSize,
void *pCmdData, uint32_t *replySize, void *pReplyData) {
LoudnessEnhancerContext * pContext = (LoudnessEnhancerContext *)self;
int retsize;
if (pContext == NULL || pContext->mState == LOUDNESS_ENHANCER_STATE_UNINITIALIZED) {
return -EINVAL;
}
// ALOGV("LE_command command %d cmdSize %d",cmdCode, cmdSize);
switch (cmdCode) {
...
case EFFECT_CMD_GET_PARAM: {
...
switch (*(uint32_t *)p->data) {
case LOUDNESS_ENHANCER_PARAM_TARGET_GAIN_MB:
ALOGV("get target gain(mB) = %d", pContext->mTargetGainmB);
*((int32_t *)p->data + 1) = pContext->mTargetGainmB;
p->vsize = sizeof(int32_t);
*replySize += sizeof(int32_t);
break;
//get the left/right channel volume
case LOUDNESS_ENHANCER_PARAM_LEFT_CHANNEL:
ALOGV("===xpzhi====get the left ch volume = %d", pContext->mLeftVolume);
*((int32_t *)p->data + 1) = pContext->mLeftVolume;
p->vsize = sizeof(int32_t);
*replySize += sizeof(int32_t);
break;
case LOUDNESS_ENHANCER_PARAM_RIGHT_CHANNEL:
ALOGV("===xpzhi=====get the right ch volume = %d", pContext->mRightVolume);
*((int32_t *)p->data + 1) = pContext->mRightVolume;
p->vsize = sizeof(int32_t);
*replySize += sizeof(int32_t);
break;
default:
p->status = -EINVAL;
}
} break;
...
case EFFECT_CMD_SET_PARAM: {
...
switch (*(uint32_t *)p->data) {
case LOUDNESS_ENHANCER_PARAM_TARGET_GAIN_MB:
pContext->mTargetGainmB = *((int32_t *)p->data + 1);
ALOGV("set target gain(mB) = %d", pContext->mTargetGainmB);
LE_reset(pContext); // apply parameter update
break;
//set the left/right channel volume
case LOUDNESS_ENHANCER_PARAM_LEFT_CHANNEL:
pContext->mLeftVolume = *((int32_t *)p->data + 1);
ALOGV("===xpzhi====set the left volume = %d", pContext->mLeftVolume);
if (pContext->mLeftVolume > 0x1000000) {
pContext->mLeftVolume = 0x1000000;
}
break;
case LOUDNESS_ENHANCER_PARAM_RIGHT_CHANNEL:
pContext->mRightVolume = *((int32_t *)p->data + 1);
ALOGV("===xpzhi======set the right volume = %d", pContext->mRightVolume);
if (pContext->mRightVolume > 0x1000000) {
pContext->mRightVolume = 0x1000000;
}
break;
default:
*(int32_t *)pReplyData = -EINVAL;
}
} break;
...
case EFFECT_CMD_SET_VOLUME:
ALOGD("===xpzhi=======EFFECT_CMD_SET_VOLUME enter!!");
// add by xpzhi: with EFFECT_FLAG_VOLUME_CTRL set in the descriptor, AudioFlinger reads the
// per-channel volume back from this command (in Android N, EffectModule::setVolume() passes
// the same buffer as cmdData and replyData), so the stored channel volumes are what gets applied
*(uint32_t *)pCmdData = pContext->mLeftVolume;
*((uint32_t *)pCmdData + 1) = pContext->mRightVolume;
break;
...
}
}
}
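The channel volumes above use the effect framework's 8.24 fixed-point convention, where 0x1000000 represents unity gain (so the default 0x1000000 >> 4 is 1/16 of full volume). A minimal sketch of how such a gain applies to a 16-bit PCM sample (illustrative only, not code from the patch):

```java
public class FixedPointGain {
    // 1.0 in 8.24 fixed point, as used by the effect framework's volume commands
    static final long ONE_8_24 = 0x1000000L;

    // Apply an 8.24 fixed-point gain to a 16-bit PCM sample, with clamping
    static short applyGain(short sample, long gain8_24) {
        long scaled = (sample * gain8_24) >> 24;
        if (scaled > Short.MAX_VALUE) scaled = Short.MAX_VALUE;
        if (scaled < Short.MIN_VALUE) scaled = Short.MIN_VALUE;
        return (short) scaled;
    }

    public static void main(String[] args) {
        System.out.println(applyGain((short) 1000, ONE_8_24));      // unity gain: prints 1000
        System.out.println(applyGain((short) 1000, ONE_8_24 >> 4)); // default from LE_init: prints 62
    }
}
```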
The native layer now has set/get commands for the left/right channel volume plus the channel defaults. Next, the Java layer:
frameworks/base/media/java/android/media/audiofx/LoudnessEnhancer.java
public class LoudnessEnhancer extends AudioEffect {
private final static String TAG = "LoudnessEnhancer";
// These parameter constants must be synchronized with those in
// /system/media/audio_effects/include/audio_effects/effect_loudnessenhancer.h
/**
* The maximum gain applied to the signal to process.
* It is expressed in millibels (100mB = 1dB) where 0mB corresponds to no amplification.
*/
public static final int PARAM_TARGET_GAIN_MB = 0;
public static final int PARAM_LEFT_CHANNEL = 1;//add by xpzhi
public static final int PARAM_RIGHT_CHANNEL = 2;// add by xpzhi
/**
* Registered listener for parameter changes.
*/
private OnParameterChangeListener mParamListener = null;
...
/**
* Return the target gain.
* @return the effect target gain expressed in mB.
* @throws IllegalStateException
* @throws IllegalArgumentException
* @throws UnsupportedOperationException
*/
public float getTargetGain()
throws IllegalStateException, IllegalArgumentException, UnsupportedOperationException {
int[] value = new int[1];
checkStatus(getParameter(PARAM_TARGET_GAIN_MB, value));
return value[0];
}
//add by xpzhi start: new setters/getters for the left/right channel volume
public void setLeftVolume(int vol)
throws IllegalStateException, IllegalArgumentException, UnsupportedOperationException {
Log.e(TAG,"===xpzhi====setLeftVolume()====");
checkStatus(setParameter(PARAM_LEFT_CHANNEL, vol));
}
public int getLeftVolume()
throws IllegalStateException, IllegalArgumentException, UnsupportedOperationException {
Log.e(TAG,"===xpzhi====getLeftVolume()====");
int[] value = new int[1];
checkStatus(getParameter(PARAM_LEFT_CHANNEL, value));
return value[0];
}
public void setRightVolume(int vol)
throws IllegalStateException, IllegalArgumentException, UnsupportedOperationException {
Log.e(TAG,"===xpzhi====setRightVolume()====");
checkStatus(setParameter(PARAM_RIGHT_CHANNEL, vol));
}
public int getRightVolume()
throws IllegalStateException, IllegalArgumentException, UnsupportedOperationException {
Log.e(TAG,"===xpzhi====getRightVolume()====");
int[] value = new int[1];
checkStatus(getParameter(PARAM_RIGHT_CHANNEL, value));
return value[0];
}
//add by xpzhi end
/**
* @hide
* The OnParameterChangeListener interface defines a method called by the LoudnessEnhancer
* when a parameter value has changed.
*/
...
}
The Java layer adds set/get methods for the left/right channel volume, matching the native layer one to one. Also note that from here on the public Java API has changed, so the system API files must be updated (e.g. via make update-api) before the next system build!
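As an aside, the target gain handled by PARAM_TARGET_GAIN_MB is expressed in millibels, where 100 mB = 1 dB. The conversion to a linear amplitude factor can be sketched as follows (illustrative helper, not part of the patch):

```java
public class Millibels {
    // Convert a gain in millibels (100 mB = 1 dB) to a linear amplitude factor:
    // factor = 10^(dB/20) = 10^(mB/2000)
    static double mbToLinear(int mb) {
        return Math.pow(10.0, mb / 2000.0);
    }

    public static void main(String[] args) {
        System.out.println(mbToLinear(0));    // 0 mB = no amplification: prints 1.0
        System.out.println(mbToLinear(2000)); // 2000 mB = 20 dB = 10x amplitude: prints 10.0
    }
}
```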
2. Add a callback interface to AudioSystem that fires when a third-party app acquires a sessionID (native, JNI and Java layers)
frameworks/av/include/media/AudioSystem.h
...
namespace android {
typedef void (*audio_acquire_session_callback)(int pid,int sid);//add by xpzhi: function-pointer type taking two int parameters
typedef void (*audio_error_callback)(status_t err);
typedef void (*dynamic_policy_callback)(int event, String8 regId, int val);
typedef void (*record_config_callback)(int event, audio_session_t session, int source,
const audio_config_base_t *clientConfig, const audio_config_base_t *deviceConfig,
audio_patch_handle_t patchHandle);
...
// The versions without audio_io_handle_t are intended for JNI.
static status_t setParameters(const String8& keyValuePairs);
static String8 getParameters(const String8& keys);
static void setAcquireSessionCallback(audio_acquire_session_callback cb);//add by xpzhi: declare the setter for the session-acquisition callback
static void setErrorCallback(audio_error_callback cb);
...
static Mutex gLock; // protects gAudioFlinger and gAudioErrorCallback,
static Mutex gLockAPS; // protects gAudioPolicyService and gAudioPolicyServiceClient
static sp<IAudioFlinger> gAudioFlinger;
static audio_acquire_session_callback gAudioAcquireSessionCallback;//add by xpzhi: variable holding the registered session-acquisition callback
static audio_error_callback gAudioErrorCallback;
...
}
The header declares the function-pointer type audio_acquire_session_callback (taking two int parameters), a variable of that type, gAudioAcquireSessionCallback, and the setter setAcquireSessionCallback(), which takes an audio_acquire_session_callback argument and stores it into gAudioAcquireSessionCallback.
frameworks/av/media/libmedia/AudioSystem.cpp
...
namespace android {
// client singleton for AudioFlinger binder interface
Mutex AudioSystem::gLock;
Mutex AudioSystem::gLockAPS;
sp<IAudioFlinger> AudioSystem::gAudioFlinger;
sp<AudioSystem::AudioFlingerClient> AudioSystem::gAudioFlingerClient;
audio_acquire_session_callback AudioSystem::gAudioAcquireSessionCallback = NULL;//add by xpzhi: definition of the callback pointer
audio_error_callback AudioSystem::gAudioErrorCallback = NULL;
dynamic_policy_callback AudioSystem::gDynPolicyCallback = NULL;
record_config_callback AudioSystem::gRecordConfigCallback = NULL;
...
audio_unique_id_t AudioSystem::newAudioUniqueId(audio_unique_id_use_t use)
{
const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
if (af == 0) return AUDIO_UNIQUE_ID_ALLOCATE;
return af->newAudioUniqueId(use);
}
void AudioSystem::acquireAudioSessionId(audio_session_t audioSession, pid_t pid)
{
ALOGE("===xpzhi=====acquireAudioSessionId()=====");
const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
if (af != 0) {
af->acquireAudioSessionId(audioSession, pid);
}
//add by xpzhi start: after an app acquires a sessionID, invoke the saved callback to notify the upper layers
{
Mutex::Autolock _l(gLock);
if (gAudioAcquireSessionCallback) {
ALOGD("===xpzhi===AudioSystem::acquireAudioSessionId(): gAudioSessionCallback: %p; audioSession(%d)", gAudioAcquireSessionCallback, audioSession);
gAudioAcquireSessionCallback(pid,audioSession);
} else {
ALOGE("===xpzhi===AudioSystem::acquireAudioSessionId(): gAudioSessionCallback is null---!audioSession(%d)", audioSession);
}
}
//add by xpzhi end
}
...
/*static*/ void AudioSystem::setRecordConfigCallback(record_config_callback cb)
{
Mutex::Autolock _l(gLock);
gRecordConfigCallback = cb;
}
//add by xpzhi start: implement the setter declared in the header
/*static*/ void AudioSystem::setAcquireSessionCallback(audio_acquire_session_callback cb)
{
Mutex::Autolock _l(gLock);
// save the callback into the function-pointer variable
gAudioAcquireSessionCallback = cb;
ALOGE("===xpzhi==AudioSystem::setAcquireSessionCallback======%p",cb);
}
//add by xpzhi end
...
}
To get a third-party app's pid and sessionID up to our own application once the app acquires a session ID, the framework registers a callback into AudioSystem.cpp ahead of time and stores it in the gAudioAcquireSessionCallback function pointer. Whenever any app acquires a new session ID, that callback fires, carrying the app's pid and session ID; acquireAudioSessionId() is always executed when a new session ID is acquired. Next, the C++ layer notifies the JNI layer.
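The registration pattern above — one process-global callback slot, set once and invoked on every session acquisition — can be sketched in plain Java (hypothetical class and method names, not Android API):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SessionCallbackRegistry {
    // Analogue of audio_acquire_session_callback: a (pid, sessionId) consumer
    interface AcquireSessionCallback {
        void onAcquired(int pid, int sessionId);
    }

    // Analogue of gAudioAcquireSessionCallback, safe for cross-thread set/get
    private static final AtomicReference<AcquireSessionCallback> sCallback = new AtomicReference<>();

    // Analogue of AudioSystem::setAcquireSessionCallback()
    static void setAcquireSessionCallback(AcquireSessionCallback cb) {
        sCallback.set(cb);
    }

    // Called from the acquireAudioSessionId() path; no-op if nothing is registered
    static void notifyAcquired(int pid, int sessionId) {
        AcquireSessionCallback cb = sCallback.get();
        if (cb != null) cb.onAcquired(pid, sessionId);
    }
}
```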
frameworks/base/core/jni/android_media_AudioSystem.cpp
class JNIAudioPortCallback: public AudioSystem::AudioPortCallback
{
...
static jstring
android_media_AudioSystem_getParameters(JNIEnv *env, jobject thiz, jstring keys)
{
const jchar* c_keys = env->GetStringCritical(keys, 0);
String8 c_keys8;
if (keys) {
c_keys8 = String8(reinterpret_cast<const char16_t*>(c_keys),
env->GetStringLength(keys));
env->ReleaseStringCritical(keys, c_keys);
}
return env->NewStringUTF(AudioSystem::getParameters(c_keys8).string());
}
//add by xpzhi start: JNI-layer callback invoked after an app acquires a sessionID
static JavaVM *g_JavaVM;
bool g_bAttachedThread = false;
static JNIEnv *GetEnv();
static void DetachCurrent();
static void
android_media_AudioSystem_acquire_session_callback(int pid,int sid)
{
JNIEnv *env = AndroidRuntime::getJNIEnv();
JNIEnv *envnow = env;
ALOGW("===xpzhi==android_media_AudioSystem_session_callback() enter-----");
if (env == NULL) {
ALOGE("android_media_AudioSystem_session_callback() env == NULL---");
ALOGE("android_media_AudioSystem_session_callback() g_JavaVM = %p", g_JavaVM);
if (g_JavaVM) {
envnow = GetEnv();
if (!envnow) {
ALOGE("android_media_AudioSystem_session_callback() envnow == NULL---");
return;
}
} else {
ALOGE("android_media_AudioSystem_session_callback() g_JavaVM = NULL");
return;
}
} else {
g_JavaVM = AndroidRuntime::getJavaVM();
ALOGD("android_media_AudioSystem_session_callback() g_JavaVM = %p----", g_JavaVM);
}
// look up the Java-layer sessionCallbackFromNative() method by name and signature and invoke it
// (note: FindClass from a natively attached thread resolves against the system class loader)
jclass clazz = envnow->FindClass(kClassPathName);
ALOGW("===xpzhi===android_media_AudioSystem_session_callback() CallStaticVoidMethod()");
envnow->CallStaticVoidMethod(clazz, envnow->GetStaticMethodID(clazz,
"sessionCallbackFromNative","(II)V"), pid,sid);
envnow->DeleteLocalRef(clazz);
if (g_JavaVM)
DetachCurrent();
}
static JNIEnv *GetEnv()
{
int status;
JNIEnv *envnow = NULL;
status = g_JavaVM->GetEnv((void **)&envnow, JNI_VERSION_1_4);
if(status < 0)
{
status = g_JavaVM->AttachCurrentThread(&envnow, NULL);
if(status < 0)
{
return NULL;
}
g_bAttachedThread = true;
}
return envnow;
}
static void DetachCurrent()
{
if(g_bAttachedThread)
{
g_JavaVM->DetachCurrentThread();
g_bAttachedThread = false; // reset, otherwise a later call could detach a thread we did not attach
}
}
//add by xpzhi end
...
int register_android_media_AudioSystem(JNIEnv *env)
{
jclass arrayListClass = FindClassOrDie(env, "java/util/ArrayList");
gArrayListClass = MakeGlobalRefOrDie(env, arrayListClass);
gArrayListMethods.add = GetMethodIDOrDie(env, arrayListClass, "add", "(Ljava/lang/Object;)Z");
gArrayListMethods.toArray = GetMethodIDOrDie(env, arrayListClass, "toArray", "()[Ljava/lang/Object;");
...
//add by xpzhi: register the JNI-layer session-acquisition callback with AudioSystem.cpp
AudioSystem::setAcquireSessionCallback(android_media_AudioSystem_acquire_session_callback);
RegisterMethodsOrDie(env, kClassPathName, gMethods, NELEM(gMethods));
return RegisterMethodsOrDie(env, kEventHandlerClassPathName,gEventHandlerMethods,NELEM(gEventHandlerMethods));
}
}
When the JNI methods are registered, the session-acquisition callback is also registered into AudioSystem.cpp, so after a third-party app acquires a new session ID, the JNI-layer android_media_AudioSystem_acquire_session_callback() runs. It looks up the Java-layer sessionCallbackFromNative() method by name and invokes it with the third-party app's pid and session ID. Now look at the corresponding Java-side implementation:
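The JNI up-call resolves its target by name and signature at runtime, much like Java reflection. A plain-Java sketch of the same lookup-and-invoke step (hypothetical class, mirroring the "(II)V" signature):

```java
import java.lang.reflect.Method;

public class UpCallDemo {
    // Stand-in for the hidden Java-layer target that JNI resolves by name + "(II)V"
    static int sLastPid, sLastSid;
    private static void sessionCallbackFromNative(int pid, int sid) {
        sLastPid = pid;
        sLastSid = sid;
    }

    // Mirrors FindClass + GetStaticMethodID + CallStaticVoidMethod from the JNI callback
    static boolean invokeByName(int pid, int sid) {
        try {
            Method m = UpCallDemo.class.getDeclaredMethod(
                    "sessionCallbackFromNative", int.class, int.class);
            m.setAccessible(true); // the target is private, like a hidden framework method
            m.invoke(null, pid, sid);
            return true;
        } catch (ReflectiveOperationException e) {
            return false; // lookup or invocation failed
        }
    }
}
```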
frameworks/base/media/java/android/media/AudioSystem.java
public class AudioSystem
{
...
private static ErrorCallback mErrorCallback;
private static IAudioService sService; // add by xpzhi
/*
* Handles the audio error callback.
*/
public interface ErrorCallback
{
/*
* Callback for audio server errors.
* param error error code:
* - AUDIO_STATUS_OK
* - AUDIO_STATUS_SERVER_DIED
* - AUDIO_STATUS_ERROR
*/
void onError(int error);
};
/***add by xpzhi: obtain the AudioService binder***/
private static IAudioService getService()
{
if (sService != null) {
return sService;
}
IBinder b = ServiceManager.getService(Context.AUDIO_SERVICE);
sService = IAudioService.Stub.asInterface(b);
return sService;
}
...
public static void setRecordingCallback(AudioRecordingCallback cb) {
synchronized (AudioSystem.class) {
sRecordingCallback = cb;
native_register_recording_callback();
}
}
/*** add by xpzhi: called from the JNI layer whenever a third-party app acquires a sessionID ***/
private static void sessionCallbackFromNative(int pid,int sid)
{
Log.e(TAG,"==xpzhi=====sessionCallbackFromNative==========");//add by xpzhi 2019-11-26
IAudioService service = getService();
try {
Log.e(TAG, "===xpzhi==AudioSystem(java):audioSystemToService() set!");
//forward the pid and sessionID to AudioService right away
service.audioSystemToService(pid,sid);
} catch (RemoteException e) {
throw e.rethrowFromSystemServer();
}
}
/**
* Callback from native for recording configuration updates.
* @param event
* @param session
* @param source
* @param recordingFormat see
* {@link AudioRecordingCallback#onRecordingConfigurationChanged(int, int, int, int[])} for
* the description of the record format.
*/
...
}
At this point, whenever a third-party app builds an AudioTrack and acquires a session ID from AudioFlinger, AudioSystem.cpp learns its pid and session ID through the pre-registered callback, immediately notifies the JNI and Java layers, and finally forwards both values over the AIDL Binder interface into the AudioService service. On to AudioService:
3. Add an AIDL interface to AudioService for the session-acquisition callback
frameworks/base/media/java/android/media/IAudioService.aidl
oneway void unregisterRecordingCallback(in IRecordingConfigDispatcher rcdb);
List<AudioRecordingConfiguration> getActiveRecordingConfigurations();
void audioSystemToService(int pid,int sid);//new AIDL method
void updateRemoteControllerOnExistingMediaPlayers();
Add the audioSystemToService() method to the AIDL interface; it is implemented in the Java code below.
frameworks/base/services/core/java/com/android/server/audio/AudioService.java
public class AudioService extends IAudioService.Stub {
...
private SoundPool mSoundPool;
private final Object mSoundEffectsLock = new Object();
private static final int NUM_SOUNDPOOL_CHANNELS = 4;
//add by xpzhi start: custom broadcast ACTIONs
private final static String ACTION_MOGO_AUDIO_FOCUS="com.mogo.effect.focus";
private final static String ACTION_MOGO_AUDIO_ABANDON="com.mogo.effect.abandon";
private final static String ACTION_MOGO_AUDIO_SESSION="com.mogo.effect.session";
//add by xpzhi end
...
//==========================================================================================
// Audio Focus
//==========================================================================================
public int requestAudioFocus(AudioAttributes aa, int durationHint, IBinder cb,
IAudioFocusDispatcher fd, String clientId, String callingPackageName, int flags,
IAudioPolicyCallback pcb) {
// permission checks
Log.e(TAG,callingPackageName+"===xpzhi====requestAudioFocus==="+Binder.getCallingPid()+"===="+clientId);
if ((flags & AudioManager.AUDIOFOCUS_FLAG_LOCK) == AudioManager.AUDIOFOCUS_FLAG_LOCK) {
if (AudioSystem.IN_VOICE_COMM_FOCUS_ID.equals(clientId)) {
if (PackageManager.PERMISSION_GRANTED != mContext.checkCallingOrSelfPermission(
android.Manifest.permission.MODIFY_PHONE_STATE)) {
Log.e(TAG, "Invalid permission to (un)lock audio focus", new Exception());
return AudioManager.AUDIOFOCUS_REQUEST_FAILED;
}
} else {
// only a registered audio policy can be used to lock focus
synchronized (mAudioPolicies) {
if (!mAudioPolicies.containsKey(pcb.asBinder())) {
Log.e(TAG, "Invalid unregistered AudioPolicy to (un)lock audio focus");
return AudioManager.AUDIOFOCUS_REQUEST_FAILED;
}
}
}
}
//add by xpzhi: a third-party app is requesting audio focus; broadcast its package name and pid to the application layer
sendMogoBroadcast(ACTION_MOGO_AUDIO_FOCUS,Binder.getCallingPid(),-1,callingPackageName);
return mMediaFocusControl.requestAudioFocus(aa, durationHint, cb, fd,
clientId, callingPackageName, flags);
}
//add by xpzhi start: implement the AIDL method
public void audioSystemToService(int pid,int sid){
Log.e(TAG, "===xpzhi==audioSystemToService() enter==== " + pid+"===="+sid);
sendMogoBroadcast(ACTION_MOGO_AUDIO_SESSION,pid,sid,"");
}
public void sendMogoBroadcast(String action, int pid, int sessionId, String pckName) {
if (mContext != null) {
Intent intent = new Intent();
intent.setAction(action);
intent.putExtra("pid", pid);
if (sessionId >= 0) {
intent.putExtra("sessionId", sessionId);
} else {
intent.putExtra("pck", pckName);
}
mContext.sendBroadcast(intent);
Log.e(TAG, "====xpzhi==sendBroadcast=====");
} else {
Log.e(TAG, "====xpzhi==mContext is null=====");
}
}
//add by xpzhi end
public int abandonAudioFocus(IAudioFocusDispatcher fd, String clientId, AudioAttributes aa) {
Log.e(TAG,"====xpzhi==abandonAudioFocus====="+clientId);
//a third-party app is abandoning audio focus; notify the application layer via the custom broadcast
sendMogoBroadcast(ACTION_MOGO_AUDIO_ABANDON,Binder.getCallingPid(),-1,"");//add by xpzhi
return mMediaFocusControl.abandonAudioFocus(fd, clientId, aa);
}
public void unregisterAudioFocusClient(String clientId) {
mMediaFocusControl.unregisterAudioFocusClient(clientId);
}
...
}
At this point AudioSystem.java has carried the third-party app's pid and session ID over the AIDL Binder interface from the app's process into the system_server process where AudioService.java lives; finally, AudioService relays them back down to the application layer via a custom broadcast.
4. AudioService notifies the application layer via custom broadcasts when a third-party app requests or abandons audio focus
A third-party app normally requests audio focus before playing audio and abandons it when playback stops, via requestAudioFocus() and abandonAudioFocus(). In both methods we can send a custom broadcast carrying the caller's package name and pid, so our own app learns the package name and pid of whatever app is currently playing audio simply by receiving the broadcast (see the code in step 3).
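The branching inside sendMogoBroadcast() — session broadcasts carry a sessionId extra, while focus/abandon broadcasts carry the package name instead — can be sketched without Android dependencies as follows (hypothetical helper, using a Map in place of Intent extras):

```java
import java.util.HashMap;
import java.util.Map;

public class MogoBroadcast {
    // Mirrors the extras packed by sendMogoBroadcast(): pid always present;
    // sessionId when valid (>= 0), otherwise the package name
    static Map<String, Object> buildExtras(int pid, int sessionId, String pckName) {
        Map<String, Object> extras = new HashMap<>();
        extras.put("pid", pid);
        if (sessionId >= 0) {
            extras.put("sessionId", sessionId);
        } else {
            extras.put("pck", pckName);
        }
        return extras;
    }
}
```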
=================================================================================================
By step 4 our own app can already obtain a third-party app's pid and the session ID of its AudioTrack, but testing revealed a problem:
every time a media app switches tracks it creates a new AudioTrack, so the session ID is not fixed but changes dynamically. Moreover, testing showed each app creates more than one AudioTrack (two or three is common), and the AudioTrack actually in use is not necessarily the newest one, while our interface always forwards the newest session ID, so the session ID we obtain can differ from the one actually playing!
To fix this bug I turned to system properties: the property service is easy to read and write and can be shared across processes.
Every AudioTrack an app creates has a corresponding Track object in the AF service, and if the app actually uses that AudioTrack, Track::start() is guaranteed to run. So we can write the pid and session ID into a property inside start(), for the application layer to read. See steps 5 and 6:
5. Add a system property in the framework and grant the audioserver process permission to read/write it, for later cross-process data exchange
/device/qcom/msm8953_64/msm8953_64.mk
...
PRODUCT_COPY_FILES += device/qcom/msm8953_64/whitelistedapps.xml:system/etc/whitelistedapps.xml \
device/qcom/msm8953_64/gamedwhitelist.xml:system/etc/gamedwhitelist.xml
#add by xpzhi: add a persist property
PRODUCT_PROPERTY_OVERRIDES += \
persist.audio.mogo=mogo
PRODUCT_PROPERTY_OVERRIDES += \
dalvik.vm.heapminfree=4m \
dalvik.vm.heapstartsize=16m
$(call inherit-product, frameworks/native/build/phone-xhdpi-2048-dalvik-heap.mk)
$(call inherit-product, device/qcom/common/common64.mk)
...
The company was using Qualcomm's msm8953 platform at the time; adjust the path for your own device.
system/sepolicy/audioserver.te
...
# Grant access to audio files to audioserver
allow audioserver audio_data_file:dir ra_dir_perms;
allow audioserver audio_data_file:file create_file_perms;
# Grant access to property_set add by xpzhi: allow the audioserver process to set this property
set_prop(audioserver, audio_prop)
allow audioserver audio_prop:property_service set;
# Needed on some devices for playing audio on paired BT device,
# but seems appropriate for all devices.
unix_socket_connect(audioserver, bluetooth, bluetooth)
...
The permission added here is an SELinux permission. If you forget it, you can also temporarily relax SELinux over adb shell:
setenforce 0 switches to permissive mode (1 restores enforcing mode; getenforce prints the current state).
6. When an audio stream starts playing, record the owning pid and session ID of that Track in the property, for the application layer to read
frameworks/av/services/audioflinger/Tracks.cpp
...
status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event __unused,
audio_session_t triggerSession __unused)
{
status_t status = NO_ERROR;
//add by xpzhi start (property_get/property_set require #include <cutils/properties.h>)
ALOGD("===xpzhi====start(%d), calling pid %d session %d", mName, IPCThreadState::self()->getCallingPid(), mSessionId);
std::string tempStr=("pid="+std::to_string(IPCThreadState::self()->getCallingPid())+";sid="+std::to_string(mSessionId));
char prop[90];
property_get("persist.audio.mogo",prop,"");
ALOGE("===xpzhi==========%s====%d====%s=====",(char *)tempStr.c_str(),(int)tempStr.size(),prop);
if(strcmp(prop,(char *)tempStr.c_str())!=0)
{
ALOGE("===xpzhi====set========");
int mogo=property_set("persist.audio.mogo",(char *)tempStr.c_str());
ALOGE("===xpzhi====set====end====%d===",mogo);
}
//add by xpzhi end
sp<ThreadBase> thread = mThread.promote();
...
}
...
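The value written by Track::start() and its write-only-on-change guard (the strcmp above) can be sketched in plain Java (hypothetical helper names, not from the patch):

```java
public class SessionProp {
    // Mirrors the value format written to persist.audio.mogo in Track::start()
    static String buildValue(int pid, int sessionId) {
        return "pid=" + pid + ";sid=" + sessionId;
    }

    // Only rewrite the property when the value actually changed, as the strcmp guard does
    static boolean needsUpdate(String currentValue, int pid, int sessionId) {
        return !buildValue(pid, sessionId).equals(currentValue);
    }
}
```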
In the application, read the property value via reflection:
String prop = "";
try {
Class<?> clazz = Class.forName("android.os.SystemProperties");
Method method = clazz.getMethod("get", String.class);
//resulting format: pid=479;sid=289
prop = method.invoke(null, "persist.audio.mogo") + "";
Log.i(TAG, "=========prop======" + prop);
} catch (Exception e) {
e.printStackTrace();
}
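On the application side, the returned string still has to be split into the two integers. A minimal parser for the pid=...;sid=... format (hypothetical helper, with basic validation):

```java
public class SessionPropParser {
    // Parse "pid=479;sid=289" into {pid, sid}; returns null on malformed input
    static int[] parse(String prop) {
        if (prop == null) return null;
        String[] parts = prop.split(";");
        if (parts.length != 2) return null;
        try {
            int pid = Integer.parseInt(parts[0].substring(parts[0].indexOf('=') + 1));
            int sid = Integer.parseInt(parts[1].substring(parts[1].indexOf('=') + 1));
            return new int[]{pid, sid};
        } catch (NumberFormatException e) {
            return null; // missing '=' or non-numeric value
        }
    }
}
```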
That concludes the framework-layer implementation of the zoned audio-effect feature. There is also an application-layer part, omitted here; screenshots of the final project are attached.