Android P Audio System (3) ---> AudioFlinger

Introduction

  As we saw earlier, AudioFlinger is the core of Android audio. It plays a bridging role: upward it provides the functional interfaces that the audio application layer needs, and downward it talks directly to the audio HAL to manage the audio devices.

Startup and operation of AudioFlinger

Note: all the code discussed below is from the Android P source tree.

frameworks/av/media/audioserver/audioserver.rc

service audioserver /system/bin/audioserver
    class core
    user audioserver
    # media gid needed for /dev/fm (radio) and for /data/misc/media (tee)
    group audio camera drmrpc inet media mediadrm net_bt net_bt_admin net_bw_acct
    ioprio rt 4
    writepid /dev/cpuset/foreground/tasks /dev/stune/foreground/tasks
    onrestart restart vendor.audio-hal-2-0
    # Keep the original service name for backward compatibility when upgrading
    # O-MR1 devices with framework-only.
    onrestart restart audio-hal-2-0

on property:vts.native_server.on=1
    stop audioserver
on property:vts.native_server.on=0
    start audioserver

The init process executes audioserver.rc, which launches the audioserver binary. That binary is built from frameworks/av/media/audioserver/Android.mk:

LOCAL_PATH:= $(call my-dir)

include $(CLEAR_VARS)

LOCAL_SRC_FILES := \
	main_audioserver.cpp \
	../libaudioclient/aidl/android/media/IAudioRecord.aidl  # main_audioserver.cpp is compiled into audioserver

.......... (some required libraries omitted)

LOCAL_MODULE := audioserver    # the executable named audioserver

LOCAL_INIT_RC := audioserver.rc

LOCAL_CFLAGS := -Werror -Wall

include $(BUILD_EXECUTABLE)

frameworks/av/media/audioserver/main_audioserver.cpp

int main(int argc __unused, char **argv)
{
        .......
        android::hardware::configureRpcThreadpool(4, false /*callerWillJoin*/);
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();        // create and register the AudioFlinger service
        AudioPolicyService::instantiate();  // create and register the AudioPolicyService service
        .......
}

However, there is no instantiate() to be found in frameworks/av/services/audioflinger/AudioFlinger.h. What we do find is that AudioFlinger inherits from both BinderService and BnAudioFlinger.

AudioFlinger.h:


class AudioFlinger :
    public BinderService<AudioFlinger>,
    public BnAudioFlinger
{
    friend class BinderService<AudioFlinger>;   // for AudioFlinger()

public:
    static const char* getServiceName() ANDROID_API { return "media.audio_flinger"; } // called by BinderService when registering with ServiceManager, so that other processes can obtain the AudioFlinger service via getService("media.audio_flinger").
......
}

instantiate() turns up in the BinderService template class:

/*
 * Copyright (C) 2010 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef ANDROID_BINDER_SERVICE_H
#define ANDROID_BINDER_SERVICE_H

#include <stdint.h>

#include <utils/Errors.h>
#include <utils/String16.h>

#include <binder/IServiceManager.h>
#include <binder/IPCThreadState.h>
#include <binder/ProcessState.h>
#include <binder/IServiceManager.h>

// ---------------------------------------------------------------------------
namespace android {

template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false,
                            int dumpFlags = IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()), new SERVICE(), allowIsolated,
                              dumpFlags);
    }

    static void publishAndJoinThreadPool(
            bool allowIsolated = false,
            int dumpFlags = IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT) {
        publish(allowIsolated, dumpFlags);
        joinThreadPool();
    }

    static void instantiate() { publish(); }  // this is what AudioFlinger::instantiate() actually calls.

    static status_t shutdown() { return NO_ERROR; }

private:
    static void joinThreadPool() {
        sp<ProcessState> ps(ProcessState::self());
        ps->startThreadPool();
        ps->giveThreadPoolName();
        IPCThreadState::self()->joinThreadPool();
    }
};


}; // namespace android
// ---------------------------------------------------------------------------
#endif // ANDROID_BINDER_SERVICE_H

instantiate() in turn calls publish(). The main job of publish() is to new up SERVICE, and here SERVICE is the template parameter AudioFlinger, so a new AudioFlinger object is created and then added to ServiceManager as a system service, which other processes can call across process boundaries.

AudioFlinger's management of audio devices

So far we have only seen how the AudioFlinger instance is created; its actual use has not started yet. Before going further, let's clarify the respective responsibilities of AudioFlinger and AudioPolicyService:

AudioPolicyService is the policy maker: it decides, for example, when an audio interface device should be opened and which device a given stream type should be routed to.

AudioFlinger is the policy executor: it handles the specifics, such as how to communicate with the audio devices, how to maintain the audio devices present in the system, and how to mix multiple audio streams.

The audio system currently supports three broad classes of audio device interfaces:

frameworks/av/services/audioflinger/AudioFlinger.cpp

static const char * const audio_interfaces[] = {
    AUDIO_HARDWARE_MODULE_ID_PRIMARY, // primary audio device, must exist
    AUDIO_HARDWARE_MODULE_ID_A2DP, // Bluetooth A2DP audio
    AUDIO_HARDWARE_MODULE_ID_USB, // USB audio
};

Each of these three audio interfaces corresponds to a shared library (".so"). Based on the vendor configuration, AudioPolicyService directs AudioFlinger to load the libraries for the interfaces the current device supports.

Under frameworks/av/services/audiopolicy there is a file audio_policy.conf (the vendor's description of its audio devices; the three interface classes above are opened according to this configuration). If an audio interface exists, the call eventually reaches AudioFlinger::loadHwModule_l(const char* name):
frameworks/av/services/audioflinger/AudioFlinger.cpp

// loadHwModule_l() must be called with AudioFlinger::mLock held
audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    // step 1: check whether the audio interface named `name` has already been added
    // to mAudioHwDevs; on the first call, mAudioHwDevs.size() is of course 0.
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }

    sp<DeviceHalInterface> dev;
    // step 2: not found above, so open the HAL library matching `name`; e.g. for
    // AUDIO_HARDWARE_MODULE_ID_A2DP this opens audio.a2dp.default.so. The resulting
    // device handle is later wrapped in an AudioHwDevice object, which is then
    // stored in mAudioHwDevs.
    int rc = mDevicesFactoryHal->openDevice(name, &dev);
    if (rc) {
        ALOGE("loadHwModule() error %d loading module %s", rc, name);
        return AUDIO_MODULE_HANDLE_NONE;
    }

    // step 3: check that this audio interface initialized successfully; if so, probe
    // whether it supports master volume/mute control.
    mHardwareStatus = AUDIO_HW_INIT;
    rc = dev->initCheck();
    mHardwareStatus = AUDIO_HW_IDLE;
    if (rc) {
        ALOGE("loadHwModule() init check error %d for module %s", rc, name);
        return AUDIO_MODULE_HANDLE_NONE;
    }

    // Check and cache this HAL's level of support for master mute and master
    // volume.  If this is the first HAL opened, and it supports the get
    // methods, use the initial values provided by the HAL as the current
    // master mute and volume settings.

    AudioHwDevice::Flags flags = static_cast<AudioHwDevice::Flags>(0);
    {  // scope for auto-lock pattern
        AutoMutex lock(mHardwareLock);

        if (0 == mAudioHwDevs.size()) {
            mHardwareStatus = AUDIO_HW_GET_MASTER_VOLUME;
            float mv;
            if (OK == dev->getMasterVolume(&mv)) {
                mMasterVolume = mv;
            }

            mHardwareStatus = AUDIO_HW_GET_MASTER_MUTE;
            bool mm;
            if (OK == dev->getMasterMute(&mm)) {
                mMasterMute = mm;
            }
        }

        mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
        if (OK == dev->setMasterVolume(mMasterVolume)) {
            flags = static_cast<AudioHwDevice::Flags>(flags |
                    AudioHwDevice::AHWD_CAN_SET_MASTER_VOLUME);
        }

        mHardwareStatus = AUDIO_HW_SET_MASTER_MUTE;
        if (OK == dev->setMasterMute(mMasterMute)) {
            flags = static_cast<AudioHwDevice::Flags>(flags |
                    AudioHwDevice::AHWD_CAN_SET_MASTER_MUTE);
        }

        mHardwareStatus = AUDIO_HW_IDLE;
    }

   // step 4: nextUniqueId() generates a unique key for this audio interface; the
   // AudioHwDevice object is then stored in mAudioHwDevs under that handle.
    audio_module_handle_t handle = (audio_module_handle_t) nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE);
    mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));

    ALOGI("loadHwModule() Loaded %s audio interface, handle %d", name, handle);

    return handle;

}

The above covers how each audio interface is loaded; each interface in turn contains many concrete output devices.
frameworks/av/media/libmedia/TypeConverter.cpp


#define MAKE_STRING_FROM_ENUM(string) { #string, string }
#define TERMINATOR { .literal = nullptr }

template <>
const OutputDeviceConverter::Table OutputDeviceConverter::mTable[] = {
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_NONE),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_EARPIECE),                         // earpiece
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_SPEAKER),                          // loudspeaker
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_SPEAKER_SAFE),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_WIRED_HEADSET),                    // wired headset (with microphone)
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_WIRED_HEADPHONE),                  // wired headphones
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_BLUETOOTH_SCO),                    // Bluetooth SCO
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET),            // Bluetooth SCO headset
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_BLUETOOTH_SCO_CARKIT),             // Bluetooth SCO car kit
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_ALL_SCO),                          // all SCO devices
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_BLUETOOTH_A2DP),                   // Bluetooth A2DP
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES),        // Bluetooth A2DP headphones
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER),           // Bluetooth A2DP speaker
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_ALL_A2DP),                         // all A2DP devices
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_AUX_DIGITAL),                      // AUX digital
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_HDMI),                             // HDMI
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET),                // analog dock headset
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET),                // digital dock headset
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_USB_ACCESSORY),                    // the rest are left uncommented; the point is how finely the devices are subdivided
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_USB_DEVICE),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_ALL_USB),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_REMOTE_SUBMIX),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_TELEPHONY_TX),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_LINE),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_HDMI_ARC),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_SPDIF),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_FM),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_AUX_LINE),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_IP),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_BUS),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_PROXY),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_USB_HEADSET),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_HEARING_AID),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_ECHO_CANCELLER),
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_DEFAULT),
    // STUB must be after DEFAULT, so the latter is picked up by toString first.
    MAKE_STRING_FROM_ENUM(AUDIO_DEVICE_OUT_STUB),
    TERMINATOR
};

The selection among audio interfaces, and among the concrete devices within each interface, is decided by AudioPolicyService; the actual audio output, however, is still done by AudioFlinger. Opening an audio output channel corresponds to AudioFlinger's openOutput() interface; note that one audio interface may contain multiple outputs.
frameworks/av/services/audioflinger/AudioFlinger.cpp


status_t AudioFlinger::openOutput(audio_module_handle_t module,
                                  audio_io_handle_t *output,
                                  audio_config_t *config,
                                  audio_devices_t *devices,
                                  const String8& address,
                                  uint32_t *latencyMs,
                                  audio_output_flags_t flags)
{
    // the `module` argument comes from the earlier loadHwModule(); it is the id of an
    // audio interface, used to look up the corresponding AudioHwDevice in mAudioHwDevs
    ALOGI("openOutput() this %p, module %d Device %#x, SamplingRate %d, Format %#08x, "
              "Channels %#x, flags %#x",
              this, module,
              (devices != NULL) ? *devices : 0,
              config->sample_rate,
              config->format,
              config->channel_mask,
              flags);

    if (devices == NULL || *devices == AUDIO_DEVICE_NONE) {
        return BAD_VALUE;
    }

    Mutex::Autolock _l(mLock);

    // the parameters are mostly forwarded to openOutput_l(), which finds a suitable
    // audio interface, opens an output stream, and creates a playback thread
    sp<ThreadBase> thread = openOutput_l(module, output, config, *devices, address, flags);
    if (thread != 0) {
        if ((flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) == 0) {
            PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
            *latencyMs = playbackThread->latency();

            // notify client processes of the new output creation
            playbackThread->ioConfigChanged(AUDIO_OUTPUT_OPENED);

            // the first primary output opened designates the primary hw device
            if ((mPrimaryHardwareDev == NULL) && (flags & AUDIO_OUTPUT_FLAG_PRIMARY)) {
                ALOGI("Using module %d as the primary audio interface", module);
                mPrimaryHardwareDev = playbackThread->getOutput()->audioHwDev;

                AutoMutex lock(mHardwareLock);
                mHardwareStatus = AUDIO_HW_SET_MODE;
                mPrimaryHardwareDev->hwDevice()->setMode(mMode);
                mHardwareStatus = AUDIO_HW_IDLE;
            }
        } else {
            MmapThread *mmapThread = (MmapThread *)thread.get();
            mmapThread->ioConfigChanged(AUDIO_OUTPUT_OPENED);
        }
        return NO_ERROR;
    }

    return NO_INIT;
}

AudioFlinger::openOutput_l()


sp<AudioFlinger::ThreadBase> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    // step 1: look up the audio interface matching the module id
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
    if (outHwDev == NULL) {
        return 0;
    }

    if (*output == AUDIO_IO_HANDLE_NONE) {
        *output = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);
    } else {
        // Audio Policy does not currently request a specific output handle.
        // If this is ever needed, see openInput_l() for example code.
        ALOGE("openOutput_l requested output handle %d is not AUDIO_IO_HANDLE_NONE", *output);
        return 0;
    }

    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;

    // FOR TESTING ONLY:
    // This if statement allows overriding the audio policy settings
    // and forcing a specific format or channel mask to the HAL/Sink device for testing.
    if (!(flags & (AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD | AUDIO_OUTPUT_FLAG_DIRECT))) {
        // Check only for Normal Mixing mode
        if (kEnableExtendedPrecision) {
            // Specify format (uncomment one below to choose)
            //config->format = AUDIO_FORMAT_PCM_FLOAT;
            //config->format = AUDIO_FORMAT_PCM_24_BIT_PACKED;
            //config->format = AUDIO_FORMAT_PCM_32_BIT;
            //config->format = AUDIO_FORMAT_PCM_8_24_BIT;
            // ALOGV("openOutput_l() upgrading format to %#08x", config->format);
        }
        if (kEnableExtendedChannels) {
            // Specify channel mask (uncomment one below to choose)
            //config->channel_mask = audio_channel_out_mask_from_count(4);  // for USB 4ch
            //config->channel_mask = audio_channel_mask_from_representation_and_bits(
            //        AUDIO_CHANNEL_REPRESENTATION_INDEX, (1 << 4) - 1);  // another 4ch example
        }
    }

    // step 2: open an output stream on the device
    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());

    mHardwareStatus = AUDIO_HW_IDLE;

    // step 3: create a playback thread; different device types get different thread
    // classes (the flag-to-thread mapping is shown in a table below)
    if (status == NO_ERROR) {
        if (flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) {
            sp<MmapPlaybackThread> thread =
                    new MmapPlaybackThread(this, *output, outHwDev, outputStream,
                                          devices, AUDIO_DEVICE_NONE, mSystemReady);
            mMmapThreads.add(*output, thread);
            ALOGV("openOutput_l() created mmap playback thread: ID %d thread %p",
                  *output, thread.get());
            return thread;
        } else {
            sp<PlaybackThread> thread;
            if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
                thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created offload output: ID %d thread %p",
                      *output, thread.get());
            } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                    || !isValidPcmSinkFormat(config->format)
                    || !isValidPcmSinkChannelMask(config->channel_mask)) {
                thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created direct output: ID %d thread %p",
                      *output, thread.get());
            } else {
                thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created mixer output: ID %d thread %p",
                      *output, thread.get());
            }
            // step 4: register the playback thread
            mPlaybackThreads.add(*output, thread);
            return thread;
        }
    }

    return 0;
}

[Figure: mapping between audio_output_flags and the playback thread types]
The inheritance relationships among these thread classes are as follows:
[Figure: inheritance hierarchy of the playback thread classes]

To be clear about what these threads do: they continuously service audio playback requests from the upper layers, passing the data down to the next layer until it is finally written to the hardware device.

We have now seen how the threads are created, but where are they started, and where is the loop that "continuously" services playback requests? Let's keep looking.

Thread startup and loop:

All of these threads indirectly inherit from RefBase, and the PlaybackThread class implements onFirstRef(), which is called the first time the target object is referenced.

frameworks/av/services/audioflinger/Threads.cpp


void AudioFlinger::PlaybackThread::onFirstRef()
{
    run(mThreadName, ANDROID_PRIORITY_URGENT_AUDIO);
}

As soon as a playback thread is first referenced, onFirstRef() is invoked and calls run(), which starts a new thread that in turn repeatedly invokes threadLoop() to service playback requests. You may well wonder how run() ends up calling threadLoop(); the referenced blog post explains it in detail.
