A2DP Scenario Analysis

Overview

This time let's look at the playback scenario under A2DP. A2DP is the Bluetooth profile used for music playback. Normally music is output only from the Bluetooth headset, but when a notification-type sound comes in, the policy chooses to output it from both the Bluetooth headset and the speaker. Since the speaker and Bluetooth are driven by different output hardware, they correspond to two different HAL libraries (two .so files) at the HAL layer. For this scenario Google derived DuplicatingThread from the PlaybackThread family (concretely from MixerThread), so AudioFlinger has to copy the mixed data into two buffers, one per output.

Main Content

On Android, when playing music over a connected Bluetooth headset, the sound obviously has to come out of the headset; that is no different from ordinary music playback, where the policy simply selects the corresponding device and plays through a single output. But there are special scenarios where a sound must come out of both the Bluetooth headset and the device speaker, and a single output can no longer cover that. Google therefore introduced the duplicate output scheme for playing this kind of sound, and in the rest of this article we walk through that code.

Connecting Bluetooth

This part of the code is in AudioPolicyManager.cpp.

From the moment the Bluetooth device starts connecting until the connection completes, setDeviceConnectionStateInt is executed three times in total, for SCO, A2DP and IN_SCO respectively.
The A2DP case triggers opening two outputs: a mixer output, and a duplicate output that links the newly created output with the primary output.
IN_SCO triggers creating an input, which is closed again right away, so strictly speaking its purpose is to refresh the device descriptors.
The close is applied to all inputs, so if a recording thread happens to be running at that moment, the AudioRecord goes through its restore path.
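
To make the entry point concrete, here is a rough sketch of what the framework's Bluetooth handling ends up calling into native code when an A2DP headset connects. The device constants, address and helper name are assumptions for illustration, not taken from the actual Bluetooth service code.

// Hypothetical sketch: the three connection notifications that eventually reach
// AudioPolicyManager::setDeviceConnectionStateInt() for a newly paired headset.
#include <media/AudioSystem.h>

static void onBluetoothHeadsetConnected(const char *btAddr /* e.g. "00:11:22:33:44:55" */,
                                        const char *name)
{
    // 1. SCO output device (used for calls)
    AudioSystem::setDeviceConnectionState(AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET,
                                          AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
                                          btAddr, name);
    // 2. A2DP output device (used for music); this is the call that ends up opening
    //    the A2DP mixer output plus the duplicate output
    AudioSystem::setDeviceConnectionState(AUDIO_DEVICE_OUT_BLUETOOTH_A2DP,
                                          AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
                                          btAddr, name);
    // 3. SCO input device (the headset microphone); the input opened for it is closed
    //    again right away, which refreshes the device descriptors
    AudioSystem::setDeviceConnectionState(AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET,
                                          AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
                                          btAddr, name);
}

With that in mind, here is setDeviceConnectionStateInt itself: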

status_t AudioPolicyManagerCustom::setDeviceConnectionStateInt(audio_devices_t device,
                                                         audio_policy_dev_state_t state,
                                                         const char *device_address,
                                                         const char *device_name)
{
    // Only a single device can be connected or disconnected at a time
    if (!audio_is_output_device(device) && !audio_is_input_device(device)) return BAD_VALUE;
    // Find the corresponding device descriptor object
    sp<DeviceDescriptor> devDesc =
            mHwModules.getDeviceDescriptor(device, device_address, device_name);

    // If this is an output device
    if (audio_is_output_device(device)) {
        SortedVector <audio_io_handle_t> outputs;

        ssize_t index = mAvailableOutputDevices.indexOf(devDesc);
        // We may update the outputs below, so back them up here; this copy is used later
        // in checkOutputForAllStrategies
        mPreviousOutputs = mOutputs;
        switch (state)
        {
        // Device connected
        case AUDIO_POLICY_DEVICE_STATE_AVAILABLE: {
            // The device is already connected, so just return INVALID_OPERATION
            if (index >= 0) {
                return INVALID_OPERATION;
            }

            // Add the new device to mAvailableOutputDevices
            index = mAvailableOutputDevices.add(devDesc);
            if (index >= 0) {
                // Get the HwModule (the HAL .so) that handles this device
                sp<HwModule> module = mHwModules.getModuleForDevice(device);
                if (module == 0) {
                    mAvailableOutputDevices.remove(devDesc);
                    return INVALID_OPERATION;
                }
                // Save the module object into the device descriptor
                mAvailableOutputDevices[index]->attach(module);
            } else {
                return NO_MEMORY;
            }
            // Connect: this call opens every profile that supports this device and updates its parameters
            if (checkOutputsForDevice(devDesc, state, outputs, devDesc->mAddress) != NO_ERROR) {
                mAvailableOutputDevices.remove(devDesc);
                return INVALID_OPERATION;
            }
            // Tell the policy engine that a new device has been connected
            mEngine->setDeviceConnectionState(devDesc, state);
            // Notify the HAL layer that a new device has been connected
            AudioParameter param = AudioParameter(devDesc->mAddress);
            param.addInt(String8(AUDIO_PARAMETER_DEVICE_CONNECT), device);
            mpClientInterface->setParameters(AUDIO_IO_HANDLE_NONE, param.toString());

            } break;
        // Device disconnected
        case AUDIO_POLICY_DEVICE_STATE_UNAVAILABLE: {
            if (index < 0) {
                return INVALID_OPERATION;
            }
            // Pass the disconnect information down to the HAL layer
            AudioParameter param = AudioParameter(devDesc->mAddress);
            param.addInt(String8(AUDIO_PARAMETER_DEVICE_DISCONNECT), device);
            mpClientInterface->setParameters(AUDIO_IO_HANDLE_NONE, param.toString());

            // remove device from available output devices
            mAvailableOutputDevices.remove(devDesc);
            // Disconnect: find the outputs that need to be closed and reset the profile parameters
            checkOutputsForDevice(devDesc, state, outputs, devDesc->mAddress);

            // Propagate device availability to Engine
            mEngine->setDeviceConnectionState(devDesc, state);
            } break;

        default:
            return BAD_VALUE;
        }
        // In the Bluetooth case, two outputs have already been opened by this point. Here we
        // check whether A2DP should be suspended: basically, when SCO is connected and we are
        // in a call, or record/communication is forced to SCO, A2DP gets suspended.
        checkA2dpSuspend();
        // For every strategy whose outputs have changed: streams active on the old outputs are
        // muted and then unmuted; for the media strategy, global effects are moved to the new
        // playback output; and the streams of that strategy are invalidated so their
        // AudioTracks re-select an output.
        checkOutputForAllStrategies();
        // After checkOutputForAllStrategies() the outputs that are no longer needed must be
        // closed: on disconnect, or for the direct outputs that checkOutputsForDevice() opened
        // only to query parameters.
        if (!outputs.isEmpty()) {
            for (size_t i = 0; i < outputs.size(); i++) {
                sp<SwAudioOutputDescriptor> desc = mOutputs.valueFor(outputs[i]);
                // On device disconnect, close the unused outputs; also close direct outputs
                // that were opened only to query parameters (in checkOutputsForDevice)
                if ((state == AUDIO_POLICY_DEVICE_STATE_UNAVAILABLE) ||
                        (((desc->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) != 0) &&
                         (desc->mDirectOpenCount == 0))) {
                    closeOutput(outputs[i]);
                }
            }
            // check again after closing A2DP output to reset mA2dpSuspended if needed
            checkA2dpSuspend();
        }
        // Update the cached device per strategy and refresh mOutputs
        updateDevicesAndOutputs();
        // If we are in a call, get the new device for the primary output and update the call routing
        if (mEngine->getPhoneState() == AUDIO_MODE_IN_CALL && hasPrimaryOutput()) {
            audio_devices_t newDevice = getNewOutputDevice(mPrimaryOutput, false /*fromCache*/);
            updateCallRouting(newDevice);
        }
        for (size_t i = 0; i < mOutputs.size(); i++) {
            sp<SwAudioOutputDescriptor> desc = mOutputs.valueAt(i);
            // Skip only the primary output while in a call (it was already handled by
            // updateCallRouting above); every other output gets its device set again, because
            // something may be playing on it.
            if ((mEngine->getPhoneState() != AUDIO_MODE_IN_CALL) || (desc != mPrimaryOutput)) {
                audio_devices_t newDevice = getNewOutputDevice(desc, true /*fromCache*/);
                // We iterate over all outputs, including the duplicating one, but its two
                // member outputs are also in the list, so the duplicating output itself is
                // never forced. This avoids a forced re-route right after one of its members
                // was just updated: another output may be playing without needing the
                // duplicating path, and forcing it could break the routing.
                bool force = !desc->isDuplicated()
                        && (!device_distinguishes_on_address(device)
                                // always force when disconnecting (a non-duplicated device)
                                || (state == AUDIO_POLICY_DEVICE_STATE_UNAVAILABLE));
                setOutputDevice(desc, newDevice, force, 0);
            }
        }
        // Notify the upper layers that the audio ports have changed
        mpClientInterface->onAudioPortListUpdate();
        return NO_ERROR;
    }  // end if is output device

    // handle input devices
    if (audio_is_input_device(device)) {
        SortedVector <audio_io_handle_t> inputs;

        ssize_t index = mAvailableInputDevices.indexOf(devDesc);
        switch (state)
        {
        // handle input device connection
        case AUDIO_POLICY_DEVICE_STATE_AVAILABLE: {
            if (index >= 0) {
                return INVALID_OPERATION;
            }
            sp<HwModule> module = mHwModules.getModuleForDevice(device);
            if (module == NULL) {
                return INVALID_OPERATION;
            }
            // Find the inputs that match this device
            if (checkInputsForDevice(devDesc, state, inputs, devDesc->mAddress) != NO_ERROR) {
                return INVALID_OPERATION;
            }

            index = mAvailableInputDevices.add(devDesc);
            if (index >= 0) {
                mAvailableInputDevices[index]->attach(module);
            } else {
                return NO_MEMORY;
            }

            // Set connect to HALs
            AudioParameter param = AudioParameter(devDesc->mAddress);
            param.addInt(String8(AUDIO_PARAMETER_DEVICE_CONNECT), device);
            mpClientInterface->setParameters(AUDIO_IO_HANDLE_NONE, param.toString());

            // Propagate device availability to Engine
            mEngine->setDeviceConnectionState(devDesc, state);
        } break;

        // handle input device disconnection
        case AUDIO_POLICY_DEVICE_STATE_UNAVAILABLE: {
            if (index < 0) {
                return INVALID_OPERATION;
            }

            // Set Disconnect to HALs
            AudioParameter param = AudioParameter(devDesc->mAddress);
            param.addInt(String8(AUDIO_PARAMETER_DEVICE_DISCONNECT), device);
            mpClientInterface->setParameters(AUDIO_IO_HANDLE_NONE, param.toString());
            // Find the inputs affected by this device
            checkInputsForDevice(devDesc, state, inputs, devDesc->mAddress);
            mAvailableInputDevices.remove(devDesc);

            // Propagate device availability to Engine
            mEngine->setDeviceConnectionState(devDesc, state);
        } break;

        default:
            return BAD_VALUE;
        }
        // Close all inputs; closing them triggers a restore of the records on those inputs,
        // so recording continues through a newly opened input.
        closeAllInputs();
        // If we are still in a call, update the call routing; note that this updates the
        // output device, not the input.
        if (mEngine->getPhoneState() == AUDIO_MODE_IN_CALL && hasPrimaryOutput()) {
            audio_devices_t newDevice = getNewOutputDevice(mPrimaryOutput, false /*fromCache*/);
            updateCallRouting(newDevice);
        }

        mpClientInterface->onAudioPortListUpdate();
        return NO_ERROR;
    } // end if is input device

    return BAD_VALUE;
}

The logic of setDeviceConnectionStateInt is worth summarizing at a high level. For the output-device, device-connected case:

  1. Use the device information to look up the profiles in audio_policy.conf that support this device.
  2. For the matching profiles that are not yet open, open them and update the device information.
  3. Check whether the A2DP output needs to be suspended.
  4. Process every strategy: mute and unmute the streams active on the old outputs; move effects for the media strategy; invalidate the streams of any output that has a stream running.
  5. Close the direct outputs (opened only for parameter queries).
  6. Update the output device of each strategy and refresh mOutputs.
  7. If a call is in progress, update the call routing.
  8. Set the output device of every output.

That is the high-level picture; now we need to dig into the individual functions, starting with checkOutputsForDevice.

status_t AudioPolicyManager::checkOutputsForDevice(const sp<DeviceDescriptor> devDesc,
                                                   audio_policy_dev_state_t state,
                                                   SortedVector<audio_io_handle_t>& outputs,
                                                   const String8 address)
{
    audio_devices_t device = devDesc->type();
    sp<SwAudioOutputDescriptor> desc;

    if (audio_device_is_digital(device)) {
        // erase all current sample rates, formats and channel masks
        devDesc->clearCapabilities();
    }

    if (state == AUDIO_POLICY_DEVICE_STATE_AVAILABLE) {
        // 1. First list all outputs that can already be routed to this device
        for (size_t i = 0; i < mOutputs.size(); i++) {
            desc = mOutputs.valueAt(i);
            if (!desc->isDuplicated() && (desc->supportedDevices() & device)) {
                if (!device_distinguishes_on_address(device)) {
                    outputs.add(mOutputs.keyAt(i));
                } else {
                    findIoHandlesByAddress(desc, device, address, outputs);
                }
            }
        }
        // 2. Then walk through all profiles and see which ones can be routed to this device
        SortedVector< sp<IOProfile> > profiles;
        for (size_t i = 0; i < mHwModules.size(); i++)
        {
            if (mHwModules[i]->mHandle == 0) {
                continue;
            }
            for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
            {
                sp<IOProfile> profile = mHwModules[i]->mOutputProfiles[j];
                if (profile->mSupportedDevices.types() & device) {
                    if (!device_distinguishes_on_address(device) ||
                            address == profile->mSupportedDevices[0]->mAddress) {
                        profiles.add(profile);
                    }
                }
            }
        }

        if (profiles.isEmpty() && outputs.isEmpty()) {
            return BAD_VALUE;
        }

        // Open outputs for the matching profiles if needed. Direct outputs are opened
        // temporarily to query dynamic parameters and are closed later in setDeviceConnectionState().
        for (ssize_t profile_index = 0; profile_index < (ssize_t)profiles.size(); profile_index++) {
            sp<IOProfile> profile = profiles[profile_index];

            // If an output is already open for this profile, move on to the next profile
            size_t j;
            for (j = 0; j < outputs.size(); j++) {
                desc = mOutputs.valueFor(outputs.itemAt(j));
                if (!desc->isDuplicated() && desc->mProfile == profile) {
                    // An open output matches this profile: import its capabilities into devDesc
                    if (audio_device_is_digital(device)) {
                        devDesc->importAudioPort(profile);
                    }
                    break;
                }
            }
            // An output is already open for this profile, so skip it; otherwise fall through and open one.
            if (j != outputs.size()) {
                continue;
            }

            desc = new SwAudioOutputDescriptor(profile, mpClientInterface);
            desc->mDevice = device;
            audio_config_t config = AUDIO_CONFIG_INITIALIZER;
            config.sample_rate = desc->mSamplingRate;
            config.channel_mask = desc->mChannelMask;
            config.format = desc->mFormat;
            config.offload_info.sample_rate = desc->mSamplingRate;
            config.offload_info.channel_mask = desc->mChannelMask;
            config.offload_info.format = desc->mFormat;
            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
            // Try to open an output for this profile
            status_t status = mpClientInterface->openOutput(profile->getModuleHandle(),
                                                            &output,
                                                            &config,
                                                            &desc->mDevice,
                                                            address,
                                                            &desc->mLatency,
                                                            desc->mFlags);
            if (status == NO_ERROR) {
                // Copy the negotiated parameters back into the descriptor
                desc->mSamplingRate = config.sample_rate;
                desc->mChannelMask = config.channel_mask;
                desc->mFormat = config.format;

                // Here is where the out_set_parameters() for card & device gets called
                if (!address.isEmpty()) {
                    char *param = audio_device_address_to_parameter(device, address);
                    mpClientInterface->setParameters(output, String8(param));
                    free(param);
                }

                // Here is where we step through and resolve any "dynamic" fields
                String8 reply;
                char *value;
                // The code below fills in incomplete profile data by querying the HAL.
                // If the profile has no sampling rates recorded, ask the HAL for them.
                if (profile->mSamplingRates[0] == 0) {
                    reply = mpClientInterface->getParameters(output,
                                            String8(AUDIO_PARAMETER_STREAM_SUP_SAMPLING_RATES));
                    value = strpbrk((char *)reply.string(), "=");
                    if (value != NULL) {
                        profile->loadSamplingRates(value + 1);
                    }
                }
                if (profile->mFormats[0] == AUDIO_FORMAT_DEFAULT) {
                    reply = mpClientInterface->getParameters(output,
                                                   String8(AUDIO_PARAMETER_STREAM_SUP_FORMATS));
                    value = strpbrk((char *)reply.string(), "=");
                    if (value != NULL) {
                        profile->loadFormats(value + 1);
                    }
                }
                if (profile->mChannelMasks[0] == 0) {
                    reply = mpClientInterface->getParameters(output,
                                                  String8(AUDIO_PARAMETER_STREAM_SUP_CHANNELS));
                    value = strpbrk((char *)reply.string(), "=");
                    if (value != NULL) {
                        profile->loadOutChannels(value + 1);
                    }
                }
                // 1. If any of mSamplingRates, mFormats or mChannelMasks is still missing, close the output.
                // 2. If any of the three arrays still has a leading dynamic (0) entry, close the
                //    output, pick concrete parameters and reopen it.
                if (((profile->mSamplingRates[0] == 0) &&
                         (profile->mSamplingRates.size() < 2)) ||
                     ((profile->mFormats[0] == AUDIO_FORMAT_DEFAULT) &&
                         (profile->mFormats.size() < 2)) ||
                     ((profile->mChannelMasks[0] == 0) &&
                         (profile->mChannelMasks.size() < 2))) {
                    mpClientInterface->closeOutput(output);
                    output = AUDIO_IO_HANDLE_NONE;
                } else if (profile->mSamplingRates[0] == 0 || profile->mFormats[0] == 0 ||
                            profile->mChannelMasks[0] == 0) {
                    mpClientInterface->closeOutput(output);
                    // Pick suitable parameters from the arrays according to fixed rules
                    config.sample_rate = profile->pickSamplingRate();
                    config.channel_mask = profile->pickChannelMask();
                    config.format = profile->pickFormat();
                    config.offload_info.sample_rate = config.sample_rate;
                    config.offload_info.channel_mask = config.channel_mask;
                    config.offload_info.format = config.format;
                    status = mpClientInterface->openOutput(profile->getModuleHandle(),
                                                           &output,
                                                           &config,
                                                           &desc->mDevice,
                                                           address,
                                                           &desc->mLatency,
                                                           desc->mFlags);
                    if (status == NO_ERROR) {
                        desc->mSamplingRate = config.sample_rate;
                        desc->mChannelMask = config.channel_mask;
                        desc->mFormat = config.format;
                    } else {
                        output = AUDIO_IO_HANDLE_NONE;
                    }
                }
                // The output was opened successfully
                if (output != AUDIO_IO_HANDLE_NONE) {
                    // Add it to mOutputs
                    addOutput(output, desc);
                    // If the device is distinguished by address and has a non-default address,
                    // look up the matching AudioPolicyMix and attach this output to it
                    if (device_distinguishes_on_address(device) && address != "0") {
                        sp<AudioPolicyMix> policyMix;
                        if (mPolicyMixes.getAudioPolicyMix(address, policyMix) != NO_ERROR) {
                            ALOGE("checkOutputsForDevice() cannot find policy for address %s",
                                  address.string());
                        }
                        policyMix->setOutput(desc);
                        desc->mPolicyMix = policyMix->getMix();
                    // Already-open outputs were filtered out above, so this is necessarily a
                    // new profile. If the freshly opened output is not direct and a primary
                    // output exists, try to create a duplicating output.
                    } else if (((desc->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) == 0) &&
                                    hasPrimaryOutput()) {
                        // no duplicated output for direct outputs and
                        // outputs used by dynamic policy mixes
                        audio_io_handle_t duplicatedOutput = AUDIO_IO_HANDLE_NONE;

                        // set initial stream volume for device
                        applyStreamVolumes(desc, device, 0, true);

                        //TODO: configure audio effect output stage here

                        // Open a duplicate output
                        duplicatedOutput =
                                mpClientInterface->openDuplicateOutput(output,
                                                                       mPrimaryOutput->mIoHandle);
                        if (duplicatedOutput != AUDIO_IO_HANDLE_NONE) {
                            // add duplicated output descriptor
                            sp<SwAudioOutputDescriptor> dupOutputDesc =
                                    new SwAudioOutputDescriptor(NULL, mpClientInterface);
                            dupOutputDesc->mOutput1 = mPrimaryOutput;
                            dupOutputDesc->mOutput2 = desc;
                            dupOutputDesc->mSamplingRate = desc->mSamplingRate;
                            dupOutputDesc->mFormat = desc->mFormat;
                            dupOutputDesc->mChannelMask = desc->mChannelMask;
                            dupOutputDesc->mLatency = desc->mLatency;
                            addOutput(duplicatedOutput, dupOutputDesc);
                            applyStreamVolumes(dupOutputDesc, device, 0, true);
                        } else {
                            mpClientInterface->closeOutput(output);
                            removeOutput(output);
                            nextAudioPortGeneration();
                            output = AUDIO_IO_HANDLE_NONE;
                        }
                    }
                }
            } else {
                output = AUDIO_IO_HANDLE_NONE;
            }
            if (output == AUDIO_IO_HANDLE_NONE) {
                profiles.removeAt(profile_index);
                profile_index--;
            } else {
                outputs.add(output);
                // Load digital format info only for digital devices
                if (audio_device_is_digital(device)) {
                    devDesc->importAudioPort(profile);
                }
                // For address-distinguished devices the device value itself never changes,
                // so setOutputDevice must be forced.
                if (device_distinguishes_on_address(device)) {
                    setOutputDevice(desc, device, true/*force*/, 0/*delay*/,
                            NULL/*patch handle*/, address.string());
                }
            }
        }

        if (profiles.isEmpty()) {
            return BAD_VALUE;
        }
    } else { // disconnect case
        // Check whether any currently open output needs to be closed
        for (size_t i = 0; i < mOutputs.size(); i++) {
            desc = mOutputs.valueAt(i);
            // Must not be a duplicating output
            if (!desc->isDuplicated()) {
                // exact match on device
                // For address-distinguished devices the profile supports exactly one device.
                // If none of the devices supported by this output's profile are still in the
                // available list, close the output.
                if (device_distinguishes_on_address(device) &&
                        (desc->supportedDevices() == device)) {
                    findIoHandlesByAddress(desc, device, address, outputs);
                } else if (!(desc->supportedDevices() & mAvailableOutputDevices.types())) {
                    outputs.add(mOutputs.keyAt(i));
                }
            }
        }
        // Clear any profiles associated with the disconnected device.
        for (size_t i = 0; i < mHwModules.size(); i++)
        {
            if (mHwModules[i]->mHandle == 0) {
                continue;
            }
            for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
            {
                sp<IOProfile> profile = mHwModules[i]->mOutputProfiles[j];
                if (profile->mSupportedDevices.types() & device) {
                    if (profile->mSamplingRates[0] == 0) {
                        profile->mSamplingRates.clear();
                        profile->mSamplingRates.add(0);
                    }
                    if (profile->mFormats[0] == AUDIO_FORMAT_DEFAULT) {
                        profile->mFormats.clear();
                        profile->mFormats.add(AUDIO_FORMAT_DEFAULT);
                    }
                    if (profile->mChannelMasks[0] == 0) {
                        profile->mChannelMasks.clear();
                        profile->mChannelMasks.add(0);
                    }
                }
            }
        }
    }
    return NO_ERROR;
}

For our scenario, we now know from the above that two outputs have been opened: a mixer output and a duplicate output.
Next let's look at checkOutputForAllStrategies, which is executed right after the outputs have been found.

void AudioPolicyManager::checkOutputForAllStrategies()
{
    if (mEngine->getForceUse(AUDIO_POLICY_FORCE_FOR_SYSTEM) == AUDIO_POLICY_FORCE_SYSTEM_ENFORCED)
        checkOutputForStrategy(STRATEGY_ENFORCED_AUDIBLE);
    checkOutputForStrategy(STRATEGY_PHONE);
    if (mEngine->getForceUse(AUDIO_POLICY_FORCE_FOR_SYSTEM) != AUDIO_POLICY_FORCE_SYSTEM_ENFORCED)
        checkOutputForStrategy(STRATEGY_ENFORCED_AUDIBLE);
    checkOutputForStrategy(STRATEGY_SONIFICATION);
    checkOutputForStrategy(STRATEGY_SONIFICATION_RESPECTFUL);
    checkOutputForStrategy(STRATEGY_ACCESSIBILITY);
    checkOutputForStrategy(STRATEGY_MEDIA);
    checkOutputForStrategy(STRATEGY_DTMF);
    checkOutputForStrategy(STRATEGY_REROUTING);
}

void AudioPolicyManager::checkOutputForStrategy(routing_strategy strategy)
{
    audio_devices_t oldDevice = getDeviceForStrategy(strategy, true /*fromCache*/);
    audio_devices_t newDevice = getDeviceForStrategy(strategy, false /*fromCache*/);
    SortedVector<audio_io_handle_t> srcOutputs = getOutputsForDevice(oldDevice, mPreviousOutputs);
    SortedVector<audio_io_handle_t> dstOutputs = getOutputsForDevice(newDevice, mOutputs);

    // also take into account external policy-related changes: add all outputs which are
    // associated with policies in the "before" and "after" output vectors
    // Also include the outputs of external policies in srcOutputs.
    for (size_t i = 0 ; i < mPreviousOutputs.size() ; i++) {
        const sp<SwAudioOutputDescriptor> desc = mPreviousOutputs.valueAt(i);
        if (desc != 0 && desc->mPolicyMix != NULL) {
            srcOutputs.add(desc->mIoHandle);
        }
    }
    for (size_t i = 0 ; i < mOutputs.size() ; i++) {
        const sp<SwAudioOutputDescriptor> desc = mOutputs.valueAt(i);
        if (desc != 0 && desc->mPolicyMix != NULL) {
            dstOutputs.add(desc->mIoHandle);
        }
    }
    // The logic below runs whenever the two output vectors differ.
    if (!vectorsEqual(srcOutputs,dstOutputs)) {
        // Moving the tracks between outputs requires muting first and unmuting afterwards.
        for (size_t i = 0; i < srcOutputs.size(); i++) {
            sp<SwAudioOutputDescriptor> desc = mOutputs.valueFor(srcOutputs[i]);
            if (isStrategyActive(desc, strategy)) {
                setStrategyMute(strategy, true, desc);
                setStrategyMute(strategy, false, desc, MUTE_TIME_MS, newDevice);
            }
        }

        // Move effects associated to this strategy from previous output to new output
        // Move the effects of the media strategy.
        if (strategy == STRATEGY_MEDIA) {
            // Pick the best output for effects; priority: offload > deep buffer > first in list
            audio_io_handle_t fxOutput = selectOutputForEffects(dstOutputs);
            SortedVector<audio_io_handle_t> moved;
            for (size_t i = 0; i < mEffects.size(); i++) {
                sp<EffectDescriptor> effectDesc = mEffects.valueAt(i);
                // Only global (output mix) effects whose output differs from the newly
                // selected one are moved.
                if (effectDesc->mSession == AUDIO_SESSION_OUTPUT_MIX &&
                        effectDesc->mIo != fxOutput) {
                    if (moved.indexOf(effectDesc->mIo) < 0) {
                        mpClientInterface->moveEffects(AUDIO_SESSION_OUTPUT_MIX, effectDesc->mIo,
                                                       fxOutput);
                        moved.add(effectDesc->mIo);
                    }
                    effectDesc->mIo = fxOutput;
                }
            }
        }
        // Move the tracks of this strategy from the old output to the new one; invalidating
        // the streams is enough, the AudioTrack side takes care of picking a new output.
        for (int i = 0; i < AUDIO_STREAM_CNT; i++) {
            if (i == AUDIO_STREAM_PATCH) {
                continue;
            }
            if (getStrategy((audio_stream_type_t)i) == strategy) {
                mpClientInterface->invalidateStream((audio_stream_type_t)i);
            }
        }
    }
}

This function mainly does the following:

  1. Switches streams to the new output: the strategy is muted first and unmuted again after MUTE_TIME_MS (2 s).
  2. For the media strategy, moves the global effects to the selected output.
  3. Invalidates the currently active streams, which triggers their AudioTracks to restore themselves.

selectOutputForEffects is the policy function that chooses which output the effects should live on.

audio_io_handle_t AudioPolicyManager::selectOutputForEffects(
                                            const SortedVector<audio_io_handle_t>& outputs)
{
    // Priority:
    // 1: offload output. If an effect is not offloadable, AudioFlinger will invalidate the
    //    track, the offload output will be closed and the effect moved to a PCM output.
    // 2: deep buffer output
    // 3: the first output in the list

    if (outputs.size() == 0) {
        return 0;
    }

    audio_io_handle_t outputOffloaded = 0;
    audio_io_handle_t outputDeepBuffer = 0;

    for (size_t i = 0; i < outputs.size(); i++) {
        sp<SwAudioOutputDescriptor> desc = mOutputs.valueFor(outputs[i]);
        if ((desc->mFlags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) != 0) {
            outputOffloaded = outputs[i];
        }
        if ((desc->mFlags & AUDIO_OUTPUT_FLAG_DEEP_BUFFER) != 0) {
            outputDeepBuffer = outputs[i];
        }
    }
    if (outputOffloaded != 0) {
        return outputOffloaded;
    }
    if (outputDeepBuffer != 0) {
        return outputDeepBuffer;
    }

    return outputs[0];
}

So the priority is: offload > deep buffer > first output in the list.

Next let's look at the logic that updates the call routing, the function updateCallRouting.

void AudioPolicyManager::updateCallRouting(audio_devices_t rxDevice, int delayMs)
{
    bool createTxPatch = false;
    struct audio_patch patch;
    patch.num_sources = 1;
    patch.num_sinks = 1;
    status_t status;
    audio_patch_handle_t afPatchHandle;
    DeviceVector deviceList;

    if(!hasPrimaryOutput()) {
        return;
    }
    audio_devices_t txDevice = getDeviceAndMixForInputSource(AUDIO_SOURCE_VOICE_COMMUNICATION);

    // release existing RX patch if any
    if (mCallRxPatch != 0) {
        mpClientInterface->releaseAudioPatch(mCallRxPatch->mAfPatchHandle, 0);
        mCallRxPatch.clear();
    }
    // release TX patch if any
    if (mCallTxPatch != 0) {
        mpClientInterface->releaseAudioPatch(mCallTxPatch->mAfPatchHandle, 0);
        mCallTxPatch.clear();
    }

    // If the RX device is handled by the primary HAL, use the legacy setOutputDevice() path;
    // otherwise switch the path by creating an audio patch.
    if (availablePrimaryOutputDevices() & rxDevice) {
        setOutputDevice(mPrimaryOutput, rxDevice, true, delayMs);
        // If the TX device is also on the primary HW module, setOutputDevice() will take care
        // of it due to legacy implementation. If not, create a patch.
        if ((availablePrimaryInputDevices() & txDevice & ~AUDIO_DEVICE_BIT_IN)
                == AUDIO_DEVICE_NONE) {
            createTxPatch = true;
        }
    } else {
        // create RX path audio patch
        deviceList = mAvailableOutputDevices.getDevicesFromType(rxDevice);
        ALOG_ASSERT(!deviceList.isEmpty(),
                    "updateCallRouting() selected device not in output device list");
        sp<DeviceDescriptor> rxSinkDeviceDesc = deviceList.itemAt(0);
        deviceList = mAvailableInputDevices.getDevicesFromType(AUDIO_DEVICE_IN_TELEPHONY_RX);
        ALOG_ASSERT(!deviceList.isEmpty(),
                    "updateCallRouting() no telephony RX device");
        sp<DeviceDescriptor> rxSourceDeviceDesc = deviceList.itemAt(0);

        rxSourceDeviceDesc->toAudioPortConfig(&patch.sources[0]);
        rxSinkDeviceDesc->toAudioPortConfig(&patch.sinks[0]);

        // request to reuse existing output stream if one is already opened to reach the RX device
        SortedVector<audio_io_handle_t> outputs =
                                getOutputsForDevice(rxDevice, mOutputs);
        audio_io_handle_t output = selectOutput(outputs,
                                                AUDIO_OUTPUT_FLAG_NONE,
                                                AUDIO_FORMAT_INVALID);
        if (output != AUDIO_IO_HANDLE_NONE) {
            sp<SwAudioOutputDescriptor> outputDesc = mOutputs.valueFor(output);
            ALOG_ASSERT(!outputDesc->isDuplicated(),
                        "updateCallRouting() RX device output is duplicated");
            outputDesc->toAudioPortConfig(&patch.sources[1]);
            patch.sources[1].ext.mix.usecase.stream = AUDIO_STREAM_PATCH;
            patch.num_sources = 2;
        }

        afPatchHandle = AUDIO_PATCH_HANDLE_NONE;
        status = mpClientInterface->createAudioPatch(&patch, &afPatchHandle, 0);
        ALOGW_IF(status != NO_ERROR, "updateCallRouting() error %d creating RX audio patch",
                                               status);
        if (status == NO_ERROR) {
            mCallRxPatch = new AudioPatch(&patch, mUidCached);
            mCallRxPatch->mAfPatchHandle = afPatchHandle;
            mCallRxPatch->mUid = mUidCached;
        }
        createTxPatch = true;
    }
    if (createTxPatch) {

        struct audio_patch patch;
        patch.num_sources = 1;
        patch.num_sinks = 1;
        deviceList = mAvailableInputDevices.getDevicesFromType(txDevice);
        ALOG_ASSERT(!deviceList.isEmpty(),
                    "updateCallRouting() selected device not in input device list");
        sp<DeviceDescriptor> txSourceDeviceDesc = deviceList.itemAt(0);
        txSourceDeviceDesc->toAudioPortConfig(&patch.sources[0]);
        deviceList = mAvailableOutputDevices.getDevicesFromType(AUDIO_DEVICE_OUT_TELEPHONY_TX);
        ALOG_ASSERT(!deviceList.isEmpty(),
                    "updateCallRouting() no telephony TX device");
        sp<DeviceDescriptor> txSinkDeviceDesc = deviceList.itemAt(0);
        txSinkDeviceDesc->toAudioPortConfig(&patch.sinks[0]);

        SortedVector<audio_io_handle_t> outputs =
                                getOutputsForDevice(AUDIO_DEVICE_OUT_TELEPHONY_TX, mOutputs);
        audio_io_handle_t output = selectOutput(outputs,
                                                AUDIO_OUTPUT_FLAG_NONE,
                                                AUDIO_FORMAT_INVALID);
        // request to reuse existing output stream if one is already opened to reach the TX
        // path output device
        if (output != AUDIO_IO_HANDLE_NONE) {
            sp<AudioOutputDescriptor> outputDesc = mOutputs.valueFor(output);
            ALOG_ASSERT(!outputDesc->isDuplicated(),
                        "updateCallRouting() RX device output is duplicated");
            outputDesc->toAudioPortConfig(&patch.sources[1]);
            patch.sources[1].ext.mix.usecase.stream = AUDIO_STREAM_PATCH;
            patch.num_sources = 2;
        }

        afPatchHandle = AUDIO_PATCH_HANDLE_NONE;
        status = mpClientInterface->createAudioPatch(&patch, &afPatchHandle, 0);
        ALOGW_IF(status != NO_ERROR, "setPhoneState() error %d creating TX audio patch",
                                               status);
        if (status == NO_ERROR) {
            mCallTxPatch = new AudioPatch(&patch, mUidCached);
            mCallTxPatch->mAfPatchHandle = afPatchHandle;
            mCallTxPatch->mUid = mUidCached;
        }
    }
}

To support call scenarios where the voice path does not go through the primary HAL, this function introduces the audio patch mechanism, and at most one TX patch and one RX patch exist at a time.
If the device is on the primary HAL, the handling is the same as before: setOutputDevice() is used.
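
As a rough illustration of what such a patch looks like, here is a minimal hand-rolled sketch (not the code above): a device-to-device RX patch from the telephony downlink to a Bluetooth SCO headset. The sink device is chosen for illustration, and field population is simplified compared to DeviceDescriptor::toAudioPortConfig().

// Sketch: a device-to-device RX patch, voice downlink -> Bluetooth SCO headset.
// Assumes we are inside AudioPolicyManager, so mpClientInterface is available.
struct audio_patch patch = {};                     // zero-init; no patch handle yet

patch.num_sources = 1;
patch.sources[0].role = AUDIO_PORT_ROLE_SOURCE;    // where the audio comes from
patch.sources[0].type = AUDIO_PORT_TYPE_DEVICE;
patch.sources[0].ext.device.type = AUDIO_DEVICE_IN_TELEPHONY_RX;

patch.num_sinks = 1;
patch.sinks[0].role = AUDIO_PORT_ROLE_SINK;        // where the audio goes
patch.sinks[0].type = AUDIO_PORT_TYPE_DEVICE;
patch.sinks[0].ext.device.type = AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET;

// AudioFlinger either hands the patch to the HAL or, if an existing output can reach the
// sink, reuses that output as an extra mix source (that is what patch.sources[1] is used
// for in updateCallRouting above).
audio_patch_handle_t afPatchHandle = AUDIO_PATCH_HANDLE_NONE;
status_t status = mpClientInterface->createAudioPatch(&patch, &afPatchHandle, 0 /*delayMs*/);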

So far we have mostly been looking at output devices; now let's look at the input-device side, namely checkInputsForDevice.

status_t AudioPolicyManager::checkInputsForDevice(const sp<DeviceDescriptor> devDesc,
                                                  audio_policy_dev_state_t state,
                                                  SortedVector<audio_io_handle_t>& inputs,
                                                  const String8 address)
{
    audio_devices_t device = devDesc->type();
    sp<AudioInputDescriptor> desc;

    if (audio_device_is_digital(device)) {
        // erase all current sample rates, formats and channel masks
        devDesc->clearCapabilities();
    }

    if (state == AUDIO_POLICY_DEVICE_STATE_AVAILABLE) {
        // first list already open inputs that can be routed to this device
        for (size_t input_index = 0; input_index < mInputs.size(); input_index++) {
            desc = mInputs.valueAt(input_index);
            if (desc->mProfile->mSupportedDevices.types() & (device & ~AUDIO_DEVICE_BIT_IN)) {
               inputs.add(mInputs.keyAt(input_index));
            }
        }

        // then look for input profiles that can be routed to this device
        SortedVector< sp<IOProfile> > profiles;
        for (size_t module_idx = 0; module_idx < mHwModules.size(); module_idx++)
        {
            if (mHwModules[module_idx]->mHandle == 0) {
                continue;
            }
            for (size_t profile_index = 0;
                 profile_index < mHwModules[module_idx]->mInputProfiles.size();
                 profile_index++)
            {
                sp<IOProfile> profile = mHwModules[module_idx]->mInputProfiles[profile_index];

                if (profile->mSupportedDevices.types() & (device & ~AUDIO_DEVICE_BIT_IN)) {
                    if (!device_distinguishes_on_address(device) ||
                            address == profile->mSupportedDevices[0]->mAddress) {
                        profiles.add(profile);
                        ALOGV("checkInputsForDevice(): adding profile %zu from module %zu",
                              profile_index, module_idx);
                    }
                }
            }
        }

        if (profiles.isEmpty() && inputs.isEmpty()) {
            ALOGW("checkInputsForDevice(): No input available for device 0x%X", device);
            return BAD_VALUE;
        }

        // open inputs for matching profiles if needed. Direct inputs are also opened to
        // query for dynamic parameters and will be closed later by setDeviceConnectionState()
        for (ssize_t profile_index = 0; profile_index < (ssize_t)profiles.size(); profile_index++) {

            sp<IOProfile> profile = profiles[profile_index];
            // nothing to do if one input is already opened for this profile
            size_t input_index;
            for (input_index = 0; input_index < mInputs.size(); input_index++) {
                desc = mInputs.valueAt(input_index);
                if (desc->mProfile == profile) {
                    if (audio_device_is_digital(device)) {
                        devDesc->importAudioPort(profile);
                    }
                    break;
                }
            }
            if (input_index != mInputs.size()) {
                continue;
            }

            desc = new AudioInputDescriptor(profile);
            desc->mDevice = device;
            audio_config_t config = AUDIO_CONFIG_INITIALIZER;
            config.sample_rate = desc->mSamplingRate;
            config.channel_mask = desc->mChannelMask;
            config.format = desc->mFormat;
            audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
            status_t status = mpClientInterface->openInput(profile->getModuleHandle(),
                                                           &input,
                                                           &config,
                                                           &desc->mDevice,
                                                           address,
                                                           AUDIO_SOURCE_MIC,
                                                           AUDIO_INPUT_FLAG_NONE /*FIXME*/);

            if (status == NO_ERROR) {
                desc->mSamplingRate = config.sample_rate;
                desc->mChannelMask = config.channel_mask;
                desc->mFormat = config.format;

                if (!address.isEmpty()) {
                    char *param = audio_device_address_to_parameter(device, address);
                    mpClientInterface->setParameters(input, String8(param));
                    free(param);
                }

                // Here is where we step through and resolve any "dynamic" fields
                String8 reply;
                char *value;
                if (profile->mSamplingRates[0] == 0) {
                    reply = mpClientInterface->getParameters(input,
                                            String8(AUDIO_PARAMETER_STREAM_SUP_SAMPLING_RATES));
                    value = strpbrk((char *)reply.string(), "=");
                    if (value != NULL) {
                        profile->loadSamplingRates(value + 1);
                    }
                }
                if (profile->mFormats[0] == AUDIO_FORMAT_DEFAULT) {
                    reply = mpClientInterface->getParameters(input,
                                                   String8(AUDIO_PARAMETER_STREAM_SUP_FORMATS));
                    value = strpbrk((char *)reply.string(), "=");
                    if (value != NULL) {
                        profile->loadFormats(value + 1);
                    }
                }
                if (profile->mChannelMasks[0] == 0) {
                    reply = mpClientInterface->getParameters(input,
                                                  String8(AUDIO_PARAMETER_STREAM_SUP_CHANNELS));
                    ALOGV("checkInputsForDevice() direct input sup channel masks %s",
                              reply.string());
                    value = strpbrk((char *)reply.string(), "=");
                    if (value != NULL) {
                        profile->loadInChannels(value + 1);
                    }
                }
                if (((profile->mSamplingRates[0] == 0) && (profile->mSamplingRates.size() < 2)) ||
                     ((profile->mFormats[0] == 0) && (profile->mFormats.size() < 2)) ||
                     ((profile->mChannelMasks[0] == 0) && (profile->mChannelMasks.size() < 2))) {
                    mpClientInterface->closeInput(input);
                    input = AUDIO_IO_HANDLE_NONE;
                }

                if (input != 0) {
                    addInput(input, desc);
                }
            } // endif input != 0

            if (input == AUDIO_IO_HANDLE_NONE) {
                profiles.removeAt(profile_index);
                profile_index--;
            } else {
                inputs.add(input);
                if (audio_device_is_digital(device)) {
                    devDesc->importAudioPort(profile);
                }
            }
        } // end scan profiles

        if (profiles.isEmpty()) {
            ALOGW("checkInputsForDevice(): No input available for device 0x%X", device);
            return BAD_VALUE;
        }
    } else {
        // Disconnect
        // check if one opened input is not needed any more after disconnecting one device
        for (size_t input_index = 0; input_index < mInputs.size(); input_index++) {
            desc = mInputs.valueAt(input_index);
            if (!(desc->mProfile->mSupportedDevices.types() & mAvailableInputDevices.types() &
                    ~AUDIO_DEVICE_BIT_IN)) {
                inputs.add(mInputs.keyAt(input_index));
            }
        }
        // Clear any profiles associated with the disconnected device.
        for (size_t module_index = 0; module_index < mHwModules.size(); module_index++) {
            if (mHwModules[module_index]->mHandle == 0) {
                continue;
            }
            for (size_t profile_index = 0;
                 profile_index < mHwModules[module_index]->mInputProfiles.size();
                 profile_index++) {
                sp<IOProfile> profile = mHwModules[module_index]->mInputProfiles[profile_index];
                if (profile->mSupportedDevices.types() & (device & ~AUDIO_DEVICE_BIT_IN)) {
                    if (profile->mSamplingRates[0] == 0) {
                        profile->mSamplingRates.clear();
                        profile->mSamplingRates.add(0);
                    }
                    if (profile->mFormats[0] == AUDIO_FORMAT_DEFAULT) {
                        profile->mFormats.clear();
                        profile->mFormats.add(AUDIO_FORMAT_DEFAULT);
                    }
                    if (profile->mChannelMasks[0] == 0) {
                        profile->mChannelMasks.clear();
                        profile->mChannelMasks.add(0);
                    }
                }
            }
        }
    } // end disconnect

    return NO_ERROR;
}

The main logic of this function is basically the same as the output version; the difference is that the output side is more complex because it also has to handle the duplicate output and the policyMix cases.

A duplicate output is only created when the newly connected device belongs to a different hardware module (a different HAL .so).
I am not yet sure about the policyMix case.

Creating the duplicate output

We have analyzed the connection path above, but we have not looked closely at how the duplicate output is created, so let's see how that part is implemented.

AudioFlinger.cpp

audio_io_handle_t AudioFlinger::openDuplicateOutput(audio_io_handle_t output1,
        audio_io_handle_t output2)
{
    Mutex::Autolock _l(mLock);
    MixerThread *thread1 = checkMixerThread_l(output1);
    MixerThread *thread2 = checkMixerThread_l(output2);

    if (thread1 == NULL || thread2 == NULL) {
        return AUDIO_IO_HANDLE_NONE;
    }

    audio_io_handle_t id = nextUniqueId();
    // Create the DuplicatingThread, passing in the first thread object, the new id, etc.
    DuplicatingThread *thread = new DuplicatingThread(this, thread1, id, mSystemReady);
    // Then add the second thread through the duplicating thread object.
    thread->addOutputTrack(thread2);
    mPlaybackThreads.add(id, thread);
    // Notify clients that a new output has been opened
    thread->ioConfigChanged(AUDIO_OUTPUT_OPENED);
    return id;
}

The function above does the following:

  1. Creates the duplicating thread and adds the first thread to it; thread1 is the newly created output.
  2. Adds the second thread to the duplicating thread; thread2 is the primary output.
  3. Notifies clients that a new output has been opened.

The parent class of the duplicating thread is MixerThread.
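
To keep the relationships straight, here is a simplified view of the thread classes involved (they are actually nested classes of AudioFlinger, and the member lists are trimmed to what matters for this article):

// Simplified class hierarchy (not the real declarations).
class ThreadBase                               { /* thread loop plumbing, mId, ...           */ };
class PlaybackThread    : public ThreadBase    { /* mTracks, mSinkBuffer, threadLoop(), ...  */ };
class MixerThread       : public PlaybackThread{ /* AudioMixer, threadLoop_mix(), ...        */ };
class DuplicatingThread : public MixerThread   { /* mOutputTracks, threadLoop_write(), ...   */ };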

Let's look at the constructor:

AudioFlinger::DuplicatingThread::DuplicatingThread(const sp<AudioFlinger>& audioFlinger,
        AudioFlinger::MixerThread* mainThread, audio_io_handle_t id, bool systemReady)
    :   MixerThread(audioFlinger, mainThread->getOutput(), id, mainThread->outDevice(),
                    systemReady, DUPLICATING),
        mWaitTimeMs(UINT_MAX)
{
    addOutputTrack(mainThread);
}

It only calls addOutputTrack in addition:

void AudioFlinger::DuplicatingThread::addOutputTrack(MixerThread *thread)
{
    Mutex::Autolock _l(mLock);
    // The downstream MixerThread consumes thread->frameCount() amount of frames per mix pass.
    // Adjust for thread->sampleRate() to determine minimum buffer frame count.
    // Triple-buffer because the threads do not run synchronously and may not be clock-locked.
    const size_t frameCount =
            3 * sourceFramesNeeded(mSampleRate, thread->frameCount(), thread->sampleRate());

    // Create an OutputTrack object
    sp<OutputTrack> outputTrack = new OutputTrack(thread,
                                            this,
                                            mSampleRate,
                                            mFormat,
                                            mChannelMask,
                                            frameCount,
                                            IPCThreadState::self()->getCallingUid());
    // If creation succeeded, set the AUDIO_STREAM_PATCH volume to 1.0f and add the
    // OutputTrack to the array.
    if (outputTrack->cblk() != NULL) {
        thread->setStreamVolume(AUDIO_STREAM_PATCH, 1.0f);
        mOutputTracks.add(outputTrack);
        updateWaitTime_l();
    }
}
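
To make the buffer sizing concrete, here is an assumed worked example. The helper below mirrors what sourceFramesNeeded() in AudioResamplerPublic.h computes, converting the downstream mix size into source frames plus a small rounding/interpolation margin; the sample rates and frame count are made up for illustration.

// Illustrative only: OutputTrack buffer size when the duplicating thread runs at
// 44100 Hz and the downstream MixerThread consumes 960 frames per mix at 48000 Hz.
#include <cstddef>
#include <cstdint>
#include <cstdio>

static size_t sourceFramesNeededApprox(uint32_t srcSampleRate,
                                       size_t dstFramesRequired,
                                       uint32_t dstSampleRate)
{
    if (srcSampleRate == dstSampleRate) return dstFramesRequired;
    // +1 for rounding, +1 for the extra sample the resampler interpolates with
    return (size_t)((uint64_t)dstFramesRequired * srcSampleRate / dstSampleRate + 1 + 1);
}

int main()
{
    const size_t perMixPass = sourceFramesNeededApprox(44100, 960, 48000); // 882 + 2 = 884
    const size_t frameCount = 3 * perMixPass;                              // triple buffer: 2652
    std::printf("OutputTrack frameCount = %zu\n", frameCount);
    return 0;
}

Back to the framework code, here is the OutputTrack constructor: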

AudioFlinger::PlaybackThread::OutputTrack::OutputTrack(
            PlaybackThread *playbackThread,
            DuplicatingThread *sourceThread,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t frameCount,
            int uid)
    :   Track(playbackThread, NULL, AUDIO_STREAM_PATCH,
              sampleRate, format, channelMask, frameCount,
              NULL, 0, 0, uid, IAudioFlinger::TRACK_DEFAULT, TYPE_OUTPUT),
    mActive(false), mSourceThread(sourceThread), mClientProxy(NULL)
{

    if (mCblk != NULL) {
        mOutBuffer.frameCount = 0;
        // Add this track to the target thread.
        playbackThread->mTracks.add(this);
        // Create a client proxy here; the Track constructor already created the server side.
        // Client and server live in the same process, so they share the same virtual addresses.
        mClientProxy = new AudioTrackClientProxy(mCblk, mBuffer, mFrameCount, mFrameSize,
                true /*clientInServer*/);
        mClientProxy->setVolumeLR(GAIN_MINIFLOAT_PACKED_UNITY);
        mClientProxy->setSendLevel(0.0);
        mClientProxy->setSampleRate(sampleRate);
    } else {
        ALOGW("Error creating output track on thread %p", playbackThread);
    }
}

To summarize what this code does:

  1. Creates an OutputTrack; each OutputTrack has a corresponding clientProxy and serverProxy.
  2. Adds the OutputTrack to the duplicating thread's mOutputTracks and to the target output's mTracks.

At this point the whole structure is in place: the DuplicatingThread holds two OutputTracks, one feeding the newly opened A2DP MixerThread and one feeding the primary MixerThread, and each OutputTrack also sits in the mTracks list of its target thread.

Playback data flow

The playback logic is basically the same as MixerThread's; the main addition is distributing the mixed data to the two OutputTracks. To do this, DuplicatingThread overrides threadLoop_mix, threadLoop_sleepTime, threadLoop_write and threadLoop_standby; these overrides exist mainly because DuplicatingThread has to maintain its own mOutputTracks.

Writing this way involves three threads: the DuplicatingThread produces the data and hands it to the two MixerThreads, and each MixerThread then runs exactly as before. Compared with the ordinary path, the only extra work is the DuplicatingThread itself.

Let's mainly look at threadLoop_write:

ssize_t AudioFlinger::DuplicatingThread::threadLoop_write()
{
    for (size_t i = 0; i < outputTracks.size(); i++) {
        outputTracks[i]->write(mSinkBuffer, writeFrames);
    }
    mStandby = false;
    return (ssize_t)mSinkBufferSize;
}

At this point we can treat an OutputTrack as a client-side AudioTrack: it implements its own write, start, stop, obtainBuffer and similar operations.

The write logic here is essentially the same as the write path in AudioTrack, so we will not go through it line by line.
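
For readers who want the shape of that write path without digging into AudioTrackShared.h, here is a conceptual stand-in: a toy ring buffer with an obtain/copy/release cycle. It is not the real AudioTrackClientProxy API, just the pattern the OutputTrack write follows (the consumer side that advances the read index is not shown).

// Toy version of the proxy-style write: obtain contiguous space, copy, release,
// repeat until everything is queued or the buffer is full.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

struct ToyRingBuffer {
    std::vector<uint8_t> data;
    size_t frameSize;
    size_t head = 0, tail = 0;                 // monotonically increasing frame indices
    ToyRingBuffer(size_t frames, size_t fs) : data(frames * fs), frameSize(fs) {}
    size_t capacity() const { return data.size() / frameSize; }
    size_t writable() const { return capacity() - (head - tail); }
    // Obtain up to 'want' frames of contiguous space at the current write position.
    uint8_t* obtain(size_t want, size_t* got) {
        size_t idx = head % capacity();
        *got = std::min({want, writable(), capacity() - idx});
        return data.data() + idx * frameSize;
    }
    void release(size_t frames) { head += frames; }   // commit the written frames
};

// Roughly what an OutputTrack-style write does: push the duplicating thread's mixed
// buffer into the downstream thread's track buffer.
static size_t writeFrames(ToyRingBuffer& rb, const void* src, size_t frames) {
    const uint8_t* p = static_cast<const uint8_t*>(src);
    size_t written = 0;
    while (written < frames) {
        size_t got = 0;
        uint8_t* dst = rb.obtain(frames - written, &got);
        if (got == 0) break;                   // full: the real code buffers locally and retries
        std::memcpy(dst, p + written * rb.frameSize, got * rb.frameSize);
        rb.release(got);
        written += got;
    }
    return written;
}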
