AudioRecord

bsp-audio wanghao

1. Android Audio Framework Overview

1.1 Audio Application Framework

AudioTrack: responsible for outputting playback data; an Android application framework API class

AudioRecord: responsible for capturing recording data; an Android application framework API class

AudioSystem: responsible for overall management of audio affairs; an Android application framework API class

1.2 Audio Native Framework

AudioTrack: responsible for outputting playback data; an Android native framework API class

AudioRecord: responsible for capturing recording data; an Android native framework API class

AudioSystem: responsible for overall management of audio affairs; an Android native framework API class

1.3 Audio Services

AudioPolicyService: the audio policy maker, responsible for policy decisions such as audio device switching and volume adjustment

AudioFlinger: the audio policy executor, responsible for managing input/output stream devices and for processing and transferring audio stream data

Audio HAL: the audio hardware abstraction layer, responsible for interacting with the audio hardware; called directly by AudioFlinger


2. AudioRecord API Overview

2.1 AudioRecord Java API audio sources:

AudioSource                          Description
AUDIO_SOURCE_DEFAULT                 Default input source
AUDIO_SOURCE_MIC                     Microphone input source
AUDIO_SOURCE_VOICE_UPLINK            Voice-call uplink (Tx) input source
AUDIO_SOURCE_VOICE_DOWNLINK          Voice-call downlink (Rx) input source
AUDIO_SOURCE_VOICE_CALL              Voice-call uplink + downlink input source
AUDIO_SOURCE_CAMCORDER               Microphone source for video recording
AUDIO_SOURCE_VOICE_RECOGNITION       Input source for voice recognition / voice wake-up
AUDIO_SOURCE_VOICE_COMMUNICATION     Input source for VoIP voice

Why Android defines multiple input sources:

The Source is used to choose a suitable input device: AUDIO_SOURCE_DEFAULT and AUDIO_SOURCE_MIC default to the phone's built-in mic, while the other sources can route to a Bluetooth mic, a wired-headset mic, and so on, covering more scenarios.
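At the Java layer the source is simply the first constructor argument (the Java constants live in MediaRecorder.AudioSource); a minimal sketch with illustrative parameter values:

// A minimal sketch (illustrative parameters): VOICE_COMMUNICATION asks the
// policy for the VoIP-tuned input path instead of the plain built-in mic.
int bufferSize = AudioRecord.getMinBufferSize(
        48000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord voipRecord = new AudioRecord(
        MediaRecorder.AudioSource.VOICE_COMMUNICATION, // AUDIO_SOURCE_VOICE_COMMUNICATION
        48000, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize * 2); // 2x minimum, an illustrative choice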


2.2 AudioRecord interface

public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,int bufferSizeInBytes)
    // audioSource: the input source
    // sampleRateInHz: the sample rate in Hz
    // channelConfig: the channel configuration
    // audioFormat: the encoding format
    // bufferSizeInBytes: the size of the data buffer
    throws IllegalArgumentException {
    this((new AudioAttributes.Builder())
         .setInternalCapturePreset(audioSource)
         .build(),
         (new AudioFormat.Builder())
              .setChannelMask(getChannelMaskFromLegacyConfig(channelConfig,true/*allow legacy configurations*/))
         .setEncoding(audioFormat)
         .setSampleRate(sampleRateInHz)
         .build(),
         bufferSizeInBytes,
         AudioManager.AUDIO_SESSION_ID_GENERATE);
}
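The constructor above only wraps its arguments into AudioAttributes and AudioFormat objects; since API 23 the same configuration can also be expressed directly with AudioRecord.Builder. A minimal sketch (the doubled buffer size is an illustrative choice):

// Equivalent configuration via AudioRecord.Builder (API 23+).
AudioRecord record = new AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.MIC)
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
                .build())
        .setBufferSizeInBytes(2 * AudioRecord.getMinBufferSize(
                44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT))
        .build();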
2.2.1 Buffer size

A closer look at the getMinBufferSize() interface: as the name suggests, it returns the minimum size of the data buffer, which is the baseline for capture to work properly. From its parameters, the return value depends on three properties: sample rate, sample depth, and channel count. An application should take the return value as the main reference when deciding how large a buffer to allocate. If the buffer is allocated too small, capture will frequently hit overrun: for recording, overrun means the consumer (the app-side AudioRecord) fails to drain data as fast as the producer (AudioFlinger::RecordThread) delivers it, so captured samples are dropped; in practice the recorded audio comes out choppy and broken, badly hurting the listening experience.

/**
     * Returns the minimum buffer size required for the successful creation of an AudioRecord
     * object, in byte units.
     * Note that this size doesn't guarantee a smooth recording under load, and higher values
     * should be chosen according to the expected frequency at which the AudioRecord instance
     * will be polled for new data.
     * See {@link #AudioRecord(int, int, int, int, int)} for more information on valid
     * configuration values.
     * @param sampleRateInHz the sample rate expressed in Hertz.
     *   {@link AudioFormat#SAMPLE_RATE_UNSPECIFIED} is not permitted.
     * @param channelConfig describes the configuration of the audio channels.
     *   See {@link AudioFormat#CHANNEL_IN_MONO} and
     *   {@link AudioFormat#CHANNEL_IN_STEREO}
     * @param audioFormat the format in which the audio data is represented.
     *   See {@link AudioFormat#ENCODING_PCM_16BIT}.
     * @return {@link #ERROR_BAD_VALUE} if the recording parameters are not supported by the
     *  hardware, or an invalid parameter was passed,
     *  or {@link #ERROR} if the implementation was unable to query the hardware for its
     *  input properties,
     *   or the minimum buffer size expressed in bytes.
     * @see #AudioRecord(int, int, int, int, int)
     */
    static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
        int channelCount = 0;
        switch (channelConfig) {
        case AudioFormat.CHANNEL_IN_DEFAULT: // AudioFormat.CHANNEL_CONFIGURATION_DEFAULT
        case AudioFormat.CHANNEL_IN_MONO:
        case AudioFormat.CHANNEL_CONFIGURATION_MONO:
            channelCount = 1; // mono
            break;
        case AudioFormat.CHANNEL_IN_STEREO:
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
        case (AudioFormat.CHANNEL_IN_FRONT | AudioFormat.CHANNEL_IN_BACK):
            channelCount = 2; // stereo
            break;
//MIUI MOD:START
        case AudioFormat.CHANNEL_IN_5POINT1:
            channelCount = 6;
            break;
//END
        case AudioFormat.CHANNEL_INVALID:
        default:
            loge("getMinBufferSize(): Invalid channel configuration.");
            return ERROR_BAD_VALUE;
        }
        // call into the JNI method
        int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
        if (size == 0) {
            return ERROR_BAD_VALUE;
        }
        else if (size == -1) {
            return ERROR;
        }
        else {
            return size;
        }
    }



// ----------------------------------------------------------------------------
// returns the minimum required size for the successful creation of an AudioRecord instance.
// returns 0 if the parameter combination is not supported.
// return -1 if there was an error querying the buffer size.
static jint android_media_AudioRecord_get_min_buff_size(JNIEnv *env,  jobject thiz,
    jint sampleRateInHertz, jint channelCount, jint audioFormat) {

    ALOGV(">> android_media_AudioRecord_get_min_buff_size(%d, %d, %d)",
          sampleRateInHertz, channelCount, audioFormat);

    size_t frameCount = 0;
    audio_format_t format = audioFormatToNative(audioFormat);
    // getMinFrameCount() reports its result in frames: the minimum number of
    // frames the buffer must hold
    status_t result = AudioRecord::getMinFrameCount(&frameCount,
            sampleRateInHertz,
            format,
            audio_channel_in_mask_from_count(channelCount));

    if (result == BAD_VALUE) {
        return 0;
    }
    if (result != NO_ERROR) {
        return -1;
    }
    return frameCount * channelCount * audio_bytes_per_sample(format);
    // minimum size of the PCM data buffer, in bytes
}

As we can see, the minimum buffer size depends on the minimum frame count (frameCount), the channel count (channelCount), and the sample depth (bytes per sample, determined by format). The formula is:
BufferSize = frameCount * channelCount * audio_bytes_per_sample(format)

minimum buffer size = minimum frame count × channel count × sample depth (bytes per sample)
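For example (illustrative numbers, since frameCount is reported by the HAL): if getMinFrameCount() returns 1024 frames for a stereo (channelCount = 2) PCM_16_BIT (2 bytes per sample) stream, the minimum buffer size is 1024 * 2 * 2 = 4096 bytes.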

For how to obtain the other parameters, refer to the Android developer documentation for AudioRecord.
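Putting the pieces together, a typical synchronous capture loop at the Java layer looks roughly like this (a sketch: the 44100 Hz / mono / 16-bit parameters, the 2x buffer multiplier, and the isRecording flag are illustrative choices, not requirements; the RECORD_AUDIO permission is assumed to be granted):

// Minimal capture sketch: compute the minimum buffer, create the AudioRecord,
// then pull PCM data in a blocking loop.
int sampleRate = 44100;
int channelConfig = AudioFormat.CHANNEL_IN_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;

int minBuf = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
if (minBuf == AudioRecord.ERROR_BAD_VALUE || minBuf == AudioRecord.ERROR) {
    throw new IllegalStateException("capture parameters not supported");
}

// Allocate more than the minimum to lower the risk of overrun.
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelConfig, audioFormat, minBuf * 2);
if (record.getState() != AudioRecord.STATE_INITIALIZED) {
    throw new IllegalStateException("AudioRecord initialization failed");
}

byte[] pcm = new byte[minBuf];
boolean isRecording = true; // in real code, a volatile flag flipped by the controller
record.startRecording();
while (isRecording) {
    int n = record.read(pcm, 0, pcm.length); // blocking read, i.e. the TRANSFER_SYNC path
    if (n > 0) {
        // consume pcm[0..n), e.g. hand it to a file writer or an encoder
    }
}
record.stop();
record.release();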


2.3 AudioRecord Native API

2.3.1 AudioRecord Native API transfer modes

Transfer Mode        Description
TRANSFER_CALLBACK    Data is transferred via callbacks running on the AudioRecordThread
TRANSFER_OBTAIN      The client calls obtainBuffer()/releaseBuffer() to consume the data
TRANSFER_SYNC        The application process calls read() to fetch data, acting as consumer to the AudioRecord input thread's producer; suits essentially all recording scenarios
TRANSFER_DEFAULT     Not specified explicitly; resolved from the other parameters
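For reference, the Java-layer AudioRecord from section 2.2 ends up on the TRANSFER_SYNC path: in AOSP the JNI setup code passes TRANSFER_DEFAULT together with a callback and threadCanCallJava == true, and the switch in set() shown in section 2.3.4 resolves that combination to TRANSFER_SYNC.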
2.3.2 AudioRecord Native API audio source types

AudioSource                          Description
AUDIO_SOURCE_DEFAULT                 Default input source
AUDIO_SOURCE_MIC                     Microphone input source
AUDIO_SOURCE_VOICE_UPLINK            Voice-call uplink (Tx) input source
AUDIO_SOURCE_VOICE_DOWNLINK          Voice-call downlink (Rx) input source
AUDIO_SOURCE_VOICE_CALL              Voice-call uplink + downlink input source
AUDIO_SOURCE_CAMCORDER               Microphone source for video recording
AUDIO_SOURCE_VOICE_RECOGNITION       Input source for voice recognition / voice wake-up
AUDIO_SOURCE_VOICE_COMMUNICATION     Input source for VoIP voice
2.3.3 AudioRecord Native input flags

audio_input_flags_t                        Value
AUDIO_INPUT_FLAG_NONE                      0x0
AUDIO_INPUT_FLAG_FAST                      0x1
AUDIO_INPUT_FLAG_HW_HOTWORD                0x2
AUDIO_INPUT_FLAG_RAW                       0x4
AUDIO_INPUT_FLAG_SYNC                      0x8
AUDIO_INPUT_FLAG_MMAP_NOIRQ                0x10
AUDIO_INPUT_FLAG_VOIP_TX                   0x20
AUDIO_INPUT_FLAG_HW_AV_SYNC                0x40
AUDIO_INPUT_FLAG_DIRECT                    0x80
AUDIO_INPUT_FLAG_VOIP_RECORD               0x100
AUDIO_INPUT_FLAG_CAR                       0x200
AUDIO_INPUT_FLAG_INCALL_UPLINK_DOWNLINK    0x80000000
02-25 17:40:06.756 20242 20442 D APM_AudioPolicyManager: getInputForAttr() source 1, sampling rate 48000, format 0x1, channel mask 0xc, session 138425, flags 0 attributes={ Content type: AUDIO_CONTENT_TYPE_UNKNOWN Usage: AUDIO_USAGE_UNKNOWN Source: AUDIO_SOURCE_MIC Flags: 0x0 Tags:  }
// Example getInputForAttr() log: Flags: 0x0 means AUDIO_INPUT_FLAG_NONE
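Decoding the rest of that line (per the AOSP audio enum values): source 1 is AUDIO_SOURCE_MIC (also printed in the attributes), format 0x1 is AUDIO_FORMAT_PCM_16_BIT, and channel mask 0xc is AUDIO_CHANNEL_IN_STEREO.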
2.3.4 AudioRecord::set()
status_t AudioRecord::set(
        audio_source_t inputSource,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        callback_t cbf,
        void* user,
        uint32_t notificationFrames,
        bool threadCanCallJava,
        audio_session_t sessionId,
        transfer_type transferType,
        audio_input_flags_t flags,
        uid_t uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        audio_port_handle_t selectedDeviceId,
        audio_microphone_direction_t selectedMicDirection,
        float microphoneFieldDimension,
        int32_t maxSharedAudioHistoryMs)

The transfer mode is selected according to the transferType parameter:

switch (transferType) {
    case TRANSFER_DEFAULT:
        if (cbf == NULL || threadCanCallJava) {
            transferType = TRANSFER_SYNC;
        } else {
            transferType = TRANSFER_CALLBACK;
        }
        break;
    case TRANSFER_CALLBACK:
        if (cbf == NULL) {
            ALOGE("%s(): Transfer type TRANSFER_CALLBACK but cbf == NULL", __func__);
            status = BAD_VALUE;
            goto exit;
        }
        break;
    case TRANSFER_OBTAIN:
    case TRANSFER_SYNC:
        break;
    default:
        ALOGE("%s(): Invalid transfer type %d", __func__, transferType);
        status = BAD_VALUE;
        goto exit;
    }
    mTransfer = transferType;
if (cbf != NULL) {
        mAudioRecordThread = new AudioRecordThread(*this);
        mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
        // thread begins in paused state, and will not reference us until start()
    }

// create the IAudioRecord
    {
        AutoMutex lock(mLock);
        status = createRecord_l(0 /*epoch*/);

If cbf (the audio callback function) is non-NULL, an AudioRecordThread is created to service the callbacks. createRecord_l() then goes through AudioSystem::getInputForAttr(); inside AudioPolicyManager, getInputForDevice() uses the AudioSource, flags and other parameters to pick a device and returns the audio_io_handle_t input:

audio_io_handle_t AudioPolicyManager::getInputForDevice(const sp<DeviceDescriptor> &device,
                                                        audio_session_t session,
                                                        uid_t uid,
                                                        const audio_attributes_t &attributes,
                                                        const audio_config_base_t *config,
                                                        audio_input_flags_t flags,
                                                        const sp<AudioPolicyMix> &policyMix,
                                                        audio_app_type_f appType)

Once the input handle is obtained, the recording parameters are assigned and a valid status_t is returned:

status_t AudioPolicyManager::getInputForAttr(const audio_attributes_t *attr,
                                             audio_io_handle_t *input,
                                             audio_unique_id_t riid,
                                             audio_session_t session,
                                             const AttributionSourceState& attributionSource,
                                             const audio_config_base_t *config,
                                             audio_input_flags_t flags,
                                             audio_port_handle_t *selectedDeviceId,
                                             input_type_t *inputType,
                                             audio_port_handle_t *portId)
    status_t status = NO_ERROR;
  1. Via the Binder mechanism, call AudioFlinger::createRecord() (AudioRecord already holds an audio_io_handle_t at this point, and passes it in to createRecord()):
    • Use the passed-in audio_io_handle_t to find its corresponding RecordThread;
    • The RecordThread creates a stream-management object, RecordTrack; during construction the RecordTrack allocates a block of anonymous shared memory holding the buffer shared by AudioFlinger and AudioRecord together with its control block (audio_track_cblk_t), and creates an AudioRecordServerProxy object (the RecordThread will use it to obtain writable space in the buffer for captured data);
    • Finally, a communication proxy for the RecordTrack, RecordHandle, is created and returned to AudioRecord as an IAudioRecord (see the AudioFlinger chapter for the relationship between RecordHandle, BnAudioRecord, BpAudioRecord and IAudioRecord);
  2. Through the IAudioRecord interface, obtain the base address of the buffer inside AudioFlinger;
  3. Create an AudioRecordClientProxy object (AudioRecord will use it to obtain readable data from the FIFO).

At this point AudioRecord has established all of its links with AudioFlinger.

status_t AudioFlinger::createRecord(const media::CreateRecordRequest& _input,
                                    media::CreateRecordResponse& _output)
{
    CreateRecordInput input = VALUE_OR_RETURN_STATUS(CreateRecordInput::fromAidl(_input));
    CreateRecordOutput output;

    sp<RecordThread::RecordTrack> recordTrack;
    sp<RecordHandle> recordHandle;
    sp<Client> client;
    status_t lStatus;
    audio_session_t sessionId = input.sessionId;
    audio_port_handle_t portId = AUDIO_PORT_HANDLE_NONE;

    output.cblk.clear();
    output.buffers.clear();
    output.inputId = AUDIO_IO_HANDLE_NONE;

The data is then memcpy'd across via the shared buffer address:

ssize_t AudioRecord::read(void* buffer, size_t userSize, bool blocking)
{
    if (mTransfer != TRANSFER_SYNC) {
        return INVALID_OPERATION;
    }

    if (ssize_t(userSize) < 0 || (buffer == NULL && userSize != 0)) {
        // Validation. user is most-likely passing an error code, and it would
        // make the return value ambiguous (actualSize vs error).
        ALOGE("%s(%d) (buffer=%p, size=%zu (%zu)",
                __func__, mPortId, buffer, userSize, userSize);
        return BAD_VALUE;
    }

    ssize_t read = 0;
    Buffer audioBuffer;

    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;

        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        if (err < 0) {
            if (read > 0) {
                break;
            }
            if (err == TIMED_OUT || err == -EINTR) {
                err = WOULD_BLOCK;
            }
            return ssize_t(err);
        }

        size_t bytesRead = audioBuffer.size;
        memcpy(buffer, audioBuffer.i8, bytesRead);
        buffer = ((char *) buffer) + bytesRead;
        userSize -= bytesRead;
        read += bytesRead;

        releaseBuffer(&audioBuffer);
    }
    if (read > 0) {
        mFramesRead += read / mFrameSize;
        // mFramesReadTime = systemTime(SYSTEM_TIME_MONOTONIC); // not provided at this time.
    }
    return read;
}
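At the Java layer, the blocking parameter of this native read() surfaces as the readMode argument of read(byte[], int, int, int) (READ_BLOCKING vs READ_NON_BLOCKING, added in API 23); the older three-argument read() always blocks.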

At this point the app has created the complete AudioRecord interface. Next, AudioPolicy (the policy maker) selects the corresponding RecordThread (input), and AudioFlinger (the executor) controls its behavior.

3. AudioFlinger

3.1 AudioFlinger service interfaces

Interface          Description
sampleRate         Get the hardware device's sample rate
format             Get the hardware device's audio format
frameCount         Get the hardware device's period frame count
latency            Get the hardware device's transfer latency
setMasterVolume    Adjust the volume of the master output device
setMasterMute      Mute the master output device
setStreamVolume    Adjust the volume of a given audio stream type without affecting other stream types
setStreamMute      Mute a given audio stream type
setVoiceVolume     Adjust the voice-call volume
setMicMute         Mute the microphone input
setMode            Switch the audio mode; there are four: Normal, Ringtone, Call, Communication
setParameters      Set audio parameters: calls down into the corresponding HAL interface, commonly used to switch audio paths
getParameters      Get audio parameters: calls down into the corresponding HAL interface
openOutput         Open an output stream: open the output stream device and create a PlaybackThread object
closeOutput        Close an output stream: remove and destroy all Tracks hanging off the PlaybackThread, exit the PlaybackThread, close the output stream device
openInput          Open an input stream: open the input stream device and create a RecordThread object
closeInput         Close an input stream: exit the RecordThread and close the input stream device
createTrack        Create an output stream management object: find the matching PlaybackThread, create a Track, then create and return the Track's proxy object TrackHandle
createRecord       Create an input stream management object: find the RecordThread, create a RecordTrack, then create and return the RecordTrack's proxy object RecordHandle

The service requests AudioFlinger responds to can be summarized as:

  • querying hardware device configuration
  • volume adjustment
  • mute operations
  • audio mode switching
  • audio parameter setting
  • input/output stream device management
  • audio stream management

3.2 RecordThread selection

As the engine of the Android audio system, one of AudioFlinger's main duties is to manage input/output stream devices and to process and transfer audio stream data. This work is carried out by the playback threads (PlaybackThread and its subclasses) and the record threads (RecordThread); RecordThread derives from the base class ThreadBase.

AudioFlinger opens the input stream device and creates the RecordThread together with its corresponding audio_io_handle_t:

sp<AudioFlinger::ThreadBase> AudioFlinger::openInput_l(audio_module_handle_t module,
                                                         audio_io_handle_t *input,
                                                         audio_config_t *config,
                                                         audio_devices_t devices,
                                                         const char* address,
                                                         audio_source_t source,
                                                         audio_input_flags_t flags,
                                                         audio_devices_t outputDevice,
                                                         const String8& outputDeviceAddress)
    // input is the RecordThread's audio_io_handle_t, used by other processes to refer to it
{
    AudioHwDevice *inHwDev = findSuitableHwDev_l(module, devices);
    if (inHwDev == NULL) {
        *input = AUDIO_IO_HANDLE_NONE;
        return 0;
    } // next, allocate an audio_io_handle_t

    // Audio Policy can request a specific handle for hardware hotword.
    // The goal here is not to re-open an already opened input.
    // It is to use a pre-assigned I/O handle.
    if (*input == AUDIO_IO_HANDLE_NONE) {
        *input = nextUniqueId(AUDIO_UNIQUE_ID_USE_INPUT);
    } else if (audio_unique_id_get_use(*input) != AUDIO_UNIQUE_ID_USE_INPUT) {
        ALOGE("openInput_l() requested input handle %d is invalid", *input);
        return 0;
    } else if (mRecordThreads.indexOfKey(*input) >= 0) {
        // This should not happen in a transient state with current design.
        ALOGE("openInput_l() requested input handle %d is already assigned", *input);
        return 0;
    }

    audio_config_t halconfig = *config;
    sp<DeviceHalInterface> inHwHal = inHwDev->hwDevice();
    sp<StreamInHalInterface> inStream;
    status_t status = inHwHal->openInputStream(
            *input, devices, &halconfig, flags, address, source,
            outputDevice, outputDeviceAddress, &inStream);
    ALOGV("openInput_l() openInputStream returned input %p, devices %#x, SamplingRate %d"
           ", Format %#x, Channels %#x, flags %#x, status %d addr %s",
            inStream.get(),
            devices,
            halconfig.sample_rate,
            halconfig.format,
            halconfig.channel_mask,
            flags,
            status, address);

    // If the input could not be opened with the requested parameters and we can handle the
    // conversion internally, try to open again with the proposed parameters.
    if (status == BAD_VALUE &&
        audio_is_linear_pcm(config->format) &&
        audio_is_linear_pcm(halconfig.format) &&
        (halconfig.sample_rate <= AUDIO_RESAMPLER_DOWN_RATIO_MAX * config->sample_rate) &&
        (audio_channel_count_from_in_mask(halconfig.channel_mask) <= FCC_LIMIT) &&
        (audio_channel_count_from_in_mask(config->channel_mask) <= FCC_LIMIT)) {
        // FIXME describe the change proposed by HAL (save old values so we can log them here)
        ALOGV("openInput_l() reopening with proposed sampling rate and channel mask");
        inStream.clear();
        status = inHwHal->openInputStream(
                *input, devices, &halconfig, flags, address, source,
                outputDevice, outputDeviceAddress, &inStream);
        // FIXME log this new status; HAL should not propose any further changes
    }

    if (status == NO_ERROR && inStream != 0) {
        AudioStreamIn *inputStream = new AudioStreamIn(inHwDev, inStream, flags);
        if ((flags & AUDIO_INPUT_FLAG_MMAP_NOIRQ) != 0) {
            sp<MmapCaptureThread> thread =
                    new MmapCaptureThread(this, *input, inHwDev, inputStream, mSystemReady);
            mMmapThreads.add(*input, thread);
            ALOGV("openInput_l() created mmap capture thread: ID %d thread %p", *input,
                    thread.get());
            return thread;
        } else {
            // Start record thread
            // RecordThread requires both input and output device indication to forward to audio
            // pre processing modules
            sp<RecordThread> thread = new RecordThread(this, inputStream, *input, mSystemReady);
            mRecordThreads.add(*input, thread);
            ALOGV("openInput_l() created record thread: ID %d thread %p", *input, thread.get());
            return thread;
            // The audio_io_handle_t / RecordThread pair was added to the key-value
            // vector mRecordThreads above. audio_io_handle_t and RecordThread map
            // one to one, so given an audio_io_handle_t we can find its RecordThread;
            // audio_io_handle_t can be thought of as the RecordThread's index.
        }
    }

    *input = AUDIO_IO_HANDLE_NONE;
    return 0;
}

Later, when the usecase is selected at the HAL layer, the choice is determined by two parameters, the AudioSource and the flags. The selection works as follows:

int StreamInPrimary::GetInputUseCase(audio_input_flags_t halStreamFlags, audio_source_t source)
{
    // TODO: cover other usecases
    int usecase = USECASE_AUDIO_RECORD;
    if (config_.sample_rate == LOW_LATENCY_CAPTURE_SAMPLE_RATE &&
        (halStreamFlags & AUDIO_INPUT_FLAG_TIMESTAMP) == 0 &&
        (halStreamFlags & AUDIO_INPUT_FLAG_COMPRESS) == 0 &&
        (halStreamFlags & AUDIO_INPUT_FLAG_FAST) != 0 &&
        (!(isDeviceAvailable(PAL_DEVICE_IN_PROXY))))
        usecase = USECASE_AUDIO_RECORD_LOW_LATENCY;

    if ((halStreamFlags & AUDIO_INPUT_FLAG_MMAP_NOIRQ) != 0)
        usecase = USECASE_AUDIO_RECORD_MMAP;
    else if (source == AUDIO_SOURCE_VOICE_COMMUNICATION &&
             halStreamFlags & AUDIO_INPUT_FLAG_VOIP_TX)
        usecase = USECASE_AUDIO_RECORD_VOIP;

    return usecase;
}
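Tracing the code above with concrete inputs: an mmap-no-IRQ stream gets USECASE_AUDIO_RECORD_MMAP; a VOICE_COMMUNICATION source opened with AUDIO_INPUT_FLAG_VOIP_TX gets USECASE_AUDIO_RECORD_VOIP; a capture at LOW_LATENCY_CAPTURE_SAMPLE_RATE with AUDIO_INPUT_FLAG_FAST set (and neither TIMESTAMP nor COMPRESS, and no PAL proxy device available) gets USECASE_AUDIO_RECORD_LOW_LATENCY; everything else falls back to the default USECASE_AUDIO_RECORD.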

3.3 AudioFlinger audio stream management

Audio data is brought from the hardware into the corresponding RecordThread. How, then, does an application process control these audio streams, e.g. start(), stop(), pause(), given that the application process and AudioFlinger are not in the same process? AudioFlinger has to provide audio stream management, along with a set of communication interfaces that let the application process control the state of the audio streams inside AudioFlinger across process boundaries.

AudioFlinger's stream management for capture is implemented by AudioFlinger::RecordThread::RecordTrack. A RecordTrack and an AudioRecord are one to one: once an AudioRecord is created, AudioFlinger creates a RecordTrack for it. A RecordThread and RecordTracks are one to many: a single RecordThread can host multiple RecordTracks.

Concretely: after an AudioRecord is created, AudioPolicyManager uses the AudioRecord's flags and AudioSource to find the matching input stream device and RecordThread (if none is found, the system opens the input stream device and creates a new RecordThread), then creates a RecordTrack and hangs it off that RecordThread.

Two private member vectors of RecordThread are strongly related to this:

  • mTracks: every RecordTrack created on this RecordThread is added to and kept in this vector

  • mActiveTracks: only RecordTracks that need to capture (set to the ACTIVE state) are added to this vector; the RecordThread finds the ACTIVE tracks on this vector and delivers the data captured from the input stream to each of them

    AudioFlinger::RecordThread::RecordThread(const sp<AudioFlinger>& audioFlinger,
                                             AudioStreamIn *input,
                                             audio_io_handle_t id,
                                             bool systemReady
                                             ) :
        ThreadBase(audioFlinger, id, RECORD, systemReady, false /* isOut */),
        mInput(input),
        mSource(mInput),
        mActiveTracks(&this->mLocalLog),
        mRsmpInBuffer(NULL),
        // mRsmpInFrames, mRsmpInFramesP2, and mRsmpInFramesOA are set by readInputParameters_l()
        mRsmpInRear(0)
        , mReadOnlyHeap(new MemoryDealer(kRecordThreadReadOnlyHeapSize,
                "RecordThreadRO", MemoryHeapBase::READ_ONLY))
        // mFastCapture below
        , mFastCaptureFutex(0)
        // mInputSource
        // mPipeSink
        // mPipeSource
        , mPipeFramesP2(0)
        // mPipeMemory
        // mFastCaptureNBLogWriter
        , mFastTrackAvail(false)
        , mBtNrecSuspended(false)
    

The three most commonly used stream-control interfaces:

  • AudioFlinger::RecordThread::RecordTrack::start: start capture: set the track to the ACTIVE state, add it to the mActiveTracks vector, then call AudioFlinger::RecordThread::broadcast_l() to tell the RecordThread that something changed
  • AudioFlinger::RecordThread::RecordTrack::stop: stop capture: set the track to the STOPPED state, then call AudioFlinger::RecordThread::broadcast_l() to tell the RecordThread that something changed
  • AudioFlinger::RecordThread::RecordTrack::pause: pause capture: set the track to the PAUSING state, then call AudioFlinger::RecordThread::broadcast_l() to tell the RecordThread that something changed

As you can see, these three stream-control interfaces are very simple: they mostly just set the track's state and post an event to the RecordThread; all the complex handling lives in AudioFlinger::RecordThread::threadLoop().

To be continued.
