Audio Subsystem, Part 3: AudioRecord.startRecording

This article analyzes in detail how AudioRecord.startRecording() is implemented in Android, from the Java layer down to the native layer, covering AudioRecord object initialization, the internals of startRecording, the AudioFlinger service, interaction with the HAL, and the reading of captured data. It shows how AudioRecord starts recording, establishes the input stream channel, creates the audio path, and how data transfer and state are managed across the layers.

In the previous article, "Audio Subsystem, Part 2: new AudioRecord()", we saw how the Audio system creates the AudioRecord object and the input stream, and how the RecordThread is created. Next, let's continue with how startRecording() in AudioRecord is implemented.

  

  Function prototype:

    public void startRecording() throws IllegalStateException

  Purpose:

    Starts recording.

  Parameters:

    None.

  Return value:

    None.

  Exceptions:

    Throws an IllegalStateException if the AudioRecord has not been properly initialized.
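The rest of this article lives on the native side, so as a quick orientation, here is a minimal native-side sketch of the same lifecycle. It assumes the Android 5.x libmedia AudioRecord constructor and is illustrative only, not verbatim client code:

#include <media/AudioRecord.h>

using namespace android;

// Illustrative only: mirrors the Java startRecording() lifecycle natively.
void recordSketch() {
    sp<AudioRecord> record = new AudioRecord(
            AUDIO_SOURCE_MIC,            // input source
            44100,                       // sample rate in Hz
            AUDIO_FORMAT_PCM_16_BIT,     // sample format
            AUDIO_CHANNEL_IN_MONO);      // channel mask
    if (record->initCheck() != NO_ERROR) {
        return;  // native analogue of mState != STATE_INITIALIZED
    }
    // The same entry point that native_start() reaches below:
    if (record->start(AudioSystem::SYNC_EVENT_NONE, 0) == NO_ERROR) {
        int16_t buffer[1024];
        record->read(buffer, sizeof(buffer));  // pull captured PCM
        record->stop();
    }
}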

 

Now let's dive into the framework source to see the actual implementation.

frameworks/base/media/java/android/media/AudioRecord.java

    public void startRecording()
    throws IllegalStateException {
        if (mState != STATE_INITIALIZED) {
            throw new IllegalStateException("startRecording() called on an "
                    + "uninitialized AudioRecord.");
        }

        // start recording
        synchronized(mRecordingStateLock) {
            if (native_start(MediaSyncEvent.SYNC_EVENT_NONE, 0) == SUCCESS) {
                handleFullVolumeRec(true);
                mRecordingState = RECORDSTATE_RECORDING;
            }
        }
    }

First it checks whether initialization has completed; as we saw in the previous article, mState is already STATE_INITIALIZED at this point. So we move on to the native_start() function.

frameworks/base/core/jni/android_media_AudioRecord.cpp

static jint
android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
{
    sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
    if (lpRecorder == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return (jint) AUDIO_JAVA_ERROR;
    }

    return nativeToJavaStatus(
            lpRecorder->start((AudioSystem::sync_event_t)event, triggerSession));
}
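getAudioRecord() is not shown above; roughly, it recovers the native AudioRecord pointer that native_setup() stored in a long field of the Java object. A simplified sketch of that lookup (the lock and field names are my assumptions about this JNI file):

// Sketch: recover the native object stashed inside the Java instance.
static sp<AudioRecord> getAudioRecord(JNIEnv* env, jobject thiz)
{
    Mutex::Autolock l(sLock);  // guard against a concurrent release
    AudioRecord* const ar = (AudioRecord*)env->GetLongField(
            thiz, javaAudioRecordFields.nativeRecorderInJavaObj);
    return sp<AudioRecord>(ar);  // take a strong reference before use
}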

Next stop: lpRecorder->start()

frameworks/av/media/libmedia/AudioRecord.cpp

status_t AudioRecord::start(AudioSystem::sync_event_t event, int triggerSession)
{
    AutoMutex lock(mLock);
    if (mActive) {
        return NO_ERROR;
    }

    // reset current position as seen by client to 0
    mProxy->setEpoch(mProxy->getEpoch() - mProxy->getPosition());
    // force refresh of remaining frames by processAudioBuffer() as last
    // read before stop could be partial.
    mRefreshRemaining = true;

    mNewPosition = mProxy->getPosition() + mUpdatePeriod;
	
    int32_t flags = android_atomic_acquire_load(&mCblk->mFlags);

    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        ALOGV("mAudioRecord->start()");
        status = mAudioRecord->start(event, triggerSession);
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    if (flags & CBLK_INVALID) {
        status = restoreRecord_l("start");	
    }

    if (status != NO_ERROR) {
        ALOGE("start() status %d", status);
    } else {
        mActive = true;
        sp<AudioRecordThread> t = mAudioRecordThread;
        if (t != 0) {
            t->resume();
        } else {
            mPreviousPriority = getpriority(PRIO_PROCESS, 0);
            get_sched_policy(0, &mPreviousSchedulingGroup);
            androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
        }
    }

    return status;
}

This function does the following:

    1. Resets the write position of the record buffer, as seen by the client, back to 0 (the layout of the record buffer was covered in the first article); a toy model of this epoch arithmetic follows this list;

    2. Sets mRefreshRemaining to true; as the comment says, it forces processAudioBuffer() to refresh the remaining frames, since the last read before a stop may have been partial. Its role will become clearer later, so let's not dwell on it;

    3. Loads flags from mCblk->mFlags, which is 0x0 here;

    4. Since this is the first start and CBLK_INVALID is not set, mAudioRecord->start() is called;

    5. If start() fails with DEAD_OBJECT, CBLK_INVALID is set and restoreRecord_l() is called to re-establish the input stream channel; that function was analyzed in the previous article;

    6. Calls the resume() function of the AudioRecordThread.
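About the position reset in step 1: setEpoch(getEpoch() - getPosition()) looks odd until you notice that the client-visible position is the server's raw frame counter plus an epoch offset. Here is a toy model of that arithmetic; the real proxy lives in AudioTrackShared.cpp and is more involved:

#include <cassert>
#include <cstdint>

// Toy model of the client proxy's position/epoch bookkeeping.
struct ProxyModel {
    uint32_t mServerFrames = 0;  // frames the server has captured so far
    uint32_t mEpoch = 0;         // client-side offset added to the raw counter

    uint32_t getPosition() const  { return mEpoch + mServerFrames; }
    uint32_t getEpoch() const     { return mEpoch; }
    void     setEpoch(uint32_t e) { mEpoch = e; }
};

int main() {
    ProxyModel p;
    p.mServerFrames = 4800;  // pretend frames were captured before start()

    // The reset done in AudioRecord::start(); unsigned wrap-around is intended:
    p.setEpoch(p.getEpoch() - p.getPosition());

    assert(p.getPosition() == 0);  // the client now sees position 0
    return 0;
}

From this point on, whatever the server captures is counted from the new zero.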

Here we will focus on steps 4 and 6.

Let's start with step 4 of AudioRecord.cpp::start(): mAudioRecord->start()

mAudioRecord is of type sp<IAudioRecord>, which means it is the Bp (proxy) end of a Binder interface. We need to find the corresponding BnAudioRecord; the Bn-side definition can be found in AudioFlinger.h.
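Before jumping to the Bn side, here is a simplified sketch of what the Bp end does when start() is called. The START transaction code and the class shape are assumptions on my part; the real marshalling lives in frameworks/av/media/libmedia/IAudioRecord.cpp:

// Sketch only, not the verbatim source: the proxy side of IAudioRecord.
class BpAudioRecord : public BpInterface<IAudioRecord> {
public:
    BpAudioRecord(const sp<IBinder>& impl) : BpInterface<IAudioRecord>(impl) {}

    virtual status_t start(int event, int triggerSession)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IAudioRecord::getInterfaceDescriptor());
        data.writeInt32(event);           // AudioSystem::sync_event_t
        data.writeInt32(triggerSession);
        // On the server side, BnAudioRecord::onTransact() unpacks the parcel
        // and calls the virtual start(), which resolves to RecordHandle::start().
        status_t status = remote()->transact(START, data, &reply);
        if (status == NO_ERROR) {
            status = reply.readInt32();   // the Bn side's return value
        }
        return status;
    }
};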

frameworks/av/services/audioflinger/AudioFlinger.h

    // server side of the client's IAudioRecord
    class RecordHandle : public android::BnAudioRecord {
    public:
        RecordHandle(const sp<RecordThread::RecordTrack>& recordTrack);
        virtual             ~RecordHandle();
        virtual status_t    start(int /*AudioSystem::sync_event_t*/ event, int triggerSession);
        virtual void        stop();
        virtual status_t onTransact(
            uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags);
    private:
        const sp<RecordThread::RecordTrack> mRecordTrack;

        // for use from destructor
        void                stop_nonvirtual();
    };

So next we need to find where the RecordHandle class is implemented; note in passing that besides start() it also exposes a stop() method.

frameworks/av/services/audioflinger/Tracks.cpp

status_t AudioFlinger::RecordHandle::start(int /*AudioSystem::sync_event_t*/ event,
        int triggerSession) {

    return mRecordTrack->start((AudioSystem::sync_event_t)event, triggerSession);
}

AudioFlinger.h declares const sp<RecordThread::RecordTrack> mRecordTrack; RecordTrack is likewise implemented in Tracks.cpp, so let's keep going.

status_t AudioFlinger::RecordThread::RecordTrack::start(AudioSystem::sync_event_t event,
                                                        int triggerSession)
{

    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        RecordThread *recordThread = (RecordThread *)thread.get();
        return recordThread->start(this, event, triggerSession);
    } else {
        return BAD_VALUE;
    }
}

The thread here is the Thread object on which createRecordTrack_l() was called from AudioRecord.cpp::openRecord_l(). Digging a bit deeper: thread->createRecordTrack_l() calls new RecordTrack(this, ...), and RecordTrack inherits from TrackBase, whose constructor is TrackBase(ThreadBase *thread, ...) : RefBase(), mThread(thread), ... {} (this parent class is also implemented in Tracks.cpp). So mThread here is the RecordThread.
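The mThread.promote() call above is the standard libutils weak-pointer idiom: the track holds only a wp<> to its owning thread and must upgrade it to a strong sp<> before calling into it. A minimal sketch of the pattern, assuming libutils RefBase:

#include <utils/Errors.h>
#include <utils/RefBase.h>

using namespace android;

struct ThreadBase : public RefBase { /* ... */ };

// The track stores only a weak reference, so it cannot keep a dead thread alive.
wp<ThreadBase> mThread;

status_t callIntoThread() {
    sp<ThreadBase> thread = mThread.promote();  // try to take a strong reference
    if (thread != 0) {
        // Thread still alive: safe to use, like recordThread->start() above.
        return NO_ERROR;
    }
    // Thread already destroyed: mirror RecordTrack::start() and bail out.
    return BAD_VALUE;
}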

So execution continues into RecordThread's start() method.

frameworks/av/services/audioflinger/Threads.cpp

status_t AudioFlinger::RecordThread::start(RecordThread::RecordTrack* recordTrack,
                                           AudioSystem::sync_event_t event,
                                           int triggerSession)
{
    sp<ThreadBase> strongMe = this;
    status_t status = NO_ERROR;

    if (event == AudioSystem::SYNC_EVENT_NONE) {
        recordTrack->clearSyncStartEvent();
    } else if (event != AudioSystem::SYNC_EVENT_SAME) {
        recordTrack->mSyncStartEvent = mAudioFlinger->createSyncEvent(event,
                                       triggerSession,
                                       recordTrack->sessionId(),
                                       syncStartEventCallback,
                                       recordTrack);
        // Sync event can be cancelled by the trigger session if the track is not in a
        // compatible state in which case we start record immediately
        if (recordTrack->mSyncStartEvent->isCancelled()) {
            recordTrack->clearSyncStartEvent();
        } else {
            // do not wait for the event for more than AudioSystem::kSyncRecordStartTimeOutMs
            recordTrack->mFramesToDrop = -
                    ((AudioSystem::kSyncRecordStartTimeOutMs * recordTrack->mSampleRate) / 1000);
        }
    }

    {
        // This section is a rendezvous between binder thread executing start() and RecordThread
        AutoMutex lock(mLock);
        if (mActiveTracks.indexOf(recordTrack) >= 0) {
            if (recordTrack->mState == TrackBase::PAUSING) {
                ALOGV("active record track PAUSING -> ACTIVE");
                recordTrack->mState = TrackBase::ACTIVE;
            } else {
                ALOGV("active record track state %d", recordTrack->mState);
            }
            return status;
        }

        // TODO consider other ways of handling this, such as changing the state to :STARTING and
        //      adding the track to mActiveTracks after returning from AudioSystem::startInput(),
        //      or using a separate command thread
        recordTrack->mState = TrackBase::STARTING_1;
        mActiveTracks.add(recordTrack);
        mActiveTracksGen++;
        status_t status = NO_ERROR;
        if (recordTrack->isExternalTrack()) {
            mLock.unlock();
            status = AudioSystem::startInput(mId, (audio_session_t)recordTrack->sessionId());
            mLock.lock();
            // FIXME should verify that recordTrack is still in mActiveTracks
            if (status != NO_ERROR) {//0
                mActiveTracks.remove(recordTrack);
                mActiveTracksGen++;
                recordTrack->clearSyncStartEvent();
                ALOGV("RecordThread::start error %d", status);
                return status;
            }
        }
        // Catch up with current buffer indices if thread is already running.
        // This is what makes a new client discard all buffered data.  If the track's mRsmpInFront
        // was initialized to some value closer to the thread's mRsmpInFront, then the track could
        // see previously buffered data before it called start(), but with greater risk of overrun.

        recordTrack->mRsmpInFront = mRsmpInRear;
        recordTrack->mRsmpInUnrel = 0;
        // FIXME why reset?
        if (recordTrack->mResampler != NULL) {
            recordTrack->mResampler->reset();
        }
        recordTrack->mState = TrackBase::STARTING_2;
        // signal thread to start
        mWaitWorkCV.broadcast();
        if (mActiveTracks.indexOf(recordTrack) < 0) {
            ALOGV("Record failed to start");
            status = BAD_VALUE;
            goto startError;
        }
        return status;
    }

startError:
    if (recordTrack->isExternalTrack()) {
        AudioSystem::stopInput(mId, (audio_session_t)recordTrack->sessionId());
    }
    recordTrack->clearSyncStartEvent();
    // FIXME I wonder why we do not reset the state here?
    return status;
}

This function does the following:

    1. Checks the event argument; from AudioRecord.java we can see it is always SYNC_EVENT_NONE, so clearSyncStartEvent() is called here. (For a real sync event, mFramesToDrop bounds how long the track waits for the trigger session);

    2. Checks whether the recordTrack passed in is already in the mActiveTracks collection. This is our first time here, so it is not; if it were, meaning recording had already been started for some reason, the code checks for the PAUSING state, promotes the track to ACTIVE, and returns immediately;

    3. Sets recordTrack's state to STARTING_1, then adds it to mActiveTracks; an indexOf() at this point would now find it;

    4. Checks whether recordTrack is an external track; isExternalTrack() is defined as follows:

            bool        isTimedTrack() const { return (mType == TYPE_TIMED); }
            bool        isOutputTrack() const { return (mType == TYPE_OUTPUT); }
            bool        isPatchTrack() const { return (mType == TYPE_PATCH); }
            bool        isExternalTrack() const { return !isOutputTrack() && !isPatchTrack(); }

Recall that when we created the RecordTrack, the mType passed in was TrackBase::TYPE_DEFAULT, so this recordTrack is an external track;

    5. Since it is an external track, AudioSystem::startInput() is called to start capturing data. The sessionId is the one we met in the previous article. As for mId: in AudioSystem::startInput() its type is audio_io_handle_t. In the previous article this io handle was obtained via AudioSystem::getInputForAttr(), after which checkRecordThread_l(input) returned a RecordThread object. Looking at the class declaration, class RecordThread : public ThreadBase, and then at the ThreadBase parent constructor (implemented in Threads.cpp), we find that input is assigned to mId. In other words, the arguments passed to AudioSystem::startInput() are exactly the input stream we created earlier plus the generated sessionId.

AudioFlinger::ThreadBase::ThreadBase(const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
        audio_devices_t outDevice, audio_devices_t inDevice, type_t type)
    :   Thread(false /*canCallJava*/),
        mType(type),
        mAudioFlinger(audioFlinger),
        // mSampleRate, mFrameCount, mChannelMask, mChannelCount, mFrameSize, mFormat, mBufferSize
        // are set by PlaybackThread::readOutputParameters_l() or
        // RecordThread::readInputParameters_l()
        //FIXME: mStandby should be true here. Is this some kind of hack?
        mStandby(false), mOutDevice(outDevice), mInDevice(inDevice),
        mAudioSource(AUDIO_SOURCE_DEFAULT), mId(id),
        // mName will be set by concrete (non-virtual) subclass
        mDeathRecipient(new PMDeathRecipient(this))
{
}

    6. Catches up the track's buffer indices with the thread: mRsmpInFront is set to the thread's mRsmpInRear, which makes a new client discard all previously buffered data; recording has not actually started yet, so mRsmpInRear is still 0. The resampler, if present, is also reset;

    7. Sets recordTrack's state to STARTING_2, then calls mWaitWorkCV.broadcast() to wake up the waiting thread. Note, and this is a small spoiler: by the time AudioSystem::startInput() returns, AudioFlinger::RecordThread is already running, so the broadcast actually has no effect on RecordThread. Also note that recordTrack->mState is updated to STARTING_2 here, whereas it was STARTING_1 when the track was added to mActiveTracks. This is an interesting detail; let's flag it now and reveal the answer when we analyze RecordThread. A sketch of this rendezvous pattern follows this list;

    8. Finally checks whether recordTrack is still in the mActiveTracks collection; if not, the start failed and stopInput() and friends are needed.
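The STARTING_2 hand-off in step 7 is a classic condition-variable rendezvous. A stripped-down sketch of the pattern with libutils types (illustrative; the real consumer is RecordThread::threadLoop(), which we will analyze later):

#include <utils/Condition.h>
#include <utils/Mutex.h>

using namespace android;

Mutex     mLock;
Condition mWaitWorkCV;
bool      mHasWork = false;  // stand-in for "a track reached STARTING_2"

// Binder thread (RecordThread::start() above): publish state, wake the loop.
void signalStart() {
    AutoMutex _l(mLock);
    mHasWork = true;          // e.g. recordTrack->mState = STARTING_2
    mWaitWorkCV.broadcast();  // a no-op if the loop is already awake and running
}

// Record thread loop (simplified): sleep until there is work.
void waitForWork() {
    AutoMutex _l(mLock);
    while (!mHasWork) {
        mWaitWorkCV.wait(mLock);  // mLock is released while sleeping
    }
}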

Next, let's continue with AudioSystem::startInput().

frameworks/av/media/libmedia/AudioSystem.cpp

status_t AudioSystem::startInput(audio_io_handle_t input,
                                 audio_session_t session)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return PERMISSION_DENIED;
    return aps->startInput(input, session);
}

This in turn calls AudioPolicyService::startInput().

frameworks/av/services/audiopolicy/AudioPolicyInterfaceImpl.cpp

status_t AudioPolicyService::startInput(audio_io_handle_t input,
                                        audio_session_t session)
{
    if (mAudioPolicyManager == NULL) {
        return NO_INIT;
    }
    Mutex::Autolock _l(mLock);

    return mAudioPolicyManager->startInput(input, session);
}

Which forwards once more:

frameworks/av/services/audiopolicy/AudioPolicyManager.cpp

status_t AudioPolicyManager::startInput(audio_io_handle_t input,
                                        audio_session_t session)
{
    ssize_t index = mInputs.indexOfKey(input);
    if (index < 0) {
        ALOGW("startInput() unknown input %d", input);
        return BAD_VALUE;
    }
    sp<AudioInputDescriptor> inputDesc = mInputs.valueAt(index);

    index = inputDesc->mSessions.indexOf(session);
    if (index < 0) {
        ALOGW("startInput() unknown session %d on input %d", session, input);
        return BAD_VALUE;
    }

    // virtual input devices are compatible with other input devices
    if (!isVirtualInputDevice(inputDesc->mDevice)) {
        // for a non-virtual input device, check if there is another (non-virtual) active input
        audio_io_handle_t activeInput = getActiveInput();
        if (activeInput != 0 && activeInput != input) {
            // If the already active input uses AUDIO_SOURCE_HOTWORD then it is closed,
            // otherwise the active input continues and the new input cannot be started.
            sp<AudioInputDescriptor> activeDesc = mInputs.valueFor(activeInput);
            if (activeDesc->mInputSource == AUDIO_SOURCE_HOTWORD) {
                ALOGW("startInput(%d) preempting low-priority input %d", input, activeInput);
			
                stopInput(activeInput, activeDesc->mSessions.itemAt(0));
                releaseInput(activeInput, activeDesc->mSessions.itemAt(0));
            } else {
                ALOGE("startInput(%d) failed: other input %d already started", input, activeInput);
                return INVALID_OPERATION;
            }
        }
    }

    if (inputDesc->mRefCount == 0) {
        if (activeInputsCount() == 0) {
            SoundTrigger::setCaptureState(true);
        }
        setInputDevice(input, getNewInputDevice(input), true /* force */);

        // automatically enable the remote submix output when input is started if not
        // used by a policy mix of type MIX_TYPE_RECORDERS
        // For remote submix (a virtual device), we open only one input per capture request.
        if (audio_is_remote_submix_device(inputDesc->mDevice)) {
			ALOGV("audio_is_remote_submix_device(inputDesc->mDevice)");
            String8 address = String8("");
            if (inputDesc->mPolicyMix == NULL) {
                address = String8("0");
            } else if (inputDesc->mPolicyMix->mMixType == MIX_TYPE_PLAYERS) {
                address = inputDesc->mPolicyMix->mRegistrationId;
            }
            if (address != "") {
                setDeviceConnectionStateInt(AUDIO_DEVICE_OUT_REMOTE_SUBMIX,
                        AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
                        address);
            }
        }
    }

    ALOGV("AudioPolicyManager::startInput() input source = %d", inputDesc->mInputSource);

    inputDesc->mRefCount++;
    return NO_ERROR;
}

This function does the following:

    1. Uses input to locate the entry in the mInputs collection and fetches its inputDesc;

    2. Checks whether the input device is a virtual device; if not, it checks whether another (non-virtual) input is already active. This is our first time here, so there is none. (If there were, an active AUDIO_SOURCE_HOTWORD input would be preempted and closed; otherwise the new input fails with INVALID_OPERATION);

    3. Since this is the first call, activeInputsCount() is 0, so SoundTrigger::setCaptureState(true) is called; this is related to voice recognition, so we won't go into it here;

    4. Then setInputDevice() is called. getNewInputDevice() maps the input back to an audio_devices_t device; it is the same device we obtained in the previous article inside AudioPolicyManager::getInputForAttr() via getDeviceAndMixForInputSource(), namely AUDIO_DEVICE_IN_BUILTIN_MIC, the built-in MIC. setInputDevice() also updates inputDesc->mDevice;

    5. Checks whether the device is a remote-submix device and handles that case accordingly;

    6. Increments inputDesc's mRefCount; a toy model of this reference counting follows this list.
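The mRefCount bookkeeping in steps 3, 4 and 6 is what makes the routing work happen only once per input: only the 0 -> 1 transition routes the device and flips the capture state; later clients on the same input just bump the count. A toy model of that logic, illustrative only:

// Toy model of startInput()'s per-input reference counting.
struct InputDescModel {
    int mRefCount = 0;
};

void startInputModel(InputDescModel& desc, bool firstActiveInput) {
    if (desc.mRefCount == 0) {
        if (firstActiveInput) {
            // SoundTrigger::setCaptureState(true) in the real code
        }
        // setInputDevice(...) -> createAudioPatch(...) in the real code
    }
    desc.mRefCount++;  // stopInput() decrements and tears down when it hits 0
}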

Let's continue with the setInputDevice() function.

status_t AudioPolicyManager::setInputDevice(audio_io_handle_t input,
                                            audio_devices_t device,
                                            bool force,
                                            audio_patch_handle_t *patchHandle)
{
    status_t status = NO_ERROR;

    sp<AudioInputDescriptor> inputDesc = mInputs.valueFor(input);
    if ((device != AUDIO_DEVICE_NONE) && ((device != inputDesc->mDevice) || force)) {
        inputDesc->mDevice = device;

        DeviceVector deviceList = mAvailableInputDevices.getDevicesFromType(device);
        if (!deviceList.isEmpty()) {
            struct audio_patch patch;
            inputDesc->toAudioPortConfig(&patch.sinks[0]);
            // AUDIO_SOURCE_HOTWORD is for internal use only:
            // handled as AUDIO_SOURCE_VOICE_RECOGNITION by the audio HAL
            if (patch.sinks[0].ext.mix.usecase.source == AUDIO_SOURCE_HOTWORD &&
                    !inputDesc->mIsSoundTrigger) {
                patch.sinks[0].ext.mix.usecase.source = AUDIO_SOURCE_VOICE_RECOGNITION;
            }
            patch.num_sinks = 1;
            //only one input device for now
            deviceList.itemAt(0)->toAudioPortConfig(&patch.sources[0]);
            patch.num_sources = 1;
            ssize_t index;
            if (patchHandle && *patchHandle != AUDIO_PATCH_HANDLE_NONE) {
                index = mAudioPatches.indexOfKey(*patchHandle);
            } else {
                index = mAudioPatches.indexOfKey(inputDesc->mPatchHandle);
            }
            sp< AudioPatch> patchDesc;
            audio_patch_handle_t afPatchHandle = AUDIO_PATCH_HANDLE_NONE;
            if (index >= 0) {
                patchDesc = mAudioPatches.valueAt(index);
                afPatchHandle = patchDesc->mAfPatchHandle;
            }

            status_t status = mpClientInterface->createAudioPatch(&patch,
                                                                  &afPatchHandle,
                                                                  0);
            if (status == NO_ERROR) {
                if (index < 0) {
                    patchDesc = new AudioPatch((audio_patch_handle_t)nextUniqueId(),
                                               &patch, mUidCached);
                    addAudioPatch(patchDesc->mHandle, patchDesc);
                } else {
                    patchDesc->mPatch = patch;
                }
                patchDesc->mAfPatchHandle = afPatchHandle;
                patchDesc->mUid = mUidCached;
                if (patchHandle) {
                    *patchHandle = patchDesc->mHandle;
                }
                inputDesc->mPatchHandle = patchDesc->mHandle;
                nextAudioPortGeneration();
                mpClientInterface->onAudioPatchListUpdate();
            }
        }
    }
    return status;
}

This function does the following:

    1. At this point device and inputDesc->mDevice are both already AUDIO_DEVICE_IN_BUILTIN_MIC, but force is true, so we proceed anyway;

    2. Fetches from mAvailableInputDevices all devices of the given type; so far we have only added one device to that collection;

    3. Now let's look at struct audio_patch, defined in system/core/include/system/audio.h. Here the patch's sources and sinks are filled in; note that mId (the audio_io_handle_t) is written into the patch, which also ends up carrying the InputSource, sample_rate, channel_mask, format, hw_module and so on; nearly everything is stored in it (see the sketch after this list):

struct audio_patch {
    audio_patch_handle_t id;            /* patch unique ID */
    unsigned int      num_sources;      /* number of sources in following array */
    struct audio_port_config sources[AUDIO_PATCH_PORTS_MAX];
    unsigned int      num_sinks;        /* number of sinks in following array */
    struct audio_port_config sinks[AUDIO_PATCH_PORTS_MAX];
};

struct audio_port_config {
    audio_port_handle_t      id;           /* port unique ID */
    audio_port_role_t        role;         /* sink or source */
    audio_port_type_t        type;         /* device, mix ... */
    unsigned int             config_mask;  /* e.g AUDIO_PORT_CONFIG_ALL */
    unsigned int             sample_rate;  /* sampling rate in Hz */
    audio_channel_mask_t     channel_mask; /* channel mask if applicable */
    audio_format_t           format;       /* format if applicable */
    struct audio_gain_config gain;         /* gain to apply if applicable */
    union {
        struct audio_port_config_device_ext  device;  /* device specific info */
        struct audio_port_config_mix_ext     mix;     /* mix specific info */
        struct audio_port_config_session_ext session; /* session specific info */
    } ext;
};
struct audio_port_config_device_ext {
    audio_module_handle_t hw_module;                /* module the device is attached to */
    audio_devices_t       type;                     /* device type (e.g AUDIO_DEVICE_OUT_SPEAKER) */
    char                  address[AUDIO_DEVICE_MAX_ADDRESS_LEN]; /* device address. "" if N/A */
};
struct audio_port_config_mix_ext {
    audio_module_handle_t hw_module;    /* module the stream is attached to */
    audio_io_handle_t handle;           /* I/O handle of the input/output stream */
    union {
        //TODO: change use case for output streams: use strategy and mixer attributes
        audio_stream_type_t stream;
        audio_source_t      source;
    } usecase;
};

    4. Calls mpClientInterface->createAudioPatch() to create the audio path;

    5. On success, creates or updates the AudioPatch descriptor, records the AudioFlinger-side patch handle, saves patchDesc->mHandle into inputDesc->mPatchHandle, and notifies listeners through mpClientInterface->onAudioPatchListUpdate().
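To make step 3 concrete, here is a hedged sketch of the patch that setInputDevice() ends up assembling for the built-in mic, following the struct definitions above. The field values are illustrative; the real code fills them through toAudioPortConfig():

#include <cstring>
#include <system/audio.h>

// Illustrative only: a device-to-mix patch for the built-in microphone.
void buildMicPatchSketch(audio_io_handle_t input, audio_module_handle_t module) {
    struct audio_patch patch;
    memset(&patch, 0, sizeof(patch));

    // Source: the physical device the data comes from.
    patch.num_sources = 1;
    patch.sources[0].role = AUDIO_PORT_ROLE_SOURCE;
    patch.sources[0].type = AUDIO_PORT_TYPE_DEVICE;
    patch.sources[0].ext.device.hw_module = module;
    patch.sources[0].ext.device.type = AUDIO_DEVICE_IN_BUILTIN_MIC;

    // Sink: the mix, i.e. the input stream we opened, identified by mId.
    patch.num_sinks = 1;
    patch.sinks[0].role = AUDIO_PORT_ROLE_SINK;
    patch.sinks[0].type = AUDIO_PORT_TYPE_MIX;
    patch.sinks[0].ext.mix.hw_module = module;
    patch.sinks[0].ext.mix.handle = input;  // the audio_io_handle_t (mId)
    patch.sinks[0].ext.mix.usecase.source = AUDIO_SOURCE_MIC;

    // mpClientInterface->createAudioPatch(&patch, &handle, 0) then hands this
    // to AudioFlinger, which asks the HAL to connect the two ports.
}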
