If needed, see the earlier chapters in this series:
[1] Android MediaRecorder overall architecture source code analysis
[2] Android MediaRecorder C++ lower-layer architecture: audio/video processing flow and A/V sync source code analysis
We begin with the audio data source, AudioSource's start() method:
status_t AudioSource::start(MetaData *params) {
    // Lock against concurrent access from multiple threads
    Mutex::Autolock autoLock(mLock);
    if (mStarted) {
        return UNKNOWN_ERROR;
    }
    if (mInitCheck != OK) {
        return NO_INIT;
    }
    // Reset state to defaults
    mTrackMaxAmplitude = false;
    mMaxAmplitude = 0;        // maximum amplitude seen so far
    mInitialReadTimeUs = 0;   // time of the first read
    mStartTimeUs = 0;         // start timestamp
    int64_t startTimeUs;      // shared start timestamp used to keep audio and video in sync
    if (params && params->findInt64(kKeyTime, &startTimeUs)) {
        mStartTimeUs = startTimeUs;
    }
    // This is where audio recording actually starts; analyzed below
    status_t err = mRecord->start();
    if (err == OK) {
        mStarted = true;
    } else {
        mRecord.clear();
    }
    return err;
}
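The guard logic at the top of start() (reject a second start, reject an uninitialized source, all under a lock) can be modeled in a few lines of standard C++. This is only a sketch of the pattern; the type and member names here are hypothetical, not the framework's:

```cpp
#include <mutex>

// Hypothetical status codes standing in for Android's status_t values.
enum Status { OK = 0, UNKNOWN_ERROR = -1, NO_INIT = -2 };

// Minimal model of the guard pattern in AudioSource::start().
struct Source {
    std::mutex lock;
    bool started = false;
    Status initCheck = OK;

    Status start() {
        std::lock_guard<std::mutex> guard(lock);  // same role as Mutex::Autolock
        if (started) return UNKNOWN_ERROR;        // already started
        if (initCheck != OK) return NO_INIT;      // construction failed
        started = true;                           // real code also starts mRecord here
        return OK;
    }
};
```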
Next, analyze mRecord->start(). First, where mRecord comes from: it is initialized in the constructor AudioSource::AudioSource():
mRecord = new AudioRecord(
        inputSource, sampleRate, AUDIO_FORMAT_PCM_16_BIT,
        audio_channel_in_mask_from_count(channelCount),
        opPackageName,
        (size_t) (bufCount * frameCount),
        AudioRecordCallbackFunction,
        this,
        frameCount /*notificationFrames*/,
        AUDIO_SESSION_ALLOCATE,
        AudioRecord::TRANSFER_DEFAULT,
        AUDIO_INPUT_FLAG_NONE,
        uid,
        pid);
mInitCheck = mRecord->initCheck();
This creates the AudioRecord instance and stores it, so the next step is AudioRecord's start() method:
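Note the buffer argument above: the constructor receives (size_t)(bufCount * frameCount) frames, and for AUDIO_FORMAT_PCM_16_BIT one frame is one 16-bit sample per channel. A small sketch of that sizing arithmetic (the function names and the example values are illustrative, not the framework's):

```cpp
#include <cstddef>

// One PCM frame = one sample per channel; 16-bit PCM = 2 bytes per sample.
size_t pcm16FrameSize(size_t channelCount) {
    return channelCount * sizeof(short);  // sizeof(short) == 2 on Android
}

// Total buffer size in bytes for the (bufCount * frameCount) frame budget
// passed to the AudioRecord constructor.
size_t bufferBytes(size_t bufCount, size_t frameCount, size_t channelCount) {
    return bufCount * frameCount * pcm16FrameSize(channelCount);
}
```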
status_t AudioRecord::start(AudioSystem::sync_event_t event, audio_session_t triggerSession)
{
    ALOGV("start, sync event %d trigger session %d", event, triggerSession);
    SEEMPLOG_RECORD(71,"");
    AutoMutex lock(mLock);
    if (mActive) {
        return NO_ERROR;
    }
    /**
     * From AudioRecord::getAudioService() we can see that mAudioService is a Bp (binder
     * proxy) object for AudioService, obtained through the Binder mechanism from the
     * system services managed by ServiceManager; it is used for audio interaction with
     * the AudioService server side. The source is:
     *
     * void AudioRecord::getAudioService() {
     *     if (mAudioService == NULL) {
     *         const sp<IServiceManager> sm(defaultServiceManager());
     *         if (sm != NULL) {
     *             const String16 name("audio");
     *             mAudioService = interface_cast<IAudioService>(sm->getService(name));
     *             if (mAudioService == NULL) {
     *                 ALOGE("AudioService is NULL");
     *             } else {
     *                 ALOGI("AudioService is NOT NULL");
     *             }
     *         }
     *         mAudioStateController = new AudioStateController(this);
     *     }
     * }
     *
     * getAudioService() is called from the constructor, so mAudioService is non-null here.
     */
    if (mAudioService != NULL) {
        ALOGI("call onAudioRecordStart when start");
        // Notify the remote AudioService that recording is starting, and hand it the
        // mAudioStateController object so the server side can control this client's
        // audio recording
        mAudioService->onAudioRecordStart(mClientUid, mAudioStateController);
    }
    /**
     * The flush below may discard data already sitting in the buffer.
     * Where mProxy comes from: it is initialized in AudioRecord::openRecord_l():
     *     mProxy = new AudioRecordClientProxy(cblk, buffers, mFrameCount, mFrameSize);
     * i.e. it is the client-side proxy for the server-side AudioRecord.
     * AudioRecord::openRecord_l() obtains the Bp proxy for AudioFlinger, calls
     * audioFlinger->openRecord() to open the record stream, and stores the returned
     * server-side proxy sp<IAudioRecord> in mAudioRecord. Most importantly, it obtains
     * the shared-memory control block (the audio_track_cblk_t structure) used to share
     * audio data between client and server; the data buffer either immediately follows
     * the control block or lives in a separate region, at the server's discretion.
     * openRecord_l() is also executed from the constructor.
     */
    // discard data in buffer
    const uint32_t framesFlushed = mProxy->flush();
    mFramesReadServerOffset -= mFramesRead + framesFlushed;
    mFramesRead = 0;
    mProxy->clearTimestamp();  // timestamp is invalid until next server push

    // reset current position as seen by client to 0
    mProxy->setEpoch(mProxy->getEpoch() - mProxy->getPosition());

    // force refresh of remaining frames by processAudioBuffer() as last
    // read before stop could be partial.
    mRefreshRemaining = true;

    mNewPosition = mProxy->getPosition() + mUpdatePeriod;
    int32_t flags = android_atomic_acquire_load(&mCblk->mFlags);

    // we reactivate markers (mMarkerPosition != 0) as the position is reset to 0.
    // This is legacy behavior. This is not done in stop() to avoid a race condition
    // where the last marker event is issued twice.
    mMarkerReached = false;
    mActive = true;

    status_t status = NO_ERROR;
    if (!(flags & CBLK_INVALID)) {
        ALOGV("mAudioRecord->start()");
        // This starts the server-side AudioRecord inside AudioFlinger; we will not
        // dig into AudioFlinger here
        status = mAudioRecord->start(event, triggerSession);
        if (status == DEAD_OBJECT) {
            flags |= CBLK_INVALID;
        }
    }
    if (flags & CBLK_INVALID) {
        status = restoreRecord_l("start");
    }
    if (status != NO_ERROR) {
        mActive = false;
        ALOGE("start() status %d", status);
    } else {
        sp<AudioRecordThread> t = mAudioRecordThread;
        if (t != 0) {
            // On the normal path this calls mAudioRecordThread's resume(), which
            // allows the thread to run, or lets it exit if that was requested
            t->resume();
        } else {
            mPreviousPriority = getpriority(PRIO_PROCESS, 0);
            get_sched_policy(0, &mPreviousSchedulingGroup);
            androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
        }
    }
    return status;
}
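The epoch reset in the function above, setEpoch(getEpoch() - getPosition()), is what makes the position read as 0 to the client after start() even though the server-side frame counter keeps running. A standalone model of the arithmetic (the struct and field names here are hypothetical):

```cpp
// Model of the client-side position bookkeeping: the proxy reports the raw
// server frame count shifted by a client-chosen epoch offset.
struct ProxyModel {
    long raw = 0;    // frames accumulated on the server side
    long epoch = 0;  // client offset
    long getPosition() const { return raw + epoch; }
    long getEpoch() const { return epoch; }
    void setEpoch(long e) { epoch = e; }
};

// The reset performed in AudioRecord::start():
void resetPosition(ProxyModel &p) {
    p.setEpoch(p.getEpoch() - p.getPosition());  // position now reads 0
}
```

After the reset, subsequent server-side progress is reported relative to the new zero point.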
Next, AudioRecordThread's resume() method. First, where mAudioRecordThread comes from: the AudioRecord constructor executes the following:
if (cbf != NULL) {
    mAudioRecordThread = new AudioRecordThread(*this, threadCanCallJava);
    mAudioRecordThread->run("AudioRecord", ANDROID_PRIORITY_AUDIO);
    // thread begins in paused state, and will not reference us until start()
}
cbf is a callback function pointer; it comes from the following (defined in AudioSource.cpp):
static void AudioRecordCallbackFunction(int event, void *user, void *info) {
    AudioSource *source = (AudioSource *) user;
    switch (event) {
        case AudioRecord::EVENT_MORE_DATA: {
            source->dataCallback(*((AudioRecord::Buffer *) info));
            break;
        }
        case AudioRecord::EVENT_OVERRUN: {
            ALOGW("AudioRecord reported overrun!");
            break;
        }
        default:
            // does nothing
            break;
    }
}
This function is very important: it is the callback through which the client receives the audio data delivered back from the AudioFlinger server side. We will not analyze it further here.
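The dispatch shape of AudioRecordCallbackFunction, an int event plus a void* user cookie that is cast back to the registering object, can be sketched in standard C++. The event values and names below are illustrative stand-ins, not the framework's:

```cpp
// Illustrative event codes, mirroring AudioRecord::EVENT_MORE_DATA etc.
enum { EVENT_MORE_DATA = 0, EVENT_OVERRUN = 1 };

// Stand-in for the object that registered the callback (AudioSource's role).
struct Sink {
    int dataCallbacks = 0;
    int overruns = 0;
};

// Same shape as AudioRecordCallbackFunction: the void* cookie is cast back
// to the object that registered the callback, then the event is dispatched.
void callback(int event, void *user, void * /*info*/) {
    Sink *sink = static_cast<Sink *>(user);
    switch (event) {
        case EVENT_MORE_DATA: sink->dataCallbacks++; break;
        case EVENT_OVERRUN:   sink->overruns++;      break;
        default: break;  // does nothing
    }
}
```

The cookie pattern lets a plain C function pointer route events back into a C++ object without any global state.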
Looking again at the new AudioRecordThread(*this, threadCanCallJava) call: it creates the thread instance and passes in the client-side AudioRecord as the data receiver, so the client can be notified from the new thread.
With that in mind, AudioRecordThread's resume() method is as follows:
void AudioRecord::AudioRecordThread::resume()
{
    AutoMutex _l(mMyLock);
    mIgnoreNextPausedInt = true;
    if (mPaused || mPausedInt) {
        mPaused = false;
        mPausedInt = false;
        /**
         * A very important condition-variable signal: it restarts the thread loop
         * that fetches audio data. The matching wait on this condition is inside
         * threadLoop().
         */
        mMyCond.signal();
    }
}
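The resume() flag logic can be modeled without a real thread: resume() clears both pause flags and signals the condition variable that threadLoop() waits on. A single-threaded sketch using standard C++ (names are hypothetical; in the real code the notify wakes a waiting thread):

```cpp
#include <condition_variable>
#include <mutex>

// Minimal model of AudioRecordThread's pause/resume flags.
struct ThreadModel {
    std::mutex lock;
    std::condition_variable cond;  // what mMyCond.signal() corresponds to
    bool paused = true;            // thread begins in paused state
    bool pausedInt = false;        // internal (timed) pause
    bool ignoreNextPausedInt = false;

    void resume() {
        std::lock_guard<std::mutex> guard(lock);
        ignoreNextPausedInt = true;
        if (paused || pausedInt) {
            paused = false;
            pausedInt = false;
            cond.notify_one();  // wakes the wait() inside threadLoop()
        }
    }
};
```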
Finally, the thread loop function threadLoop():
bool AudioRecord::AudioRecordThread::threadLoop()
{
    {
        AutoMutex _l(mMyLock);
        if (mPaused) {
            // TODO check return value and handle or log
            // wait here until the condition variable is signaled by resume()
            mMyCond.wait(mMyLock);
            // caller will check for exitPending()
            return true;