Source: [Repost] The Android Stagefright Framework
Link: http://blog.chinaunix.net/uid-9838896-id-2976618.html
On Android, the default multimedia framework is OpenCORE. OpenCORE's strengths are its cross-platform portability and the extensive validation it has received, which make it relatively stable; its weakness is that it is large and complex, and costs considerable time to maintain. Starting with Android 2.0, Google introduced the somewhat simpler Stagefright, which has been gradually replacing OpenCORE (Note 1).
[Figure 1] Stagefright's position in the Android multimedia architecture.
[Figure 2] The modules covered by Stagefright (Note 2).

Let us first look at how Stagefright plays a video file. Stagefright exists in Android as a shared library (libstagefright.so), and its module AwesomePlayer can be used to play video/audio (Note 3). AwesomePlayer provides many APIs for the upper-layer application (Java/JNI) to call; we use a simple program to illustrate the video playback flow. In Java, to play a video file we would write:

MediaPlayer mp = new MediaPlayer();
mp.setDataSource(PATH_TO_FILE); ...... (1)
mp.prepare(); ..................... (2), (3)
mp.start(); ....................... (4)

In Stagefright, we can see the corresponding handling for each step:

(1) Assign the file's absolute path to mUri
status_t AwesomePlayer::setDataSource(const char* uri, ...) {
    return setDataSource_l(uri, ...);
}

status_t AwesomePlayer::setDataSource_l(const char* uri, ...) {
    mUri = uri;
}
(2) Start mQueue, which serves as the event handler
status_t AwesomePlayer::prepare() {
    return prepare_l();
}

status_t AwesomePlayer::prepare_l() {
    prepareAsync_l();
    while (mFlags & PREPARING) {
        mPreparedCondition.wait(mLock);
    }
}

status_t AwesomePlayer::prepareAsync_l() {
    mQueue.start();
    mFlags |= PREPARING;
    mAsyncPrepareEvent = new AwesomeEvent(
            this, &AwesomePlayer::onPrepareAsyncEvent);
    mQueue.postEvent(mAsyncPrepareEvent);
}
(3) onPrepareAsyncEvent is triggered
void AwesomePlayer::onPrepareAsyncEvent() {
    finishSetDataSource_l();
    initVideoDecoder(); ...... (3.3)
    initAudioDecoder();
}

status_t AwesomePlayer::finishSetDataSource_l() {
    dataSource = DataSource::CreateFromURI(mUri.string(), ...);
    sp<MediaExtractor> extractor =
            MediaExtractor::Create(dataSource); ..... (3.1)
    return setDataSource_l(extractor); .............. (3.2)
}
(3.1) Parse the file specified by mUri and choose the matching extractor based on its header
sp<MediaExtractor> MediaExtractor::Create(const sp<DataSource> &source, ...) {
    source->sniff(&tmp, ...);
    mime = tmp.string();
    if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4)) {
        return new MPEG4Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
        return new MP3Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)) {
        return new AMRExtractor(source);
    }
}
(3.2) Use the extractor to split the file into its audio and video tracks (mVideoTrack/mAudioTrack)
status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);
        CHECK(meta->findCString(kKeyMIMEType, &mime));
        if (!haveVideo && !strncasecmp(mime, "video/", 6)) {
            setVideoSource(extractor->getTrack(i));
            haveVideo = true;
        } else if (!haveAudio && !strncasecmp(mime, "audio/", 6)) {
            setAudioSource(extractor->getTrack(i));
            haveAudio = true;
        }
    }
}

void AwesomePlayer::setVideoSource(sp<MediaSource> source) {
    mVideoTrack = source;
}
(3.3) Choose the video decoder (mVideoSource) according to the codec type recorded in mVideoTrack
status_t AwesomePlayer::initVideoDecoder() {
    mVideoSource = OMXCodec::Create(
            mClient.interface(),
            mVideoTrack->getFormat(),
            false,
            mVideoTrack);
}
(4)
status_t AwesomePlayer::play() {
    return play_l();
}

status_t AwesomePlayer::play_l() {
    postVideoEvent_l();
}

void AwesomePlayer::postVideoEvent_l(int64_t delayUs) {
    mQueue.postEventWithDelay(mVideoEvent, delayUs);
}

void AwesomePlayer::onVideoEvent() {
    mVideoSource->read(&mVideoBuffer, &options);
    [Check Timestamp]
    mVideoRenderer->render(mVideoBuffer);
    postVideoEvent_l();
}
mVideoEvent is posted to mQueue; decoding and playback begin, and mVideoRenderer draws the decoded frames to the screen.
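The heart of this flow is the event queue: mQueue delivers timed events on its own thread, and onVideoEvent re-posts itself after each frame, which is what keeps decode and render running. Below is a minimal, self-contained sketch of that pattern; it is an illustration of the idea, not the actual TimedEventQueue code, and all names in it are made up:

#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Sketch of a TimedEventQueue-style loop: events carry a fire time, and a
// handler may re-post itself, which is how onVideoEvent keeps playback going.
class EventQueue {
public:
    void start() { mThread = std::thread(&EventQueue::loop, this); }
    void stop() {
        { std::lock_guard<std::mutex> l(mLock); mDone = true; }
        mCond.notify_all();
        mThread.join();
    }
    void postWithDelay(std::function<void()> fire, int64_t delayUs) {
        auto when = std::chrono::steady_clock::now()
                  + std::chrono::microseconds(delayUs);
        std::lock_guard<std::mutex> l(mLock);
        mEvents.push({when, std::move(fire)});
        mCond.notify_all();
    }

private:
    struct Event {
        std::chrono::steady_clock::time_point when;
        std::function<void()> fire;
        bool operator>(const Event &o) const { return when > o.when; }
    };

    void loop() {
        std::unique_lock<std::mutex> l(mLock);
        while (!mDone) {
            if (mEvents.empty()) { mCond.wait(l); continue; }
            auto next = mEvents.top().when;
            // Wake up either at the fire time or when a new event is posted.
            if (mCond.wait_until(l, next) == std::cv_status::timeout) {
                Event ev = mEvents.top();
                mEvents.pop();
                l.unlock();
                ev.fire();   // the handler may call postWithDelay again
                l.lock();
            }
        }
    }

    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> mEvents;
    std::mutex mLock;
    std::condition_variable mCond;
    std::thread mThread;
    bool mDone = false;
};

int main() {
    EventQueue queue;
    queue.start();
    int frames = 0;
    std::function<void()> onVideoEvent = [&] {
        std::printf("frame %d decoded and rendered\n", ++frames);
        if (frames < 3) queue.postWithDelay(onVideoEvent, 10000);  // re-post
    };
    queue.postWithDelay(onVideoEvent, 0);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    queue.stop();
}

AwesomePlayer's mVideoEvent works in the same shape: each firing decodes one frame, renders it, and schedules the next.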
Stagefright framework (2) - Interaction with OpenMAX
Stagefright's codec functionality is built on the OpenMAX framework, and the implementation it uses is OpenCORE's OMX. Let's look at how Stagefright and OMX work together.

(1) OMX_Init
OMXClient mClient;

AwesomePlayer::AwesomePlayer() {
    mClient.connect();
}

status_t OMXClient::connect() {
    mOMX = service->getOMX();
}

sp<IOMX> MediaPlayerService::getOMX() {
    mOMX = new OMX;
}

OMX::OMX()
    : mMaster(new OMXMaster) {
}

OMXMaster::OMXMaster() {
    addPlugin(new OMXPVCodecsPlugin);
}

OMXPVCodecsPlugin::OMXPVCodecsPlugin() {
    OMX_MasterInit();
}

OMX_ERRORTYPE OMX_MasterInit() {   <-- under OpenCORE
    return OMX_Init();
}
(2) OMX_SendCommand
OMXCodec::function_name() {
    mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
}

status_t OMX::sendCommand(node, cmd, param) {
    return findInstance(node)->sendCommand(cmd, param);
}

status_t OMXNodeInstance::sendCommand(cmd, param) {
    OMX_SendCommand(mHandle, cmd, param, NULL);
}
(3) Other commands acting on OMX components

The other commands acting on OMX components follow the same call path as OMX_SendCommand; see the table below:
OMXCodec        OMX             OMXNodeInstance
useBuffer       useBuffer       useBuffer (OMX_UseBuffer)
getParameter    getParameter    getParameter (OMX_GetParameter)
fillBuffer      fillBuffer      fillBuffer (OMX_FillThisBuffer)
emptyBuffer     emptyBuffer     emptyBuffer (OMX_EmptyThisBuffer)
(4) Callback Functions
OMX_CALLBACKTYPE OMXNodeInstance::kCallbacks = {
    &OnEvent,           <--------- omx_message::EVENT
    &OnEmptyBufferDone, <--------- omx_message::EMPTY_BUFFER_DONE
    &OnFillBufferDone   <--------- omx_message::FILL_BUFFER_DONE
};
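These three entries are static trampoline functions: the OMX core only knows plain function pointers plus an opaque pAppData context, so each trampoline casts pAppData back to the owning node instance, which repackages the call as an omx_message for OMXCodec::on_message. Stripped of the OMX IL types, the pattern looks roughly like this (a self-contained illustration with stand-in types, not the AOSP code):

#include <cstdio>

// Stand-in for the omx_message the real code posts to its observer.
struct omx_message {
    enum Type { EVENT, EMPTY_BUFFER_DONE, FILL_BUFFER_DONE } type;
};

struct Observer {                      // plays the role of OMXCodec
    void onMessage(const omx_message &msg) {
        std::printf("got message type %d\n", (int)msg.type);
    }
};

class NodeInstance {                   // plays the role of OMXNodeInstance
public:
    explicit NodeInstance(Observer *obs) : mObserver(obs) {}

    // Static trampoline: the C-style core invokes a bare function pointer
    // and passes back the opaque context (pAppData) it was registered with.
    static void OnEmptyBufferDone(void *pAppData) {
        static_cast<NodeInstance *>(pAppData)->emptyBufferDone();
    }

private:
    void emptyBufferDone() {
        omx_message msg;
        msg.type = omx_message::EMPTY_BUFFER_DONE;
        mObserver->onMessage(msg);     // ends up in OMXCodec::on_message
    }
    Observer *mObserver;
};

int main() {
    Observer codec;
    NodeInstance node(&codec);
    // The core would keep &OnEmptyBufferDone in its callback table and call
    // it when an input buffer has been consumed:
    NodeInstance::OnEmptyBufferDone(&node);
}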
Stagefright framework (3) - Choosing a video decoder
(1) The video decoder is decided in initVideoDecoder, which is called from onPrepareAsyncEvent. OMXCodec::Create() returns the video decoder and assigns it to mVideoSource.
status_t AwesomePlayer::initVideoDecoder() {
    mVideoSource = OMXCodec::Create(
            mClient.interface(),
            mVideoTrack->getFormat(),
            false,
            mVideoTrack);
}

sp<MediaSource> OMXCodec::Create(&omx, &meta, createEncoder, &source, matchComponentName) {
    meta->findCString(kKeyMIMEType, &mime);
    findMatchingCodecs(mime, ..., &matchingCodecs); ........ (2)
    for (size_t i = 0; i < matchingCodecs.size(); ++i) {
        componentName = matchingCodecs[i].string();
        softwareCodec =
                InstantiateSoftwareCodec(componentName, ...); ..... (3)
        if (softwareCodec != NULL) return softwareCodec;
        err = omx->allocateNode(componentName, ..., &node); ... (4)
        if (err == OK) {
            codec = new OMXCodec(..., componentName, ...); ...... (5)
            return codec;
        }
    }
}
(2) Based on mVideoTrack's MIME type, pick the suitable components out of kDecoderInfo
void OMXCodec::findMatchingCodecs(mime, ..., matchingCodecs) {
    for (int index = 0;; ++index) {
        componentName = GetCodec(
                kDecoderInfo,
                sizeof(kDecoderInfo) / sizeof(kDecoderInfo[0]),
                mime,
                index);
        matchingCodecs->push(String8(componentName));
    }
}

static const CodecInfo kDecoderInfo[] = {
    ...
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.qcom.video.decoder.mpeg4" },
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.TI.Video.Decoder" },
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "M4vH263Decoder" },
    ...
};
GetCodec picks out, according to mime, all the matching component names in kDecoderInfo, and they are stored into matchingCodecs.
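GetCodec itself is not quoted above. Roughly, it scans the kDecoderInfo table and returns the index-th entry whose MIME type matches, returning NULL once index runs past the matches, which is what terminates findMatchingCodecs' loop. A self-contained sketch of that lookup (behavior reconstructed from the call site, not the verbatim AOSP function):

#include <cstddef>
#include <cstdio>
#include <strings.h>   // strcasecmp

struct CodecInfo {
    const char *mMime;
    const char *mCodec;
};

// Sketch: return the component name of the index-th entry matching `mime`,
// or NULL when index exceeds the number of matches.
static const char *GetCodec(const CodecInfo *info, size_t numInfo,
                            const char *mime, int index) {
    if (index < 0) return NULL;
    for (size_t i = 0; i < numInfo; ++i) {
        if (!strcasecmp(mime, info[i].mMime)) {
            if (index == 0) return info[i].mCodec;
            --index;
        }
    }
    return NULL;
}

int main() {
    // MEDIA_MIMETYPE_VIDEO_MPEG4 is the string "video/mp4v-es".
    static const CodecInfo kDecoderInfo[] = {
        { "video/mp4v-es", "OMX.qcom.video.decoder.mpeg4" },
        { "video/mp4v-es", "OMX.TI.Video.Decoder" },
        { "video/mp4v-es", "M4vH263Decoder" },
    };
    for (int index = 0;; ++index) {
        const char *name = GetCodec(kDecoderInfo, 3, "video/mp4v-es", index);
        if (name == NULL) break;   // this is what ends the enumeration
        std::printf("candidate %d: %s\n", index, name);
    }
}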
(3) Following the order of the components in matchingCodecs, we first check whether each one is a software decoder

static sp<MediaSource> InstantiateSoftwareCodec(name, ...) {
    FactoryInfo kFactoryInfo[] = {
        ...
        FACTORY_REF(M4vH263Decoder)
        ...
    };
    for (i = 0; i < sizeof(kFactoryInfo) / sizeof(kFactoryInfo[0]); ++i) {
        if (!strcmp(name, kFactoryInfo[i].name))
            return (*kFactoryInfo[i].CreateFunc)(source);
    }
}
All the software decoders are listed in kFactoryInfo; the name passed in is used to map to the matching decoder.

(4) If the component is not a software decoder, try to allocate the corresponding OMX component
status_t OMX::allocateNode(name, ..., node) {
    mMaster->makeComponentInstance(
            name, &OMXNodeInstance::kCallbacks, instance, handle);
}

OMX_ERRORTYPE OMXMaster::makeComponentInstance(name, ...) {
    plugin->makeComponentInstance(name, ...);
}

OMX_ERRORTYPE OMXPVCodecsPlugin::makeComponentInstance(name, ...) {
    return OMX_MasterGetHandle(..., name, ...);
}

OMX_ERRORTYPE OMX_MasterGetHandle(...) {
    return OMX_GetHandle(...);
}
(5) If the component is an OMX decoder, return it; otherwise move on and check the next component
Stagefright framework (4) - Video buffer transfer flow
This article describes how buffers are passed between Stagefright and the OMX video decoder.
(1) At the very start, OMXCodec uses the read function to send undecoded data to the decoder, and asks the decoder to send the decoded data back
status_t OMXCodec::read(...) {
    if (mInitialBufferSubmit) {
        mInitialBufferSubmit = false;
        drainInputBuffers(); <----- OMX_EmptyThisBuffer
        fillOutputBuffers(); <----- OMX_FillThisBuffer
    }
    ...
}

void OMXCodec::drainInputBuffers() {
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
    for (i = 0; i < buffers->size(); ++i) {
        drainInputBuffer(&buffers->editItemAt(i));
    }
}

void OMXCodec::drainInputBuffer(BufferInfo *info) {
    mOMX->emptyBuffer(...);
}

void OMXCodec::fillOutputBuffers() {
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
    for (i = 0; i < buffers->size(); ++i) {
        fillOutputBuffer(&buffers->editItemAt(i));
    }
}

void OMXCodec::fillOutputBuffer(BufferInfo *info) {
    mOMX->fillBuffer(...);
}
(2) After the decoder reads the data from its input port, it starts decoding and returns EmptyBufferDone to notify OMXCodec
void OMXCodec::on_message(const omx_message &msg) {
    switch (msg.type) {
        case omx_message::EMPTY_BUFFER_DONE:
        {
            IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
            drainInputBuffer(&buffers->editItemAt(i));
        }
    }
}
After receiving EMPTY_BUFFER_DONE, OMXCodec sends the next piece of undecoded data to the decoder.

(3) The decoder delivers the decoded data to its output port and returns FillBufferDone to notify OMXCodec
void OMXCodec::on_message(const omx_message &msg) {
    switch (msg.type) {
        case omx_message::FILL_BUFFER_DONE:
        {
            IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
            fillOutputBuffer(info);
            mFilledBuffers.push_back(i);
            mBufferFilled.signal();
        }
    }
}
After receiving FILL_BUFFER_DONE, OMXCodec puts the decoded data into mFilledBuffers, raises the mBufferFilled signal, and asks the decoder to keep delivering data.

(4) The tail end of the read function waits on the mBufferFilled signal. Once mFilledBuffers has data, read assigns it to the buffer pointer and returns it to AwesomePlayer
status_t OMXCodec::read(MediaBuffer **buffer, ...) {
    ...
    while (mFilledBuffers.empty()) {
        mBufferFilled.wait(mLock);
    }
    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
    info->mMediaBuffer->add_ref();
    *buffer = info->mMediaBuffer;
}
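The hand-off between on_message (the callback side) and read (the caller side) is a classic condition-variable pattern: the callback queues a buffer index and signals mBufferFilled, and read blocks until mFilledBuffers is non-empty. A self-contained miniature of that hand-off (illustrative names only, not the AOSP classes):

#include <condition_variable>
#include <cstdio>
#include <list>
#include <mutex>
#include <thread>

std::mutex gLock;
std::condition_variable gBufferFilled;   // plays the role of mBufferFilled
std::list<size_t> gFilledBuffers;        // plays the role of mFilledBuffers

// Callback side: a FILL_BUFFER_DONE arrives, queue the index and signal.
void onFillBufferDone(size_t index) {
    std::lock_guard<std::mutex> l(gLock);
    gFilledBuffers.push_back(index);
    gBufferFilled.notify_one();
}

// Caller side: block until a decoded buffer is available, then take it.
size_t readDecoded() {
    std::unique_lock<std::mutex> l(gLock);
    gBufferFilled.wait(l, [] { return !gFilledBuffers.empty(); });
    size_t index = gFilledBuffers.front();
    gFilledBuffers.pop_front();
    return index;
}

int main() {
    std::thread decoder([] { onFillBufferDone(3); });
    std::printf("got decoded buffer %zu\n", readDecoded());
    decoder.join();
}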
Stagefright framework (5) - Video rendering
Besides fetching decoded data through OMXCodec::read, AwesomePlayer::onVideoEvent must also hand that data (mVideoBuffer) to the video renderer so it can be drawn on the screen.

(1) Before the data in mVideoBuffer can be drawn, mVideoRenderer must first be created
void AwesomePlayer::onVideoEvent() {
    ...
    if (mVideoRenderer == NULL) {
        initRenderer_l();
    }
    ...
}

void AwesomePlayer::initRenderer_l() {
    if (!strncmp("OMX.", component, 4)) {
        mVideoRenderer = new AwesomeRemoteRenderer(
                mClient.interface()->createRenderer(
                        mISurface, component, ...)); .......... (2)
    } else {
        mVideoRenderer = new AwesomeLocalRenderer(
                ..., component, mISurface); ................... (3)
    }
}
(2) If the video decoder is an OMX component, an AwesomeRemoteRenderer is created as mVideoRenderer

As the code in (1) above shows, AwesomeRemoteRenderer is in essence created by OMX::createRenderer. createRenderer first tries to create a hardware renderer -- SharedVideoRenderer (libstagefrighthw.so); if that fails, it creates a software renderer -- SoftwareRenderer (surface).
sp<IOMXRenderer> OMX::createRenderer(...) {
    VideoRenderer *impl = NULL;
    libHandle = dlopen("libstagefrighthw.so", RTLD_NOW);
    if (libHandle) {
        CreateRendererFunc func = dlsym(libHandle, ...);
        impl = (*func)(...); <----------------- Hardware Renderer
    }
    if (!impl) {
        impl = new SoftwareRenderer(...); <---- Software Renderer
    }
}
(3) If the video decoder is a software component, an AwesomeLocalRenderer is created as mVideoRenderer

AwesomeLocalRenderer's constructor calls its own init function, which does exactly the same thing as OMX::createRenderer.
void AwesomeLocalRenderer::init(...) {
    mLibHandle = dlopen("libstagefrighthw.so", RTLD_NOW);
    if (mLibHandle) {
        CreateRendererFunc func = dlsym(...);
        mTarget = (*func)(...); <---------------- Hardware Renderer
    }
    if (mTarget == NULL) {
        mTarget = new SoftwareRenderer(...); <--- Software Renderer
    }
}
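Both renderer paths rest on the same dlopen/dlsym factory idiom: load the vendor library, look up a factory symbol, and fall back to software if either step fails. A self-contained illustration of the idiom follows; the library and symbol names here are placeholders (on a device the library would be libstagefrighthw.so):

#include <cstdio>
#include <dlfcn.h>

// Illustration of the dlopen/dlsym fallback idiom used by both renderers.
// The library and symbol names below are stand-ins, not real vendor names.
typedef void *(*CreateRendererFunc)();

void *loadRenderer() {
    void *target = NULL;
    void *libHandle = dlopen("libvendor_renderer.so", RTLD_NOW);
    if (libHandle != NULL) {
        CreateRendererFunc func = reinterpret_cast<CreateRendererFunc>(
                dlsym(libHandle, "createRenderer"));
        if (func != NULL) {
            target = (*func)();               // hardware renderer path
        }
    }
    if (target == NULL) {
        std::printf("falling back to the software renderer\n");
        // target = new SoftwareRenderer(...);  // software renderer path
    }
    return target;
}

int main() {
    loadRenderer();   // prints the fallback message unless the library exists
}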
(4) Once mVideoRenderer has been created, the decoded data can be handed to it
void AwesomePlayer::onVideoEvent() {
    if (!mVideoBuffer) {
        mVideoSource->read(&mVideoBuffer, ...);
    }
    [Check Timestamp]
    if (mVideoRenderer == NULL) {
        initRenderer_l();
    }
    mVideoRenderer->render(mVideoBuffer); <----- Render Data
}
Stagefright framework (6) - The audio playback flow
So far we have focused only on the video path and said nothing about audio. This article begins the audio processing flow. In Stagefright, audio is handled by AudioPlayer, which is created in AwesomePlayer::play_l.

(1) When the upper-layer application requests audio/video playback, the AudioPlayer is created and started along with everything else
status_t AwesomePlayer::play_l() {
    ...
    mAudioPlayer = new AudioPlayer(mAudioSink, ...);
    mAudioPlayer->start(...);
    ...
}
(2) While starting up, AudioPlayer first reads the first piece of decoded data and opens the audio output
status_t AudioPlayer::start(...) {
    mSource->read(&mFirstBuffer);
    if (mAudioSink.get() != NULL) {
        mAudioSink->open(..., &AudioPlayer::AudioSinkCallback, ...);
        mAudioSink->start();
    } else {
        mAudioTrack = new AudioTrack(..., &AudioPlayer::AudioCallback, ...);
        mAudioTrack->start();
    }
}
Judging from the code of AudioPlayer::start, AudioPlayer does not seem to pass mFirstBuffer to the audio output.

(3) When it opens the audio output, AudioPlayer registers a callback function with it; from then on, whenever the callback is invoked, AudioPlayer reads decoded data from the audio decoder
size_t AudioPlayer::AudioSinkCallback(audioSink, buffer, size, ...) {
    return fillBuffer(buffer, size);
}

void AudioPlayer::AudioCallback(..., info) {
    buffer = info;
    fillBuffer(buffer->raw, buffer->size);
}

size_t AudioPlayer::fillBuffer(data, size) {
    mSource->read(&mInputBuffer, ...);
    memcpy(data, mInputBuffer->data(), ...);
}
So the reading of decoded audio data is driven by the callback function; how the callback itself is driven by the audio output is not apparent from the code here (a conceptual sketch follows below). On the other hand, the fragment above shows that once fillBuffer has copied the data (mInputBuffer) into data, the audio output will presumably consume data.

(4) As for the audio decoder, its working flow is the same as the video decoder's; see "Stagefright framework (4) - Video buffer transfer flow".
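As for the open question above of what drives the callback: conceptually, the audio output owns a thread that invokes the registered callback whenever its hardware buffer has room, pulling more PCM through fillBuffer. The following is a minimal sketch of that pull model; it is an assumption about the mechanism for illustration, not code from AudioTrack or AudioFlinger:

#include <chrono>
#include <cstdio>
#include <cstring>
#include <thread>

// Sketch of the pull model: the audio output's own thread asks for more PCM
// by invoking the callback. This is the piece not visible in AudioPlayer.
typedef size_t (*FillCallback)(void *buffer, size_t size);

size_t fakeFillBuffer(void *buffer, size_t size) {  // stands in for AudioPlayer::fillBuffer
    std::memset(buffer, 0, size);                   // would memcpy decoded PCM here
    return size;
}

void audioOutputThread(FillCallback cb, size_t chunkBytes, int iterations) {
    char buffer[4096];
    for (int i = 0; i < iterations; ++i) {
        size_t got = cb(buffer, chunkBytes);        // pulls data via the callback
        std::printf("consumed %zu bytes of PCM\n", got);
        // Stand-in for "wait until the hardware buffer drains again".
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }
}

int main() {
    std::thread t(audioOutputThread, fakeFillBuffer, size_t(2048), 3);
    t.join();
}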
Stagefright framework (7) - Audio/video synchronization
Having covered the audio and video processing flows, we now look at how audio and video are synchronized. OpenCORE's approach is to set up a master clock that audio and video each use to pace their output. In Stagefright, by contrast, the audio output is driven by the callback function, and video synchronizes itself against audio's timestamps. In detail:

(1) When the callback drives AudioPlayer to read decoded data, AudioPlayer obtains two timestamps -- mPositionTimeMediaUs and mPositionTimeRealUs
size_t AudioPlayer::fillBuffer(data, size) {
    ...
    mSource->read(&mInputBuffer, ...);
    mInputBuffer->meta_data()->findInt64(kKeyTime, &mPositionTimeMediaUs);
    mPositionTimeRealUs =
            ((mNumFramesPlayed + size_done / mFrameSize) * 1000000)
                / mSampleRate;
    ...
}
mPositionTimeMediaUs is the timestamp carried in the data itself; mPositionTimeRealUs is the actual time at which that data is played, derived from the frame count and the sample rate.
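As a concrete check of the mPositionTimeRealUs formula, with hypothetical numbers (44.1 kHz output, 441,000 frames already played, 2,205 more frames completed within the current fillBuffer call):

#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical values, only to work the formula through once.
    int64_t mNumFramesPlayed = 441000;   // frames played before this fill
    int64_t framesThisFill   = 2205;     // size_done / mFrameSize
    int64_t mSampleRate      = 44100;    // Hz

    int64_t mPositionTimeRealUs =
            (mNumFramesPlayed + framesThisFill) * 1000000 / mSampleRate;

    // Prints 10050000: playback is 10.05 s in, regardless of the kKeyTime
    // timestamp (mPositionTimeMediaUs) carried by the buffer itself.
    std::printf("real position: %lld us\n", (long long)mPositionTimeRealUs);
}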
(2) The video side of Stagefright then uses the difference between these two timestamps obtained from AudioPlayer as its playback reference

void AwesomePlayer::onVideoEvent() {
    ...
    mVideoSource->read(&mVideoBuffer, ...);
    mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs);
    mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs);
    mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;
    nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;
    latenessUs = nowUs - timeUs;
    ...
}
AwesomePlayer obtains realTimeUs (i.e. mPositionTimeRealUs) and mediaTimeUs (i.e. mPositionTimeMediaUs) from AudioPlayer and computes their difference, mTimeSourceDeltaUs.

(3) Finally, the video data is scheduled accordingly
void AwesomePlayer::onVideoEvent() {
    ...
    if (latenessUs > 40000) {
        mVideoBuffer->release();
        mVideoBuffer = NULL;
        postVideoEvent_l();
        return;
    }
    if (latenessUs < -10000) {
        postVideoEvent_l(10000);
        return;
    }
    mVideoRenderer->render(mVideoBuffer);
    ...
}
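To make the thresholds concrete: frames more than 40 ms late are dropped, frames more than 10 ms early are re-checked after 10 ms, and everything in between is rendered immediately. Only video adjusts; audio is never touched. A small hypothetical walk-through of the same policy:

#include <cstdint>
#include <cstdio>

// Sketch of the scheduling decision above with the same thresholds;
// the sample lateness values are made up for illustration.
enum Action { DROP, WAIT_10MS, RENDER };

Action schedule(int64_t latenessUs) {
    if (latenessUs > 40000)  return DROP;       // too late: skip the frame
    if (latenessUs < -10000) return WAIT_10MS;  // too early: poll again in 10 ms
    return RENDER;                              // inside the window: draw it
}

int main() {
    const int64_t samples[] = { 50000, -20000, 5000 };
    for (int64_t latenessUs : samples) {
        std::printf("lateness %6lld us -> action %d\n",
                    (long long)latenessUs, (int)schedule(latenessUs));
    }
    // 50 ms late -> DROP, 20 ms early -> WAIT_10MS, 5 ms late -> RENDER.
}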