Stagefright is currently the mainstream multimedia framework layer on Android. The following is a fairly comprehensive collection of articles introducing the Stagefright framework, suitable as introductory material for learning Stagefright.
Stagefright framework (1) - The Video Playback flow
Stagefright framework (2) - Interaction with OpenMAX
Stagefright framework (3) - Selecting a Video Decoder
Stagefright framework (4) - The Video Buffer transfer flow
Stagefright framework (5) - Video Rendering
Stagefright framework (6) - The Audio Playback flow
Stagefright framework (7) - Audio/Video synchronization
1. Stagefright framework (1) - The Video Playback flow
On Android, the default multimedia framework used to be OpenCORE. OpenCORE's strength is its cross-platform portability, and it has been validated by many parties, so it is comparatively stable; its drawback is that it is large and complex, and takes considerable effort to maintain. Starting with Android 2.0, Google introduced the architecturally simpler Stagefright, which has been gradually replacing OpenCORE (note 1).
[Figure 1] Stagefright's position in the Android multimedia architecture.
[Figure 2] The modules covered by Stagefright (note 2).
Let us first look at how Stagefright plays back a video file.
Stagefright exists in Android as a shared library (libstagefright.so). One of its modules, AwesomePlayer, is used to play video/audio (note 3). AwesomePlayer provides many APIs that upper-layer applications (Java/JNI) can call; we will use a simple program to illustrate the video playback flow.
In Java, to play a video file we would write:
MediaPlayer mp = new MediaPlayer();
mp.setDataSource(PATH_TO_FILE); ...... (1)
mp.prepare(); ........................ (2)、(3)
mp.start(); .......................... (4)
In Stagefright, we can find the corresponding handling for each step:
(1) Assign the file's absolute path to mUri
status_t AwesomePlayer::setDataSource(const char *uri, ...) {
    return setDataSource_l(uri, ...);
}

status_t AwesomePlayer::setDataSource_l(const char *uri, ...) {
    mUri = uri;
}
(2) Start mQueue, which serves as the event handler
status_t AwesomePlayer::prepare() {
    return prepare_l();
}

status_t AwesomePlayer::prepare_l() {
    prepareAsync_l();
    while (mFlags & PREPARING) {
        mPreparedCondition.wait(mLock);
    }
}

status_t AwesomePlayer::prepareAsync_l() {
    mQueue.start();
    mFlags |= PREPARING;
    mAsyncPrepareEvent = new AwesomeEvent(
        this, &AwesomePlayer::onPrepareAsyncEvent);
    mQueue.postEvent(mAsyncPrepareEvent);
}
(3) onPrepareAsyncEvent is triggered
void AwesomePlayer::onPrepareAsyncEvent() {
    finishSetDataSource_l();
    initVideoDecoder();   // ...... (3.3)
    initAudioDecoder();
}

status_t AwesomePlayer::finishSetDataSource_l() {
    sp<DataSource> dataSource = DataSource::CreateFromURI(mUri.string(), ...);
    sp<MediaExtractor> extractor =
        MediaExtractor::Create(dataSource);   // ..... (3.1)
    return setDataSource_l(extractor);        // ..... (3.2)
}
(3.1) Parse the file specified by mUri and choose the corresponding extractor based on its header
sp<MediaExtractor> MediaExtractor::Create(const sp<DataSource> &source, ...) {
    source->sniff(&tmp, ...);
    mime = tmp.string();

    if (!strcasecmp(mime, MEDIA_MIMETYPE_CONTAINER_MPEG4)) {
        return new MPEG4Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
        return new MP3Extractor(source);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)) {
        return new AMRExtractor(source);
    }
}
(3.2) Use the extractor to split the file into A/V tracks (mVideoTrack/mAudioTrack)
status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);
        CHECK(meta->findCString(kKeyMIMEType, &mime));

        if (!haveVideo && !strncasecmp(mime, "video/", 6)) {
            setVideoSource(extractor->getTrack(i));
            haveVideo = true;
        } else if (!haveAudio && !strncasecmp(mime, "audio/", 6)) {
            setAudioSource(extractor->getTrack(i));
            haveAudio = true;
        }
    }
}

void AwesomePlayer::setVideoSource(sp<MediaSource> source) {
    mVideoTrack = source;
}
(3.3) Choose the video decoder (mVideoSource) based on the encoding type found in mVideoTrack
status_t AwesomePlayer::initVideoDecoder() {
    mVideoSource = OMXCodec::Create(
        mClient.interface(),
        mVideoTrack->getFormat(),
        false,
        mVideoTrack);
}
(4) Post mVideoEvent to mQueue; decoding and playback begin, and mVideoRenderer draws the frames
status_t AwesomePlayer::play() {
    return play_l();
}

status_t AwesomePlayer::play_l() {
    postVideoEvent_l();
}

void AwesomePlayer::postVideoEvent_l(int64_t delayUs) {
    mQueue.postEventWithDelay(mVideoEvent, delayUs);
}

void AwesomePlayer::onVideoEvent() {
    mVideoSource->read(&mVideoBuffer, &options);
    // [Check Timestamp]
    mVideoRenderer->render(mVideoBuffer);
    postVideoEvent_l();
}
(Note 1) Starting with Android 2.3 (Gingerbread), the default multimedia framework is Stagefright.
(Note 2) The Stagefright architecture is still evolving; this series does not cover every module.
(Note 3) Audio playback is handled by AudioPlayer; see "Stagefright framework (6) - The Audio Playback flow".
2. Stagefright framework (2) - Interaction with OpenMAX
Stagefright's encoding/decoding functionality is built on the OpenMAX framework, and what it uses is in fact OpenCORE's OMX implementation. Let us look at how Stagefright and OMX work together.
(1) OMX_Init
OMXClient mClient;

AwesomePlayer::AwesomePlayer() {
    mClient.connect();
}

status_t OMXClient::connect() {
    mOMX = service->getOMX();
}

sp<IOMX> MediaPlayerService::getOMX() {
    mOMX = new OMX;
}

OMX::OMX()
    : mMaster(new OMXMaster) {
}

OMXMaster::OMXMaster() {
    addPlugin(new OMXPVCodecsPlugin);
}

OMXPVCodecsPlugin::OMXPVCodecsPlugin() {
    OMX_MasterInit();
}

OMX_ERRORTYPE OMX_MasterInit() {   // <-- under OpenCORE
    return OMX_Init();
}
(2) OMX_SendCommand
OMXCodec::function_name() {
    mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
}

status_t OMX::sendCommand(node, cmd, param) {
    return findInstance(node)->sendCommand(cmd, param);
}

status_t OMXNodeInstance::sendCommand(cmd, param) {
    OMX_SendCommand(mHandle, cmd, param, NULL);
}
(3) Other commands that act on OMX components
Other commands that act on OMX components follow the same call path as OMX_SendCommand; see the table below. A sketch of one such path follows the table.
OMXCodec | OMX          | OMXNodeInstance
---------+--------------+---------------------------------
         | useBuffer    | useBuffer (OMX_UseBuffer)
         | getParameter | getParameter (OMX_GetParameter)
         | fillBuffer   | fillBuffer (OMX_FillThisBuffer)
         | emptyBuffer  | emptyBuffer (OMX_EmptyThisBuffer)
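As a concrete illustration, here is a minimal sketch of the useBuffer path, following the same delegation pattern as OMX_SendCommand above. The parameter lists are simplified assumptions, not verbatim source:

// Sketch only: parameter lists are simplified assumptions.
status_t OMX::useBuffer(node_id node, OMX_U32 port_index,
                        const sp<IMemory> &params, buffer_id *buffer) {
    // OMX itself is just a dispatcher; the work happens in the node instance.
    return findInstance(node)->useBuffer(port_index, params, buffer);
}

status_t OMXNodeInstance::useBuffer(OMX_U32 port_index,
                                    const sp<IMemory> &params,
                                    OMX::buffer_id *buffer) {
    OMX_BUFFERHEADERTYPE *header;
    // Hand the client-allocated memory over to the OMX component.
    OMX_ERRORTYPE err = OMX_UseBuffer(
        mHandle, &header, port_index, NULL /* pAppPrivate */,
        params->size(), static_cast<OMX_U8 *>(params->pointer()));
    *buffer = header;
    return err == OMX_ErrorNone ? OK : UNKNOWN_ERROR;
}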
(4) Callback Functions
OMX_CALLBACKTYPE OMXNodeInstance::kCallbacks = {
    &OnEvent,            // <-- omx_message::EVENT
    &OnEmptyBufferDone,  // <-- omx_message::EMPTY_BUFFER_DONE
    &OnFillBufferDone    // <-- omx_message::FILL_BUFFER_DONE
};
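These static callbacks are invoked by the OMX component itself. A simplified sketch of how one of them is forwarded back up as an omx_message; the intermediate method names and signatures here are assumptions:

// Sketch: the static callback recovers the node instance from pAppData
// and forwards the event as an omx_message (intermediate signatures are
// assumptions; the message eventually reaches OMXCodec::on_message, see
// part 4).
OMX_ERRORTYPE OMXNodeInstance::OnFillBufferDone(
        OMX_HANDLETYPE /* hComponent */, OMX_PTR pAppData,
        OMX_BUFFERHEADERTYPE *pBuffer) {
    OMXNodeInstance *instance = static_cast<OMXNodeInstance *>(pAppData);

    omx_message msg;
    msg.type = omx_message::FILL_BUFFER_DONE;
    msg.u.extended_buffer_data.buffer = pBuffer;
    instance->owner()->dispatchMessage(msg);   // up to the OMXCodec observer
    return OMX_ErrorNone;
}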
3. Stagefright framework (3) - Selecting a Video Decoder
In "Stagefright framework (1) - The Video Playback flow" we did not go into detail on how Stagefright chooses a suitable video decoder for the media file's type. Let us look at that now.
(1) The video decoder is decided in initVideoDecoder, called from onPrepareAsyncEvent
OMXCodec::Create() returns the video decoder, which is assigned to mVideoSource.
status_t AwesomePlayer::initVideoDecoder() {
    mVideoSource = OMXCodec::Create(
        mClient.interface(),
        mVideoTrack->getFormat(),
        false,
        mVideoTrack);
}

sp<MediaSource> OMXCodec::Create(&omx, &meta, createEncoder, &source, matchComponentName) {
    meta->findCString(kKeyMIMEType, &mime);
    findMatchingCodecs(mime, ..., &matchingCodecs);   // ........ (2)

    for (size_t i = 0; i < matchingCodecs.size(); ++i) {
        componentName = matchingCodecs[i].string();

        softwareCodec = InstantiateSoftwareCodec(componentName, ...);   // ..... (3)
        if (softwareCodec != NULL) return softwareCodec;

        err = omx->allocateNode(componentName, ..., &node);   // ... (4)
        if (err == OK) {
            codec = new OMXCodec(..., componentName, ...);    // ...... (5)
            return codec;
        }
    }
}
(2) Based on mVideoTrack's MIME type, pick the suitable components out of kDecoderInfo
void OMXCodec::findMatchingCodecs(mime, ..., matchingCodecs) {
    for (int index = 0;; ++index) {
        componentName = GetCodec(
            kDecoderInfo,
            sizeof(kDecoderInfo) / sizeof(kDecoderInfo[0]),
            mime,
            index);

        if (componentName == NULL) {
            break;   // no more matches
        }

        matchingCodecs->push(String8(componentName));
    }
}

static const CodecInfo kDecoderInfo[] = {
    ...
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.qcom.video.decoder.mpeg4" },
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "OMX.TI.Video.Decoder" },
    { MEDIA_MIMETYPE_VIDEO_MPEG4, "M4vH263Decoder" },
    ...
};
GetCodec picks out every matching component name from kDecoderInfo according to the MIME type and stores them in matchingCodecs.
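The body of GetCodec is not shown above; here is a minimal sketch of its matching logic, reconstructed under the assumption that index selects the index-th entry whose MIME type matches (the CodecInfo field names are assumptions):

// Sketch of GetCodec's assumed matching logic: return the index-th
// component whose MIME type matches, or NULL when no more matches exist.
static const char *GetCodec(const CodecInfo *info, size_t numInfos,
                            const char *mime, int index) {
    for (size_t i = 0; i < numInfos; ++i) {
        if (!strcasecmp(mime, info[i].mime)) {
            if (index == 0) {
                return info[i].codec;
            }
            --index;
        }
    }
    return NULL;   // findMatchingCodecs stops iterating at this point
}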
(3) Following the order of the components in matchingCodecs, first check whether each one is a software decoder
static sp<MediaSource> InstantiateSoftwareCodec(name, ...) {
    FactoryInfo kFactoryInfo[] = {
        ...
        FACTORY_REF(M4vH263Decoder)
        ...
    };
    for (i = 0; i < sizeof(kFactoryInfo) / sizeof(kFactoryInfo[0]); ++i) {
        if (!strcmp(name, kFactoryInfo[i].name))
            return (*kFactoryInfo[i].CreateFunc)(source);
    }
}
All software decoders are listed in kFactoryInfo; the name passed in is used to map to the appropriate decoder.
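FACTORY_REF is not expanded above; a plausible sketch of the macro pattern behind it (treat the exact macro bodies as assumptions):

// Assumed shape of the factory macros: each entry pairs a component name
// string with a function that constructs the corresponding decoder.
#define FACTORY_CREATE(name)                                             \
    static sp<MediaSource> Make##name(const sp<MediaSource> &source) {   \
        return new name(source);                                         \
    }

#define FACTORY_REF(name) { #name, Make##name },

FACTORY_CREATE(M4vH263Decoder)   // defines MakeM4vH263Decoder(...)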
(4) If the component is not a software decoder, try to allocate the corresponding OMX component
status_t OMX::allocateNode(name, ..., node) {
    mMaster->makeComponentInstance(
        name,
        &OMXNodeInstance::kCallbacks,
        instance,
        handle);
}

OMX_ERRORTYPE OMXMaster::makeComponentInstance(name, ...) {
    plugin->makeComponentInstance(name, ...);
}

OMX_ERRORTYPE OMXPVCodecsPlugin::makeComponentInstance(name, ...) {
    return OMX_MasterGetHandle(..., name, ...);
}

OMX_ERRORTYPE OMX_MasterGetHandle(...) {
    return OMX_GetHandle(...);
}
(5) If the component is an OMX decoder, return it; otherwise move on and check the next component
4. Stagefright framework (4) - The Video Buffer transfer flow
This article describes how Stagefright exchanges buffers with the OMX video decoder.
(1) At the start, OMXCodec, via its read function, sends the undecoded data to the decoder and asks the decoder to send the decoded data back
status_t OMXCodec::read(...) {
    if (mInitialBufferSubmit) {
        mInitialBufferSubmit = false;
        drainInputBuffers();   // <-- OMX_EmptyThisBuffer
        fillOutputBuffers();   // <-- OMX_FillThisBuffer
    }
    ...
}

void OMXCodec::drainInputBuffers() {
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexInput];
    for (i = 0; i < buffers->size(); ++i) {
        drainInputBuffer(&buffers->editItemAt(i));
    }
}

void OMXCodec::drainInputBuffer(BufferInfo *info) {
    mOMX->emptyBuffer(...);
}

void OMXCodec::fillOutputBuffers() {
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
    for (i = 0; i < buffers->size(); ++i) {
        fillOutputBuffer(&buffers->editItemAt(i));
    }
}

void OMXCodec::fillOutputBuffer(BufferInfo *info) {
    mOMX->fillBuffer(...);
}
(2) After the decoder reads the data from its input port, it starts decoding and sends back EmptyBufferDone to notify OMXCodec
void OMXCodec::on_message(const omx_message &msg) {
    switch (msg.type) {
        case omx_message::EMPTY_BUFFER_DONE:
        {
            IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
            drainInputBuffer(&buffers->editItemAt(i));
        }
    }
}
After receiving EMPTY_BUFFER_DONE, OMXCodec sends the next chunk of undecoded data to the decoder.
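What drainInputBuffer actually does between the upstream MediaSource and the OMX component is elided above; a rough sketch of its core, with simplified and assumed parameter lists:

// Sketch: pull one chunk of still-encoded data from the upstream
// MediaSource (mVideoTrack), copy it into the OMX input buffer, and hand
// it to the component. Parameter lists are simplified assumptions.
void OMXCodec::drainInputBuffer(BufferInfo *info) {
    MediaBuffer *srcBuffer;
    mSource->read(&srcBuffer);   // encoded data from the extractor track

    memcpy(info->mData,
           (const uint8_t *)srcBuffer->data() + srcBuffer->range_offset(),
           srcBuffer->range_length());

    mOMX->emptyBuffer(
        mNode, info->mBuffer, 0, srcBuffer->range_length(),
        OMX_BUFFERFLAG_ENDOFFRAME, timestampUs);

    srcBuffer->release();
}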
(3) The decoder puts the decoded data on its output port and sends back FillBufferDone to notify OMXCodec
void OMXCodec::on_message(const omx_message &msg) {
    switch (msg.type) {
        case omx_message::FILL_BUFFER_DONE:
        {
            IOMX::buffer_id buffer = msg.u.extended_buffer_data.buffer;
            fillOutputBuffer(info);
            mFilledBuffers.push_back(i);
            mBufferFilled.signal();
        }
    }
}
After receiving FILL_BUFFER_DONE, OMXCodec puts the decoded data into mFilledBuffers, signals mBufferFilled, and asks the decoder to keep sending data out.
(4) In its later part, the read function waits for the mBufferFilled signal. Once mFilledBuffers has been filled with data, the read function assigns it to the buffer pointer and returns it to AwesomePlayer
status_t OMXCodec::read(MediaBuffer **buffer, ...) {
    ...
    while (mFilledBuffers.empty()) {
        mBufferFilled.wait(mLock);
    }
    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
    info->mMediaBuffer->add_ref();
    *buffer = info->mMediaBuffer;
}
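One piece not shown above is how an output buffer re-enters circulation after AwesomePlayer is done with it. OMXCodec registers itself as the MediaBuffer's observer, so releasing the buffer calls back into the codec; roughly, as a simplified sketch:

// Sketch: when AwesomePlayer calls mVideoBuffer->release(), the
// MediaBuffer notifies its observer (OMXCodec), which hands the buffer
// back to the OMX component so it can carry the next decoded frame.
void OMXCodec::signalBufferReturned(MediaBuffer *buffer) {
    Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];
    for (size_t i = 0; i < buffers->size(); ++i) {
        BufferInfo *info = &buffers->editItemAt(i);
        if (info->mMediaBuffer == buffer) {
            fillOutputBuffer(info);   // OMX_FillThisBuffer again
            return;
        }
    }
}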
5. Stagefright framework (5) - Video Rendering
Besides obtaining decoded data through OMXCodec::read, AwesomePlayer::onVideoEvent must also pass that data (mVideoBuffer) to the video renderer so it can be drawn on the screen.
(1) Before the data in mVideoBuffer can be drawn, mVideoRenderer must be created
void AwesomePlayer::onVideoEvent() {
    ...
    if (mVideoRenderer == NULL) {
        initRenderer_l();
    }
    ...
}

void AwesomePlayer::initRenderer_l() {
    if (!strncmp("OMX.", component, 4)) {
        mVideoRenderer = new AwesomeRemoteRenderer(
            mClient.interface()->createRenderer(
                mISurface, component, ...));   // .......... (2)
    } else {
        mVideoRenderer = new AwesomeLocalRenderer(
            ..., component, mISurface);        // .......... (3)
    }
}
(2) If the video decoder is an OMX component, create an AwesomeRemoteRenderer as mVideoRenderer
As the code in (1) above shows, what an AwesomeRemoteRenderer wraps is really created by OMX::createRenderer. createRenderer first tries to create a hardware renderer -- SharedVideoRenderer (from libstagefrighthw.so); if that fails, it creates a software renderer -- SoftwareRenderer (which draws to the surface).
sp<IOMXRenderer> OMX::createRenderer(...) {
    VideoRenderer *impl = NULL;

    libHandle = dlopen("libstagefrighthw.so", RTLD_NOW);
    if (libHandle) {
        CreateRendererFunc func = dlsym(libHandle, ...);
        impl = (*func)(...);   // <-- Hardware Renderer
    }

    if (!impl) {
        impl = new SoftwareRenderer(...);   // <-- Software Renderer
    }
}
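How AwesomeRemoteRenderer then forwards a decoded frame to this renderer is not shown above; a minimal sketch (treat the details, such as the kKeyBufferID key, as a reconstruction rather than verbatim source):

// Sketch: the remote renderer does not copy pixel data itself; it passes
// the OMX buffer's id across binder to the renderer living in the
// mediaserver process.
void AwesomeRemoteRenderer::render(MediaBuffer *buffer) {
    void *id;
    if (buffer->meta_data()->findPointer(kKeyBufferID, &id)) {
        mTarget->render((IOMX::buffer_id)id);
    }
}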
(3) If the video decoder is a software component, create an AwesomeLocalRenderer as mVideoRenderer
AwesomeLocalRenderer's constructor calls its own init function, which does exactly the same thing as OMX::createRenderer.
void AwesomeLocalRenderer::init(...) {
    mLibHandle = dlopen("libstagefrighthw.so", RTLD_NOW);
    if (mLibHandle) {
        CreateRendererFunc func = dlsym(...);
        mTarget = (*func)(...);   // <-- Hardware Renderer
    }

    if (mTarget == NULL) {
        mTarget = new SoftwareRenderer(...);   // <-- Software Renderer
    }
}
(4) Once mVideoRenderer has been created, the decoded data can be passed to it
void AwesomePlayer::onVideoEvent() {
    if (!mVideoBuffer) {
        mVideoSource->read(&mVideoBuffer, ...);
    }
    // [Check Timestamp]
    if (mVideoRenderer == NULL) {
        initRenderer_l();
    }
    mVideoRenderer->render(mVideoBuffer);   // <-- Render Data
}
6. Stagefright framework (6) - The Audio Playback flow
So far we have focused only on video processing and said nothing about audio. This article begins the audio processing flow.
In Stagefright, the audio side is handled by AudioPlayer, which is created in AwesomePlayer::play_l.
(1) When the upper-layer application requests audio/video playback, AudioPlayer is created and started at the same time
status_t AwesomePlayer::play_l() {
    ...
    mAudioPlayer = new AudioPlayer(mAudioSink, ...);
    mAudioPlayer->start(...);
    ...
}
(2) During startup, AudioPlayer first reads the first chunk of decoded data and opens the audio output
status_t AudioPlayer::start(...) {
    mSource->read(&mFirstBuffer);

    if (mAudioSink.get() != NULL) {
        mAudioSink->open(..., &AudioPlayer::AudioSinkCallback, ...);
        mAudioSink->start();
    } else {
        mAudioTrack = new AudioTrack(..., &AudioPlayer::AudioCallback, ...);
        mAudioTrack->start();
    }
}
Judging from the code of AudioPlayer::start, AudioPlayer does not seem to pass mFirstBuffer to the audio output here.
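In fact, the pre-read buffer is consumed later, by the first callback-driven fillBuffer call; a simplified sketch of that detail (treat it as a reconstruction, not verbatim source):

// Sketch: the buffer pre-read in start() is handed to the very first
// fillBuffer invocation instead of being sent to the audio output directly.
size_t AudioPlayer::fillBuffer(void *data, size_t size) {
    if (mFirstBuffer != NULL) {
        mInputBuffer = mFirstBuffer;   // consume the pre-read buffer first
        mFirstBuffer = NULL;
    }
    ...
}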
(3) While opening the audio output, AudioPlayer registers its callback function with it; from then on, each time the callback is invoked, AudioPlayer reads decoded data from the audio decoder
size_t AudioPlayer::AudioSinkCallback(audioSink, buffer, size, ...) {
    return fillBuffer(buffer, size);
}

void AudioPlayer::AudioCallback(..., info) {
    buffer = info;
    fillBuffer(buffer->raw, buffer->size);
}

size_t AudioPlayer::fillBuffer(data, size) {
    mSource->read(&mInputBuffer, ...);
    memcpy(data, mInputBuffer->data(), ...);
}
The reading of decoded audio data is thus driven by the callback function; how the callback itself is driven by the audio output, however, cannot be seen from the code here. On the other hand, the snippet above shows that once fillBuffer has copied the data (mInputBuffer) into data, the audio output consumes data.
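For completeness, a rough sketch of what drives the callback on the AudioTrack path: AudioTrack runs an internal thread that asks the client for more data whenever there is room in the shared audio buffer. The names below are simplified; treat this as an assumption-level reconstruction rather than the actual AudioTrack source:

// Sketch: AudioTrack's internal thread pulls data from the client by
// invoking the registered callback with a "more data needed" event.
bool AudioTrack::AudioTrackThread::threadLoop() {
    return mReceiver.processAudioBuffer();
}

bool AudioTrack::processAudioBuffer() {
    Buffer audioBuffer;
    obtainBuffer(&audioBuffer, ...);   // wait for free space in the shared
                                       // audio buffer
    mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
                                       // --> AudioPlayer::AudioCallback
                                       //     --> fillBuffer
    releaseBuffer(&audioBuffer);       // hand the filled data to AudioFlinger
    return true;
}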
(4) The audio decoder's workflow is the same as the video decoder's; see "Stagefright framework (4) - The Video Buffer transfer flow"
7. Stagefright framework (7) - Audio/Video synchronization
Having covered the audio and video processing flows, we now turn to the problem of audio/video synchronization. OpenCORE's approach is to set up a master clock that audio and video each use as the reference for their output. In Stagefright, by contrast, audio output is driven by callback functions, while video synchronizes itself against audio's timestamps. The details follow:
(1) When the callback function drives AudioPlayer to read decoded data, AudioPlayer obtains two timestamps -- mPositionTimeMediaUs and mPositionTimeRealUs
size_t AudioPlayer::fillBuffer(data, size) {
    ...
    mSource->read(&mInputBuffer, ...);
    mInputBuffer->meta_data()->findInt64(kKeyTime, &mPositionTimeMediaUs);
    mPositionTimeRealUs =
        ((mNumFramesPlayed + size_done / mFrameSize) * 1000000) / mSampleRate;
    ...
}
mPositionTimeMediaUs is the timestamp carried inside the data itself; mPositionTimeRealUs is the actual time at which this data is played, derived from the frame count and the sample rate. For example, with mSampleRate = 44100 and (mNumFramesPlayed + size_done / mFrameSize) = 22050 frames consumed so far, mPositionTimeRealUs = 22050 * 1000000 / 44100 = 500000 us, i.e. the data being filled now will be heard half a second into playback.
(2) Video in Stagefright then uses the difference between these two timestamps obtained from AudioPlayer as its playback reference
void AwesomePlayer::onVideoEvent() {
    ...
    mVideoSource->read(&mVideoBuffer, ...);
    mVideoBuffer->meta_data()->findInt64(kKeyTime, &timeUs);

    mAudioPlayer->getMediaTimeMapping(&realTimeUs, &mediaTimeUs);
    mTimeSourceDeltaUs = realTimeUs - mediaTimeUs;

    nowUs = ts->getRealTimeUs() - mTimeSourceDeltaUs;
    latenessUs = nowUs - timeUs;
    ...
}
AwesomePlayer obtains realTimeUs (i.e. mPositionTimeRealUs) and mediaTimeUs (i.e. mPositionTimeMediaUs) from AudioPlayer and computes their difference, mTimeSourceDeltaUs. Subtracting that delta from the time source's current real time yields nowUs, the media time the audio is playing right now; latenessUs then measures how far the video frame's timestamp (timeUs) lags behind the audio.
(3) Finally, the video data is scheduled accordingly
void AwesomePlayer::onVideoEvent() {
    ...
    if (latenessUs > 40000) {
        // More than 40 ms late: drop this frame and move on to the next.
        mVideoBuffer->release();
        mVideoBuffer = NULL;
        postVideoEvent_l();
        return;
    }
    if (latenessUs < -10000) {
        // More than 10 ms early: wait 10 ms and check again.
        postVideoEvent_l(10000);
        return;
    }
    // Within tolerance: render the frame.
    mVideoRenderer->render(mVideoBuffer);
    ...
}