HLS Overview
HTTP Live Streaming (HLS) is Apple's HTTP-based protocol for live and on-demand streaming, used primarily on iOS. Compared with conventional streaming protocols such as RTMP, RTSP, and MMS, HLS's biggest advantage is that it can automatically switch between renditions of different bitrates according to network conditions: when the network is good it moves up to a higher-bitrate stream, and when the network deteriorates it gradually falls back to a lower-bitrate one. We will walk through this behavior in the code below.
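The switching is driven by a master (variant) playlist that lists the same content at several bitrates. A minimal example in the format defined by the HLS spec (the URIs here are hypothetical):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=416x234,CODECS="avc1.4d400d,mp4a.40.2"
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=899152,RESOLUTION=480x270,CODECS="avc1.4d4015,mp4a.40.5"
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
high/index.m3u8
```

The player measures the bandwidth it is actually achieving and moves to the variant whose BANDWIDTH attribute it can sustain; we will see LiveSession implement exactly this choice below.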
The HLS Architecture
Let's look at the overall structure of an HLS system:
The video to be broadcast is first fed into an encoder, which encodes the audio and video and muxes them into an MPEG-2 transport stream. A segmenter then splits that transport stream into a series of media segments of roughly equal duration; these are typically small files saved with a .ts extension. At the same time an index file pointing at those media files is generated, the familiar .m3u8 playlist. Once segmentation is done, the index file and the media files are uploaded to a web server. The client reads the index file and then requests the listed media files in order; each download is a .ts file that must be demuxed to obtain the media data, which is decoded and played. During a live broadcast the server keeps packaging the newest data into new small files and uploading them, so as long as the client keeps downloading and playing the files from the server in order, the overall effect is a live stream. And because each segment is short, the client can switch to a source of a different bitrate based on the actual available bandwidth, which is how multi-bitrate adaptation is achieved.
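A minimal sketch of the client side of that loop, assuming two hypothetical helpers http_get() and play_segment() (a real player pipelines the downloads, demuxing, and decoding instead of running them serially):

```cpp
#include <sstream>
#include <string>

// Hypothetical helpers, stand-ins for an HTTP stack and a demux/decode path.
std::string http_get(const std::string &url);
void play_segment(const std::string &tsBytes);

// Download a media playlist once and play its segments in order.
void playOnce(const std::string &playlistUrl) {
    std::istringstream playlist(http_get(playlistUrl));
    std::string line;
    while (std::getline(playlist, line)) {
        if (line.empty() || line[0] == '#') {
            continue;  // '#' lines are tags (#EXTINF etc.), not segment URIs
        }
        // Every non-tag line names a segment (possibly relative to the playlist URL).
        play_segment(http_get(line));
    }
    // For live content the client re-fetches the playlist periodically,
    // roughly once per target duration, and plays newly appended segments.
}
```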
M3U8 Tags
For a tag-by-tag introduction, see this blog post:
http://blog.csdn.net/jwzhangjie/article/details/9744027
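In case that link goes away, the tags that matter for this article are, per the HLS spec:

- #EXTM3U: marks an extended M3U playlist; must be the first line.
- #EXT-X-STREAM-INF: describes one variant stream (BANDWIDTH, RESOLUTION, CODECS); the next line is that variant's playlist URI.
- #EXT-X-TARGETDURATION: the maximum segment duration, in seconds.
- #EXT-X-MEDIA-SEQUENCE: the sequence number of the first segment listed.
- #EXTINF: the duration of the segment whose URI follows.
- #EXT-X-KEY: the encryption method and key URI for the segments that follow.
- #EXT-X-DISCONTINUITY: marks a format/timestamp discontinuity between segments.
- #EXT-X-ENDLIST: no more segments will be added; a live playlist omits it.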
HLS Playback Flow
- Fetch the master playlist, the file that lists, for each available bandwidth, the resource URI plus the audio/video codec and resolution information, e.g.:
```
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=899152,RESOLUTION=480x270,CODECS="avc1.4d4015,mp4a.40.5"
http://hls.ftdp.com/video1_widld/m3u8/01.m3u8
```
- Initialize the matching decoders based on the information obtained above.
- Fetch the segment index (the media playlist) for the chosen variant, e.g.:
```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:6532
#EXT-X-KEY:METHOD=AES-128,URI="18319965201.key"
#EXTINF:10,
20125484T125708-01-6533.ts
#EXT-X-KEY:METHOD=AES-128,URI="14319965205.key"
#EXTINF:10,
20125484T125708-01-6534.ts
....
#EXTINF:8,
20140804T125708-01-6593.ts
```
- Fetch the key for a given segment, if it is encrypted.
- Request and download a segment.
- Decide, based on the current measured bandwidth, whether to switch to a different variant.
- Decrypt the downloaded segment and feed it to the decoder (a minimal decryption sketch follows this list).
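Segments protected by #EXT-X-KEY:METHOD=AES-128 are encrypted with AES-128 in CBC mode; when the tag carries no IV attribute, the IV is the segment's media sequence number as a 16-byte big-endian value. A minimal sketch using OpenSSL (error handling and PKCS#7 unpadding omitted; in AOSP this logic lives in PlaylistFetcher::decryptBuffer):

```cpp
#include <openssl/aes.h>
#include <stdint.h>
#include <string.h>
#include <vector>

// Decrypt one AES-128-CBC segment in place.
// key: 16 bytes fetched from the URI in #EXT-X-KEY.
// mediaSequence: used as the default IV when the tag has no IV attribute.
void decryptSegment(std::vector<uint8_t> &ts,
                    const uint8_t key[16],
                    uint64_t mediaSequence) {
    uint8_t iv[16];
    memset(iv, 0, sizeof(iv));
    // Big-endian sequence number in the low-order bytes of the IV.
    for (int i = 15; i >= 8; --i) {
        iv[i] = mediaSequence & 0xff;
        mediaSequence >>= 8;
    }
    AES_KEY aesKey;
    AES_set_decrypt_key(key, 128, &aesKey);
    AES_cbc_encrypt(ts.data(), ts.data(), ts.size(), &aesKey, iv, AES_DECRYPT);
    // The PKCS#7 padding on the final block should be stripped afterwards.
}
```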
The creation of NuPlayerDriver and the setDataSource flow are largely the same as for StagefrightPlayer. The difference is that during setDataSource, one of three different Sources is created depending on the URL: HTTPLiveSource, RTSPSource, or GenericSource. We won't spend much space on that here; a diagram tells the story:
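As a rough sketch of that dispatch (simplified from NuPlayer's source-selection logic, not the verbatim code):

```cpp
#include <string>
#include <strings.h>  // strncasecmp

enum class SourceKind { HttpLive, Rtsp, Generic };

// Simplified: an http(s) URL that looks like an .m3u8 playlist gets
// HTTPLiveSource, an rtsp:// URL gets RTSPSource, and everything else
// falls through to GenericSource.
SourceKind pickSource(const std::string &url) {
    if (url.find("m3u8") != std::string::npos) {
        return SourceKind::HttpLive;
    }
    if (strncasecmp(url.c_str(), "rtsp://", 7) == 0) {
        return SourceKind::Rtsp;
    }
    return SourceKind::Generic;
}
```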
Let's start the analysis from prepare, tying it to how HLS works:
```cpp
status_t NuPlayerDriver::prepare() {
    ALOGV("prepare(%p)", this);
    Mutex::Autolock autoLock(mLock);
    return prepare_l();
}

status_t NuPlayerDriver::prepare_l() {
    switch (mState) {
        case STATE_UNPREPARED:
            mState = STATE_PREPARING;

            // Make sure we're not posting any notifications, success or
            // failure information is only communicated through our result
            // code.
            mIsAsyncPrepare = false;
            mPlayer->prepareAsync();

            while (mState == STATE_PREPARING) {
                mCondition.wait(mLock);
            }
            return (mState == STATE_PREPARED) ? OK : UNKNOWN_ERROR;
        case STATE_STOPPED:
            //......
        default:
            return INVALID_OPERATION;
    };
}
```
After setDataSource the state variable mState is STATE_UNPREPARED, so NuPlayerDriver::prepare_l() effectively calls mPlayer->prepareAsync(), i.e. NuPlayer's prepareAsync; prepare() then blocks on mCondition until the asynchronous prepare completes.
```cpp
void NuPlayer::prepareAsync() {
    // Post a kWhatPrepare message.
    (new AMessage(kWhatPrepare, this))->post();
}
```
NuPlayer::prepareAsync only posts a kWhatPrepare message. Finding the corresponding handler, the processing looks like this:
```cpp
void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    // ... other cases omitted ...
    case kWhatPrepare:
    {
        // Call the Source's prepareAsync; here we follow HTTPLiveSource.
        mSource->prepareAsync();
        break;
    }
    // ... other cases omitted ...
}
```
This calls straight into the Source's prepareAsync. mSource was set during setDataSource; since we only care about the HLS case here, we look at HTTPLiveSource::prepareAsync.
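Almost everything from here on follows the same stagefright foundation pattern: an AHandler registered on an ALooper posts itself AMessages and reacts to them in onMessageReceived on the looper thread. A minimal sketch of the pattern using those AOSP classes (simplified; error handling omitted):

```cpp
#include <media/stagefright/foundation/AHandler.h>
#include <media/stagefright/foundation/ALooper.h>
#include <media/stagefright/foundation/AMessage.h>

using namespace android;

struct Worker : public AHandler {
    enum { kWhatPing = 'ping' };

    void start() {
        // post() is asynchronous: delivery happens on the looper thread
        // this handler was registered with.
        (new AMessage(kWhatPing, this))->post();
    }

protected:
    virtual void onMessageReceived(const sp<AMessage> &msg) {
        switch (msg->what()) {
            case kWhatPing:
                // react on the looper thread
                break;
        }
    }
};

// Usage:
//   sp<ALooper> looper = new ALooper;
//   looper->setName("worker");
//   looper->start();
//   sp<Worker> worker = new Worker;
//   looper->registerHandler(worker);
//   worker->start();
```

Keep this pattern in mind: every prepareAsync/connectAsync/startAsync below is just such a post, with the real work in the matching onMessageReceived case.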
```cpp
void NuPlayer::HTTPLiveSource::prepareAsync() {
    // Create and start a looper.
    if (mLiveLooper == NULL) {
        mLiveLooper = new ALooper;
        mLiveLooper->setName("http live");
        mLiveLooper->start();
        mLiveLooper->registerHandler(this);
    }

    // Create a kWhatSessionNotify message and hand it to the LiveSession
    // for notifications.
    sp<AMessage> notify = new AMessage(kWhatSessionNotify, this);

    // Create a LiveSession.
    mLiveSession = new LiveSession(
            notify,
            (mFlags & kFlagIncognito) ? LiveSession::kFlagIncognito : 0,
            mHTTPService);
    mLiveLooper->registerHandler(mLiveSession);

    // Connect asynchronously through the LiveSession.
    mLiveSession->connectAsync(
            mURL.c_str(), mExtraHeaders.isEmpty() ? NULL : &mExtraHeaders);
}
```
```cpp
void LiveSession::connectAsync(
        const char *url, const KeyedVector<String8, String8> *headers) {
    // Post a kWhatConnect message carrying the url.
    sp<AMessage> msg = new AMessage(kWhatConnect, this);
    msg->setString("url", url);
    if (headers != NULL) {
        msg->setPointer("headers", new KeyedVector<String8, String8>(*headers));
    }
    msg->post();
}
```
```cpp
void LiveSession::onMessageReceived(const sp<AMessage> &msg) {
    // ... other cases omitted ...
    case kWhatConnect:
    {
        // Dispatch to onConnect.
        onConnect(msg);
        break;
    }
}
```
```cpp
void LiveSession::onConnect(const sp<AMessage> &msg) {
    // Retrieve the URL that was passed in.
    CHECK(msg->findString("url", &mMasterURL));

    KeyedVector<String8, String8> *headers = NULL;
    if (!msg->findPointer("headers", (void **)&headers)) {
        mExtraHeaders.clear();
    } else {
        mExtraHeaders = *headers;
        delete headers;
        headers = NULL;
    }

    // Create the fetcher looper.
    if (mFetcherLooper == NULL) {
        mFetcherLooper = new ALooper();
        mFetcherLooper->setName("Fetcher");
        mFetcherLooper->start(false, false);
    }

    // Fetch the master playlist: the per-bandwidth resource URIs and
    // audio/video codec information.
    addFetcher(mMasterURL.c_str())->fetchPlaylistAsync();
}
```
Fetching the per-bandwidth resource URIs and the audio/video codec information starts here:
```cpp
sp<PlaylistFetcher> LiveSession::addFetcher(const char *uri) {
    ssize_t index = mFetcherInfos.indexOfKey(uri);

    sp<AMessage> notify = new AMessage(kWhatFetcherNotify, this);
    notify->setString("uri", uri);
    notify->setInt32("switchGeneration", mSwitchGeneration);

    FetcherInfo info;
    // Create a PlaylistFetcher.
    info.mFetcher = new PlaylistFetcher(
            notify, this, uri, mCurBandwidthIndex, mSubtitleGeneration);
    info.mDurationUs = -1ll;
    info.mToBeRemoved = false;
    info.mToBeResumed = false;
    mFetcherLooper->registerHandler(info.mFetcher);
    mFetcherInfos.add(uri, info);

    // info.mFetcher is the PlaylistFetcher created above.
    return info.mFetcher;
}
```
On the PlaylistFetcher returned here we call fetchPlaylistAsync to fetch the playlist:
```cpp
void PlaylistFetcher::fetchPlaylistAsync() {
    (new AMessage(kWhatFetchPlaylist, this))->post();
}

void PlaylistFetcher::onMessageReceived(const sp<AMessage> &msg) {
    // ... other cases omitted ...
    case kWhatFetchPlaylist:
    {
        bool unchanged;
        // Download and parse the playlist into an M3UParser.
        sp<M3UParser> playlist = mHTTPDownloader->fetchPlaylist(
                mURI.c_str(), NULL /* curPlaylistHash */, &unchanged);

        sp<AMessage> notify = mNotify->dup();
        notify->setInt32("what", kWhatPlaylistFetched);
        // Hand the playlist back.
        notify->setObject("playlist", playlist);
        notify->post();
        break;
    }
}
```
Now the download itself: fetchFile pulls the m3u8 playlist from the server into a buffer, and the buffered data is then wrapped in an M3UParser:
```cpp
sp<M3UParser> HTTPDownloader::fetchPlaylist(
        const char *url, uint8_t *curPlaylistHash, bool *unchanged) {
    *unchanged = false;

    sp<ABuffer> buffer;
    String8 actualUrl;
    // Download the playlist.
    ssize_t err = fetchFile(url, &buffer, &actualUrl);

    // Disconnect.
    mHTTPDataSource->disconnect();

    // Wrap the downloaded data in an M3UParser.
    sp<M3UParser> playlist =
        new M3UParser(actualUrl.string(), buffer->data(), buffer->size());

    return playlist;
}
```
```cpp
ssize_t HTTPDownloader::fetchFile(
        const char *url, sp<ABuffer> *out, String8 *actualUrl) {
    ssize_t err = fetchBlock(
            url, out, 0, -1, 0, actualUrl, true /* reconnect */);

    // close off the connection after use
    mHTTPDataSource->disconnect();

    return err;
}
```
Here is the M3UParser constructor:
```cpp
M3UParser::M3UParser(
        const char *baseURI, const void *data, size_t size)
    : mInitCheck(NO_INIT),
      mBaseURI(baseURI),
      mIsExtM3U(false),
      mIsVariantPlaylist(false),
      mIsComplete(false),
      mIsEvent(false),
      mFirstSeqNumber(-1),
      mLastSeqNumber(-1),
      mTargetDurationUs(-1ll),
      mDiscontinuitySeq(0),
      mDiscontinuityCount(0),
      mSelectedIndex(-1) {
    mInitCheck = parse(data, size);
}
```
At the end it calls parse to parse the buffered data:
```cpp
status_t M3UParser::parse(const void *_data, size_t size) {
    int32_t lineNo = 0;
    sp<AMessage> itemMeta;
    const char *data = (const char *)_data;
    size_t offset = 0;
    uint64_t segmentRangeOffset = 0;

    while (offset < size) {
        size_t offsetLF = offset;
        while (offsetLF < size && data[offsetLF] != '\n') {
            ++offsetLF;
        }

        AString line;
        if (offsetLF > offset && data[offsetLF - 1] == '\r') {
            line.setTo(&data[offset], offsetLF - offset - 1);
        } else {
            line.setTo(&data[offset], offsetLF - offset);
        }

        if (line.empty()) {
            offset = offsetLF + 1;
            continue;
        }

        if (lineNo == 0 && line == "#EXTM3U") {
            mIsExtM3U = true;
        }

        if (mIsExtM3U) {
            status_t err = OK;

            if (line.startsWith("#EXT-X-TARGETDURATION")) {
                if (mIsVariantPlaylist) {
                    return ERROR_MALFORMED;
                }
                err = parseMetaData(line, &mMeta, "target-duration");
            } else if (line.startsWith("#EXT-X-MEDIA-SEQUENCE")) {
                if (mIsVariantPlaylist) {
                    return ERROR_MALFORMED;
                }
                err = parseMetaData(line, &mMeta, "media-sequence");
            } else if (line.startsWith("#EXT-X-KEY")) {
                if (mIsVariantPlaylist) {
                    return ERROR_MALFORMED;
                }
                err = parseCipherInfo(line, &itemMeta, mBaseURI);
            } else if (line.startsWith("#EXT-X-ENDLIST")) {
                mIsComplete = true;
            } else if (line.startsWith("#EXT-X-PLAYLIST-TYPE:EVENT")) {
                mIsEvent = true;
            } else if (line.startsWith("#EXTINF")) {
                if (mIsVariantPlaylist) {
                    return ERROR_MALFORMED;
                }
                err = parseMetaDataDuration(line, &itemMeta, "durationUs");
            } else if (line.startsWith("#EXT-X-DISCONTINUITY")) {
                if (mIsVariantPlaylist) {
                    return ERROR_MALFORMED;
                }
                if (itemMeta == NULL) {
                    itemMeta = new AMessage;
                }
                itemMeta->setInt32("discontinuity", true);
                ++mDiscontinuityCount;
            } else if (line.startsWith("#EXT-X-STREAM-INF")) {
                if (mMeta != NULL) {
                    return ERROR_MALFORMED;
                }
                mIsVariantPlaylist = true;
                err = parseStreamInf(line, &itemMeta);
            } else if (line.startsWith("#EXT-X-BYTERANGE")) {
                if (mIsVariantPlaylist) {
                    return ERROR_MALFORMED;
                }
                uint64_t length, offset;
                err = parseByteRange(line, segmentRangeOffset, &length, &offset);
                if (err == OK) {
                    if (itemMeta == NULL) {
                        itemMeta = new AMessage;
                    }
                    itemMeta->setInt64("range-offset", offset);
                    itemMeta->setInt64("range-length", length);
                    segmentRangeOffset = offset + length;
                }
            } else if (line.startsWith("#EXT-X-MEDIA")) {
                err = parseMedia(line);
            } else if (line.startsWith("#EXT-X-DISCONTINUITY-SEQUENCE")) {
                if (mIsVariantPlaylist) {
                    return ERROR_MALFORMED;
                }
                size_t seq;
                err = parseDiscontinuitySequence(line, &seq);
                if (err == OK) {
                    mDiscontinuitySeq = seq;
                }
            }

            if (err != OK) {
                return err;
            }
        }

        if (!line.startsWith("#")) {
            if (!mIsVariantPlaylist) {
                int64_t durationUs;
                if (itemMeta == NULL
                        || !itemMeta->findInt64("durationUs", &durationUs)) {
                    return ERROR_MALFORMED;
                }
                itemMeta->setInt32("discontinuity-sequence",
                        mDiscontinuitySeq + mDiscontinuityCount);
            }

            mItems.push();
            Item *item = &mItems.editItemAt(mItems.size() - 1);
            CHECK(MakeURL(mBaseURI.c_str(), line.c_str(), &item->mURI));
            item->mMeta = itemMeta;

            itemMeta.clear();
        }

        offset = offsetLF + 1;
        ++lineNo;
    }

    if (!mIsVariantPlaylist) {
        int32_t targetDurationSecs;
        if (mMeta == NULL || !mMeta->findInt32(
                "target-duration", &targetDurationSecs)) {
            ALOGE("Media playlist missing #EXT-X-TARGETDURATION");
            return ERROR_MALFORMED;
        }
        mTargetDurationUs = targetDurationSecs * 1000000ll;

        mFirstSeqNumber = 0;
        if (mMeta != NULL) {
            mMeta->findInt32("media-sequence", &mFirstSeqNumber);
        }
        mLastSeqNumber = mFirstSeqNumber + mItems.size() - 1;
    }

    return OK;
}
```
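To make that concrete: fed the media playlist shown earlier, parse() would set mIsExtM3U (the first line is #EXTM3U), mTargetDurationUs to 10 seconds, and mFirstSeqNumber to 6532 from #EXT-X-MEDIA-SEQUENCE; each #EXT-X-KEY's cipher info and each #EXTINF duration land in the following item's meta, and one Item is pushed per .ts URI. Since there is no #EXT-X-ENDLIST, mIsComplete stays false, which is what marks the playlist as live.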
Good: we now have the playlist wrapped in an M3UParser, and a kWhatPlaylistFetched message is posted. Where is it handled? In LiveSession, of course:
```cpp
case PlaylistFetcher::kWhatPlaylistFetched:
{
    onMasterPlaylistFetched(msg);
    break;
}
```
So what do we do once we have the master playlist? Let's look:
```cpp
void LiveSession::onMasterPlaylistFetched(const sp<AMessage> &msg) {
    AString uri;
    CHECK(msg->findString("uri", &uri));
    ssize_t index = mFetcherInfos.indexOfKey(uri);

    // no longer useful, remove
    mFetcherLooper->unregisterHandler(mFetcherInfos[index].mFetcher->id());
    mFetcherInfos.removeItemsAt(index);

    // Take the fetched playlist.
    CHECK(msg->findObject("playlist", (sp<RefBase> *)&mPlaylist));

    // We trust the content provider to make a reasonable choice of preferred
    // initial bandwidth by listing it first in the variant playlist.
    // At startup we really don't have a good estimate on the available
    // network bandwidth since we haven't tranferred any data yet. Once
    // we have we can make a better informed choice.
    size_t initialBandwidth = 0;
    size_t initialBandwidthIndex = 0;
    int32_t maxWidth = 0;
    int32_t maxHeight = 0;

    // Check whether the fetched playlist is a valid variant playlist;
    // assume here that it is.
    if (mPlaylist->isVariantPlaylist()) {
        Vector<BandwidthItem> itemsWithVideo;
        for (size_t i = 0; i < mPlaylist->size(); ++i) {
            BandwidthItem item;
            item.mPlaylistIndex = i;
            item.mLastFailureUs = -1ll;

            sp<AMessage> meta;
            AString uri;
            mPlaylist->itemAt(i, &uri, &meta);

            // Read the declared bandwidth.
            CHECK(meta->findInt32("bandwidth", (int32_t *)&item.mBandwidth));

            // Track the maximum resolution.
            int32_t width, height;
            if (meta->findInt32("width", &width)) {
                maxWidth = max(maxWidth, width);
            }
            if (meta->findInt32("height", &height)) {
                maxHeight = max(maxHeight, height);
            }

            mBandwidthItems.push(item);
            if (mPlaylist->hasType(i, "video")) {
                itemsWithVideo.push(item);
            }
        }

        // Remove the audio-only variants if any variant has video.
        if (!itemsWithVideo.empty()
                && itemsWithVideo.size() < mBandwidthItems.size()) {
            mBandwidthItems.clear();
            for (size_t i = 0; i < itemsWithVideo.size(); ++i) {
                mBandwidthItems.push(itemsWithVideo[i]);
            }
        }

        CHECK_GT(mBandwidthItems.size(), 0u);
        initialBandwidth = mBandwidthItems[0].mBandwidth;

        // Sort the variants by bandwidth.
        mBandwidthItems.sort(SortByBandwidth);

        for (size_t i = 0; i < mBandwidthItems.size(); ++i) {
            if (mBandwidthItems.itemAt(i).mBandwidth == initialBandwidth) {
                initialBandwidthIndex = i;
                break;
            }
        }
    } else {
        //......
    }

    // Remember the maximum resolution.
    mMaxWidth = maxWidth > 0 ? maxWidth : mMaxWidth;
    mMaxHeight = maxHeight > 0 ? maxHeight : mMaxHeight;

    mPlaylist->pickRandomMediaItems();
    changeConfiguration(
            0ll /* timeUs */, initialBandwidthIndex, false /* pickTrack */);
}
```
```cpp
void LiveSession::changeConfiguration(
        int64_t timeUs, ssize_t bandwidthIndex, bool pickTrack) {
    // Cancel any bandwidth switch already in progress.
    cancelBandwidthSwitch();

    mReconfigurationInProgress = true;

    // Switch from mOrigBandwidthIndex to mCurBandwidthIndex.
    if (bandwidthIndex >= 0) {
        // Remember the previous index and adopt the new one.
        mOrigBandwidthIndex = mCurBandwidthIndex;
        mCurBandwidthIndex = bandwidthIndex;
        if (mOrigBandwidthIndex != mCurBandwidthIndex) {
            // The bandwidth switch starts here.
            ALOGI("#### Starting Bandwidth Switch: %zd => %zd",
                    mOrigBandwidthIndex, mCurBandwidthIndex);
        }
    }
    CHECK_LT(mCurBandwidthIndex, mBandwidthItems.size());

    // Look up the current BandwidthItem.
    const BandwidthItem &item = mBandwidthItems.itemAt(mCurBandwidthIndex);

    uint32_t streamMask = 0; // streams that should be fetched by the new fetcher
    uint32_t resumeMask = 0; // streams that should be fetched by the original fetcher

    AString URIs[kMaxStreams];
    for (size_t i = 0; i < kMaxStreams; ++i) {
        if (mPlaylist->getTypeURI(item.mPlaylistIndex, mStreams[i].mType, &URIs[i])) {
            streamMask |= indexToType(i);
        }
    }

    // Stop the fetchers that are no longer needed and pause those we are
    // going to reuse; on the first pass there are none, so this is skipped.
    for (size_t i = 0; i < mFetcherInfos.size(); ++i) {
        //.........................
    }

    sp<AMessage> msg;
    if (timeUs < 0ll) {
        // skip onChangeConfiguration2 (decoder destruction) if not seeking.
        msg = new AMessage(kWhatChangeConfiguration3, this);
    } else {
        msg = new AMessage(kWhatChangeConfiguration2, this);
    }
    msg->setInt32("streamMask", streamMask);
    msg->setInt32("resumeMask", resumeMask);
    msg->setInt32("pickTrack", pickTrack);
    msg->setInt64("timeUs", timeUs);
    for (size_t i = 0; i < kMaxStreams; ++i) {
        if ((streamMask | resumeMask) & indexToType(i)) {
            msg->setString(mStreams[i].uriKey().c_str(), URIs[i].c_str());
        }
    }

    // Every time a fetcher acknowledges the stopAsync or pauseAsync request
    // we'll decrement mContinuationCounter, once it reaches zero, i.e. all
    // fetchers have completed their asynchronous operation, we'll post
    // mContinuation, which then is handled below in onChangeConfiguration2.
    mContinuationCounter = mFetcherInfos.size();
    mContinuation = msg;

    if (mContinuationCounter == 0) {
        msg->post();
    }
}
```
```cpp
void LiveSession::onChangeConfiguration2(const sp<AMessage> &msg) {
    int64_t timeUs;
    CHECK(msg->findInt64("timeUs", &timeUs));

    if (timeUs >= 0) {
        mLastSeekTimeUs = timeUs;
        mLastDequeuedTimeUs = timeUs;

        for (size_t i = 0; i < mPacketSources.size(); i++) {
            sp<AnotherPacketSource> packetSource = mPacketSources.editValueAt(i);
            sp<MetaData> format = packetSource->getFormat();
            packetSource->clear();
            packetSource->setFormat(format);
        }

        for (size_t i = 0; i < kMaxStreams; ++i) {
            mStreams[i].reset();
        }

        mDiscontinuityOffsetTimesUs.clear();
        mDiscontinuityAbsStartTimesUs.clear();

        if (mSeekReplyID != NULL) {
            CHECK(mSeekReply != NULL);
            mSeekReply->setInt32("err", OK);
            mSeekReply->postReply(mSeekReplyID);
            mSeekReplyID.clear();
            mSeekReply.clear();
        }

        restartPollBuffering();
    }

    uint32_t streamMask, resumeMask;
    CHECK(msg->findInt32("streamMask", (int32_t *)&streamMask));
    CHECK(msg->findInt32("resumeMask", (int32_t *)&resumeMask));

    streamMask |= resumeMask;

    AString URIs[kMaxStreams];
    for (size_t i = 0; i < kMaxStreams; ++i) {
        if (streamMask & indexToType(i)) {
            const AString &uriKey = mStreams[i].uriKey();
            CHECK(msg->findString(uriKey.c_str(), &URIs[i]));
            ALOGV("%s = '%s'", uriKey.c_str(), URIs[i].c_str());
        }
    }

    uint32_t changedMask = 0;
    for (size_t i = 0; i < kMaxStreams && i != kSubtitleIndex; ++i) {
        // stream URI could change even if onChangeConfiguration2 is only
        // used for seek. Seek could happen during a bw switch, in this
        // case bw switch will be cancelled, but the seekTo position will
        // fetch from the new URI.
        if ((mStreamMask & streamMask & indexToType(i))
                && !mStreams[i].mUri.empty()
                && !(URIs[i] == mStreams[i].mUri)) {
            ALOGV("stream %zu changed: oldURI %s, newURI %s", i,
                    mStreams[i].mUri.c_str(), URIs[i].c_str());
            sp<AnotherPacketSource> source = mPacketSources.valueFor(indexToType(i));
            if (source->getLatestDequeuedMeta() != NULL) {
                source->queueDiscontinuity(
                        ATSParser::DISCONTINUITY_FORMATCHANGE, NULL, true);
            }
        }
        // Determine which decoders to shutdown on the player side,
        // a decoder has to be shutdown if its streamtype was active
        // before but now longer isn't.
        if ((mStreamMask & ~streamMask & indexToType(i))) {
            changedMask |= indexToType(i);
        }
    }

    // This triggers kWhatStreamsChanged.
    sp<AMessage> notify = mNotify->dup();
    notify->setInt32("what", kWhatStreamsChanged);
    notify->setInt32("changedMask", changedMask);

    // Use kWhatChangeConfiguration3 as the reply message.
    msg->setWhat(kWhatChangeConfiguration3);
    msg->setTarget(this);
    notify->setMessage("reply", msg);
    notify->post();
}
```
```cpp
case LiveSession::kWhatStreamsChanged:
{
    uint32_t changedMask;
    CHECK(msg->findInt32("changedMask", (int32_t *)&changedMask));

    // Work out which streams changed.
    bool audio = changedMask & LiveSession::STREAMTYPE_AUDIO;
    bool video = changedMask & LiveSession::STREAMTYPE_VIDEO;

    sp<AMessage> reply;
    CHECK(msg->findMessage("reply", &reply));

    sp<AMessage> notify = dupNotify();
    notify->setInt32("what", kWhatQueueDecoderShutdown);
    notify->setInt32("audio", audio);
    notify->setInt32("video", video);
    notify->setMessage("reply", reply);
    notify->post();
    break;
}
```
```cpp
case Source::kWhatQueueDecoderShutdown:
{
    int32_t audio, video;
    CHECK(msg->findInt32("audio", &audio));
    CHECK(msg->findInt32("video", &video));

    sp<AMessage> reply;
    CHECK(msg->findMessage("reply", &reply));

    queueDecoderShutdown(audio, video, reply);
    break;
}
```
```cpp
void NuPlayer::queueDecoderShutdown(
        bool audio, bool video, const sp<AMessage> &reply) {
    ALOGI("queueDecoderShutdown audio=%d, video=%d", audio, video);

    mDeferredActions.push_back(
            new FlushDecoderAction(
                audio ? FLUSH_CMD_SHUTDOWN : FLUSH_CMD_NONE,
                video ? FLUSH_CMD_SHUTDOWN : FLUSH_CMD_NONE));

    mDeferredActions.push_back(
            new SimpleAction(&NuPlayer::performScanSources));

    mDeferredActions.push_back(new PostMessageAction(reply));

    processDeferredActions();
}
```
This executes performDecoderFlush:
```cpp
struct NuPlayer::FlushDecoderAction : public Action {
    FlushDecoderAction(FlushCommand audio, FlushCommand video)
        : mAudio(audio),
          mVideo(video) {
    }

    virtual void execute(NuPlayer *player) {
        player->performDecoderFlush(mAudio, mVideo);
    }

private:
    FlushCommand mAudio;
    FlushCommand mVideo;

    DISALLOW_EVIL_CONSTRUCTORS(FlushDecoderAction);
};
```
```cpp
void NuPlayer::performDecoderFlush(FlushCommand audio, FlushCommand video) {
    ALOGV("performDecoderFlush audio=%d, video=%d", audio, video);

    if ((audio == FLUSH_CMD_NONE || mAudioDecoder == NULL)
            && (video == FLUSH_CMD_NONE || mVideoDecoder == NULL)) {
        return;
    }

    if (audio != FLUSH_CMD_NONE && mAudioDecoder != NULL) {
        flushDecoder(true /* audio */, (audio == FLUSH_CMD_SHUTDOWN));
    }

    if (video != FLUSH_CMD_NONE && mVideoDecoder != NULL) {
        flushDecoder(false /* audio */, (video == FLUSH_CMD_SHUTDOWN));
    }
}
```
```cpp
void NuPlayer::flushDecoder(bool audio, bool needShutdown) {
    ALOGV("[%s] flushDecoder needShutdown=%d",
          audio ? "audio" : "video", needShutdown);

    const sp<DecoderBase> &decoder = getDecoder(audio);
    if (decoder == NULL) {
        ALOGI("flushDecoder %s without decoder present",
              audio ? "audio" : "video");
        return;
    }
    //...........
}
```
Next, let's look at decoder initialization:
```cpp
void NuPlayer::postScanSources() {
    if (mScanSourcesPending) {
        return;
    }

    sp<AMessage> msg = new AMessage(kWhatScanSources, this);
    msg->setInt32("generation", mScanSourcesGeneration);
    msg->post();

    mScanSourcesPending = true;
}
```
```cpp
case kWhatScanSources:
{
    int32_t generation;
    mScanSourcesPending = false;

    bool mHadAnySourcesBefore =
        (mAudioDecoder != NULL) || (mVideoDecoder != NULL);

    // initialize video before audio because successful initialization of
    // video may change deep buffer mode of audio.
    if (mSurface != NULL) {
        instantiateDecoder(false, &mVideoDecoder);
    }

    // Don't try to re-open audio sink if there's an existing decoder.
    if (mAudioSink != NULL && mAudioDecoder == NULL) {
        instantiateDecoder(true, &mAudioDecoder);
    }
}
```
```cpp
status_t NuPlayer::instantiateDecoder(bool audio, sp<DecoderBase> *decoder) {
    // Query the format from the Source.
    sp<AMessage> format = mSource->getFormat(audio);
    format->setInt32("priority", 0 /* realtime */);

    if (audio) {
        sp<AMessage> notify = new AMessage(kWhatAudioNotify, this);
        ++mAudioDecoderGeneration;
        notify->setInt32("generation", mAudioDecoderGeneration);

        determineAudioModeChange();
        if (mOffloadAudio) {
            //....................
        } else {
            *decoder = new Decoder(notify, mSource, mPID, mRenderer);
        }
    } else {
        sp<AMessage> notify = new AMessage(kWhatVideoNotify, this);
        ++mVideoDecoderGeneration;
        notify->setInt32("generation", mVideoDecoderGeneration);

        *decoder = new Decoder(
                notify, mSource, mPID, mRenderer, mSurface, mCCDecoder);
        //...........................
    }

    // Initialize the decoder.
    (*decoder)->init();
    // Configure the decoder.
    (*decoder)->configure(format);
    //.........
    return OK;
}
```
Here the decoder is created and initialized.
```cpp
void NuPlayer::DecoderBase::configure(const sp<AMessage> &format) {
    sp<AMessage> msg = new AMessage(kWhatConfigure, this);
    msg->setMessage("format", format);
    msg->post();
}

void NuPlayer::DecoderBase::init() {
    mDecoderLooper->registerHandler(this);
}

void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {
    // Create the MediaCodec.
    mCodec = MediaCodec::CreateByType(
            mCodecLooper, mime.c_str(), false /* encoder */,
            NULL /* err */, mPid);

    // Configure the MediaCodec.
    err = mCodec->configure(
            format, mSurface, NULL /* crypto */, 0 /* flags */);

    // For video, record the width and height.
    if (!mIsAudio) {
        int32_t width, height;
        if (mOutputFormat->findInt32("width", &width)
                && mOutputFormat->findInt32("height", &height)) {
            mStats->setInt32("width", width);
            mStats->setInt32("height", height);
        }
    }

    // Start the MediaCodec.
    err = mCodec->start();
}
```
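The create, configure, start lifecycle NuPlayer drives internally is the same one an application walks through with the public MediaCodec API. For comparison, a minimal sketch using the NDK's AMediaCodec (a hypothetical AVC decoder without an output surface; error handling trimmed):

```cpp
#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>

// Sketch: the lifecycle NuPlayer::Decoder::onConfigure drives,
// seen from the app-facing NDK API.
AMediaCodec *createAvcDecoder(int32_t width, int32_t height) {
    AMediaCodec *codec = AMediaCodec_createDecoderByType("video/avc");

    AMediaFormat *format = AMediaFormat_new();
    AMediaFormat_setString(format, AMEDIAFORMAT_KEY_MIME, "video/avc");
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_WIDTH, width);
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_HEIGHT, height);

    // surface == NULL: decoded frames come back through output buffers.
    AMediaCodec_configure(codec, format, NULL /* surface */,
                          NULL /* crypto */, 0 /* flags */);
    AMediaFormat_delete(format);

    AMediaCodec_start(codec);
    return codec;
}
```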
```cpp
sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const char *mime, bool encoder,
        status_t *err, pid_t pid) {
    sp<MediaCodec> codec = new MediaCodec(looper, pid);

    const status_t ret = codec->init(mime, true /* nameIsType */, encoder);
    return ret == OK ? codec : NULL; // NULL deallocates codec.
}
```
The init call below shows that mCodec is an ACodec: when the name is a MIME type (nameIsType) or starts with "omx.", an ACodec is created:
```cpp
status_t MediaCodec::init(const AString &name, bool nameIsType, bool encoder) {
    mResourceManagerService->init();

    if (nameIsType || !strncasecmp(name.c_str(), "omx.", 4)) {
        // Create the codec from the name/type: the ACodec path.
        mCodec = new ACodec;
    } else if (!nameIsType
            && !strncasecmp(name.c_str(), "android.filter.", 15)) {
        // ... MediaFilter path ...
    } else {
        // ...
    }

    sp<AMessage> msg = new AMessage(kWhatInit, this);
    msg->setString("name", name);
    msg->setInt32("nameIsType", nameIsType);
    if (nameIsType) {
        msg->setInt32("encoder", encoder);
    }

    return err;
}
```
```cpp
case kWhatInit:
{
    //....................
    mCodec->initiateAllocateComponent(format);
    break;
}
```
```cpp
void ACodec::initiateAllocateComponent(const sp<AMessage> &msg) {
    msg->setWhat(kWhatAllocateComponent);
    msg->setTarget(this);
    msg->post();
}
```
```cpp
case ACodec::kWhatAllocateComponent:
{
    onAllocateComponent(msg);
    handled = true;
    break;
}
```
Here the OMX component is instantiated and ACodec's state is updated:
```cpp
bool ACodec::UninitializedState::onAllocateComponent(const sp<AMessage> &msg) {
    Vector<OMXCodec::CodecNameAndQuirks> matchingCodecs;

    AString mime;
    AString componentName;
    uint32_t quirks = 0;
    int32_t encoder = false;

    if (msg->findString("componentName", &componentName)) {
        ssize_t index = matchingCodecs.add();
        OMXCodec::CodecNameAndQuirks *entry = &matchingCodecs.editItemAt(index);
        entry->mName = String8(componentName.c_str());

        if (!OMXCodec::findCodecQuirks(componentName.c_str(), &entry->mQuirks)) {
            entry->mQuirks = 0;
        }
    } else {
        CHECK(msg->findString("mime", &mime));

        if (!msg->findInt32("encoder", &encoder)) {
            encoder = false;
        }

        OMXCodec::findMatchingCodecs(
                mime.c_str(),
                encoder, // createEncoder
                NULL,    // matchComponentName
                0,       // flags
                &matchingCodecs);
    }

    sp<CodecObserver> observer = new CodecObserver;
    IOMX::node_id node = 0;

    status_t err = NAME_NOT_FOUND;
    for (size_t matchIndex = 0; matchIndex < matchingCodecs.size();
            ++matchIndex) {
        componentName = matchingCodecs.itemAt(matchIndex).mName.string();
        quirks = matchingCodecs.itemAt(matchIndex).mQuirks;

        pid_t tid = gettid();
        int prevPriority = androidGetThreadPriority(tid);
        androidSetThreadPriority(tid, ANDROID_PRIORITY_FOREGROUND);
        err = omx->allocateNode(componentName.c_str(), observer, &node);
        androidSetThreadPriority(tid, prevPriority);

        node = 0;
    }

    notify = new AMessage(kWhatOMXMessageList, mCodec);
    observer->setNotificationMessage(notify);

    mCodec->mComponentName = componentName;
    mCodec->mRenderTracker.setComponentName(componentName);
    mCodec->mFlags = 0;
    mCodec->mQuirks = quirks;
    mCodec->mOMX = omx;
    mCodec->mNode = node;

    {
        sp<AMessage> notify = mCodec->mNotify->dup();
        notify->setInt32("what", CodecBase::kWhatComponentAllocated);
        notify->setString("componentName", mCodec->mComponentName.c_str());
        notify->post();
    }

    mCodec->changeState(mCodec->mLoadedState);

    return true;
}
```
Configuring the decoder:
```cpp
status_t MediaCodec::configure(
        const sp<AMessage> &format,
        const sp<Surface> &surface,
        const sp<ICrypto> &crypto,
        uint32_t flags) {
    sp<AMessage> msg = new AMessage(kWhatConfigure, this);

    if (mIsVideo) {
        format->findInt32("width", &mVideoWidth);
        format->findInt32("height", &mVideoHeight);
        if (!format->findInt32("rotation-degrees", &mRotationDegrees)) {
            mRotationDegrees = 0;
        }
    }

    msg->setMessage("format", format);
    msg->setInt32("flags", flags);
    msg->setObject("surface", surface);
    //.....................

    // save msg for reset
    mConfigureMsg = msg;
    //.....................

    for (int i = 0; i <= kMaxRetry; ++i) {
        if (i > 0) {
            // Don't try to reclaim resource for the first time.
            if (!mResourceManagerService->reclaimResource(resources)) {
                break;
            }
        }

        sp<AMessage> response;
        err = PostAndAwaitResponse(msg, &response);
        //.....................
    }
    return err;
}
```
```cpp
case kWhatConfigure:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    sp<RefBase> obj;
    CHECK(msg->findObject("surface", &obj));

    sp<AMessage> format;
    CHECK(msg->findMessage("format", &format));

    int32_t push;
    if (msg->findInt32("push-blank-buffers-on-shutdown", &push) && push != 0) {
        mFlags |= kFlagPushBlankBuffersOnShutdown;
    }

    if (obj != NULL) {
        format->setObject("native-window", obj);
        status_t err = handleSetSurface(static_cast<Surface *>(obj.get()));
        if (err != OK) {
            PostReplyWithError(replyID, err);
            break;
        }
    } else {
        handleSetSurface(NULL);
    }

    mReplyID = replyID;
    setState(CONFIGURING);

    void *crypto;
    uint32_t flags;
    CHECK(msg->findInt32("flags", (int32_t *)&flags));

    if (flags & CONFIGURE_FLAG_ENCODE) {
        format->setInt32("encoder", true);
        mFlags |= kFlagIsEncoder;
    }

    // This is the important call.
    mCodec->initiateConfigureComponent(format);
    break;
}
```
```cpp
void ACodec::initiateConfigureComponent(const sp<AMessage> &msg) {
    msg->setWhat(kWhatConfigureComponent);
    msg->setTarget(this);
    msg->post();
}
```
```cpp
case ACodec::kWhatConfigureComponent:
{
    onConfigureComponent(msg);
    handled = true;
    break;
}
```
```cpp
bool ACodec::LoadedState::onConfigureComponent(
        const sp<AMessage> &msg) {
    ALOGV("onConfigureComponent");

    CHECK(mCodec->mNode != 0);

    status_t err = OK;
    AString mime;
    if (!msg->findString("mime", &mime)) {
        err = BAD_VALUE;
    } else {
        err = mCodec->configureCodec(mime.c_str(), msg);
    }

    {
        sp<AMessage> notify = mCodec->mNotify->dup();
        notify->setInt32("what", CodecBase::kWhatComponentConfigured);
        notify->setMessage("input-format", mCodec->mInputFormat);
        notify->setMessage("output-format", mCodec->mOutputFormat);
        notify->post();
    }

    return true;
}
```
```cpp
case CodecBase::kWhatComponentConfigured:
{
    if (mState == UNINITIALIZED || mState == INITIALIZED) {
        // In case a kWhatError message came in and replied with error,
        // we log a warning and ignore.
        ALOGW("configure interrupted by error, current state %d", mState);
        break;
    }
    CHECK_EQ(mState, CONFIGURING);

    // reset input surface flag
    mHaveInputSurface = false;

    CHECK(msg->findMessage("input-format", &mInputFormat));
    CHECK(msg->findMessage("output-format", &mOutputFormat));

    int32_t usingSwRenderer;
    if (mOutputFormat->findInt32("using-sw-renderer", &usingSwRenderer)
            && usingSwRenderer) {
        mFlags |= kFlagUsesSoftwareRenderer;
    }
    setState(CONFIGURED);
    (new AMessage)->postReply(mReplyID);
    break;
}
```
This is where the really detailed codec configuration happens. It deserves a dedicated write-up of its own; this post sticks to the overall flow:
```cpp
status_t ACodec::configureCodec(
        const char *mime, const sp<AMessage> &msg) {
    int32_t encoder;
    if (!msg->findInt32("encoder", &encoder)) {
        encoder = false;
    }

    sp<AMessage> inputFormat = new AMessage();
    sp<AMessage> outputFormat = mNotify->dup(); // will use this for kWhatOutputFormatChanged

    mIsEncoder = encoder;

    mInputMetadataType = kMetadataBufferTypeInvalid;
    mOutputMetadataType = kMetadataBufferTypeInvalid;

    status_t err = setComponentRole(encoder /* isEncoder */, mime);
    if (err != OK) {
        return err;
    }

    int32_t bitRate = 0;
    // FLAC encoder doesn't need a bitrate, other encoders do
    if (encoder && strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)
            && !msg->findInt32("bitrate", &bitRate)) {
        return INVALID_OPERATION;
    }

    int32_t storeMeta;
    if (encoder
            && msg->findInt32("store-metadata-in-buffers", &storeMeta)
            && storeMeta != 0) {
        err = mOMX->storeMetaDataInBuffers(
                mNode, kPortIndexInput, OMX_TRUE, &mInputMetadataType);
        if (err != OK) {
            ALOGE("[%s] storeMetaDataInBuffers (input) failed w/ err %d",
                    mComponentName.c_str(), err);
            return err;
        }
        // For this specific case we could be using camera source even if storeMetaDataInBuffers
        // returns Gralloc source. Pretend that we are; this will force us to use nBufferSize.
        if (mInputMetadataType == kMetadataBufferTypeGrallocSource) {
            mInputMetadataType = kMetadataBufferTypeCameraSource;
        }

        uint32_t usageBits;
        if (mOMX->getParameter(
                mNode, (OMX_INDEXTYPE)OMX_IndexParamConsumerUsageBits,
                &usageBits, sizeof(usageBits)) == OK) {
            inputFormat->setInt32(
                    "using-sw-read-often",
                    !!(usageBits & GRALLOC_USAGE_SW_READ_OFTEN));
        }
    }

    int32_t prependSPSPPS = 0;
    if (encoder
            && msg->findInt32("prepend-sps-pps-to-idr-frames", &prependSPSPPS)
            && prependSPSPPS != 0) {
        OMX_INDEXTYPE index;
        err = mOMX->getExtensionIndex(
                mNode,
                "OMX.google.android.index.prependSPSPPSToIDRFrames",
                &index);

        if (err == OK) {
            PrependSPSPPSToIDRFramesParams params;
            InitOMXParams(&params);
            params.bEnable = OMX_TRUE;

            err = mOMX->setParameter(mNode, index, &params, sizeof(params));
        }

        if (err != OK) {
            ALOGE("Encoder could not be configured to emit SPS/PPS before "
                  "IDR frames. (err %d)", err);
            return err;
        }
    }

    // Only enable metadata mode on encoder output if encoder can prepend
    // sps/pps to idr frames, since in metadata mode the bitstream is in an
    // opaque handle, to which we don't have access.
    int32_t video = !strncasecmp(mime, "video/", 6);
    mIsVideo = video;
    if (encoder && video) {
        OMX_BOOL enable = (OMX_BOOL) (prependSPSPPS
            && msg->findInt32("store-metadata-in-buffers-output", &storeMeta)
            && storeMeta != 0);

        err = mOMX->storeMetaDataInBuffers(
                mNode, kPortIndexOutput, enable, &mOutputMetadataType);
        if (err != OK) {
            ALOGE("[%s] storeMetaDataInBuffers (output) failed w/ err %d",
                mComponentName.c_str(), err);
        }

        if (!msg->findInt64(
                    "repeat-previous-frame-after", &mRepeatFrameDelayUs)) {
            mRepeatFrameDelayUs = -1ll;
        }

        if (!msg->findInt64("max-pts-gap-to-encoder", &mMaxPtsGapUs)) {
            mMaxPtsGapUs = -1ll;
        }

        if (!msg->findFloat("max-fps-to-encoder", &mMaxFps)) {
            mMaxFps = -1;
        }

        if (!msg->findInt64("time-lapse", &mTimePerCaptureUs)) {
            mTimePerCaptureUs = -1ll;
        }

        if (!msg->findInt32(
                    "create-input-buffers-suspended",
                    (int32_t*)&mCreateInputBuffersSuspended)) {
            mCreateInputBuffersSuspended = false;
        }
    }

    // NOTE: we only use native window for video decoders
    sp<RefBase> obj;
    bool haveNativeWindow = msg->findObject("native-window", &obj)
            && obj != NULL && video && !encoder;
    mLegacyAdaptiveExperiment = false;
    if (video && !encoder) {
        inputFormat->setInt32("adaptive-playback", false);

        int32_t usageProtected;
        if (msg->findInt32("protected", &usageProtected) && usageProtected) {
            if (!haveNativeWindow) {
                ALOGE("protected output buffers must be sent to an ANativeWindow");
                return PERMISSION_DENIED;
            }
            mFlags |= kFlagIsGrallocUsageProtected;
            mFlags |= kFlagPushBlankBuffersToNativeWindowOnShutdown;
        }
    }
    if (haveNativeWindow) {
        sp<ANativeWindow> nativeWindow =
            static_cast<ANativeWindow *>(static_cast<Surface *>(obj.get()));

        // START of temporary support for automatic FRC - THIS WILL BE REMOVED
        int32_t autoFrc;
        if (msg->findInt32("auto-frc", &autoFrc)) {
            bool enabled = autoFrc;
            OMX_CONFIG_BOOLEANTYPE config;
            InitOMXParams(&config);
            config.bEnabled = (OMX_BOOL)enabled;
            status_t temp = mOMX->setConfig(
                    mNode, (OMX_INDEXTYPE)OMX_IndexConfigAutoFramerateConversion,
                    &config, sizeof(config));
            if (temp == OK) {
                outputFormat->setInt32("auto-frc", enabled);
            } else if (enabled) {
                ALOGI("codec does not support requested auto-frc (err %d)", temp);
            }
        }
        // END of temporary support for automatic FRC

        int32_t tunneled;
        if (msg->findInt32("feature-tunneled-playback", &tunneled)
                && tunneled != 0) {
            ALOGI("Configuring TUNNELED video playback.");
            mTunneled = true;

            int32_t audioHwSync = 0;
            if (!msg->findInt32("audio-hw-sync", &audioHwSync)) {
                ALOGW("No Audio HW Sync provided for video tunnel");
            }
            err = configureTunneledVideoPlayback(audioHwSync, nativeWindow);
            if (err != OK) {
                ALOGE("configureTunneledVideoPlayback(%d,%p) failed!",
                        audioHwSync, nativeWindow.get());
                return err;
            }

            int32_t maxWidth = 0, maxHeight = 0;
            if (msg->findInt32("max-width", &maxWidth)
                    && msg->findInt32("max-height", &maxHeight)) {
                err = mOMX->prepareForAdaptivePlayback(
                        mNode, kPortIndexOutput, OMX_TRUE, maxWidth, maxHeight);
                if (err != OK) {
                    ALOGW("[%s] prepareForAdaptivePlayback failed w/ err %d",
                            mComponentName.c_str(), err);
                    // allow failure
                    err = OK;
                } else {
                    inputFormat->setInt32("max-width", maxWidth);
                    inputFormat->setInt32("max-height", maxHeight);
                    inputFormat->setInt32("adaptive-playback", true);
                }
            }
        } else {
            ALOGV("Configuring CPU controlled video playback.");
            mTunneled = false;

            // Explicity reset the sideband handle of the window for
            // non-tunneled video in case the window was previously used
            // for a tunneled video playback.
            err = native_window_set_sideband_stream(nativeWindow.get(), NULL);
            if (err != OK) {
                ALOGE("set_sideband_stream(NULL) failed! (err %d).", err);
                return err;
            }

            // Always try to enable dynamic output buffers on native surface
            err = mOMX->storeMetaDataInBuffers(
                    mNode, kPortIndexOutput, OMX_TRUE, &mOutputMetadataType);
            if (err != OK) {
                ALOGE("[%s] storeMetaDataInBuffers failed w/ err %d",
                        mComponentName.c_str(), err);

                // if adaptive playback has been requested, try JB fallback
                // NOTE: THIS FALLBACK MECHANISM WILL BE REMOVED DUE TO ITS
                // LARGE MEMORY REQUIREMENT

                // we will not do adaptive playback on software accessed
                // surfaces as they never had to respond to changes in the
                // crop window, and we don't trust that they will be able to.
                int usageBits = 0;
                bool canDoAdaptivePlayback;

                if (nativeWindow->query(
                        nativeWindow.get(),
                        NATIVE_WINDOW_CONSUMER_USAGE_BITS,
                        &usageBits) != OK) {
                    canDoAdaptivePlayback = false;
                } else {
                    canDoAdaptivePlayback =
                        (usageBits &
                                (GRALLOC_USAGE_SW_READ_MASK |
                                 GRALLOC_USAGE_SW_WRITE_MASK)) == 0;
                }

                int32_t maxWidth = 0, maxHeight = 0;
                if (canDoAdaptivePlayback
                        && msg->findInt32("max-width", &maxWidth)
                        && msg->findInt32("max-height", &maxHeight)) {
                    ALOGV("[%s] prepareForAdaptivePlayback(%dx%d)",
                            mComponentName.c_str(), maxWidth, maxHeight);

                    err = mOMX->prepareForAdaptivePlayback(
                            mNode, kPortIndexOutput, OMX_TRUE,
                            maxWidth, maxHeight);
                    ALOGW_IF(err != OK,
                            "[%s] prepareForAdaptivePlayback failed w/ err %d",
                            mComponentName.c_str(), err);

                    if (err == OK) {
                        inputFormat->setInt32("max-width", maxWidth);
                        inputFormat->setInt32("max-height", maxHeight);
                        inputFormat->setInt32("adaptive-playback", true);
                    }
                }
                // allow failure
                err = OK;
            } else {
                ALOGV("[%s] storeMetaDataInBuffers succeeded",
                        mComponentName.c_str());
                CHECK(storingMetadataInDecodedBuffers());
                mLegacyAdaptiveExperiment = ADebug::isExperimentEnabled(
                        "legacy-adaptive", !msg->contains("no-experiments"));

                inputFormat->setInt32("adaptive-playback", true);
            }

            int32_t push;
            if (msg->findInt32("push-blank-buffers-on-shutdown", &push)
                    && push != 0) {
                mFlags |= kFlagPushBlankBuffersToNativeWindowOnShutdown;
            }
        }

        int32_t rotationDegrees;
        if (msg->findInt32("rotation-degrees", &rotationDegrees)) {
            mRotationDegrees = rotationDegrees;
        } else {
            mRotationDegrees = 0;
        }
    }

    if (video) {
        // determine need for software renderer
        bool usingSwRenderer = false;
        if (haveNativeWindow && mComponentName.startsWith("OMX.google.")) {
            usingSwRenderer = true;
            haveNativeWindow = false;
        }

        if (encoder) {
            err = setupVideoEncoder(mime, msg);
        } else {
            err = setupVideoDecoder(mime, msg, haveNativeWindow);
        }

        if (err != OK) {
            return err;
        }

        if (haveNativeWindow) {
            mNativeWindow = static_cast<Surface *>(obj.get());
        }

        // initialize native window now to get actual output format
        // TODO: this is needed for some encoders even though they don't use native window
        err = initNativeWindow();
        if (err != OK) {
            return err;
        }

        // fallback for devices that do not handle flex-YUV for native buffers
        if (haveNativeWindow) {
            int32_t requestedColorFormat = OMX_COLOR_FormatUnused;
            if (msg->findInt32("color-format", &requestedColorFormat)
                    && requestedColorFormat == OMX_COLOR_FormatYUV420Flexible) {
                status_t err = getPortFormat(kPortIndexOutput, outputFormat);
                if (err != OK) {
                    return err;
                }
                int32_t colorFormat = OMX_COLOR_FormatUnused;
                OMX_U32 flexibleEquivalent = OMX_COLOR_FormatUnused;
                if (!outputFormat->findInt32("color-format", &colorFormat)) {
                    ALOGE("ouptut port did not have a color format (wrong domain?)");
                    return BAD_VALUE;
                }
                ALOGD("[%s] Requested output format %#x and got %#x.",
                        mComponentName.c_str(), requestedColorFormat, colorFormat);
                if (!isFlexibleColorFormat(
                        mOMX, mNode, colorFormat, haveNativeWindow, &flexibleEquivalent)
                        || flexibleEquivalent != (OMX_U32)requestedColorFormat) {
                    // device did not handle flex-YUV request for native window, fall back
                    // to SW renderer
                    ALOGI("[%s] Falling back to software renderer",
                            mComponentName.c_str());
                    mNativeWindow.clear();
                    mNativeWindowUsageBits = 0;
                    haveNativeWindow = false;
                    usingSwRenderer = true;
                    if (storingMetadataInDecodedBuffers()) {
                        err = mOMX->storeMetaDataInBuffers(
                                mNode, kPortIndexOutput, OMX_FALSE,
                                &mOutputMetadataType);
                        mOutputMetadataType = kMetadataBufferTypeInvalid; // just in case
                        // TODO: implement adaptive-playback support for bytebuffer mode.
                        // This is done by SW codecs, but most HW codecs don't support it.
                        inputFormat->setInt32("adaptive-playback", false);
                    }
                    if (err == OK) {
                        err = mOMX->enableGraphicBuffers(
                                mNode, kPortIndexOutput, OMX_FALSE);
                    }
                    if (mFlags & kFlagIsGrallocUsageProtected) {
                        // fallback is not supported for protected playback
                        err = PERMISSION_DENIED;
                    } else if (err == OK) {
                        err = setupVideoDecoder(mime, msg, false);
                    }
                }
            }
        }

        if (usingSwRenderer) {
            outputFormat->setInt32("using-sw-renderer", 1);
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_MPEG)) {
        int32_t numChannels, sampleRate;
        if (!msg->findInt32("channel-count", &numChannels)
                || !msg->findInt32("sample-rate", &sampleRate)) {
            // Since we did not always check for these, leave them optional
            // and have the decoder figure it all out.
            err = OK;
        } else {
            err = setupRawAudioFormat(
                    encoder ? kPortIndexInput : kPortIndexOutput,
                    sampleRate,
                    numChannels);
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC)) {
        int32_t numChannels, sampleRate;
        if (!msg->findInt32("channel-count", &numChannels)
                || !msg->findInt32("sample-rate", &sampleRate)) {
            err = INVALID_OPERATION;
        } else {
            int32_t isADTS, aacProfile;
            int32_t sbrMode;
            int32_t maxOutputChannelCount;
            int32_t pcmLimiterEnable;
            drcParams_t drc;
            if (!msg->findInt32("is-adts", &isADTS)) {
                isADTS = 0;
            }
            if (!msg->findInt32("aac-profile", &aacProfile)) {
                aacProfile = OMX_AUDIO_AACObjectNull;
            }
            if (!msg->findInt32("aac-sbr-mode", &sbrMode)) {
                sbrMode = -1;
            }
            if (!msg->findInt32("aac-max-output-channel_count",
                    &maxOutputChannelCount)) {
                maxOutputChannelCount = -1;
            }
            if (!msg->findInt32("aac-pcm-limiter-enable", &pcmLimiterEnable)) {
                // value is unknown
                pcmLimiterEnable = -1;
            }
            if (!msg->findInt32("aac-encoded-target-level",
                    &drc.encodedTargetLevel)) {
                // value is unknown
                drc.encodedTargetLevel = -1;
            }
            if (!msg->findInt32("aac-drc-cut-level", &drc.drcCut)) {
                // value is unknown
                drc.drcCut = -1;
            }
            if (!msg->findInt32("aac-drc-boost-level", &drc.drcBoost)) {
                // value is unknown
                drc.drcBoost = -1;
            }
            if (!msg->findInt32("aac-drc-heavy-compression",
                    &drc.heavyCompression)) {
                // value is unknown
                drc.heavyCompression = -1;
            }
            if (!msg->findInt32("aac-target-ref-level", &drc.targetRefLevel)) {
                // value is unknown
                drc.targetRefLevel = -1;
            }

            err = setupAACCodec(
                    encoder, numChannels, sampleRate, bitRate, aacProfile,
                    isADTS != 0, sbrMode, maxOutputChannelCount, drc,
                    pcmLimiterEnable);
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_NB)) {
        err = setupAMRCodec(encoder, false /* isWAMR */, bitRate);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AMR_WB)) {
        err = setupAMRCodec(encoder, true /* isWAMR */, bitRate);
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_G711_ALAW)
            || !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_G711_MLAW)) {
        // These are PCM-like formats with a fixed sample rate but
        // a variable number of channels.
        int32_t numChannels;
        if (!msg->findInt32("channel-count", &numChannels)) {
            err = INVALID_OPERATION;
        } else {
            int32_t sampleRate;
            if (!msg->findInt32("sample-rate", &sampleRate)) {
                sampleRate = 8000;
            }
            err = setupG711Codec(encoder, sampleRate, numChannels);
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_FLAC)) {
        int32_t numChannels = 0, sampleRate = 0, compressionLevel = -1;
        if (encoder
                && (!msg->findInt32("channel-count", &numChannels)
                        || !msg->findInt32("sample-rate", &sampleRate))) {
            ALOGE("missing channel count or sample rate for FLAC encoder");
            err = INVALID_OPERATION;
        } else {
            if (encoder) {
                if (!msg->findInt32("complexity", &compressionLevel)
                        && !msg->findInt32(
                                "flac-compression-level", &compressionLevel)) {
                    compressionLevel = 5; // default FLAC compression level
                } else if (compressionLevel < 0) {
                    ALOGW("compression level %d outside [0..8] range, "
                          "using 0", compressionLevel);
                    compressionLevel = 0;
                } else if (compressionLevel > 8) {
                    ALOGW("compression level %d outside [0..8] range, "
                          "using 8", compressionLevel);
                    compressionLevel = 8;
                }
            }
            err = setupFlacCodec(
                    encoder, numChannels, sampleRate, compressionLevel);
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_RAW)) {
        int32_t numChannels, sampleRate;
        if (encoder
                || !msg->findInt32("channel-count", &numChannels)
                || !msg->findInt32("sample-rate", &sampleRate)) {
            err = INVALID_OPERATION;
        } else {
            err = setupRawAudioFormat(kPortIndexInput, sampleRate, numChannels);
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AC3)) {
        int32_t numChannels;
        int32_t sampleRate;
        if (!msg->findInt32("channel-count", &numChannels)
                || !msg->findInt32("sample-rate", &sampleRate)) {
            err = INVALID_OPERATION;
        } else {
            err = setupAC3Codec(encoder, numChannels, sampleRate);
        }
    } else if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_EAC3)) {
        int32_t numChannels;
        int32_t sampleRate;
        if (!msg->findInt32("channel-count", &numChannels)
                || !msg->findInt32("sample-rate", &sampleRate)) {
            err = INVALID_OPERATION;
        } else {
            err = setupEAC3Codec(encoder, numChannels, sampleRate);
        }
    }

    if (err != OK) {
        return err;
    }

    if (!msg->findInt32("encoder-delay", &mEncoderDelay)) {
        mEncoderDelay = 0;
    }

    if (!msg->findInt32("encoder-padding", &mEncoderPadding)) {
        mEncoderPadding = 0;
    }

    if (msg->findInt32("channel-mask", &mChannelMask)) {
        mChannelMaskPresent = true;
    } else {
        mChannelMaskPresent = false;
    }

    int32_t maxInputSize;
    if (msg->findInt32("max-input-size", &maxInputSize)) {
        err = setMinBufferSize(kPortIndexInput, (size_t)maxInputSize);
    } else if (!strcmp("OMX.Nvidia.aac.decoder", mComponentName.c_str())) {
        err = setMinBufferSize(kPortIndexInput, 8192); // XXX
    }

    int32_t priority;
    if (msg->findInt32("priority", &priority)) {
        err = setPriority(priority);
    }

    int32_t rateInt = -1;
    float rateFloat = -1;
    if (!msg->findFloat("operating-rate", &rateFloat)) {
        msg->findInt32("operating-rate", &rateInt);
        rateFloat = (float)rateInt; // 16MHz (FLINTMAX) is OK for upper bound.
    }
    if (rateFloat > 0) {
        err = setOperatingRate(rateFloat, video);
    }

    mBaseOutputFormat = outputFormat;

    err = getPortFormat(kPortIndexInput, inputFormat);
    if (err == OK) {
        err = getPortFormat(kPortIndexOutput, outputFormat);
        if (err == OK) {
            mInputFormat = inputFormat;
            mOutputFormat = outputFormat;
        }
    }
    return err;
}
```
At this point the decoder is fully initialized and configured; now the start phase:
```cpp
status_t MediaCodec::start() {
    sp<AMessage> msg = new AMessage(kWhatStart, this);

    status_t err;
    Vector<MediaResource> resources;
    const char *type = (mFlags & kFlagIsSecure)
            ? kResourceSecureCodec : kResourceNonSecureCodec;
    const char *subtype = mIsVideo ? kResourceVideoCodec : kResourceAudioCodec;
    resources.push_back(MediaResource(String8(type), String8(subtype), 1));
    // Don't know the buffer size at this point, but it's fine to use 1 because
    // the reclaimResource call doesn't consider the requester's buffer size for now.
    resources.push_back(MediaResource(String8(kResourceGraphicMemory), 1));
    for (int i = 0; i <= kMaxRetry; ++i) {
        if (i > 0) {
            // Don't try to reclaim resource for the first time.
            if (!mResourceManagerService->reclaimResource(resources)) {
                break;
            }
            // Recover codec from previous error before retry start.
            err = reset();
            if (err != OK) {
                ALOGE("retrying start: failed to reset codec");
                break;
            }
            sp<AMessage> response;
            err = PostAndAwaitResponse(mConfigureMsg, &response);
            if (err != OK) {
                ALOGE("retrying start: failed to configure codec");
                break;
            }
        }

        sp<AMessage> response;
        err = PostAndAwaitResponse(msg, &response);
        if (!isResourceError(err)) {
            break;
        }
    }
    return err;
}
```
```cpp
case kWhatStart:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    if (mState == FLUSHED) {
        setState(STARTED);
        if (mHavePendingInputBuffers) {
            onInputBufferAvailable();
            mHavePendingInputBuffers = false;
        }
        // This is the part we care about.
        mCodec->signalResume();
        //..................
        PostReplyWithError(replyID, OK);
        break;
    } else if (mState != CONFIGURED) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    }

    mReplyID = replyID;
    setState(STARTING);

    mCodec->initiateStart();
    break;
}
```
First initiateStart is called to kick off the codec state transition:
```cpp
void ACodec::initiateStart() {
    (new AMessage(kWhatStart, this))->post();
}
```
```cpp
case ACodec::kWhatStart:
{
    onStart();
    handled = true;
    break;
}
```
```cpp
void ACodec::LoadedState::onStart() {
    ALOGV("onStart");

    status_t err = mCodec->mOMX->sendCommand(
            mCodec->mNode, OMX_CommandStateSet, OMX_StateIdle);
    if (err != OK) {
        mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
    } else {
        mCodec->changeState(mCodec->mLoadedToIdleState);
    }
}
```
onStart asks the OMX component to move from Loaded to Idle (ACodec enters LoadedToIdleState, where buffers are allocated, before the component is driven on to Executing). After that the codec starts pulling data to decode:
```cpp
void ACodec::signalResume() {
    (new AMessage(kWhatResume, this))->post();
}
```
```cpp
case kWhatResume:
{
    resume();
    handled = true;
    break;
}
```
```cpp
void ACodec::ExecutingState::resume() {
    submitOutputBuffers();

    // Post all available input buffers
    if (mCodec->mBuffers[kPortIndexInput].size() == 0u) {
        ALOGW("[%s] we don't have any input buffers to resume",
              mCodec->mComponentName.c_str());
    }

    for (size_t i = 0; i < mCodec->mBuffers[kPortIndexInput].size(); i++) {
        BufferInfo *info = &mCodec->mBuffers[kPortIndexInput].editItemAt(i);
        if (info->mStatus == BufferInfo::OWNED_BY_US) {
            postFillThisBuffer(info);
        }
    }

    mActive = true;
}
```
```cpp
void ACodec::BaseState::postFillThisBuffer(BufferInfo *info) {
    if (mCodec->mPortEOS[kPortIndexInput]) {
        return;
    }

    CHECK_EQ((int)info->mStatus, (int)BufferInfo::OWNED_BY_US);

    sp<AMessage> notify = mCodec->mNotify->dup();
    notify->setInt32("what", CodecBase::kWhatFillThisBuffer);
    notify->setInt32("buffer-id", info->mBufferID);

    info->mData->meta()->clear();
    notify->setBuffer("buffer", info->mData);

    sp<AMessage> reply = new AMessage(kWhatInputBufferFilled, mCodec);
    reply->setInt32("buffer-id", info->mBufferID);

    notify->setMessage("reply", reply);
    notify->post();

    info->mStatus = BufferInfo::OWNED_BY_UPSTREAM;
}
```
```cpp
case CodecBase::kWhatFillThisBuffer:
{
    //..........
    if (mFlags & kFlagIsAsync) {
        if (!mHaveInputSurface) {
            if (mState == FLUSHED) {
                mHavePendingInputBuffers = true;
            } else {
                onInputBufferAvailable();
            }
        }
    } else if (mFlags & kFlagDequeueInputPending) {
        CHECK(handleDequeueInputBuffer(mDequeueInputReplyID));

        ++mDequeueInputTimeoutGeneration;
        mFlags &= ~kFlagDequeueInputPending;
        mDequeueInputReplyID = 0;
    } else {
        postActivityNotificationIfPossible();
    }
    break;
}
```
```cpp
void MediaCodec::onInputBufferAvailable() {
    int32_t index;
    while ((index = dequeuePortBuffer(kPortIndexInput)) >= 0) {
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_INPUT_AVAILABLE);
        msg->setInt32("index", index);
        msg->post();
    }
}
```
Remember where this mCallback came from?
```cpp
void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {
    //.................
    sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
    mCodec->setCallback(reply);
    //..................
}
```
```cpp
status_t MediaCodec::setCallback(const sp<AMessage> &callback) {
    sp<AMessage> msg = new AMessage(kWhatSetCallback, this);
    msg->setMessage("callback", callback);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}
```
```cpp
case kWhatSetCallback:
{
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    sp<AMessage> callback;
    CHECK(msg->findMessage("callback", &callback));

    mCallback = callback;

    if (mCallback != NULL) {
        mFlags |= kFlagIsAsync;
    } else {
        mFlags &= ~kFlagIsAsync;
    }

    sp<AMessage> response = new AMessage;
    response->postReply(replyID);
    break;
}
```
So from the above we know the next stop is the CB_INPUT_AVAILABLE case under kWhatCodecNotify:
```cpp
case MediaCodec::CB_INPUT_AVAILABLE:
{
    int32_t index;
    CHECK(msg->findInt32("index", &index));
    handleAnInputBuffer(index);
    break;
}
```
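This internal callback plumbing is what surfaces to applications as MediaCodec's asynchronous mode. For reference, a sketch of what I believe the NDK equivalent looks like (AMediaCodec_setAsyncNotifyCallback, available since API 28; only the input path shown, the other callbacks omitted):

```cpp
#include <media/NdkMediaCodec.h>

// Invoked whenever an input buffer becomes available,
// mirroring CB_INPUT_AVAILABLE above.
static void onInputAvailable(AMediaCodec *codec, void *userdata, int32_t index) {
    size_t capacity;
    uint8_t *buf = AMediaCodec_getInputBuffer(codec, index, &capacity);
    // ... fill buf with an access unit, then queue it back, e.g.:
    // AMediaCodec_queueInputBuffer(codec, index, 0, size, ptsUs, 0);
}

void enableAsyncMode(AMediaCodec *codec, void *userdata) {
    AMediaCodecOnAsyncNotifyCallback cb = {};
    cb.onAsyncInputAvailable = onInputAvailable;
    // onAsyncOutputAvailable / onAsyncFormatChanged / onAsyncError omitted.
    AMediaCodec_setAsyncNotifyCallback(codec, cb, userdata);
}
```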
```cpp
bool NuPlayer::Decoder::handleAnInputBuffer(size_t index) {
    if (isDiscontinuityPending()) {
        return false;
    }

    sp<ABuffer> buffer;
    mCodec->getInputBuffer(index, &buffer);

    if (buffer == NULL) {
        handleError(UNKNOWN_ERROR);
        return false;
    }

    if (index >= mInputBuffers.size()) {
        for (size_t i = mInputBuffers.size(); i <= index; ++i) {
            mInputBuffers.add();
            mMediaBuffers.add();
            mInputBufferIsDequeued.add();
            mMediaBuffers.editItemAt(i) = NULL;
            mInputBufferIsDequeued.editItemAt(i) = false;
        }
    }
    mInputBuffers.editItemAt(index) = buffer;

    //CHECK_LT(bufferIx, mInputBuffers.size());

    if (mMediaBuffers[index] != NULL) {
        mMediaBuffers[index]->release();
        mMediaBuffers.editItemAt(index) = NULL;
    }
    mInputBufferIsDequeued.editItemAt(index) = true;

    if (!mCSDsToSubmit.isEmpty()) {
        sp<AMessage> msg = new AMessage();
        msg->setSize("buffer-ix", index);

        sp<ABuffer> buffer = mCSDsToSubmit.itemAt(0);
        ALOGI("[%s] resubmitting CSD", mComponentName.c_str());
        msg->setBuffer("buffer", buffer);
        mCSDsToSubmit.removeAt(0);
        CHECK(onInputBufferFetched(msg));
        return true;
    }

    while (!mPendingInputMessages.empty()) {
        sp<AMessage> msg = *mPendingInputMessages.begin();
        if (!onInputBufferFetched(msg)) {
            break;
        }
        mPendingInputMessages.erase(mPendingInputMessages.begin());
    }

    if (!mInputBufferIsDequeued.editItemAt(index)) {
        return true;
    }

    mDequeuedInputBuffers.push_back(index);

    onRequestInputBuffers();
    return true;
}
```
```cpp
void NuPlayer::DecoderBase::onRequestInputBuffers() {
    if (mRequestInputBuffersPending) {
        return;
    }

    // doRequestBuffers() return true if we should request more data
    if (doRequestBuffers()) {
        mRequestInputBuffersPending = true;

        sp<AMessage> msg = new AMessage(kWhatRequestInputBuffers, this);
        msg->post(10 * 1000ll);
    }
}
```
```cpp
bool NuPlayer::Decoder::doRequestBuffers() {
    // mRenderer is only NULL if we have a legacy widevine source that
    // is not yet ready. In this case we must not fetch input.
    if (isDiscontinuityPending() || mRenderer == NULL) {
        return false;
    }
    status_t err = OK;
    while (err == OK && !mDequeuedInputBuffers.empty()) {
        size_t bufferIx = *mDequeuedInputBuffers.begin();
        sp<AMessage> msg = new AMessage();
        msg->setSize("buffer-ix", bufferIx);
        err = fetchInputData(msg);
        if (err != OK && err != ERROR_END_OF_STREAM) {
            // if EOS, need to queue EOS buffer
            break;
        }
        mDequeuedInputBuffers.erase(mDequeuedInputBuffers.begin());

        if (!mPendingInputMessages.empty()
                || !onInputBufferFetched(msg)) {
            mPendingInputMessages.push_back(msg);
        }
    }

    return err == -EWOULDBLOCK
            && mSource->feedMoreTSData() == OK;
}
```
```cpp
status_t NuPlayer::Decoder::fetchInputData(sp<AMessage> &reply) {
    sp<ABuffer> accessUnit;
    bool dropAccessUnit;
    do {
        status_t err = mSource->dequeueAccessUnit(mIsAudio, &accessUnit);

        if (err == -EWOULDBLOCK) {
            return err;
        } else if (err != OK) {
            if (err == INFO_DISCONTINUITY) {
                int32_t type;
                CHECK(accessUnit->meta()->findInt32("discontinuity", &type));

                bool formatChange =
                    (mIsAudio && (type & ATSParser::DISCONTINUITY_AUDIO_FORMAT))
                    || (!mIsAudio && (type & ATSParser::DISCONTINUITY_VIDEO_FORMAT));

                bool timeChange = (type & ATSParser::DISCONTINUITY_TIME) != 0;

                ALOGI("%s discontinuity (format=%d, time=%d)",
                        mIsAudio ? "audio" : "video", formatChange, timeChange);

                bool seamlessFormatChange = false;
                sp<AMessage> newFormat = mSource->getFormat(mIsAudio);
                if (formatChange) {
                    seamlessFormatChange =
                        supportsSeamlessFormatChange(newFormat);
                    // treat seamless format change separately
                    formatChange = !seamlessFormatChange;
                }

                // For format or time change, return EOS to queue EOS input,
                // then wait for EOS on output.
                if (formatChange /* not seamless */) {
                    mFormatChangePending = true;
                    err = ERROR_END_OF_STREAM;
                } else if (timeChange) {
                    rememberCodecSpecificData(newFormat);
                    mTimeChangePending = true;
                    err = ERROR_END_OF_STREAM;
                } else if (seamlessFormatChange) {
                    // reuse existing decoder and don't flush
                    rememberCodecSpecificData(newFormat);
                    continue;
                } else {
                    // This stream is unaffected by the discontinuity
                    return -EWOULDBLOCK;
                }
            }

            // reply should only be returned without a buffer set
            // when there is an error (including EOS)
            CHECK(err != OK);

            reply->setInt32("err", err);
            return ERROR_END_OF_STREAM;
        }

        dropAccessUnit = false;
        if (!mIsAudio
                && !mIsSecure
                && mRenderer->getVideoLateByUs() > 100000ll
                && mIsVideoAVC
                && !IsAVCReferenceFrame(accessUnit)) {
            dropAccessUnit = true;
            ++mNumInputFramesDropped;
        }
    } while (dropAccessUnit);

    // ALOGV("returned a valid buffer of %s data", mIsAudio ? "mIsAudio" : "video");
#if 0
    int64_t mediaTimeUs;
    CHECK(accessUnit->meta()->findInt64("timeUs", &mediaTimeUs));
    ALOGV("[%s] feeding input buffer at media time %.3f",
            mIsAudio ? "audio" : "video", mediaTimeUs / 1E6);
#endif

    if (mCCDecoder != NULL) {
        mCCDecoder->decode(accessUnit);
    }

    reply->setBuffer("buffer", accessUnit);

    return OK;
}
```
Next, how the segment index list is actually consumed, starting with onChangeConfiguration3. The code there is long, so read it in full if you're interested; its main tasks are:
- Determine whether the audio and video streams have changed.
- Update resumeMask from the current mFetcherInfos.
- Create a new FetcherInfo for any newly needed fetcher.
- Start the corresponding fetchers.
- Check the current bandwidth and switch variants if needed (see the sketch after this list).
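LiveSession bases that last decision on the download throughput it has measured. A simplified sketch of the selection rule (not the verbatim AOSP code, which adds hysteresis and per-variant failure backoff; the 0.7 safety margin is an assumed value for illustration):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One entry per variant, sorted ascending by declared bandwidth,
// mirroring LiveSession::mBandwidthItems.
struct BandwidthItem { uint32_t mBandwidth; };

// Pick the highest variant whose declared bandwidth fits within the
// measured throughput, scaled by a safety margin so that transient
// dips don't immediately stall playback.
size_t pickBandwidthIndex(const std::vector<BandwidthItem> &items,
                          int64_t measuredBps) {
    const double kSafetyMargin = 0.7;  // assumed margin, for illustration
    size_t index = 0;
    for (size_t i = 0; i < items.size(); ++i) {
        if (items[i].mBandwidth <= measuredBps * kSafetyMargin) {
            index = i;  // items are sorted, so keep the last one that fits
        }
    }
    return index;
}
```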
The most critical call, though, is fetcher->startAsync:

```cpp
void LiveSession::onChangeConfiguration3(const sp<AMessage> &msg) {
    //........
    fetcher->startAsync(
            sources[kAudioIndex],
            sources[kVideoIndex],
            sources[kSubtitleIndex],
            getMetadataSource(sources, mNewStreamMask, switching),
            startTime.mTimeUs < 0 ? mLastSeekTimeUs : startTime.mTimeUs,
            startTime.getSegmentTimeUs(),
            startTime.mSeq,
            seekMode);
    //.......
}
```
```cpp
void PlaylistFetcher::startAsync(
        const sp<AnotherPacketSource> &audioSource,
        const sp<AnotherPacketSource> &videoSource,
        const sp<AnotherPacketSource> &subtitleSource,
        const sp<AnotherPacketSource> &metadataSource,
        int64_t startTimeUs,
        int64_t segmentStartTimeUs,
        int32_t startDiscontinuitySeq,
        LiveSession::SeekMode seekMode) {
    sp<AMessage> msg = new AMessage(kWhatStart, this);
    //.................
    msg->post();
}
```
```cpp
case kWhatStart:
{
    status_t err = onStart(msg);

    sp<AMessage> notify = mNotify->dup();
    notify->setInt32("what", kWhatStarted);
    notify->setInt32("err", err);
    notify->post();
    break;
}
```
```cpp
status_t PlaylistFetcher::onStart(const sp<AMessage> &msg) {
    //..........
    if (streamTypeMask & LiveSession::STREAMTYPE_AUDIO) {
        void *ptr;
        CHECK(msg->findPointer("audioSource", &ptr));
        mPacketSources.add(
                LiveSession::STREAMTYPE_AUDIO,
                static_cast<AnotherPacketSource *>(ptr));
    }

    if (streamTypeMask & LiveSession::STREAMTYPE_VIDEO) {
        void *ptr;
        CHECK(msg->findPointer("videoSource", &ptr));
        mPacketSources.add(
                LiveSession::STREAMTYPE_VIDEO,
                static_cast<AnotherPacketSource *>(ptr));
    }

    if (streamTypeMask & LiveSession::STREAMTYPE_SUBTITLES) {
        void *ptr;
        CHECK(msg->findPointer("subtitleSource", &ptr));
        mPacketSources.add(
                LiveSession::STREAMTYPE_SUBTITLES,
                static_cast<AnotherPacketSource *>(ptr));
    }

    void *ptr;
    // metadataSource is not part of streamTypeMask
    if ((streamTypeMask
            & (LiveSession::STREAMTYPE_AUDIO | LiveSession::STREAMTYPE_VIDEO))
            && msg->findPointer("metadataSource", &ptr)) {
        mPacketSources.add(
                LiveSession::STREAMTYPE_METADATA,
                static_cast<AnotherPacketSource *>(ptr));
    }

    //...............
    postMonitorQueue();

    return OK;
}
```
```cpp
void PlaylistFetcher::postMonitorQueue(int64_t delayUs, int64_t minDelayUs) {
    int64_t maxDelayUs = delayUsToRefreshPlaylist();
    if (maxDelayUs < minDelayUs) {
        maxDelayUs = minDelayUs;
    }
    if (delayUs > maxDelayUs) {
        FLOGV("Need to refresh playlist in %lld", (long long)maxDelayUs);
        delayUs = maxDelayUs;
    }
    sp<AMessage> msg = new AMessage(kWhatMonitorQueue, this);
    msg->setInt32("generation", mMonitorQueueGeneration);
    msg->post(delayUs);
}
```
Both the monitor and the download messages go through the same handler:

case kWhatMonitorQueue:
case kWhatDownloadNext:
{
    int32_t generation;
    CHECK(msg->findInt32("generation", &generation));

    if (generation != mMonitorQueueGeneration) {
        // Stale event
        break;
    }

    if (msg->what() == kWhatMonitorQueue) {
        onMonitorQueue();
    } else {
        onDownloadNext();
    }
    break;
}
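The generation check is the usual AMessage staleness guard: operations that invalidate pending work (a seek, a stop, a variant switch) bump mMonitorQueueGeneration, so any monitor or download messages still queued from before the change no longer match the current generation and are silently dropped.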
onMonitorQueue then decides, based on how much data is buffered, whether to kick off the next download immediately or check again later:

void PlaylistFetcher::onMonitorQueue() {
    //.......................
    if (finalResult == OK && bufferedDurationUs < kMinBufferedDurationUs) {
        FLOGV("monitoring, buffered=%lld < %lld",
                (long long)bufferedDurationUs, (long long)kMinBufferedDurationUs);

        // delay the next download slightly; hopefully this gives other concurrent
        // fetchers a better chance to run.
        // onDownloadNext();
        sp<AMessage> msg = new AMessage(kWhatDownloadNext, this);
        msg->setInt32("generation", mMonitorQueueGeneration);
        msg->post(1000l);
    } else {
        // We'd like to maintain buffering above durationToBufferUs, so try
        // again when buffer just about to go below durationToBufferUs
        // (or after targetDurationUs / 2, whichever is smaller).
        int64_t delayUs = bufferedDurationUs - kMinBufferedDurationUs + 1000000ll;
        if (delayUs > targetDurationUs / 2) {
            delayUs = targetDurationUs / 2;
        }

        FLOGV("pausing for %lld, buffered=%lld > %lld",
                (long long)delayUs, (long long)bufferedDurationUs,
                (long long)kMinBufferedDurationUs);

        postMonitorQueue(delayUs);
    }
}
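Working through the numbers: with a 10 s target duration and a buffer sitting 2 s above kMinBufferedDurationUs, delayUs comes out to 3 s (the 2 s surplus plus the 1 s margin), which is below the 5 s clamp (targetDurationUs / 2), so the next check runs in 3 s. A buffer 10 s above the threshold would instead be re-checked after the clamped 5 s.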
initDownloadState resolves the URI of the segment to fetch (and the starting sequence number) before any TS data is actually downloaded:
bool PlaylistFetcher::initDownloadState(
        AString &uri,
        sp<AMessage> &itemMeta,
        int32_t &firstSeqNumberInPlaylist,
        int32_t &lastSeqNumberInPlaylist) {
    status_t err = refreshPlaylist();
    firstSeqNumberInPlaylist = 0;
    lastSeqNumberInPlaylist = 0;
    bool discontinuity = false;

    if (mPlaylist != NULL) {
        mPlaylist->getSeqNumberRange(
                &firstSeqNumberInPlaylist, &lastSeqNumberInPlaylist);

        if (mDiscontinuitySeq < 0) {
            mDiscontinuitySeq = mPlaylist->getDiscontinuitySeq();
        }
    }

    mSegmentFirstPTS = -1ll;

    if (mPlaylist != NULL && mSeqNumber < 0) {
        CHECK_GE(mStartTimeUs, 0ll);

        if (mSegmentStartTimeUs < 0) {
            if (!mPlaylist->isComplete() && !mPlaylist->isEvent()) {
                // If this is a live session, start 3 segments from the end on connect
                mSeqNumber = lastSeqNumberInPlaylist - 3;
                if (mSeqNumber < firstSeqNumberInPlaylist) {
                    mSeqNumber = firstSeqNumberInPlaylist;
                }
            } else {
                // When seeking mSegmentStartTimeUs is unavailable (< 0), we
                // use mStartTimeUs (client supplied timestamp) to determine both start segment
                // and relative position inside a segment
                mSeqNumber = getSeqNumberForTime(mStartTimeUs);
                mStartTimeUs -= getSegmentStartTimeUs(mSeqNumber);
            }
            mStartTimeUsRelative = true;
            FLOGV("Initial sequence number for time %lld is %d from (%d .. %d)",
                    (long long)mStartTimeUs, mSeqNumber,
                    firstSeqNumberInPlaylist, lastSeqNumberInPlaylist);
        } else {
            // When adapting or track switching, mSegmentStartTimeUs (relative
            // to media time 0) is used to determine the start segment; mStartTimeUs (absolute
            // timestamps coming from the media container) is used to determine the position
            // inside a segments.
            if (mStreamTypeMask != LiveSession::STREAMTYPE_SUBTITLES
                    && mSeekMode != LiveSession::kSeekModeNextSample) {
                // avoid double fetch/decode
                // Use (mSegmentStartTimeUs + 1/2 * targetDurationUs) to search
                // for the starting segment in new variant.
                // If the two variants' segments are aligned, this gives the
                // next segment. If they're not aligned, this gives the segment
                // that overlaps no more than 1/2 * targetDurationUs.
                mSeqNumber = getSeqNumberForTime(mSegmentStartTimeUs
                        + mPlaylist->getTargetDuration() / 2);
            } else {
                mSeqNumber = getSeqNumberForTime(mSegmentStartTimeUs);
            }

            ssize_t minSeq = getSeqNumberForDiscontinuity(mDiscontinuitySeq);
            if (mSeqNumber < minSeq) {
                mSeqNumber = minSeq;
            }

            if (mSeqNumber < firstSeqNumberInPlaylist) {
                mSeqNumber = firstSeqNumberInPlaylist;
            }

            if (mSeqNumber > lastSeqNumberInPlaylist) {
                mSeqNumber = lastSeqNumberInPlaylist;
            }
            FLOGV("Initial sequence number is %d from (%d .. %d)",
                    mSeqNumber, firstSeqNumberInPlaylist,
                    lastSeqNumberInPlaylist);
        }
    }

    // if mPlaylist is NULL then err must be non-OK; but the other way around might not be true
    if (mSeqNumber < firstSeqNumberInPlaylist
            || mSeqNumber > lastSeqNumberInPlaylist
            || err != OK) {
        if ((err != OK || !mPlaylist->isComplete()) && mNumRetries < kMaxNumRetries) {
            ++mNumRetries;

            if (mSeqNumber > lastSeqNumberInPlaylist || err != OK) {
                // make sure we reach this retry logic on refresh failures
                // by adding an err != OK clause to all enclosing if's.

                // refresh in increasing fraction (1/2, 1/3, ...) of the
                // playlist's target duration or 3 seconds, whichever is less
                int64_t delayUs = kMaxMonitorDelayUs;
                if (mPlaylist != NULL) {
                    delayUs = mPlaylist->size() * mPlaylist->getTargetDuration()
                            / (1 + mNumRetries);
                }
                if (delayUs > kMaxMonitorDelayUs) {
                    delayUs = kMaxMonitorDelayUs;
                }
                FLOGV("sequence number high: %d from (%d .. %d), "
                        "monitor in %lld (retry=%d)",
                        mSeqNumber, firstSeqNumberInPlaylist,
                        lastSeqNumberInPlaylist, (long long)delayUs, mNumRetries);
                postMonitorQueue(delayUs);
                return false;
            }

            if (err != OK) {
                notifyError(err);
                return false;
            }

            // we've missed the boat, let's start 3 segments prior to the latest sequence
            // number available and signal a discontinuity.
            ALOGI("We've missed the boat, restarting playback."
                    " mStartup=%d, was looking for %d in %d-%d",
                    mStartup, mSeqNumber, firstSeqNumberInPlaylist,
                    lastSeqNumberInPlaylist);
            if (mStopParams != NULL) {
                // we should have kept on fetching until we hit the boundaries in mStopParams,
                // but since the segments we are supposed to fetch have already rolled off
                // the playlist, i.e. we have already missed the boat, we inevitably have to
                // skip.
                notifyStopReached();
                return false;
            }
            mSeqNumber = lastSeqNumberInPlaylist - 3;
            if (mSeqNumber < firstSeqNumberInPlaylist) {
                mSeqNumber = firstSeqNumberInPlaylist;
            }
            discontinuity = true;

            // fall through
        } else {
            if (mPlaylist != NULL) {
                ALOGE("Cannot find sequence number %d in playlist "
                        "(contains %d - %d)",
                        mSeqNumber, firstSeqNumberInPlaylist,
                        firstSeqNumberInPlaylist + (int32_t)mPlaylist->size() - 1);

                if (mTSParser != NULL) {
                    mTSParser->signalEOS(ERROR_END_OF_STREAM);
                    // Use an empty buffer; we don't have any new data, just want to extract
                    // potential new access units after flush. Reset mSeqNumber to
                    // lastSeqNumberInPlaylist such that we set the correct access unit
                    // properties in extractAndQueueAccessUnitsFromTs.
                    sp<ABuffer> buffer = new ABuffer(0);
                    mSeqNumber = lastSeqNumberInPlaylist;
                    extractAndQueueAccessUnitsFromTs(buffer);
                }

                notifyError(ERROR_END_OF_STREAM);
            } else {
                // It's possible that we were never able to download the playlist.
                // In this case we should notify error, instead of EOS, as EOS during
                // prepare means we succeeded in downloading everything.
                ALOGE("Failed to download playlist!");
                notifyError(ERROR_IO);
            }

            return false;
        }
    }

    mNumRetries = 0;

    CHECK(mPlaylist->itemAt(
            mSeqNumber - firstSeqNumberInPlaylist,
            &uri,
            &itemMeta));

    CHECK(itemMeta->findInt32("discontinuity-sequence", &mDiscontinuitySeq));

    int32_t val;
    if (itemMeta->findInt32("discontinuity", &val) && val != 0) {
        discontinuity = true;
    } else if (mLastDiscontinuitySeq >= 0
            && mDiscontinuitySeq != mLastDiscontinuitySeq) {
        // Seek jumped to a new discontinuity sequence. We need to signal
        // a format change to decoder. Decoder needs to shutdown and be
        // created again if seamless format change is unsupported.
        FLOGV("saw discontinuity: mStartup %d, mLastDiscontinuitySeq %d, "
                "mDiscontinuitySeq %d, mStartTimeUs %lld",
                mStartup, mLastDiscontinuitySeq, mDiscontinuitySeq,
                (long long)mStartTimeUs);
        discontinuity = true;
    }
    mLastDiscontinuitySeq = -1;

    // decrypt a junk buffer to prefetch key; since a session uses only one http connection,
    // this avoids interleaved connections to the key and segment file.
    {
        sp<ABuffer> junk = new ABuffer(16);
        junk->setRange(0, 16);
        status_t err = decryptBuffer(mSeqNumber - firstSeqNumberInPlaylist, junk,
                true /* first */);
        if (err == ERROR_NOT_CONNECTED) {
            return false;
        } else if (err != OK) {
            notifyError(err);
            return false;
        }
    }

    if ((mStartup && !mTimeChangeSignaled) || discontinuity) {
        // We need to signal a time discontinuity to ATSParser on the
        // first segment after start, or on a discontinuity segment.
        // Setting mNextPTSTimeUs informs extractAndQueueAccessUnitsXX()
        // to send the time discontinuity.
        if (mPlaylist->isComplete() || mPlaylist->isEvent()) {
            // If this was a live event this made no sense since
            // we don't have access to all the segment before the current
            // one.
            mNextPTSTimeUs = getSegmentStartTimeUs(mSeqNumber);
        }

        // Setting mTimeChangeSignaled to true, so that if start time
        // searching goes into 2nd segment (without a discontinuity), we
        // don't reset time again. It causes corruption when pending
        // data in ATSParser is cleared.
        mTimeChangeSignaled = true;
    }

    if (discontinuity) {
        ALOGI("queueing discontinuity (explicit=%d)", discontinuity);

        // Signal a format discontinuity to ATSParser to clear partial data
        // from previous streams. Not doing this causes bitstream corruption.
        if (mTSParser != NULL) {
            mTSParser->signalDiscontinuity(
                    ATSParser::DISCONTINUITY_FORMATCHANGE,
                    NULL /* extra */);
        }

        queueDiscontinuity(
                ATSParser::DISCONTINUITY_FORMAT_ONLY,
                NULL /* extra */);

        if (mStartup && mStartTimeUsRelative && mFirstPTSValid) {
            // This means we guessed mStartTimeUs to be in the previous
            // segment (likely very close to the end), but either video or
            // audio has not found start by the end of that segment.
            //
            // If this new segment is not a discontinuity, keep searching.
            //
            // If this new segment even got a discontinuity marker, just
            // set mStartTimeUs=0, and take all samples from now on.
            mStartTimeUs = 0;
            mFirstPTSValid = false;
            mIDRFound = false;
            mVideoBuffer->clear();
        }
    }

    FLOGV("fetching segment %d from (%d .. %d)",
            mSeqNumber, firstSeqNumberInPlaylist, lastSeqNumberInPlaylist);
    return true;
}
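One detail worth calling out: for a live stream the fetcher starts three segments back from the live edge (mSeqNumber = lastSeqNumberInPlaylist - 3), which lines up with the HLS spec's guidance (RFC 8216 §6.3.3) that a client should not begin playback less than three target durations from the end of the playlist. A minimal sketch of that clamped pick (an illustrative helper, not AOSP code):

// Pick the first segment of a live stream: three segments back from the
// live edge, clamped to the oldest segment still in the playlist.
int32_t chooseLiveStartSequence(int32_t firstSeq, int32_t lastSeq) {
    int32_t seq = lastSeq - 3;
    return seq < firstSeq ? firstSeq : seq;
}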
void PlaylistFetcher::onDownloadNext() {
    AString uri;
    sp<AMessage> itemMeta;
    sp<ABuffer> buffer;
    sp<ABuffer> tsBuffer;
    int32_t firstSeqNumberInPlaylist = 0;
    int32_t lastSeqNumberInPlaylist = 0;
    bool connectHTTP = true;

    if (mDownloadState->hasSavedState()) {
        mDownloadState->restoreState(
                uri, itemMeta, buffer, tsBuffer,
                firstSeqNumberInPlaylist, lastSeqNumberInPlaylist);
        connectHTTP = false;
        FLOGV("resuming: '%s'", uri.c_str());
    } else {
        if (!initDownloadState(
                uri, itemMeta,
                firstSeqNumberInPlaylist, lastSeqNumberInPlaylist)) {
            return;
        }
        FLOGV("fetching: '%s'", uri.c_str());
    }

    int64_t range_offset, range_length;
    if (!itemMeta->findInt64("range-offset", &range_offset)
            || !itemMeta->findInt64("range-length", &range_length)) {
        range_offset = 0;
        range_length = -1;
    }

    // block-wise download
    bool shouldPause = false;
    ssize_t bytesRead;
    do {
        int64_t startUs = ALooper::GetNowUs();
        // download one block of the segment
        bytesRead = mHTTPDownloader->fetchBlock(
                uri.c_str(), &buffer, range_offset, range_length, kDownloadBlockSize,
                NULL /* actualURL */, connectHTTP);
        int64_t delayUs = ALooper::GetNowUs() - startUs;

        if (bytesRead == ERROR_NOT_CONNECTED) {
            return;
        }
        if (bytesRead < 0) {
            status_t err = bytesRead;
            ALOGE("failed to fetch .ts segment at url '%s'", uri.c_str());
            notifyError(err);
            return;
        }

        // add sample for bandwidth estimation, excluding samples from subtitles (as
        // its too small), or during startup/resumeUntil (when we could have more than
        // one connection open which affects bandwidth)
        if (!mStartup && mStopParams == NULL && bytesRead > 0
                && (mStreamTypeMask
                        & (LiveSession::STREAMTYPE_AUDIO
                                | LiveSession::STREAMTYPE_VIDEO))) {
            mSession->addBandwidthMeasurement(bytesRead, delayUs);
            if (delayUs > 2000000ll) {
                FLOGV("bytesRead %zd took %.2f seconds - abnormal bandwidth dip",
                        bytesRead, (double)delayUs / 1.0e6);
            }
        }

        connectHTTP = false;

        CHECK(buffer != NULL);

        size_t size = buffer->size();
        // Set decryption range.
        buffer->setRange(size - bytesRead, bytesRead);
        // decrypt the buffer with the key fetched earlier
        status_t err = decryptBuffer(mSeqNumber - firstSeqNumberInPlaylist, buffer,
                buffer->offset() == 0 /* first */);
        // Unset decryption range.
        buffer->setRange(0, size);

        if (err != OK) {
            ALOGE("decryptBuffer failed w/ error %d", err);
            notifyError(err);
            return;
        }

        bool startUp = mStartup; // save current start up state

        err = OK;
        if (bufferStartsWithTsSyncByte(buffer)) {
            // Incremental extraction is only supported for MPEG2 transport streams.
            if (tsBuffer == NULL) {
                tsBuffer = new ABuffer(buffer->data(), buffer->capacity());
                tsBuffer->setRange(0, 0);
            } else if (tsBuffer->capacity() != buffer->capacity()) {
                size_t tsOff = tsBuffer->offset(), tsSize = tsBuffer->size();
                tsBuffer = new ABuffer(buffer->data(), buffer->capacity());
                tsBuffer->setRange(tsOff, tsSize);
            }
            tsBuffer->setRange(tsBuffer->offset(), tsBuffer->size() + bytesRead);
            // hand the decrypted data to the TS parser, which queues
            // access units for the decoder
            err = extractAndQueueAccessUnitsFromTs(tsBuffer);
        }

        if (err == -EAGAIN) {
            // starting sequence number too low/high
            mTSParser.clear();
            for (size_t i = 0; i < mPacketSources.size(); i++) {
                sp<AnotherPacketSource> packetSource = mPacketSources.valueAt(i);
                packetSource->clear();
            }
            postMonitorQueue();
            return;
        } else if (err == ERROR_OUT_OF_RANGE) {
            // reached stopping point
            notifyStopReached();
            return;
        } else if (err != OK) {
            notifyError(err);
            return;
        }
        // If we're switching, post start notification
        // this should only be posted when the last chunk is full processed by TSParser
        if (mSeekMode != LiveSession::kSeekModeExactPosition && startUp != mStartup) {
            CHECK(mStartTimeUsNotify != NULL);
            mStartTimeUsNotify->post();
            mStartTimeUsNotify.clear();
            shouldPause = true;
        }
        if (shouldPause || shouldPauseDownload()) {
            // save state and return if this is not the last chunk,
            // leaving the fetcher in paused state.
            if (bytesRead != 0) {
                mDownloadState->saveState(
                        uri, itemMeta, buffer, tsBuffer,
                        firstSeqNumberInPlaylist, lastSeqNumberInPlaylist);
                return;
            }
            shouldPause = true;
        }
    } while (bytesRead != 0);

    if (bufferStartsWithTsSyncByte(buffer)) {
        // If we don't see a stream in the program table after fetching a full ts segment
        // mark it as nonexistent.
        ATSParser::SourceType srcTypes[] = { ATSParser::VIDEO, ATSParser::AUDIO };
        LiveSession::StreamType streamTypes[] = {
                LiveSession::STREAMTYPE_VIDEO, LiveSession::STREAMTYPE_AUDIO };
        const size_t kNumTypes = NELEM(srcTypes);
        for (size_t i = 0; i < kNumTypes; i++) {
            ATSParser::SourceType srcType = srcTypes[i];
            LiveSession::StreamType streamType = streamTypes[i];

            sp<AnotherPacketSource> source =
                static_cast<AnotherPacketSource *>(
                        mTSParser->getSource(srcType).get());

            if (!mTSParser->hasSource(srcType)) {
                ALOGW("MPEG2 Transport stream does not contain %s data.",
                        srcType == ATSParser::VIDEO ? "video" : "audio");

                mStreamTypeMask &= ~streamType;
                mPacketSources.removeItem(streamType);
            }
        }
    }

    if (checkDecryptPadding(buffer) != OK) {
        ALOGE("Incorrect padding bytes after decryption.");
        notifyError(ERROR_MALFORMED);
        return;
    }

    if (tsBuffer != NULL) {
        AString method;
        CHECK(buffer->meta()->findString("cipher-method", &method));
        if ((tsBuffer->size() > 0 && method == "NONE")
                || tsBuffer->size() > 16) {
            ALOGE("MPEG2 transport stream is not an even multiple of 188 "
                    "bytes in length.");
            notifyError(ERROR_MALFORMED);
            return;
        }
    }

    // bulk extract non-ts files
    bool startUp = mStartup;
    if (tsBuffer == NULL) {
        status_t err = extractAndQueueAccessUnits(buffer, itemMeta);
        if (err == -EAGAIN) {
            // starting sequence number too low/high
            postMonitorQueue();
            return;
        } else if (err == ERROR_OUT_OF_RANGE) {
            // reached stopping point
            notifyStopReached();
            return;
        } else if (err != OK) {
            notifyError(err);
            return;
        }
    }

    ++mSeqNumber;

    // if adapting, pause after found the next starting point
    if (mSeekMode != LiveSession::kSeekModeExactPosition && startUp != mStartup) {
        CHECK(mStartTimeUsNotify != NULL);
        mStartTimeUsNotify->post();
        mStartTimeUsNotify.clear();
        shouldPause = true;
    }

    if (!shouldPause) {
        postMonitorQueue();
    }
}
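Downloading the segment in kDownloadBlockSize chunks rather than as one blob serves three purposes that are all visible above: each completed block feeds the bandwidth estimator a fine-grained sample (except during startup or when only subtitles are fetched), TS data can be decrypted and parsed incrementally so access units reach the decoder before the segment finishes downloading, and the fetcher can pause mid-segment by saving its position in mDownloadState and resuming from it on the next kWhatDownloadNext.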
Finally, let's look at how the session decides whether a bandwidth switch is needed:
bool LiveSession::switchBandwidthIfNeeded(bool bufferHigh, bool bufferLow) {
    // no need to check bandwidth if we only have 1 bandwidth settings
    int32_t bandwidthBps, shortTermBps;
    bool isStable;
    // ask the estimator for the current bandwidth estimate
    if (mBandwidthEstimator->estimateBandwidth(
            &bandwidthBps, &isStable, &shortTermBps)) {
        ALOGV("bandwidth estimated at %.2f kbps, "
                "stable %d, shortTermBps %.2f kbps",
                bandwidthBps / 1024.0f, isStable, shortTermBps / 1024.0f);
        mLastBandwidthBps = bandwidthBps;
        mLastBandwidthStable = isStable;
    } else {
        ALOGV("no bandwidth estimate.");
        return false;
    }

    int32_t curBandwidth = mBandwidthItems.itemAt(mCurBandwidthIndex).mBandwidth;
    // canSwitchDown and canSwitchUp can't both be true.
    // we only want to switch up when measured bw is 120% higher than current variant,
    // and we only want to switch down when measured bw is below current variant.
    bool canSwitchDown = bufferLow && (bandwidthBps < (int32_t)curBandwidth);
    bool canSwitchUp = bufferHigh && (bandwidthBps > (int32_t)curBandwidth * 12 / 10);

    if (canSwitchDown || canSwitchUp) {
        // bandwidth estimating has some delay, if we have to downswitch when
        // it hasn't stabilized, use the short term to guess real bandwidth,
        // since it may be dropping too fast.
        // (note this doesn't apply to upswitch, always use longer average there)
        if (!isStable && canSwitchDown) {
            if (shortTermBps < bandwidthBps) {
                bandwidthBps = shortTermBps;
            }
        }

        // map the estimate to a variant index
        ssize_t bandwidthIndex = getBandwidthIndex(bandwidthBps);

        // it's possible that we're checking for canSwitchUp case, but the returned
        // bandwidthIndex is < mCurBandwidthIndex, as getBandwidthIndex() only uses 70%
        // of measured bw. In that case we don't want to do anything, since we have
        // both enough buffer and enough bw.
        if ((canSwitchUp && bandwidthIndex > mCurBandwidthIndex)
                || (canSwitchDown && bandwidthIndex < mCurBandwidthIndex)) {
            // if not yet prepared, just restart again with new bw index.
            // this is faster and playback experience is cleaner.
            // changeConfiguration reconfigures and restarts the affected fetchers
            changeConfiguration(mInPreparationPhase ? 0 : -1ll, bandwidthIndex);
            return true;
        }
    }
    return false;
}
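To make the thresholds concrete: with the current variant at 800 kbps, the session will only consider switching up once the estimate exceeds 800 × 1.2 = 960 kbps (and the buffer is high), and will only consider switching down once the estimate drops below 800 kbps (and the buffer is low). The gap between the two thresholds acts as hysteresis, preventing the player from oscillating between adjacent variants when the measured bandwidth hovers near a variant's bitrate.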
size_t LiveSession::getBandwidthIndex(int32_t bandwidthBps) {
    if (mBandwidthItems.size() < 2) {
        // shouldn't be here if we only have 1 bandwidth, check
        // logic to get rid of redundant bandwidth polling
        ALOGW("getBandwidthIndex() called for single bandwidth playlist!");
        return 0;
    }

#if 1
    char value[PROPERTY_VALUE_MAX];
    ssize_t index = -1;
    if (property_get("media.httplive.bw-index", value, NULL)) {
        char *end;
        index = strtol(value, &end, 10);
        CHECK(end > value && *end == '\0');

        if (index >= 0 && (size_t)index >= mBandwidthItems.size()) {
            index = mBandwidthItems.size() - 1;
        }
    }

    if (index < 0) {
        char value[PROPERTY_VALUE_MAX];
        if (property_get("media.httplive.max-bw", value, NULL)) {
            char *end;
            long maxBw = strtoul(value, &end, 10);
            if (end > value && *end == '\0') {
                if (maxBw > 0 && bandwidthBps > maxBw) {
                    ALOGV("bandwidth capped to %ld bps", maxBw);
                    bandwidthBps = maxBw;
                }
            }
        }

        // Pick the highest bandwidth stream that's not currently blacklisted
        // below or equal to estimated bandwidth.
        index = mBandwidthItems.size() - 1;
        ssize_t lowestBandwidth = getLowestValidBandwidthIndex();
        while (index > lowestBandwidth) {
            // be conservative (70%) to avoid overestimating and immediately
            // switching down again.
            size_t adjustedBandwidthBps = bandwidthBps * 7 / 10;
            const BandwidthItem &item = mBandwidthItems[index];
            if (item.mBandwidth <= adjustedBandwidthBps
                    && isBandwidthValid(item)) {
                break;
            }
            --index;
        }
    }
#elif 0
    // Change bandwidth at random()
    size_t index = uniformRand() * mBandwidthItems.size();
#elif 0
    // There's a 50% chance to stay on the current bandwidth and
    // a 50% chance to switch to the next higher bandwidth (wrapping around
    // to lowest)
    const size_t kMinIndex = 0;

    static ssize_t mCurBandwidthIndex = -1;

    size_t index;
    if (mCurBandwidthIndex < 0) {
        index = kMinIndex;
    } else if (uniformRand() < 0.5) {
        index = (size_t)mCurBandwidthIndex;
    } else {
        index = mCurBandwidthIndex + 1;
        if (index == mBandwidthItems.size()) {
            index = kMinIndex;
        }
    }
    mCurBandwidthIndex = index;
#elif 0
    // Pick the highest bandwidth stream below or equal to 1.2 Mbit/sec
    size_t index = mBandwidthItems.size() - 1;
    while (index > 0 && mBandwidthItems.itemAt(index).mBandwidth > 1200000) {
        --index;
    }
#elif 1
    char value[PROPERTY_VALUE_MAX];
    size_t index;
    if (property_get("media.httplive.bw-index", value, NULL)) {
        char *end;
        index = strtoul(value, &end, 10);
        CHECK(end > value && *end == '\0');

        if (index >= mBandwidthItems.size()) {
            index = mBandwidthItems.size() - 1;
        }
    } else {
        index = 0;
    }
#else
    size_t index = mBandwidthItems.size() - 1;  // Highest bandwidth stream
#endif

    CHECK_GE(index, 0);

    return index;
}
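To see the effect of the 70% safety factor: with a measured 1 Mbps, the adjusted budget is 700 kbps, so the highest variant whose declared BANDWIDTH is at or below 700 kbps wins. Here is a minimal, self-contained sketch of that selection loop (illustrative only, ignoring the blacklist handling and system-property overrides in the real code):

#include <stdint.h>
#include <vector>

// Pick the highest variant whose declared bandwidth fits within 70% of the
// measured bandwidth. Assumes variantBps is non-empty and sorted ascending;
// falls back to the lowest variant when nothing fits.
size_t pickBandwidthIndex(const std::vector<int32_t> &variantBps, int32_t measuredBps) {
    // Be conservative: budgeting only 70% of the estimate keeps a slight
    // overestimate from forcing an immediate down-switch.
    const int64_t adjusted = (int64_t)measuredBps * 7 / 10;
    size_t index = variantBps.size() - 1;
    while (index > 0 && variantBps[index] > adjusted) {
        --index;
    }
    return index;
}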