Qualcomm Camera Framework -- A Brief Look at the Data Flow (01)

    Focus of this article: the call relationships among StagefrightRecorder.cpp, OMXCodec.cpp, MPEG4Writer.cpp, and CameraSource.cpp

===============================================================================

     When I first read this code, some parts were still unclear to me, in particular the relationship between encoding and writing the file. I only knew that the data called back from the lower layers passes through CameraSource.cpp, that the encoding is done in OMXCodec.cpp, that MPEG4Writer.cpp has a writer thread and track threads, and that StagefrightRecorder.cpp wires OMXCodec.cpp, MPEG4Writer.cpp, and CameraSource.cpp together. What kept puzzling me was this: if MPEG4Writer.cpp reads data directly from CameraSource.cpp, then where does encoding fit in?

     It just shows my knowledge is not broad enough, and my code-reading skills still need strengthening.

     This time, after going through the source, I finally sorted out the doubts above.

    OMXCodec.cpp's read() function reads data directly from CameraSource.cpp, while mSource->read() in MPEG4Writer.cpp's track thread reads data from OMXCodec.cpp. In other words, when the data comes back up through CameraSource.cpp, it is first encoded and only then written to the file.
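    To make this pull model concrete, here is a minimal standalone sketch. The types below are my own stubs (hypothetical, not the real AOSP classes): the "writer" loop pulls from an encoder stub, and the encoder stub pulls from a camera-source stub, so every frame is encoded before it ever reaches the writer.

#include <cstdio>
#include <memory>
#include <string>
#include <utility>

struct Source {                        // plays the role of MediaSource
    virtual ~Source() = default;
    virtual bool read(std::string *out) = 0;
};

struct FakeCameraSource : Source {     // stands in for CameraSource
    int framesLeft = 3;
    bool read(std::string *out) override {
        if (framesLeft-- == 0) return false;
        *out = "raw-frame";            // pretend this is a camera buffer
        return true;
    }
};

struct FakeEncoder : Source {          // stands in for OMXCodec
    explicit FakeEncoder(std::shared_ptr<Source> src) : mSource(std::move(src)) {}
    bool read(std::string *out) override {
        std::string raw;
        if (!mSource->read(&raw)) return false;   // pull a raw frame from the camera source
        *out = "encoded(" + raw + ")";            // "encode" it before handing it downstream
        return true;
    }
    std::shared_ptr<Source> mSource;
};

int main() {
    // Mirrors writer->addSource(encoder): the writer only ever sees encoded data.
    auto encoder = std::make_shared<FakeEncoder>(std::make_shared<FakeCameraSource>());
    std::string buffer;
    while (encoder->read(&buffer)) {              // what the track thread's loop does
        std::printf("write to mp4: %s\n", buffer.c_str());
    }
    return 0;
}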

   >>>>>>>> Let's start directly from StagefrightRecorder.cpp's start() function, which calls startMPEG4Recording()

StagefrightRecorder.cpp

status_t StagefrightRecorder::start() {
      ......
    switch (mOutputFormat) {
        case OUTPUT_FORMAT_DEFAULT:
        case OUTPUT_FORMAT_THREE_GPP:
        case OUTPUT_FORMAT_MPEG_4:
            status = startMPEG4Recording();
           
     ......
}

      >>>>>>>> Inside startMPEG4Recording(), the important calls are the ones shown below: setupMPEG4Recording(), setupMPEG4MetaData(), and mWriter->start()

status_t StagefrightRecorder::startMPEG4Recording() {
   ......

      status_t err = setupMPEG4Recording(
            mOutputFd, mVideoWidth, mVideoHeight,
            mVideoBitRate, &totalBitRate, &mWriter);

    sp<MetaData> meta = new MetaData;


    setupMPEG4MetaData(startTimeUs, totalBitRate, &meta);


    err = mWriter->start(meta.get());
  ......
}

 

     >>>>>>>> In setupMPEG4Recording() we see sp<MediaWriter> writer = new MPEG4Writer(outputFd); this is where the writer is created, so we now know the writer is an MPEG4Writer, which matters later. The method then calls setupMediaSource() to initialize the source, which is a CameraSource, and setupVideoEncoder() to initialize the encoder, which is an OMXCodec. Also note writer->addSource(encoder); this hands the encoder over to the writer, and that is exactly how MPEG4Writer.cpp and OMXCodec.cpp get connected.

 

status_t StagefrightRecorder::setupMPEG4Recording(
      ......
        sp<MediaWriter> *mediaWriter) {
    mediaWriter->clear();

  
    sp<MediaWriter> writer = new MPEG4Writer(outputFd);


    if (mVideoSource < VIDEO_SOURCE_LIST_END) {


        sp<MediaSource> mediaSource;       
        err = setupMediaSource(&mediaSource);
        if (err != OK) {
            return err;
        }


        sp<MediaSource> encoder;
        err = setupVideoEncoder(mediaSource, videoBitRate, &encoder);
        if (err != OK) {
            return err;
        }


        writer->addSource(encoder);
        *totalBitRate += videoBitRate;
    }


    // Audio source is added at the end if it exists.
    // This help make sure that the "recoding" sound is suppressed for
    // camcorder applications in the recorded files.
    if (!mCaptureTimeLapse && (mAudioSource != AUDIO_SOURCE_CNT)) {
        err = setupAudioEncoder(writer);
        if (err != OK) return err;
        *totalBitRate += mAudioBitRate;
    }


    if (mInterleaveDurationUs > 0) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setInterleaveDuration(mInterleaveDurationUs);
    }
    if (mLongitudex10000 > -3600000 && mLatitudex10000 > -3600000) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setGeoData(mLatitudex10000, mLongitudex10000);
    }
    if (mMaxFileDurationUs != 0) {
        writer->setMaxFileDuration(mMaxFileDurationUs);
    }
    if (mMaxFileSizeBytes != 0) {
        writer->setMaxFileSize(mMaxFileSizeBytes);
    }


    mStartTimeOffsetMs = mEncoderProfiles->getStartTimeOffsetMs(mCameraId);
    if (mStartTimeOffsetMs > 0) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setStartTimeOffsetMs(mStartTimeOffsetMs);
    }


    writer->setListener(mListener);
    *mediaWriter = writer;
    return OK;
}

  >>>>>>>> setupMediaSource() completes the initialization of the CameraSource

 

status_t StagefrightRecorder::setupMediaSource(
                      sp<MediaSource> *mediaSource) {
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        sp<CameraSource> cameraSource;
        status_t err = setupCameraSource(&cameraSource);

        if (err != OK) {
            return err;
        }
        *mediaSource = cameraSource;
    } else if (mVideoSource == VIDEO_SOURCE_GRALLOC_BUFFER) {
        // If using GRAlloc buffers, setup surfacemediasource.
        // Later a handle to that will be passed
        // to the client side when queried
        status_t err = setupSurfaceMediaSource();
        if (err != OK) {
            return err;
        }
        *mediaSource = mSurfaceMediaSource;
    } else {
        return INVALID_OPERATION;
    }
    return OK;
}

      >>>>>>> setupVideoEncoder() completes the initialization of the OMXCodec. Note OMXCodec::Create(..., cameraSource, ...): the source passed into Create() is the CameraSource, so when OMXCodec.cpp later calls mSource->read(), it is calling CameraSource.cpp's read() directly.

 

status_t StagefrightRecorder::setupVideoEncoder(
        ......
    sp<MediaSource> encoder = OMXCodec::Create(
            client.interface(), enc_meta,
            true /* createEncoder */, cameraSource,
            NULL, encoder_flags);


    if (encoder == NULL) {
        ALOGW("Failed to create the encoder");
        // When the encoder fails to be created, we need
        // release the camera source due to the camera's lock
        // and unlock mechanism.
        cameraSource->stop();
        return UNKNOWN_ERROR;
    }


    mVideoSourceNode = cameraSource;
    mVideoEncoderOMX = encoder;


    *source = encoder;


    return OK;
}

 

-----------------------------

    >>>>> As mentioned above, StagefrightRecorder.cpp calls MPEG4Writer.cpp's addSource() method [writer->addSource(encoder);], and the source passed in is the encoded data. This is the link between MPEG4Writer.cpp and OMXCodec.cpp: what MPEG4Writer.cpp reads and writes is the data that OMXCodec.cpp has already encoded.

MPEG4Writer.cpp 

 

    >>>>>> In MPEG4Writer.cpp's addSource(), look at Track *track = new Track(this, source, 1 + mTracks.size()); the source passed into new Track(..., source, ...) is, from the analysis above, the encoded data.

 

status_t MPEG4Writer::addSource(const sp<MediaSource> &source) {
    Mutex::Autolock l(mLock);
    if (mStarted) {
        ALOGE("Attempt to add source AFTER recording is started");
        return UNKNOWN_ERROR;
    }


    // At most 2 tracks can be supported.
    if (mTracks.size() >= 2) {
        ALOGE("Too many tracks (%d) to add", mTracks.size());
        return ERROR_UNSUPPORTED;
    }


    CHECK(source.get() != NULL);


    // A track of type other than video or audio is not supported.
    const char *mime;
    sp<MetaData> meta = source->getFormat();
    CHECK(meta->findCString(kKeyMIMEType, &mime));
    bool isAudio = !strncasecmp(mime, "audio/", 6);
    bool isVideo = !strncasecmp(mime, "video/", 6);
    if (!isAudio && !isVideo) {
        ALOGE("Track (%s) other than video or audio is not supported",
            mime);
        return ERROR_UNSUPPORTED;
    }


    // At this point, we know the track to be added is either
    // video or audio. Thus, we only need to check whether it
    // is an audio track or not (if it is not, then it must be
    // a video track).


    // No more than one video or one audio track is supported.
    for (List<Track*>::iterator it = mTracks.begin();
         it != mTracks.end(); ++it) {
        if ((*it)->isAudio() == isAudio) {
            ALOGE("%s track already exists", isAudio? "Audio": "Video");
            return ERROR_UNSUPPORTED;
        }
    }


    // This is the first track of either audio or video.
    // Go ahead to add the track.
    Track *track = new Track(this, source, 1 + mTracks.size());  // <-- this source becomes mSource in the Track constructor below
    mTracks.push_back(track);


    mHFRRatio = ExtendedUtils::HFR::getHFRRatio(meta);


    return OK;
}


MPEG4Writer::Track::Track(
        MPEG4Writer *owner, const sp<MediaSource> &source, size_t trackId)
    : mOwner(owner),
      mMeta(source->getFormat()),
      mSource(source),
      mDone(false),
      mPaused(false),
      mResumed(false),
      mStarted(false),
      mTrackId(trackId),
      mTrackDurationUs(0),
      mEstimatedTrackSizeBytes(0),
      mSamplesHaveSameSize(true),
      mStszTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mStcoTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mCo64TableEntries(new ListTableEntries<off64_t>(1000, 1)),
      mStscTableEntries(new ListTableEntries<uint32_t>(1000, 3)),
      mStssTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mSttsTableEntries(new ListTableEntries<uint32_t>(1000, 2)),
      mCttsTableEntries(new ListTableEntries<uint32_t>(1000, 2)),
      mCodecSpecificData(NULL),
      mCodecSpecificDataSize(0),
      mGotAllCodecSpecificData(false),
      mReachedEOS(false),
      mRotation(0),
      mHFRRatio(1) {
    getCodecSpecificDataFromInputFormatIfPossible();


    const char *mime;
    mMeta->findCString(kKeyMIMEType, &mime);
    mIsAvc = !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_AVC);
    mIsAudio = !strncasecmp(mime, "audio/", 6);
    mIsMPEG4 = !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_MPEG4) ||
               !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC);


    setTimeScale();
}

  >>>>>> threadEntry() is the method the track thread actually runs. Inside it, mSource->read(&buffer) is called in a loop to keep pulling data. To find out where that data comes from, we need to know where mSource is initialized. Searching for it, we find it in the Track constructor:

    MPEG4Writer::Track::Track(
            MPEG4Writer *owner, const sp<MediaSource> &source, size_t trackId)
        : mOwner(owner),
          ......
          mSource(source),

  That is where it is initialized. Looking back at the constructor marked above, we know this source is the encoded data.

 

status_t MPEG4Writer::Track::threadEntry() {

    while (!mDone && (err = mSource->read(&buffer)) == OK) {

       ......

    }
}
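    For reference, here is a self-contained sketch of what a track-thread loop like the one above conceptually does with each buffer it pulls: record the timestamp and size, hand the bytes off to be written into the file, and release the buffer. The types are my own stubs, not the real MPEG4Writer or MediaBuffer classes.

#include <cstdint>
#include <cstdio>
#include <vector>

struct MediaBufferStub {               // stands in for MediaBuffer
    std::vector<uint8_t> data;
    int64_t timeUs = 0;
    void release() { data.clear(); }   // the real loop must release every buffer it reads
};

struct EncoderStub {                   // stands in for the encoder handed in via addSource()
    int samplesLeft = 3;
    int64_t nextTimeUs = 0;
    bool read(MediaBufferStub *buf) {
        if (samplesLeft-- == 0) return false;     // pretend we reached end-of-stream
        buf->data.assign(16, 0xAB);               // a fake encoded sample
        buf->timeUs = nextTimeUs;
        nextTimeUs += 33000;                      // roughly 30 fps timestamps
        return true;
    }
};

int main() {
    EncoderStub source;                // plays the role of mSource
    MediaBufferStub buffer;
    bool done = false;
    // Mirrors: while (!mDone && (err = mSource->read(&buffer)) == OK)
    while (!done && source.read(&buffer)) {
        std::printf("sample: %zu bytes @ %lld us\n",
                    buffer.data.size(), (long long)buffer.timeUs);
        // The real track thread would update its sample tables (stsz/stts/stco)
        // here and queue the bytes for the writer thread to interleave into the file.
        buffer.release();
    }
    return 0;
}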

--------------------

     >>>>>> From the analysis above, we know StagefrightRecorder.cpp creates the OMXCodec and passes the CameraSource in at creation time. The point here is simply that the source below is the CameraSource, and that is how CameraSource.cpp and OMXCodec.cpp get connected.

OMXCodec.cpp

OMXCodec::OMXCodec(
        const sp<IOMX> &omx, IOMX::node_id node,
        uint32_t quirks, uint32_t flags,
        bool isEncoder,
        const char *mime,
        const char *componentName,
        const sp<MediaSource> &source,
        const sp<ANativeWindow> &nativeWindow)
    : mOMX(omx),
      mOMXLivesLocally(omx->livesLocally(node, getpid())),
      mNode(node),
      mQuirks(quirks),
      mFlags(flags),
      mIsEncoder(isEncoder),
      mIsVideo(!strncasecmp("video/", mime, 6)),
      mMIME(strdup(mime)),
      mComponentName(strdup(componentName)),
      mSource(source),
      mCodecSpecificDataIndex(0),
      mState(LOADED),
      mInitialBufferSubmit(true),
      mSignalledEOS(false),
      mNoMoreOutputData(false),
      mOutputPortSettingsHaveChanged(false),
      mSeekTimeUs(-1),
      mSeekMode(ReadOptions::SEEK_CLOSEST_SYNC),
      mTargetTimeUs(-1),
      mOutputPortSettingsChangedPending(false),
      mSkipCutBuffer(NULL),
      mLeftOverBuffer(NULL),
      mPaused(false),
      mNativeWindow(
              (!strncmp(componentName, "OMX.google.", 11))
                        ? NULL : nativeWindow),
      mNumBFrames(0),
      mInSmoothStreamingMode(false),
      mOutputCropChanged(false),
      mSignalledReadTryAgain(false),
      mReturnedRetry(false),
      mLastSeekTimeUs(-1),
      mLastSeekMode(ReadOptions::SEEK_CLOSEST) {
    mPortStatus[kPortIndexInput] = ENABLING;
    mPortStatus[kPortIndexOutput] = ENABLING;


    setComponentRole();
}

    >>>>>> This read() method is the one called by mSource->read() in MPEG4Writer.cpp. As for the encoding process itself, I have not looked into it in detail yet.

status_t OMXCodec::read(
        MediaBuffer **buffer, const ReadOptions *options) { 

      .......

}
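    Although I have not traced the encoding path in detail, the general pattern such a wrapper follows is that input is fed to the codec asynchronously while read() blocks until an encoded output buffer becomes available. Below is a standalone sketch of that blocking-read pattern; it is my own stub built on standard C++ threads, not OMXCodec itself.

#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class AsyncEncoderStub {
public:
    AsyncEncoderStub() : mWorker([this] { encodeLoop(); }) {}
    ~AsyncEncoderStub() { mWorker.join(); }

    // Blocks until the worker has produced an encoded buffer (or signalled end-of-stream).
    bool read(std::string *out) {
        std::unique_lock<std::mutex> lock(mLock);
        mBufferFilled.wait(lock, [this] { return !mFilled.empty() || mEos; });
        if (mFilled.empty()) return false;
        *out = mFilled.front();
        mFilled.pop();
        return true;
    }

private:
    void encodeLoop() {
        for (int i = 0; i < 3; ++i) {             // pretend to pull and encode 3 raw frames
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            std::lock_guard<std::mutex> lock(mLock);
            mFilled.push("encoded-frame-" + std::to_string(i));
            mBufferFilled.notify_one();           // wake up a blocked read()
        }
        std::lock_guard<std::mutex> lock(mLock);
        mEos = true;                              // no more output is coming
        mBufferFilled.notify_one();
    }

    std::mutex mLock;
    std::condition_variable mBufferFilled;
    std::queue<std::string> mFilled;
    bool mEos = false;
    std::thread mWorker;                          // declared last so other members exist before it starts
};

int main() {
    AsyncEncoderStub encoder;
    std::string buf;
    while (encoder.read(&buf)) std::printf("got %s\n", buf.c_str());
    return 0;
}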

==============================================================================================

Welcome to follow my personal WeChat official account, where I record bits and pieces of my development work as well as daily life. I hope to connect with more of you there!

 

 
