This problem bothered me for quite a while. Obtaining these two items (the SPS and PPS) involves a great many classes, so first we need to sort out which classes are involved, and then untangle the dependency relationships among them. Let's analyze it step by step:
The SPS and PPS are obtained after the server receives the DESCRIBE command. Before reading further, please read the previous article first: http://www.shouyanwang.org/thread-704-1-1.html

RtspServerMediaSubSession sdpLines:

FramedSource* inputSource = createNewStreamSource(0, estBitrate);
RTPSink* dummyRTPSink = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);

These two calls essentially reveal the class hierarchies involved when handling the H264 type.

A first look at the FramedSource and RTPSink hierarchies suggests that FramedSource is in charge of extracting H264 frames one by one from the file, while RTPSink is likely in charge of splitting and packing them into RTP/RTCP packets and sending them out.

This gives rise to two main inheritance chains:

FramedSource* inputSource = createNewStreamSource(0, estBitrate);

leads to the chain:

H264VideoStreamFramer --- MPEGVideoStreamFramer --- FramedFilter --- FramedSource --- MediaSource --- Medium

So wherever inputSource appears later, it is essentially an H264VideoStreamFramer.

RTPSink* dummyRTPSink = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);

leads to the chain:

H264VideoRTPSink --- VideoRTPSink --- MultiFramedRTPSink --- RTPSink --- MediaSink

So wherever this RTPSink appears later, it is essentially an H264VideoRTPSink.
The extraction itself starts in H264VideoFileServerMediaSubsession::getAuxSDPLine():

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)
  printf("H264VideoFileServerMediaSubsession getAuxSDPLine\r\n");
  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and "sprop-parameter-sets") isn't known
    // until we start reading the file. This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink; // does the "dummy" prefix just mean it is a borrowed reference?
    // Start reading the file: the whole point here is to obtain the SPS and PPS
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this); // declared in MediaSink; for H264, inputSource here must not be NULL
    ...
fDummyRTPSink->startPlaying actually invokes MediaSink::startPlaying():
Boolean MediaSink::startPlaying(MediaSource& source,
                                afterPlayingFunc* afterFunc,
                                void* afterClientData) {
  printf("MediaSink startPlaying....\r\n");
  // Make sure we're not already being played:
  if (fSource != NULL) { // note: this checks fSource, not source
    printf("MediaSink is already being played\r\n");
    envir().setResultMsg("This sink is already being played");
    return False;
  }
  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }
  fSource = (FramedSource*)&source;
  fAfterFunc = afterFunc;             // function pointer defined in MediaSink; here it points at H264VideoFileServerMediaSubsession::afterPlayingDummy
  fAfterClientData = afterClientData; // actually points at the H264VideoFileServerMediaSubsession
  return continuePlaying();           // invokes the H264 sink's continuePlaying()
}

Note that fSource now points at the incoming source, and that source is the H264VideoStreamFramer from the very top.
continuePlaying() here resolves to H264VideoRTPSink::continuePlaying(). Pay close attention here, because it introduces a new class, H264FUAFragmenter, whose inheritance chain is:

H264FUAFragmenter --- FramedFilter --- FramedSource --- MediaSource

Boolean H264VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  if (fOurFragmenter == NULL) {
    printf("H264VideoRTPSink init H264FUAFragmenter\r\n");
    fOurFragmenter = new H264FUAFragmenter(envir(), fSource, OutPacketBuffer::maxSize, // 100K
                                           ourMaxPacketSize() - 12/*RTP hdr size*/);
    fSource = fOurFragmenter; // fOurFragmenter implements the RTP fragmentation
  }
  ...
}

Pay special attention to those two lines: fSource first participates in fOurFragmenter's construction, and is then re-pointed at the newly created fOurFragmenter. During fOurFragmenter's construction, its member FramedSource* fInputSource is set to the incoming source, i.e. the H264VideoStreamFramer. I emphasize this here because it will matter later.
The core of continuePlaying() is buildAndSendPacket(); buildAndSendPacket() implements the RTP packetization described in RFC 3984, and its own core call is:

packFrame();
void MultiFramedRTPSink::packFrame() {
  // Get the next frame.
  // First, see if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    printf("MultiFramedRTPSink packFrame Over flow data---\r\n");
    // Use this frame before reading a new one from the source
    unsigned frameSize = fOutBuf->overflowDataSize(); // where is fOutBuf initialized?
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();
    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    printf("MultiFrameRTPSink packFrame read a new frame from the source--\r\n");
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;
    fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
    fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
    fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
    fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;
    // printf("MultiFrameRTPSink packFrame curptr:%d,totalBytesAvailable:%d--\r\n",fOutBuf->curPtr(), fOutBuf->totalBytesAvailable());
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                          afterGettingFrame, this, ourHandleClosure, this); // it seems everything converges here?
  }
}
At this point, fSource->getNextFrame actually executes FramedSource::getNextFrame, because H264FUAFragmenter does not override that method:
void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {
  printf("FrameSource getNextFrame ...\r\n");
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }
  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;
  // what actually gets invoked here is H264FUAFragmenter::doGetNextFrame()
  doGetNextFrame();
}
FramedSource::getNextFrame() above completes the initialization of the FramedSource members, chiefly the function pointers and the void* pointer: the void* points at the H264VideoRTPSink, and the function pointers point at functions of the H264VideoRTPSink. Next, H264FUAFragmenter's doGetNextFrame():
void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {
    // this is the branch normally taken
    printf("H264FUAFragmenter doGetNextFrame validDataBytes..\r\n");
    // We have no NAL unit data currently in the buffer. Read a new one:
    // fInputSource here actually refers to the H264VideoStreamFramer
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {
    // split the NAL unit up, or combine NAL units, and send
    ...
  }
}
See the code above: the core is fInputSource->getNextFrame. fInputSource is of type H264VideoStreamFramer, but H264VideoStreamFramer does not override this method either, so in the end execution again lands in FramedSource::getNextFrame:
void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {
  // ... same bookkeeping as in the listing shown above ...
  fIsCurrentlyAwaitingData = True;
  // (my earlier comment was wrong: this time the FramedSource is NOT an H264FUAFragmenter object)
  doGetNextFrame(); // ---- so whose doGetNextFrame actually runs here?
}
From H264VideoStreamFramer's inheritance chain we can infer that the doGetNextFrame invoked here is actually its parent class MPEGVideoStreamFramer's doGetNextFrame:
void MPEGVideoStreamFramer::doGetNextFrame() {
  printf("MPEGVideoStreamFrame doGetNextFrame ....\r\n");
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}
I'll continue the analysis tomorrow. Even this little bit took nearly an hour and a half... tired, time for a break.