How live555 obtains the SPS and PPS

This question puzzled me for a long time: just to obtain these two things, a lot of classes are involved. We first need to identify which classes take part, and then sort out how they relate to each other, so let's analyze it step by step.
The SPS and PPS are obtained after the server receives the DESCRIBE command. Before reading on, please see the previous article: http://www.shouyanwang.org/thread-704-1-1.html
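Before diving in, a quick refresher on what we are actually hunting for: the SPS and PPS are simply NAL units whose nal_unit_type is 7 and 8, carried in the low 5 bits of the NAL header byte. A minimal sketch for recognizing them (my own helper, not live555 code):

```cpp
#include <cstdint>

// Hypothetical helper (not part of live555): classify an H.264 NAL unit by
// the type field in its first byte. The byte layout is forbidden_zero_bit(1),
// nal_ref_idc(2), nal_unit_type(5); SPS is type 7, PPS is type 8.
enum class NalKind { SPS, PPS, Other };

NalKind classifyNal(uint8_t firstByte) {
  switch (firstByte & 0x1F) {   // low 5 bits = nal_unit_type
    case 7:  return NalKind::SPS;
    case 8:  return NalKind::PPS;
    default: return NalKind::Other;
  }
}
```

For example, the common header bytes 0x67 and 0x68 seen at the start of H.264 files are exactly an SPS and a PPS.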


In OnDemandServerMediaSubsession::sdpLines():

FramedSource* inputSource = createNewStreamSource(0, estBitrate);
RTPSink* dummyRTPSink= createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);


These two calls reveal the class hierarchies involved when handling the H264 type:

A first look at the class structure of FramedSource and RTPSink suggests that FramedSource is concerned with extracting H.264 frames from the file one by one, while RTPSink is concerned with fragmenting them, packaging them as RTP/RTCP, and sending them to the client.

This gives rise to two main inheritance chains:

FramedSource* inputSource = createNewStreamSource(0, estBitrate);
leads to the chain:
H264VideoStreamFramer --- MPEGVideoStreamFramer --- FramedFilter --- FramedSource --- MediaSource
So the inputSource we see later is really an H264VideoStreamFramer.

RTPSink* dummyRTPSink= createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);
leads to the chain:
H264VideoRTPSink --- VideoRTPSink --- MultiFramedRTPSink --- RTPSink --- MediaSink
So the RTPSink we see later is really an H264VideoRTPSink.

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL) return fAuxSDPLine; // it's already been set up (for a previous client)

  printf("H264VideoFileServerMediaSubsession getAuxSDPLine\r\n");

  if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
    // Note: For H264 video files, the 'config' information ("profile-level-id" and "sprop-parameter-sets") isn't known
    // until we start reading the file.  This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink; // does the "dummy" prefix mean it's just a reference?

    // Start reading the file: the point here is to obtain the SPS and PPS
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this); // declared in MediaSink; for H264, inputSource is never NULL
    ...

fDummyRTPSink->startPlaying() actually invokes MediaSink::startPlaying():
Boolean MediaSink::startPlaying(MediaSource& source,
                                afterPlayingFunc* afterFunc,
                                void* afterClientData) {

  printf("MediaSink startPlaying....\r\n");
  // Make sure we're not already being played:
  if (fSource != NULL) { // note: fSource here, not source
    printf("MediaSink is already being played\r\n");
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }
  fSource = (FramedSource*)&source;

  fAfterFunc = afterFunc; // function pointer declared in MediaSink; points at H264VideoFileServerMediaSubsession::afterPlayingDummy
  fAfterClientData = afterClientData; // actually points at the H264VideoFileServerMediaSubsession
  return continuePlaying(); // invokes the H264 sink's continuePlaying()
}
Note that fSource now points at the source passed in, and that source is the H264VideoStreamFramer from the very top.
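This handoff is the classic live555 "sink pulls from source" pattern. A stripped-down model of what startPlaying() just did (my own toy classes, not the real MediaSink/FramedSource):

```cpp
// Toy model (not live555 itself) of the startPlaying() handoff: the sink
// remembers the source, rejects a second concurrent start, and then drives
// the source via continuePlaying().
struct Source { int framesServed = 0; };

struct Sink {
  Source* fSource = nullptr;              // mirrors MediaSink::fSource

  bool startPlaying(Source& source) {
    if (fSource != nullptr) return false; // "this sink is already being played"
    fSource = &source;                    // the sink now holds the source
    return continuePlaying();
  }

  bool continuePlaying() {
    fSource->framesServed++;              // pull one frame from the source
    return true;
  }
};
```

The sink, not the source, is in control: each delivered frame triggers the sink to pull the next one.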

continuePlaying() here resolves to H264VideoRTPSink::continuePlaying():
  Boolean H264VideoRTPSink::continuePlaying() {
    // First, check whether we have a 'fragmenter' class set up yet.
    // If not, create it now:
    if (fOurFragmenter == NULL) {
      printf("H264VideoRTPSink init H264FUAFragmenter\r\n");
      ...
Pay very close attention here, because a new class, H264FUAFragmenter, enters the picture. Its inheritance chain is:

H264FUAFragmenter -- FramedFilter -- FramedSource -- MediaSource

fOurFragmenter = new H264FUAFragmenter(envir(), fSource, OutPacketBuffer::maxSize, // 100K

   ourMaxPacketSize() - 12/*RTP hdr size*/);

fSource = fOurFragmenter; // fOurFragmenter does the RTP fragmentation


Note these two lines in particular: fSource first participates in fOurFragmenter's construction, and is then re-pointed at the newly created fOurFragmenter.

During that construction, the fragmenter's member FramedSource* fInputSource; ends up pointing at the source that was passed in, i.e. the H264VideoStreamFramer.

I stress this here because it matters later.
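The splicing trick can be modeled in a few lines (again my own toy types, not live555's). The filter wraps the old source, and the sink's fSource is then re-pointed at the filter:

```cpp
// Toy model of how continuePlaying() splices the fragmenter into the chain.
struct FramedSrc { virtual ~FramedSrc() {} };

struct Fragmenter : FramedSrc {
  FramedSrc* fInputSource;                 // mirrors FramedFilter::fInputSource
  explicit Fragmenter(FramedSrc* in) : fInputSource(in) {}
};

struct SinkModel {
  FramedSrc* fSource = nullptr;
  Fragmenter* fOurFragmenter = nullptr;

  void spliceInFragmenter() {
    if (fOurFragmenter == nullptr) {
      // The fragmenter captures the current source (the framer)...
      fOurFragmenter = new Fragmenter(fSource);
      // ...and then itself becomes the sink's source.
      fSource = fOurFragmenter;
    }
  }
};
```

After the splice, the sink pulls from the fragmenter, and the fragmenter pulls from the original framer: a filter inserted into the data path.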


The core of continuePlaying() is buildAndSendPacket(), which implements the RTP packetization described in RFC 3984; its own core call is:
packFrame();
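As an aside, the FU-A packetization that RFC 3984 defines (and that H264FUAFragmenter implements) boils down to: copy the F/NRI bits of the NAL header into an FU indicator whose type is 28, then prepend an FU header carrying the S/E flags and the original nal_unit_type to each chunk. An illustrative sketch of the rule, not live555's actual code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative FU-A fragmentation per RFC 3984 (not H264FUAFragmenter):
// split one large NAL unit into packets, each led by an FU indicator
// and an FU header.
std::vector<std::vector<uint8_t>> fuaFragment(const std::vector<uint8_t>& nal,
                                              size_t maxPayload) {
  std::vector<std::vector<uint8_t>> packets;
  uint8_t indicator = (nal[0] & 0xE0) | 28;      // keep F+NRI, type = 28 (FU-A)
  uint8_t type = nal[0] & 0x1F;                  // original nal_unit_type
  size_t pos = 1;                                // skip the NAL header byte
  while (pos < nal.size()) {
    size_t n = std::min(maxPayload, nal.size() - pos);
    uint8_t fuHeader = type;
    if (pos == 1) fuHeader |= 0x80;              // S bit: first fragment
    if (pos + n == nal.size()) fuHeader |= 0x40; // E bit: last fragment
    std::vector<uint8_t> pkt = {indicator, fuHeader};
    pkt.insert(pkt.end(), nal.begin() + pos, nal.begin() + pos + n);
    packets.push_back(pkt);
    pos += n;
  }
  return packets;
}
```

The receiver reconstructs the NAL header from the indicator's F/NRI bits plus the FU header's type bits, which is why the header byte itself is not carried in the payload.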


void MultiFramedRTPSink::packFrame() {
  // Get the next frame.

  // First, see if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    printf("MultiFramedRTPSink packFrame Over flow data---\r\n");
    // Use this frame before reading a new one from the source
    unsigned frameSize = fOutBuf->overflowDataSize(); // where is fOutBuf initialized?
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    printf("MultiFrameRTPSink packFrame read a new frame from the source--\r\n");

    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;

    fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
    fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
    fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
    fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
                          afterGettingFrame, this, ourHandleClosure, this); // everything seems to converge here
  }
}

At this point fSource->getNextFrame() runs FramedSource::getNextFrame(), because H264FUAFragmenter does not override that method:
void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData) {

  printf("FrameSource getNextFrame ...\r\n");
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  doGetNextFrame(); // here this actually invokes H264FUAFragmenter::doGetNextFrame()
}
The code above initializes FramedSource's member variables, chiefly the function pointer and the void* pointer: the void* points back at the H264VideoRTPSink, and the function pointer at a function of the H264VideoRTPSink.
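Why this call lands in H264FUAFragmenter::doGetNextFrame() is worth pausing on: getNextFrame() itself is non-virtual and only records the request, but it ends by calling the virtual doGetNextFrame(), so the dynamic type of fSource decides which implementation runs. A minimal model of that dispatch (toy classes, not live555's):

```cpp
#include <string>

// Toy model of the FramedSource::getNextFrame() pattern: the public entry
// point is non-virtual; the real work is deferred to the virtual
// doGetNextFrame(), so the dynamic type of the object picks the code.
struct FramedSourceModel {
  bool fIsCurrentlyAwaitingData = false;
  std::string lastImpl;

  void getNextFrame() {                 // non-virtual, like live555's
    fIsCurrentlyAwaitingData = true;
    doGetNextFrame();                   // virtual dispatch happens here
  }
  virtual void doGetNextFrame() { lastImpl = "FramedSource"; }
  virtual ~FramedSourceModel() {}
};

struct FragmenterModel : FramedSourceModel {
  void doGetNextFrame() override { lastImpl = "Fragmenter"; }
};

struct FramerModel : FramedSourceModel {
  void doGetNextFrame() override { lastImpl = "Framer"; }
};
```

The same getNextFrame() body therefore behaves differently depending on whether it was called on the fragmenter or on the framer, which is exactly the subtlety that trips up the analysis below.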

H264FUAFragmenter's doGetNextFrame():
void H264FUAFragmenter::doGetNextFrame() {
  if (fNumValidDataBytes == 1) {

    // the normal path goes through here
    printf("H264FUAFragmenter doGetNextFrame validDataBytes..\r\n");
    // We have no NAL unit data currently in the buffer.  Read a new one:

    // fInputSource here is really the H264VideoStreamFramer
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  } else {

    // split the NAL unit into fragments, or combine units, and send
    ...

See the code above? The core is fInputSource->getNextFrame(). fInputSource is of type H264VideoStreamFramer, and that class does not override getNextFrame() either, so execution again lands in FramedSource::getNextFrame(), exactly as listed earlier.

One correction to the comment I made on that listing, though: this time the FramedSource object is not the H264FUAFragmenter but the framer itself, so it is that object whose doGetNextFrame() runs at the end.

From H264VideoStreamFramer's inheritance chain we can infer that doGetNextFrame() here actually resolves to its parent class MPEGVideoStreamFramer's doGetNextFrame():
void MPEGVideoStreamFramer::doGetNextFrame() {
  printf("MPEGVideoStreamFrame doGetNextFrame ....\r\n");
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}


I'll continue the analysis tomorrow; just this much took nearly an hour and a half... tired, taking a break.