How the live555 RTSP client handles buffers

A. Buffer Management
How to control burst packet input is a big topic. The leaky bucket model may be useful; however, if a long burst of higher-rate packets arrives (in our system), the bucket will overflow and our control function will take action against the packets in that burst.
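To make the overflow behavior concrete, here is a minimal leaky-bucket sketch (illustrative only, not part of live555; the depth and drain-rate parameters are assumptions): packets drain at a fixed rate per tick, and once a burst fills the bucket, further packets in that burst are rejected.

```cpp
#include <cassert>
#include <cstdint>

// Minimal leaky-bucket sketch (hypothetical): the bucket drains at a
// fixed rate; a burst that exceeds the bucket depth overflows and the
// excess packets are dropped by the caller.
class LeakyBucket {
public:
    LeakyBucket(uint32_t depth, uint32_t drainPerTick)
        : fDepth(depth), fDrain(drainPerTick), fLevel(0) {}

    // Called once per clock tick: drain the bucket.
    void tick() { fLevel = (fLevel > fDrain) ? fLevel - fDrain : 0; }

    // Try to admit one packet; returns false if the bucket is full.
    bool admit() {
        if (fLevel >= fDepth) return false; // burst overflow: drop
        ++fLevel;
        return true;
    }

private:
    uint32_t fDepth, fDrain, fLevel;
};
```

A long burst thus sees its tail rejected, which is exactly the behavior the text warns about.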

  

In our client system, in order to decouple the library (which manages the session and the receiving thread) from the player (which displays the picture and sends the sound to the speaker, and which includes the decoder), we put a middle layer between the server and the player, which makes porting easy.

The following gives a more detailed description. 

  

BTW: the UPC (usage parameter control) and the handling of exceptions such as packet loss are complicated, and we will not give a full description here.

1.       Receiver Buffer 

When the session has been set up, we are ready to receive streaming packets. Suppose, for example, there are two media subsessions, one audio and one video; we keep one buffer for each of them. The following are the details of receiver buffer management.

1)      In the receiving part we define a 'Packet' class, which is used to store and handle one RTP packet.

2)      Each subsession has one buffer queue of variable length; the number of buffers in the queue is determined by the maximum delay time.

3)      The buffer queue is responsible for packet re-ordering, among other things.

4)      In the receiver buffer we handle each packet as soon as possible (unless a packet is delayed by the network, in which case we wait for it until the delay threshold is reached), and leave buffer overflow and underflow management to the Player.
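The per-subsession queue described above can be sketched as follows (names and sizing formula are illustrative, not the actual live555 classes): packets are stored keyed by RTP sequence number, so out-of-order arrivals are re-ordered automatically, and the queue depth is derived from the maximum tolerated delay.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical sketch of one subsession's receiver buffer:
// a 'Packet' holds one RTP packet, and the queue re-orders
// packets by sequence number (wraparound handling omitted).
struct Packet {
    uint16_t rtpSeqNo;
    // ... payload, RTP timestamp, etc.
};

class BufferQueue {
public:
    // Capacity derived from the maximum delay we tolerate.
    explicit BufferQueue(unsigned maxDelayMs, unsigned packetsPerMs = 1)
        : fCapacity(maxDelayMs * packetsPerMs) {}

    // Insert one packet; returns false on overflow so the caller
    // (ultimately the Player) can decide what to do.
    bool insert(const Packet& p) {
        if (fQueue.size() >= fCapacity) return false;
        fQueue[p.rtpSeqNo] = p;
        return true;
    }

    // Pop the packet with the smallest sequence number, if any.
    bool popOldest(Packet& out) {
        if (fQueue.empty()) return false;
        out = fQueue.begin()->second;
        fQueue.erase(fQueue.begin());
        return true;
    }

private:
    std::size_t fCapacity;
    std::map<uint16_t, Packet> fQueue; // ordered by sequence number
};
```

Because the map is ordered by sequence number, a packet that arrives late but before the delay threshold simply slots into place.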


 

Figure 1: packet receive flow 

  

  

Figure 2: packet handle flow (with the decoder) 

  

2.       Player (Decoding) Buffer 

The player stores media data from the RTSP client into one buffer per stream, allocating memory for each stream according to the maximum preroll length. In the initial phase, the player waits until every stream has buffered at least its preroll time, so each buffer's length is Preroll_max + C (where C is a constant). When every buffer is ready, the player starts the playback thread and plays the content.
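The start-up rule above can be sketched like this (names are illustrative; the preroll values and the constant C are whatever the session provides):

```cpp
#include <cassert>
#include <vector>

// Sketch of the initial buffering gate: playback may start only once
// every stream has buffered at least its own preroll duration, and
// each buffer is sized for Preroll_max + C.
struct StreamBuffer {
    double prerollSec;   // required preroll for this stream
    double bufferedSec;  // media currently buffered
};

bool readyToPlay(const std::vector<StreamBuffer>& streams) {
    for (const auto& s : streams)
        if (s.bufferedSec < s.prerollSec) return false;
    return true;
}

// Buffer length to allocate for every stream: Preroll_max + C.
double allocSeconds(const std::vector<StreamBuffer>& streams, double C) {
    double maxPreroll = 0;
    for (const auto& s : streams)
        if (s.prerollSec > maxPreroll) maxPreroll = s.prerollSec;
    return maxPreroll + C;
}
```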

  

Figure 3: Playback with Stream Buffers 

  

The playback thread tracks time stamps for every stream. During playback, one of the streams may be delayed, and the corresponding buffer will then underrun. If the video stream is delayed, the audio plays normally but the video stalls: the playback thread continues to advance the audio time stamp while the video time stamp stops increasing. When new video data arrives, the playback thread decides whether to skip video frames until the next key frame or to play faster to catch up with the audio time stamp; usually the player chooses to play faster if the delay was short. If, on the other hand, it is the audio stream that stalled, the player buffers again until every buffer holds more than T seconds of data. Here T is a short time related to the audio stream's preroll, and it can be smaller than or equal to the preroll. This reduces audio discontinuity when the network jitters; to make such stalls rarer, choose a larger T or a better network.

  

If one of the buffers overflows, this is treated as an error. For the video stream, the error handler drops data until the next key frame arrives; for the audio stream, it simply drops some data.
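The underrun and overflow policies described above boil down to a small decision table; a sketch (the threshold value and names are assumptions, not from the original code):

```cpp
#include <cassert>

// Sketch of the recovery policy: video underrun -> play faster after
// a short stall, skip to the next key frame after a long one; audio
// underrun -> re-buffer; overflow -> drop data (to the next key frame
// for video, since decoding must resume at a key frame).
enum class Action { PlayFaster, SkipToKeyFrame, Rebuffer,
                    DropToKeyFrame, DropData };

Action onVideoUnderrun(double stalledSec, double shortStallThreshold) {
    return (stalledSec <= shortStallThreshold) ? Action::PlayFaster
                                               : Action::SkipToKeyFrame;
}

Action onAudioUnderrun() { return Action::Rebuffer; }

Action onOverflow(bool isVideo) {
    return isVideo ? Action::DropToKeyFrame : Action::DropData;
}
```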

  

Figure 4: Process Buffer Overflow or Underflow 

B. How to control the receive loop 
In the main loop of live555's openRTSP example,

env->taskScheduler().doEventLoop()

the function doEventLoop takes an optional parameter (a watch variable); by setting that variable you can make the loop exit. Alternatively, you can directly call the resource-release methods described in sections C and D below to pause receiving or exit the whole thread.
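In live555 the real call is env->taskScheduler().doEventLoop(&eventLoopWatchVariable), and the loop returns once the variable becomes non-zero (typically set from another thread or a callback). A simplified stand-in of that pattern, without the real event processing:

```cpp
#include <cassert>

// Simplified imitation of live555's watch-variable pattern: the loop
// polls the variable between "events" and returns once it is non-zero.
// (Here we set it from inside the loop purely for demonstration; in a
// real client another thread or a callback sets it.)
volatile char eventLoopWatchVariable = 0;

int doEventLoopSim(volatile char* watchVariable) {
    int iterations = 0;
    while (*watchVariable == 0) {
        ++iterations;                            // ... process one event ...
        if (iterations == 5) *watchVariable = 1; // simulate external shutdown
    }
    return iterations;
}
```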

  

C. PAUSE & SEEK
The openRTSP example does not give a concrete implementation, but the latest liveMedia version supports SEEK (including the server side).

// PAUSE:
playerIn.rtspClient->pauseMediaSession(*(playerIn.Session));

// RESUME (passing start = -1 resumes from the paused position):
playerIn.rtspClient->playMediaSession(*(playerIn.Session), -1);

// SEEK:
float sessionLength = Session->playEndTime();
// First obtain the playable time range (parsed from the SDP),
// then PAUSE, then:
rtspClient->playMediaSession(*Session, start);
// where start must be less than sessionLength

D. Releasing resources
The solution openRTSP provides is the shutdown() function, but while connecting the library to our player we found that one thread would never exit. Referring to MPlayer's release scheme (its RTSP support uses live555's code), we arrived at the code below, which currently runs without problems.

void OutRTSPClient() // rtpState is a struct we define that holds session state
{
    if (rtpState->Session == NULL)
        return;

    if (rtpState->rtspClient != NULL) {
        MediaSubsessionIterator iter(*(rtpState->Session));
        MediaSubsession* subsession;
        while ((subsession = iter.next()) != NULL) {
            Medium::close(subsession->sink);
            subsession->sink = NULL;
            rtpState->rtspClient->teardownMediaSubsession(*subsession);
        }
    }

    UsageEnvironment* env = NULL;
    TaskScheduler* scheduler = NULL;
    if (rtpState->Session != NULL) {
        env = &(rtpState->Session->envir());
        scheduler = &(env->taskScheduler());
    }
    Medium::close(rtpState->Session);
    Medium::close(rtpState->rtspClient);

    env->reclaim();
    delete scheduler;
}


This article comes from a CSDN blog; please credit the source when reposting: http://blog.csdn.net/yufangbo/archive/2009/11/27/4879086.aspx







****************
E. Keep-alive between the RTSP server and client
There are several different ways to implement keep-alive between an RTSP server and its clients.
1. First, the RTSP server should keep a timeout timer for every client and delete the corresponding session once the client times out. As long as the session is alive, this timer should keep being refreshed; that refreshing is what keep-alive means.
2. The keep-alive method depends on the transport mode the client requested. For TCP transport, the server refreshes the timeout timer each time it successfully sends a data packet to the client.
3. For UDP transport, the server does not know whether the client received the packets it sent. The standard approach is to refresh the timeout timer each time the server receives an RTCP packet (receiver report) from the client. However, some client software does not send receiver reports and instead periodically sends OPTIONS, GET_PARAMETER, or SET_PARAMETER requests to the server; the server then needs to refresh the timeout timer when it receives those messages too.
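The three rules above can be sketched as a per-session timer (illustrative names; the timeout value and the time source are assumptions, not any particular server's API):

```cpp
#include <cassert>
#include <string>

// Sketch of the keep-alive rules: a per-session deadline that is
// refreshed by whatever events the transport mode treats as proof
// the client is still alive.
class SessionTimer {
public:
    SessionTimer(double timeoutSec, bool isTCP)
        : fTimeout(timeoutSec), fIsTCP(isTCP), fDeadline(timeoutSec) {}

    // TCP: each successfully sent data packet refreshes the timer.
    void onDataSent(double now) { if (fIsTCP) fDeadline = now + fTimeout; }

    // UDP: an RTCP receiver report from the client refreshes the timer...
    void onRTCPReceiverReport(double now) { if (!fIsTCP) fDeadline = now + fTimeout; }

    // ...and so does any keep-alive RTSP request some clients send
    // instead (OPTIONS, GET_PARAMETER, SET_PARAMETER).
    void onRTSPRequest(const std::string& method, double now) {
        if (method == "OPTIONS" || method == "GET_PARAMETER" ||
            method == "SET_PARAMETER")
            fDeadline = now + fTimeout;
    }

    // Once expired, the server deletes the session.
    bool expired(double now) const { return now >= fDeadline; }

private:
    double fTimeout;
    bool fIsTCP;
    double fDeadline;
};
```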