Live555 FAQ


I have successfully used the "testRTSPClient" demo application to receive a RTSP/RTP stream. Using this application code as a model, how can I decode the received video (and/or audio) data?

The "testRTSPClient" demo application receives each (video and/or audio) frame into a memory buffer,but does not do anything with the frame data.You can, however, use this code as a model for a 'media player' application that decodes and renders these frames.Note, in particular, the" DummySink" class that the"testRTSPClient" demo application uses - andthe (non-static) "DummySink::afterGettingFrame()" function.When this function is called, a complete 'frame' (for H.264, this will be a "NAL unit") will have already been delivered into"fReceiveBuffer".Note that our "DummySink" implementation doesn't actually do anything with this data; that's why it's called a 'dummy' sink.

If you want to decode (or otherwise process) these frames, you would replace "DummySink" with your own "MediaSink" subclass. Its "afterGettingFrame()" function would pass the data (at "fReceiveBuffer", of length "frameSize") to a decoder. (A decoder would also use the "presentationTime" timestamp to properly time the rendering of each frame, and to synchronize audio and video.)
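As a concrete starting point, here is a minimal sketch of such a subclass's (non-static) "afterGettingFrame()" function. It assumes a "DecoderSink" class that is otherwise a copy of "DummySink" from "testRTSPClient.cpp" (same members and the same static-callback plumbing); "decodeFrame()" is a hypothetical placeholder for whatever decoder API you are actually using:

    // Sketch only: "DecoderSink" is assumed to be a copy of "DummySink" from
    // "testRTSPClient.cpp" (same fReceiveBuffer/fSubsession members and the same
    // static afterGettingFrame() trampoline).  decodeFrame() is hypothetical.
    void DecoderSink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                        struct timeval presentationTime,
                                        unsigned /*durationInMicroseconds*/) {
      if (numTruncatedBytes > 0) {
        // The frame didn't fit in fReceiveBuffer, so it can't be decoded as-is:
        envir() << "Frame truncated by " << numTruncatedBytes << " bytes\n";
      } else {
        // Hand the complete frame (for H.264, one NAL unit) to the decoder,
        // along with its presentation time, for rendering and A/V sync:
        decodeFrame(fReceiveBuffer, frameSize, presentationTime);
      }

      // Then ask for the next frame, just as "DummySink" does:
      continuePlaying();
    }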

If you are receiving H.264 video data, there is one more thing that you have to do before you start feeding frames to your decoder. H.264 streams have out-of-band configuration information ("SPS" and "PPS" NAL units) that you may need to feed to the decoder to initialize it. To get this information, call "MediaSubsession::fmtp_spropparametersets()" (on the video 'subsession' object). This will give you an (ASCII) character string. You can then pass this to "parseSPropParameterSets()" (defined in the file "include/H264VideoRTPSource.hh"), to generate binary NAL units for your decoder.
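A minimal sketch of that step, assuming "subsession" is the video "MediaSubsession" and "feedToDecoder()" is a hypothetical placeholder for your decoder's initialization API (note that many decoders also expect a 0x00000001 start code to be prepended to each NAL unit):

    #include "liveMedia.hh"   // MediaSubsession, parseSPropParameterSets(), SPropRecord

    // Sketch only: extract the SPS/PPS NAL units from the SDP "sprop-parameter-sets"
    // attribute and hand them to the decoder before feeding it any frames.
    void sendH264ConfigToDecoder(MediaSubsession& subsession) {
      unsigned numSPropRecords;
      SPropRecord* sPropRecords =
          parseSPropParameterSets(subsession.fmtp_spropparametersets(), numSPropRecords);

      // Typically this yields two records: the SPS and the PPS NAL units.
      for (unsigned i = 0; i < numSPropRecords; ++i) {
        feedToDecoder(sPropRecords[i].sPropBytes, sPropRecords[i].sPropLength);  // hypothetical
      }
      delete[] sPropRecords;
    }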

Source: http://www.live555.com/liveMedia/faq.html#testRTSPClient-how-to-decode-data


When I try to receive a stream using the "openRTSP" command-line client, the RTSP protocol exchange appears to work OK, but the resulting data file(s) are empty. What's wrong?

RTP/UDP media (audio and/or video) packets from the server are not reaching the client, most likely because there is a firewall somewhere in between that is blocking UDP packets. (Note that the RTSP protocol uses TCP, not UDP.) To correct this, either fix your firewall, or else request RTP-over-TCP streaming, using the "-t" option to "openRTSP".

If, instead, you're using the "testRTSPClient" demo application, note the line

    #define REQUEST_STREAMING_OVER_TCP False
If you change "False" to "True", then the "testRTSPClient" client will request RTP-over-TCP streaming.

Source: http://www.live555.com/liveMedia/faq.html#openRTSP-empty-files


Does the RTSP implementation (client and/or server) support 'trick mode' operations (i.e., seek, fast-forward, reverse play)?

When talking about "trick mode support", it's important to distinguish between RTSP client support, and RTSP server support.

Our RTSP client implementation fully supports 'trick play' operations. Note the "start", "end" and "scale" parameters to "RTSPClient::sendPlayCommand()". (Note also that our "openRTSP" demo RTSP client application has command-line options that can be used to demonstrate client 'trick play' operations.)
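For example, a client that wants to seek to 30 seconds into the stream and play at double speed might issue the following (a sketch, assuming an already-established "RTSPClient"/"MediaSession" pair and a "continueAfterPLAY" response handler as in the demo code):

    // Sketch only: seek to t=30s and request 2x playback speed.
    rtspClient->sendPlayCommand(*session, continueAfterPLAY,
                                30.0  /*start, in seconds*/,
                                -1.0  /*end: play until the end*/,
                                2.0f  /*scale: 2x 'fast forward'*/);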

Our RTSP server implementation also supports 'trick play' operations, but note that parts of this are (necessarily) media type specific. I.e., there has to be some new code added for each different type of media file that we wish to stream. This functionality has already been provided for some types of media file.

To add 'trick play' support for a media type (that does not already support it), changes need to be made to the corresponding subclass of "ServerMediaSubsession" (a declaration sketch follows the list below):

  1. To add support for seeking within a stream, you will need to implement the following virtual functions:
    • virtual float duration() const;
      Returns the file's duration, in seconds
    • virtual void seekStreamSource(FramedSource* inputSource, double& seekNPT, double streamDuration, u_int64_t& numBytes);
      (Attempts to) seek within the input source.
  2. To add support for 'fast forward' and/or 'reverse play', you will also need to implement the following virtual functions:
    • virtual void testScaleFactor(float& scale);
      Inspects the input value of "scale", and, if necessary, changes it to a nearby value that we support. (E.g., if the input value of "scale" is 3.3, you might change it to 3 (an integer).) If there's no 'nearby' value that you support, just set "scale" to 1 (the default value).
    • virtual void setStreamSourceScale(FramedSource* inputSource, float scale);
      Actually sets the scale factor for a specific input source. (The value of "scale" will previously have been passed in and out of "testScaleFactor()", so we know that it is a value that we support.)
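As an illustration of where these hooks go, here is a minimal, hypothetical declaration sketch for such a subclass, together with one possible "testScaleFactor()" that rounds the requested scale to a supported integer value. The base class shown ("OnDemandServerMediaSubsession") and the actual seeking/scaling logic depend on your media type:

    #include "OnDemandServerMediaSubsession.hh"

    // Hypothetical subclass sketch showing where the 'trick play' virtuals live.
    class MyFileServerMediaSubsession: public OnDemandServerMediaSubsession {
    protected:
      // Seeking support:
      virtual float duration() const;                  // the file's duration, in seconds
      virtual void seekStreamSource(FramedSource* inputSource, double& seekNPT,
                                    double streamDuration, u_int64_t& numBytes);

      // 'Fast forward' / 'reverse play' support:
      virtual void testScaleFactor(float& scale);
      virtual void setStreamSourceScale(FramedSource* inputSource, float scale);
    };

    // One possible testScaleFactor(): suppose we support only integer scales >= 1.
    void MyFileServerMediaSubsession::testScaleFactor(float& scale) {
      int rounded = (int)(scale + 0.5f);               // e.g. 3.3 -> 3
      scale = (rounded >= 1) ? (float)rounded : 1.0f;  // otherwise fall back to 1
    }
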
Source: http://www.live555.com/liveMedia/faq.html#trick-mode


Should "OutPacketBuffer::maxSize" be larger by default?

"OutPacketBuffer::maxSize" defines the largest possible 'frame' that a server (or a proxy server) can send.  It's important to understand that each outgoing frame - if it is larger than the RTP/UDP packet size (about 1500 bytes on most networks) - will be broken up into multiple outgoing RTP packets, and the receiver must receive *all* of these packets in order to be able to reconstruct the frame.  In other words, if even one of these packets is lost, then the receiver will lose the *entire* frame.

The default 60000 byte size corresponds to a sequence of about 40 RTP/UDP packets (assuming a standard ~1500 byte MTU).  Internet streaming servers should not be generating frames that are this large.  But if they do, it's useful to have our code print out an error message, telling them that they're doing something that they shouldn't.  (Ditto if you're trying to proxy frames this large; this will not work if the network in front of the proxy server has any significant packet loss.  But if these networks happen to have no packet loss, then you can easily update your code to increase "OutPacketBuffer::maxSize".)

I might end up increasing the default "OutPacketBuffer::maxSize" to 65000 bytes (because such a frame would still fit inside a single 65536-byte UDP packet - the largest possible).  But I'm not going to make the default size larger than this, because developers need to be aware of the consequences of having their servers (try to) transmit ridiculously large frames.
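If you do decide that larger frames are legitimate in your setup, the usual approach is to raise the limit yourself, early in your program, before any server or sink objects are created. A minimal sketch (the value 300000 is just an example):

    #include "liveMedia.hh"
    #include "BasicUsageEnvironment.hh"

    int main() {
      TaskScheduler* scheduler = BasicTaskScheduler::createNew();
      UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

      // Sketch only: raise the per-frame output buffer limit before creating any
      // RTSPServer / ServerMediaSession / sink objects.  Do this only if your
      // frames really are this large and your network can tolerate the resulting
      // long bursts of RTP packets.
      OutPacketBuffer::maxSize = 300000;  // bytes; the default discussed above is 60000

      // ... create the RTSPServer, ServerMediaSessions, etc. here ...

      env->taskScheduler().doEventLoop();
      return 0;  // (never reached)
    }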

Source: http://lists.live555.com/pipermail/live-devel/2013-April/016816.html
