mediastreamer2

 From http://blog.csdn.net/flyhawk007j2me/article/details/6830830

mediastreamer2 is a lightweight, cross-platform streaming engine, intended mainly for building voice and video telephony applications.

The engine handles multimedia sending and receiving for linphone, including audio and video capture, encoding/decoding, and rendering.

mediastream.c is a test program shipped with the mediastreamer2 library, and also the most complex one; studying it is a good way to deepen your understanding of mediastreamer2.


//*****************************************************************************************************************************************************************************//

A quick overview of what it does:
1. Using the filters provided by mediastreamer2: capture audio from the sound card, encode it and send it to the remote host over RTP, while at the same time receiving RTP packets from the remote host, decoding them and playing them back through the sound card.
The filter graph is:
soundread -> ec -> encoder -> rtpsend
rtprecv -> decode -> dtmfgen -> ec -> soundwrite

2. Using the filters provided by mediastreamer2: capture video from the camera, encode it and send it to the remote host over RTP (with a local video preview), while at the same time receiving RTP packets from the remote host and decoding them for playback (a sketch of the send side follows this list).
The filter graph is:
source -> pixconv -> tee -> encoder -> rtpsend
tee -> output
rtprecv -> decoder -> output
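
For reference, the send side of the video graph can be wired up roughly as follows. This is only a minimal sketch, not the actual code of videostream.c: the codec name, the display filter and the preview pin numbering are assumptions that may differ between mediastreamer2 versions.

    #include <mediastreamer2/msfilter.h>
    #include <mediastreamer2/mswebcam.h>

    /* camera source: default webcam detected by mediastreamer2 */
    MSWebCam *cam = ms_web_cam_manager_get_default_cam(ms_web_cam_manager_get());
    MSFilter *source  = ms_web_cam_create_reader(cam);
    MSFilter *pixconv = ms_filter_new(MS_PIX_CONV_ID);      /* pixel format conversion */
    MSFilter *tee     = ms_filter_new(MS_TEE_ID);            /* duplicates the picture */
    MSFilter *encoder = ms_filter_create_encoder("theora");  /* assumed codec */
    MSFilter *rtpsend = ms_filter_new(MS_RTP_SEND_ID);
    MSFilter *output  = ms_filter_new(MS_VIDEO_OUT_ID);      /* local preview window */

    /* main path: camera -> pixconv -> tee -> encoder -> rtpsend */
    ms_filter_link(source, 0, pixconv, 0);
    ms_filter_link(pixconv, 0, tee, 0);
    ms_filter_link(tee, 0, encoder, 0);
    ms_filter_link(encoder, 0, rtpsend, 0);

    /* second tee output feeds the local preview (pin numbering may vary by version) */
    ms_filter_link(tee, 1, output, 0);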

Note that this program does NOT carry video and audio at the same time over two separate sessions, so don't be misled.
What it actually does is use a single full-duplex session to carry either video or audio; both the local and the remote host run the same program, and only one payload type can be selected per run.
Keep in mind what RFC 3550 says on page 17: "Separate audio and video streams SHOULD NOT be carried in a single RTP session and demultiplexed based on the payload type or SSRC fields."

In the program, audio_stream_new() and video_stream_new() both call create_duplex_rtpsession() to set up the listening port.
Somewhat oddly, video_stream_start() never attaches rtprecv at the end, whereas audio_stream_start_full() does.
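
For reference, a duplex session set up with oRTP looks roughly like the sketch below. This is not the actual body of create_duplex_rtpsession(); make_duplex_session is a hypothetical helper, and the three-argument rtp_session_set_local_addr() of older oRTP releases (no separate RTCP port) is assumed here.

    #include <ortp/ortp.h>

    /* sketch of a full-duplex (send + receive) RTP session */
    RtpSession *make_duplex_session(const char *local_ip, int local_port,
                                    const char *remote_ip, int remote_port, int payload)
    {
        RtpSession *s = rtp_session_new(RTP_SESSION_SENDRECV);
        rtp_session_set_scheduling_mode(s, 0);
        rtp_session_set_blocking_mode(s, 0);
        rtp_session_set_local_addr(s, local_ip, local_port);    /* the listening port */
        rtp_session_set_remote_addr(s, remote_ip, remote_port);
        rtp_session_set_payload_type(s, payload);
        rtp_session_enable_adaptive_jitter_compensation(s, TRUE);
        rtp_session_set_jitter_compensation(s, 80);             /* the --jitter value, in ms */
        return s;
    }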

When compiling, don't forget to add -D VIDEO_ENABLED to enable video support.

Command-line usage:
mediastream --local <port> --remote <ip:port> --payload <payload type number>
[ --fmtp <fmtpline>] [ --jitter <milliseconds>]

Here fmtp and jitter are optional.
fmtp is described as follows:
"Sets a send parameter (fmtp) for the PayloadType. This method is provided for applications using RTP with SDP, but actually the fmtp information is not used for RTP processing."
jitter sets the jitter buffer time, i.e. the queue threshold; see the RTP chapter of Comer's "Internetworking with TCP/IP, Volume 1" for details. The default is 80 ms (or is it 50 ms?); there is normally no need to change it.
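
For example, an invocation using both optional flags might look like this (the fmtp value is purely illustrative; for Speex, SDP fmtp parameters such as vbr=on are defined in RFC 5574):
./mediastream --local 5010 --remote 10.10.104.199:6014 --payload 110 --fmtp "vbr=on" --jitter 80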


//==========================================================================================================//

A usage example:
Host A, IP 10.10.104.198
Host B, IP 10.10.104.199
Host A runs: ./mediastream --local 5010 --remote 10.10.104.199:6014 --payload 110
Host B runs: ./mediastream --local 6014 --remote 10.10.104.198:5010 --payload 110

Here I used audio only, with the speex_nb codec. I did not use video; I suspect a problem with SDL, since the video preview came up as a green screen.

Note: the audio codecs mentioned in the code include lpc1015, speex_nb, speex_wb, ilbc and so on, and the video codecs include h263_1998, theora, mp4v, x_snow, h264 and so on. That does not mean all of them will actually work; it depends on whether those codecs were enabled when ffmpeg was built.
If the corresponding library is not present on your machine but you specify its payload type anyway, mediastreamer2 will print an error at startup along the lines of "cannot find xxx.so"; in that case switch to another payload type.
In general, speex, theora and xvid (h264) are the easiest ones to get built.
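
To find out at runtime whether a codec was actually compiled in, you can ask mediastreamer2 before picking a payload type. A minimal sketch, assuming ms_init() has already been called:

    #include <stdio.h>
    #include <mediastreamer2/msfilter.h>

    /* TRUE only if both an encoder and a decoder filter are registered for this mime type */
    if (!ms_filter_codec_supported("speex")) {
        printf("speex is not available, pick another payload type\n");
    }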

[atom@localhost code]$ ./mediastream --local 5010 --remote 10.10.104.199:6014 --payload 110 > atom

 

Mediastreamer2 can be extended through plugins; H264 and iLBC codec plugins are currently provided.
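
Plugins are loaded from a directory at runtime; a minimal sketch (the path below is only an example, adjust it to wherever your plugins are installed):

    #include <mediastreamer2/mscommon.h>

    /* scan the directory and load any mediastreamer2 plugins found there */
    ms_load_plugins("/usr/lib/mediastreamer/plugins");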

 




//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++//

Initialize mediastreamer2

When using mediastreamer2, your first task is to initialize the library:

       #include <mediastreamer2/mscommon.h>

       int i;
       i = ms_init();
       if (i != 0)
           return -1;

Mediastreamer2 provides internal components called filters. Filters must be linked together so that the OUTPUT of one filter is sent to the INPUT of another.

Usually, filters are used for processing audio or video data. They can capture data, play/draw data, encode/decode data, mix data (conferencing) or transform data (echo cancellation). Among the most important are the RTP filters, which are able to send and receive RTP data.

Graph sample

If you are using mediastreamer2, you probably want to do Voice over IP and build a graph providing two-way communication. The two graphs involved are very simple:

The first graph shows the filters needed to capture data from a sound card, encode it and send it through an RTP session.

     AUDIO CAPTURE -> ENCODER -> RTP SENDER

The second graph shows the filters needed to receive data from an RTP session, decode it and send it to the playback device.

     RTP RECEIVER -> DECODER -> DTMF GENERATION -> AUDIO PLAYBACK

Code to create the filters of the graph sample

Note that NULL/error checks are omitted for readability. To build the graph you need some information: you have to select the sound card and, of course, have an RTP session created with oRTP.

      MSSndCard *sndcard;
      sndcard=ms_snd_card_manager_get_default_card(ms_snd_card_manager_get());

      /* audio capture and playback filters, both on the default sound card */
      MSFilter *soundread=ms_snd_card_create_reader(sndcard);
      MSFilter *soundwrite=ms_snd_card_create_writer(sndcard);

      /* codec filters */
      MSFilter *encoder=ms_filter_create_encoder("PCMU");
      MSFilter *decoder=ms_filter_create_decoder("PCMU");

      /* RTP filters, bound to an existing oRTP session */
      MSFilter *rtpsend=ms_filter_new(MS_RTP_SEND_ID);
      MSFilter *rtprecv=ms_filter_new(MS_RTP_RECV_ID);
      RtpSession *rtp_session= *** your_ortp_session *** ;
      ms_filter_call_method(rtpsend,MS_RTP_SEND_SET_SESSION,rtp_session);
      ms_filter_call_method(rtprecv,MS_RTP_RECV_SET_SESSION,rtp_session);

      /* DTMF tone generator */
      MSFilter *dtmfgen=ms_filter_new(MS_DTMF_GEN_ID);


In most cases the above graph is not enough: you will need to configure the filters' options. For example, you need to set the sampling rate of the sound card filters:

     int sr = 8000;
     int chan = 1;

     /* configure the sampling rate of the sound card and codec filters */
     ms_filter_call_method(soundread,MS_FILTER_SET_SAMPLE_RATE,&sr);
     ms_filter_call_method(soundwrite,MS_FILTER_SET_SAMPLE_RATE,&sr);
     ms_filter_call_method(encoder,MS_FILTER_SET_SAMPLE_RATE,&sr);
     ms_filter_call_method(decoder,MS_FILTER_SET_SAMPLE_RATE,&sr);
     ms_filter_call_method(soundwrite,MS_FILTER_SET_NCHANNELS,&chan);

     /* if you have some fmtp parameters (from SDP for example): */
     char *fmtp1 = ** get your fmtp line **;
     char *fmtp2 = ** get your fmtp line **;
     ms_filter_call_method(encoder,MS_FILTER_ADD_FMTP,(void*)fmtp1);
     ms_filter_call_method(decoder,MS_FILTER_ADD_FMTP,(void*)fmtp2);


Code to link the filters and run the graph sample

    ms_filter_link(soundread,0,encoder,0);
    ms_filter_link(encoder,0,rtpsend,0);
    ms_filter_link(rtprecv,0,decoder,0);
    ms_filter_link(decoder,0,dtmfgen,0);
    ms_filter_link(dtmfgen,0,soundwrite,0);



Then you need to 'attach' the filters to a ticker. A ticker is a graph manager responsible for running filters.

In the above case there are 2 independent graphs within the ticker: you need to attach the first filter of each graph (the one that does not contain any INPUT pins).

    /* create ticker */
    MSTicker *ticker=ms_ticker_new();
    ms_ticker_attach(ticker,soundread);
    ms_ticker_attach(ticker,rtprecv);


Code to unlink the filters and stop the graph sample

    ms_ticker_detach(ticker,soundread);
    ms_ticker_detach(ticker,rtprecv);

    ms_filter_unlink(soundread,0,encoder,0);
    ms_filter_unlink(encoder,0,rtpsend,0);
    ms_filter_unlink(rtprecv,0,decoder,0);
    ms_filter_unlink(decoder,0,dtmfgen,0);
    ms_filter_unlink(dtmfgen,0,soundwrite,0);

    if (rtp_session!=NULL) rtp_session_destroy(rtp_session);
    if (rtpsend!=NULL) ms_filter_destroy(rtpsend);
    if (rtprecv!=NULL) ms_filter_destroy(rtprecv);
    if (soundread!=NULL) ms_filter_destroy(soundread);
    if (soundwrite!=NULL) ms_filter_destroy(soundwrite);
    if (encoder!=NULL) ms_filter_destroy(encoder);
    if (decoder!=NULL) ms_filter_destroy(decoder);
    if (dtmfgen!=NULL) ms_filter_destroy(dtmfgen);
    if (ticker!=NULL) ms_ticker_destroy(ticker);
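
Finally, once the graphs have been torn down, the library itself can be uninitialized; this mirrors the ms_init() call at the beginning:

    /* release mediastreamer2's global resources, matching the earlier ms_init() */
    ms_exit();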
