The RTSP Creation Process in ZLMediaKit

  • First comes the initialization of _rtsp. In the handling of the publish message, setProtocolTranslation is called; in the constructor of the _muxer (MultiMuxerPrivate) that lives inside _muxer (MultiMediaSourceMuxer), _rtsp (an RtspMediaSourceMuxer) is created. _rtsp's constructor in turn creates _media_src (an RtspMediaSource) and sets the RtpRing's delegate to _media_src (see the simplified sketch after the constructor below).
```cpp
RtspMediaSourceMuxer(const string &vhost,
                     const string &strApp,
                     const string &strId,
                     const TitleSdp::Ptr &title = nullptr) : RtspMuxer(title) {
    _media_src = std::make_shared<RtspMediaSource>(vhost, strApp, strId);
    getRtpRing()->setDelegate(_media_src);
}
```
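
To make the delegate wiring concrete, here is a minimal, self-contained sketch; the types below are stand-ins for illustration, not ZLMediaKit's actual classes. The point is that once _media_src is set as the ring's delegate, every write to the ring is handed straight to the delegate's onWrite:

```cpp
#include <iostream>
#include <memory>
#include <utility>

struct Packet { int seq; };

// Stand-in for ZLMediaKit's RingDelegate idea: whatever is set as the ring's
// delegate receives writes directly instead of the ring buffering them.
struct Delegate {
    virtual ~Delegate() = default;
    virtual void onWrite(Packet pkt, bool is_key) = 0;
};

struct MediaSource : Delegate {
    void onWrite(Packet pkt, bool is_key) override {
        std::cout << "MediaSource got packet " << pkt.seq << "\n";
    }
};

struct RtpRing {
    std::shared_ptr<Delegate> delegate;
    void setDelegate(std::shared_ptr<Delegate> d) { delegate = std::move(d); }
    void write(Packet pkt, bool is_key) {
        if (delegate) delegate->onWrite(pkt, is_key);  // short-circuit to the delegate
    }
};

int main() {
    auto src = std::make_shared<MediaSource>();
    RtpRing ring;
    ring.setDelegate(src);  // mirrors getRtpRing()->setDelegate(_media_src)
    ring.write({1}, true);  // lands in MediaSource::onWrite
}
```

Because the delegate short-circuits write(), on the publish side RTP packets never pass through the ring's reader dispatch; they go directly into RtspMediaSource::onWrite, shown later in this walkthrough.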
  • Next comes _rtsp's handling of the received audio/video data. While audio/video is being received, tracks are created and checked for readiness; once a track is ready, the function below is called. For _rtsp this means initializing its _encoder members: the log output shows that _rtsp adds two encoders, H264RtpEncoder and AACRtpEncoder, and each encoder's rtpRing is set to _rtpRing (a hypothetical sketch of the factory dispatch follows addTrack below).
```cpp
void MultiMuxerPrivate::onTrackReady(const Track::Ptr &track) {
    if (_rtmp) {
        _rtmp->addTrack(track);
    }
    if (_rtsp) {
        _rtsp->addTrack(track);
    }
    if (_ts) {
        _ts->addTrack(track);
    }
#if defined(ENABLE_MP4)
    if (_fmp4) {
        _fmp4->addTrack(track);
    }
#endif

    // Copy the smart pointers to avoid data races when recording-related APIs
    // are invoked from another thread
    auto hls = _hls;
    if (hls) {
        hls->addTrack(track);
    }
    auto mp4 = _mp4;
    if (mp4) {
        mp4->addTrack(track);
    }
}
```
RtspMuxer's addTrack:
```cpp
void RtspMuxer::addTrack(const Track::Ptr &track) {
    InfoL << "enter RtspMuxer::addTrack";
    // Generate the SDP from the track
    Sdp::Ptr sdp = track->getSdp();
    if (!sdp) {
        return;
    }

    InfoL << "enter RtspMuxer::addTrack trackType: " << track->getTrackType();
    auto &encoder = _encoder[track->getTrackType()];
    encoder = Factory::getRtpEncoderBySdp(sdp);
    if (!encoder) {
        return;
    }

    // Set the RTP output ring buffer
    encoder->setRtpRing(_rtpRing);

    // Append this track's SDP
    _sdp.append(sdp->getSdp());
}
```
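
Factory::getRtpEncoderBySdp selects the encoder by the codec described in the SDP. The following is only a hypothetical sketch of that dispatch (getRtpEncoderByCodec and the stand-in types are illustrative names, not ZLMediaKit's real factory, which also passes payload type, sample rate and other SDP fields to the encoder):

```cpp
#include <memory>

// Stand-in types for illustration only.
enum CodecId { CodecH264, CodecAAC, CodecUnknown };

struct RtpEncoder { virtual ~RtpEncoder() = default; };
struct H264RtpEncoder : RtpEncoder {};
struct AACRtpEncoder  : RtpEncoder {};

// Hypothetical codec-to-encoder dispatch, mirroring the idea behind
// Factory::getRtpEncoderBySdp: one encoder type per codec in the SDP.
std::shared_ptr<RtpEncoder> getRtpEncoderByCodec(CodecId codec) {
    switch (codec) {
        case CodecH264: return std::make_shared<H264RtpEncoder>();
        case CodecAAC:  return std::make_shared<AACRtpEncoder>();
        default:        return nullptr;  // unsupported codec: addTrack() bails out
    }
}

int main() {
    auto enc = getRtpEncoderByCodec(CodecH264);  // would be an H264RtpEncoder
    return enc ? 0 : 1;
}
```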
  • After the onAllTrackReady track callback, received audio/video data enters the _rtsp pipeline. Concretely, a frame first enters _rtsp, then the H264RtpEncoder, and finally arrives back at _media_src's onWrite (worked sketches of the FU-A packetization and the ring fan-out follow the corresponding snippets below).
```cpp
void inputFrame(const Frame::Ptr &frame) override {
    InfoL << "enter RtspMediaSourceMuxer inputFrame";
    GET_CONFIG(bool, rtsp_demand, General::kRtspDemand);
    if (_clear_cache && rtsp_demand) {
        InfoL << "rtsp inputFrame _clear_cache && rtsp_demand";
        _clear_cache = false;
        _media_src->clearCache();
    }
    if (_enabled || !rtsp_demand) {
        InfoL << "rtsp inputFrame _enabled || !rtsp_demand";
        RtspMuxer::inputFrame(frame);
    }
}
```

```cpp
void RtspMuxer::inputFrame(const Frame::Ptr &frame) {
    auto &encoder = _encoder[frame->getTrackType()];
    if (encoder) {
        InfoL << "RtspMuxer::inputFrame encoder";
        encoder->inputFrame(frame);
    }
}
```

```cpp
void H264RtpEncoder::inputFrame(const Frame::Ptr &frame) {
    auto ptr = frame->data() + frame->prefixSize();
    auto len = frame->size() - frame->prefixSize();
    auto pts = frame->pts();
    auto nal_type = H264_TYPE(ptr[0]);
    auto packet_size = getMaxSize() - 2;

    // The low 5 bits carry the NALU type, fixed to 28 (FU-A)
    auto fu_char_0 = (ptr[0] & (~0x1F)) | 28;
    auto fu_char_1 = nal_type;
    FuFlags *fu_flags = (FuFlags *) (&fu_char_1);
    fu_flags->start_bit = 1;

    // If the frame exceeds the MTU, packetize it in FU-A mode
    if (len > packet_size + 1) {
        InfoL << "H264RtpEncoder::inputFrame if";
        size_t offset = 1;
        while (!fu_flags->end_bit) {
            if (!fu_flags->start_bit && len <= offset + packet_size) {
                // FU-A end
                packet_size = len - offset;
                fu_flags->end_bit = 1;
            }

            // Pass nullptr so the payload is not copied yet
            auto rtp = makeRtp(getTrackType(), nullptr, packet_size + 2, fu_flags->end_bit, pts);
            // RTP payload
            uint8_t *payload = rtp->getPayload();
            // First FU-A byte
            payload[0] = fu_char_0;
            // Second FU-A byte
            payload[1] = fu_char_1;
            // H264 data
            memcpy(payload + 2, (uint8_t *) ptr + offset, packet_size);
            // Feed into the RTP ring buffer
            RtpCodec::inputRtp(rtp, fu_flags->start_bit && nal_type == H264Frame::NAL_IDR);

            offset += packet_size;
            fu_flags->start_bit = 0;
        }
    } else {
        InfoL << "H264RtpEncoder::inputFrame else";
        // If the frame fits within the MTU, packetize as a Single NAL Unit packet per H.264
        makeH264Rtp(ptr, len, false, false, pts);
    }
}
```
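
The FU-A bit twiddling above is easier to follow with a worked example. The sketch below is self-contained and only mirrors the RFC 6184 byte layout (the NALU contents and max_payload value are made up): it fragments a dummy 25-byte IDR NALU and prints each fragment's FU indicator and FU header.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // Dummy 25-byte IDR NALU: header 0x65 = F 0, NRI 3, nal_type 5 (IDR).
    std::vector<uint8_t> nalu(25, 0xAA);
    nalu[0] = 0x65;

    const size_t max_payload = 10;          // stand-in for getMaxSize() - 2
    uint8_t nal_type = nalu[0] & 0x1F;      // 5 (IDR)
    // FU indicator: keep the F and NRI bits, replace the type with 28 (FU-A).
    uint8_t fu_indicator = (nalu[0] & ~0x1F) | 28;   // 0x7C

    size_t offset = 1;                      // skip the original NALU header
    bool start = true;
    while (offset < nalu.size()) {
        size_t chunk = std::min(max_payload, nalu.size() - offset);
        bool end = (offset + chunk == nalu.size());
        // FU header: S | E | R(0) | original nal_type (mirrors FuFlags above).
        uint8_t fu_header = (uint8_t)((start << 7) | (end << 6) | nal_type);
        std::printf("FU-A fragment: indicator=0x%02X header=0x%02X payload=%zu S=%d E=%d\n",
                    fu_indicator, fu_header, chunk, (int)start, (int)end);
        offset += chunk;
        start = false;
    }
    return 0;
}
```

With a 24-byte payload (the NALU minus its 1-byte header) and 10-byte fragments, this prints three fragments: S=1/E=0, S=0/E=0, and a final 4-byte fragment with S=0/E=1, matching the start_bit/end_bit handling in the loop above.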

```cpp
virtual bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos) {
    if (_rtpRing) {
        _rtpRing->write(rtp, key_pos);
    }
    return key_pos;
}
```

```cpp
void write(T in, bool is_key = true) {
    if (_delegate) {
        _delegate->onWrite(std::move(in), is_key);
        return;
    }

    LOCK_GUARD(_mtx_map);
    for (auto &pr : _dispatcher_map) {
        auto &second = pr.second;
        // Trigger the onRead event after switching to the reader's thread
        pr.first->async([second, in, is_key]() {
            second->write(std::move(const_cast<T &>(in)), is_key);
        }, false);
    }

    _storage->write(std::move(in), is_key);
}
```
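
On the playback side there is no delegate, so write() instead fans each packet out to every reader's event loop and then stores it in _storage (the GOP cache). Below is a rough, self-contained sketch of that fan-out; Poller is a stand-in for ZLMediaKit's event-loop threads, and here async() merely queues tasks that runAll() executes inline:

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <mutex>
#include <utility>
#include <vector>

struct Packet { int seq; };

// Stand-in for an event-loop thread: tasks are queued and later run in order.
struct Poller {
    std::vector<std::function<void()>> tasks;
    void async(std::function<void()> task) { tasks.push_back(std::move(task)); }
    void runAll() { for (auto &t : tasks) t(); tasks.clear(); }
};

struct Dispatcher {
    void write(Packet pkt, bool is_key) {
        std::cout << "reader thread got packet " << pkt.seq << "\n";
    }
};

int main() {
    std::mutex mtx;
    std::map<std::shared_ptr<Poller>, std::shared_ptr<Dispatcher>> dispatcher_map;
    auto poller = std::make_shared<Poller>();
    dispatcher_map[poller] = std::make_shared<Dispatcher>();

    Packet in{42};
    bool is_key = true;
    {
        // Mirrors the LOCK_GUARD(_mtx_map) + per-poller async() in write() above.
        std::lock_guard<std::mutex> lock(mtx);
        for (auto &pr : dispatcher_map) {
            auto second = pr.second;
            pr.first->async([second, in, is_key]() { second->write(in, is_key); });
        }
    }
    poller->runAll();  // in ZLMediaKit this happens on each reader's own thread
}
```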

```cpp
void onWrite(RtpPacket::Ptr rtp, bool keyPos) override {
    _speed[rtp->type] += rtp->size();
    assert(rtp->type >= 0 && rtp->type < TrackMax);
    auto &track = _tracks[rtp->type];
    auto stamp = rtp->getStampMS();
    if (track) {
        track->_seq = rtp->getSeq();
        track->_time_stamp = stamp;
        track->_ssrc = rtp->getSSRC();
    }
    if (!_ring) {
        weak_ptr<RtspMediaSource> weakSelf = dynamic_pointer_cast<RtspMediaSource>(shared_from_this());
        auto lam = [weakSelf](int size) {
            auto strongSelf = weakSelf.lock();
            if (!strongSelf) {
                return;
            }
            strongSelf->onReaderChanged(size);
        };
        // The GOP cache defaults to 512 groups of RTP packets; packets in a group share a
        // timestamp (with merged writes enabled, a group covers one merge-write interval).
        // At the first RTP packet of each keyframe the GOP cache is cleared (a new keyframe
        // has arrived, so instant playback start still works).
        _ring = std::make_shared<RingType>(_ring_size, std::move(lam));
        onReaderChanged(0);
        if (!_sdp.empty()) {
            // According to the logs, registration ultimately happens here
            InfoL << "RtspMediaSource onWrite regist";
            regist();
        }
    }
    bool is_video = rtp->type == TrackVideo;
    PacketCache<RtpPacket>::inputPacket(stamp, is_video, std::move(rtp), keyPos);
}
```
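
PacketCache<RtpPacket>::inputPacket implements merge-write: instead of waking readers for every single RTP packet, packets are accumulated and flushed in batches. The sketch below is only a guess at the general idea (ZLMediaKit's real PacketCache also considers key positions and a configurable merge-write interval); it batches packets until the timestamp advances past a merge window:

```cpp
#include <cstdint>
#include <iostream>
#include <memory>
#include <utility>
#include <vector>

struct RtpPacket { uint64_t stamp; };

// Hypothetical merge-write cache: batch packets, flush once the timestamp has
// advanced past the merge window.
class PacketCache {
public:
    explicit PacketCache(uint64_t merge_ms) : _merge_ms(merge_ms) {}

    void inputPacket(uint64_t stamp, std::shared_ptr<RtpPacket> pkt) {
        if (!_cache.empty() && stamp > _last_flush + _merge_ms) {
            flush();
            _last_flush = stamp;
        }
        _cache.push_back(std::move(pkt));
    }

private:
    void flush() {
        std::cout << "flushing " << _cache.size() << " packets as one batch\n";
        _cache.clear();
    }

    uint64_t _merge_ms;
    uint64_t _last_flush = 0;
    std::vector<std::shared_ptr<RtpPacket>> _cache;
};

int main() {
    PacketCache cache(100 /* ms */);
    for (uint64_t t = 0; t < 500; t += 40) {
        cache.inputPacket(t, std::make_shared<RtpPacket>(RtpPacket{t}));
    }
}
```

Run with 40 ms packet spacing and a 100 ms window, this flushes batches of three packets at a time; batching like this is what keeps per-packet wakeups off the reader threads.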
  • Finally, a recap for my own understanding: _rtsp is an instance of RtspMediaSourceMuxer; it holds an RtpRing, the _encoder array, and _media_src. The RtpRing's delegate is _media_src, and the rtpRing inside each _encoder is that same RtpRing. When _rtsp receives data, it hands it to the matching _encoder; the encoder writes the resulting RTP packets back into the RtpRing; the RtpRing then forwards the data to its delegate _media_src, whose onWrite (shown above) finally calls PacketCache<RtpPacket>::inputPacket. The RtpRing itself is simply a ring buffer that caches the data.
Below is Java code that pairs with ZLMediaKit to convert an RTSP stream to WebSocket-FLV and push it to a browser for playback:

```java
import com.alibaba.fastjson.JSONObject;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.websocketx.BinaryWebSocketFrame;
import io.netty.handler.codec.http.websocketx.CloseWebSocketFrame;
import io.netty.handler.codec.http.websocketx.PingWebSocketFrame;
import io.netty.handler.codec.http.websocketx.PongWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketServerHandshaker;
import io.netty.handler.codec.http.websocketx.WebSocketServerHandshakerFactory;
import io.netty.util.CharsetUtil;
import io.netty.util.ReferenceCountUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.zeromq.ZMQ;
import org.zeromq.ZMsg;

import static io.netty.handler.codec.http.HttpMethod.GET;
import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
import static io.netty.handler.codec.http.HttpResponseStatus.FORBIDDEN;
import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1;

public class RtspToWebSocketHandler extends ChannelInboundHandlerAdapter {

    private static final Logger logger = LoggerFactory.getLogger(RtspToWebSocketHandler.class);
    private static final String WEBSOCKET_PATH = "/ws";

    private ZMQ.Context context;
    private ZMQ.Socket subscriber;
    private ZMQ.Socket publisher;
    private final String publisherAddress;
    private final String subscriberAddress;
    private final String rtspUrl;
    private final String roomId;
    private final String sdp;
    private final String tag;
    private boolean isPushStream;

    public RtspToWebSocketHandler(String publisherAddress, String subscriberAddress,
                                  String rtspUrl, String roomId, String sdp, String tag) {
        this.publisherAddress = publisherAddress;
        this.subscriberAddress = subscriberAddress;
        this.rtspUrl = rtspUrl;
        this.roomId = roomId;
        this.sdp = sdp;
        this.tag = tag;
        this.isPushStream = false;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof FullHttpRequest) {
            handleHttpRequest(ctx, (FullHttpRequest) msg);
        } else if (msg instanceof WebSocketFrame) {
            handleWebSocketFrame(ctx, (WebSocketFrame) msg);
        } else {
            ReferenceCountUtil.release(msg);
        }
    }

    private void handleHttpRequest(ChannelHandlerContext ctx, FullHttpRequest req) {
        // Handle a bad request.
        if (!req.decoderResult().isSuccess()) {
            sendHttpResponse(ctx, req, new DefaultFullHttpResponse(HTTP_1_1, BAD_REQUEST));
            return;
        }
        // Allow only GET methods.
        if (req.method() != GET) {
            sendHttpResponse(ctx, req, new DefaultFullHttpResponse(HTTP_1_1, FORBIDDEN));
            return;
        }
        // Perform the WebSocket handshake.
        WebSocketServerHandshakerFactory wsFactory =
                new WebSocketServerHandshakerFactory(WEBSOCKET_PATH, null, true, 65536);
        WebSocketServerHandshaker handshaker = wsFactory.newHandshaker(req);
        if (handshaker == null) {
            WebSocketServerHandshakerFactory.sendUnsupportedVersionResponse(ctx.channel());
        } else {
            handshaker.handshake(ctx.channel(), req);
        }
    }

    private void handleWebSocketFrame(ChannelHandlerContext ctx, WebSocketFrame frame) {
        // Check for closing frame
        if (frame instanceof CloseWebSocketFrame) {
            ctx.close();
            return;
        }
        // Check for ping frame
        if (frame instanceof PingWebSocketFrame) {
            ctx.write(new PongWebSocketFrame(frame.content().retain()));
            return;
        }
        // Check for binary frame
        if (!(frame instanceof BinaryWebSocketFrame)) {
            throw new UnsupportedOperationException(
                    String.format("%s frame types not supported", frame.getClass().getName()));
        }
        // Start pushing stream
        if (!isPushStream) {
            logger.info("Start pushing stream. roomId: {}", roomId);
            startPushStream(ctx);
            isPushStream = true;
        }
        // Send WebSocket frame to ZLMediaKit
        BinaryWebSocketFrame binaryWebSocketFrame = (BinaryWebSocketFrame) frame;
        ByteBuf content = binaryWebSocketFrame.content();
        byte[] data = new byte[content.readableBytes()];
        content.readBytes(data);
        publisher.sendMore(roomId);
        publisher.send(data);
    }

    private void startPushStream(ChannelHandlerContext ctx) {
        // Create ZMQ context and sockets
        context = ZMQ.context(1);
        subscriber = context.socket(ZMQ.SUB);
        publisher = context.socket(ZMQ.PUB);
        // Connect to subscriber and publisher
        subscriber.connect(subscriberAddress);
        subscriber.subscribe(tag.getBytes());
        publisher.connect(publisherAddress);
        // Send stream info to ZLMediaKit
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("api", "addMediaSource");
        jsonObject.put("url", rtspUrl);
        jsonObject.put("vhost", "default");
        jsonObject.put("enable_rtsp", true);
        jsonObject.put("enable_rtp", true);
        jsonObject.put("enable_tcp", true);
        jsonObject.put("enable_udp", true);
        jsonObject.put("timeout_sec", 30);
        jsonObject.put("merge", true);
        jsonObject.put("hls_enabled", false);
        jsonObject.put("mp4_enabled", false);
        jsonObject.put("record_enabled", false);
        jsonObject.put("broadcast_enabled", true);
        jsonObject.put("room_id", roomId);
        jsonObject.put("sdp", sdp);
        publisher.sendMore(tag);
        publisher.send(jsonObject.toJSONString().getBytes());
        logger.info("Send stream info to ZLMediaKit. roomId: {}, rtspUrl: {}", roomId, rtspUrl);
        // Start receiving stream from ZLMediaKit
        new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    ZMsg zMsg = ZMsg.recvMsg(subscriber);
                    if (zMsg == null) {
                        continue;
                    }
                    byte[] data = zMsg.getLast().getData();
                    BinaryWebSocketFrame binaryWebSocketFrame =
                            new BinaryWebSocketFrame(Unpooled.wrappedBuffer(data));
                    ctx.channel().writeAndFlush(binaryWebSocketFrame);
                    logger.debug("Received stream from ZLMediaKit. roomId: {}, data length: {}",
                            roomId, data.length);
                } catch (Exception e) {
                    logger.error("Error receiving stream from ZLMediaKit. roomId: {}", roomId, e);
                    break;
                }
            }
            // Close sockets
            subscriber.close();
            publisher.close();
            context.term();
        }).start();
    }

    private static void sendHttpResponse(ChannelHandlerContext ctx, FullHttpRequest req,
                                         FullHttpResponse res) {
        // Generate an error page if the response status code is not OK (200).
        if (res.status().code() != 200) {
            ByteBuf buf = Unpooled.copiedBuffer(res.status().toString(), CharsetUtil.UTF_8);
            res.content().writeBytes(buf);
            buf.release();
            HttpUtil.setContentLength(res, res.content().readableBytes());
        }
        // Send the response and close the connection if necessary.
        ChannelFuture f = ctx.channel().writeAndFlush(res);
        if (!HttpUtil.isKeepAlive(req) || res.status().code() != 200) {
            f.addListener(ChannelFutureListener.CLOSE);
        }
    }
}
```

Usage:

1. Add RtspToWebSocketHandler to Netty's ChannelPipeline.
2. When the front end opens the WebSocket connection, RtspToWebSocketHandler's channelRead is triggered and the WebSocket handshake is performed.
3. When the front end sends WebSocket frames, channelRead is triggered again; the frames are forwarded to ZLMediaKit, ZLMediaKit sends back the converted FLV data, and RtspToWebSocketHandler pushes that FLV data to the browser for playback.
4. When the front end closes the WebSocket, handleWebSocketFrame is triggered and the ZMQ subscriber and publisher must be closed.

Notes:

1. The ZLMediaKit API parameters in the code may not match the real interface and must be adapted to your deployment.
2. The code targets ZMQ 4.x; adjust accordingly if you use 3.x.
3. The code depends on Fastjson and Netty, so the corresponding dependencies must be added.