RocketMQ Broker: The Complete Lifecycle and Design Philosophy of PullMessageProcessor

     As mentioned earlier, the message pull command RequestCode.PULL_MESSAGE leads us to the broker-side entry point for message pulling: org.apache.rocketmq.broker.processor.PullMessageProcessor#processRequest, where the pull is actually performed.

     Many articles jump straight into what the processRequest method does, without explaining the lifecycle of PullMessageProcessor or how it gets notified that a pull should happen. In this article we will sort that out first and study the design ideas behind it. (If this part doesn't interest you, skip straight to part 3, where a source-code walkthrough is recommended.)

1. How is pullMessageProcessor instantiated, and who manages it?

2. How is pullMessageProcessor triggered to pull messages?
     

     As Figure 1 shows, when the broker starts, it creates the pullMessageProcessor instance;

     initializes NettyRemotingServer and pullMessageExecutor;

     and combines pullMessageProcessor and pullMessageExecutor into a Pair object, which is registered into NettyRemotingServer's processorTable.

    @Override
    public void registerProcessor(int requestCode, NettyRequestProcessor processor, ExecutorService executor) {
        // fall back to the shared publicExecutor when no dedicated executor is supplied
        ExecutorService executorThis = executor;
        if (null == executor) {
            executorThis = this.publicExecutor;
        }

        // bind the processor and its executor together, keyed by the request code
        Pair<NettyRequestProcessor, ExecutorService> pair = new Pair<NettyRequestProcessor, ExecutorService>(processor, executorThis);
        this.processorTable.put(requestCode, pair);
    }
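For context, here is roughly how the broker wires this up at startup. This is a minimal sketch based on BrokerController#registerProcessor; the surrounding registrations for other request codes are elided, and details may vary across RocketMQ versions:

    // Sketch: inside BrokerController#registerProcessor() (other registrations elided).
    // PULL_MESSAGE gets its own dedicated executor instead of the shared publicExecutor,
    // so slow or long-polling pulls cannot starve other request types.
    this.remotingServer.registerProcessor(RequestCode.PULL_MESSAGE,
        this.pullMessageProcessor, this.pullMessageExecutor);

Giving each heavyweight request code its own executor is what isolates pull traffic from, say, send traffic on the broker.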

My guess was this: somewhere a request is received, the pair is looked up in processorTable by the code carried in the request, and the request is then handled by the ExecutorService taken from that pair. Tracing the callers upward confirmed exactly that:

Call-flow diagram of how pullMessageProcessor is invoked (the green part)

Step ② in the green flow: starting the Netty server

    @Override
    public void start() {
        this.defaultEventExecutorGroup = new DefaultEventExecutorGroup(
            nettyServerConfig.getServerWorkerThreads(),
            new ThreadFactory() {
 
                private AtomicInteger threadIndex = new AtomicInteger(0);
 
                @Override
                public Thread newThread(Runnable r) {
                    return new Thread(r, "NettyServerCodecThread_" + this.threadIndex.incrementAndGet());
                }
            });
 
        // Netty server bootstrap configuration
        ServerBootstrap childHandler =
            this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupSelector)
                .channel(useEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
                .option(ChannelOption.SO_BACKLOG, 1024)
                .option(ChannelOption.SO_REUSEADDR, true)
                .option(ChannelOption.SO_KEEPALIVE, false)
                .childOption(ChannelOption.TCP_NODELAY, true)
                .childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSndBufSize())
                .childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketRcvBufSize())
                .localAddress(new InetSocketAddress(this.nettyServerConfig.getListenPort()))
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel ch) throws Exception {
                        ch.pipeline()
                            .addLast(defaultEventExecutorGroup, HANDSHAKE_HANDLER_NAME,
                                new HandshakeHandler(TlsSystemConfig.tlsMode))
                            .addLast(defaultEventExecutorGroup,
                                new NettyEncoder(),
                                new NettyDecoder(),
                                new IdleStateHandler(0, 0, nettyServerConfig.getServerChannelMaxIdleTimeSeconds()),
                                new NettyConnectManageHandler(),
                                new NettyServerHandler() // receives inbound requests
                            );
                    }
                });
 
        if (nettyServerConfig.isServerPooledByteBufAllocatorEnable()) {
            childHandler.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
        }
 
        try {
            ChannelFuture sync = this.serverBootstrap.bind().sync();
            InetSocketAddress addr = (InetSocketAddress) sync.channel().localAddress();
            this.port = addr.getPort();
        } catch (InterruptedException e1) {
            throw new RuntimeException("this.serverBootstrap.bind().sync() InterruptedException", e1);
        }
 
        if (this.channelEventListener != null) {
            this.nettyEventExecutor.start();
        }
 
        this.timer.scheduleAtFixedRate(new TimerTask() {
 
            @Override
            public void run() {
                try {
                    NettyRemotingServer.this.scanResponseTable();
                } catch (Throwable e) {
                    log.error("scanResponseTable exception", e);
                }
            }
        }, 1000 * 3, 1000);
    }
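The NettyServerHandler registered in the pipeline above is where every decoded RemotingCommand first arrives. Here is a minimal sketch of that inner class, based on NettyRemotingServer in 4.x (details may differ between versions):

    // Inner class of NettyRemotingServer (sketch). Each decoded RemotingCommand
    // is handed to processMessageReceived(), defined in NettyRemotingAbstract,
    // which routes REQUEST_COMMAND to the processRequestCommand() shown below.
    class NettyServerHandler extends SimpleChannelInboundHandler<RemotingCommand> {
        @Override
        protected void channelRead0(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
            processMessageReceived(ctx, msg);
        }
    }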

We won't cover the Netty communication layer in this article; for that, see:

https://feixiang.blog.csdn.net/article/details/130315357

RocketMQ源码学习-通信与协议 (RocketMQ source study: communication and protocol), CSDN blog

Step ⑤ in the green flow: look up the pair for the request code, then take object1 and object2 out of the pair to process the request.

(To make the flow generic, the Pair class is built with generics. This lets processRequestCommand() in NettyRemotingAbstract serve as the broker's unified entry point for handling REQUEST_COMMAND requests.)
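For reference, Pair is just a tiny generic two-slot holder, roughly like this (a sketch of org.apache.rocketmq.remoting.common.Pair; setters and equals/hashCode omitted):

    // Generic holder: in the processorTable, object1 is the processor
    // (a NettyRequestProcessor) and object2 is its executor (an ExecutorService).
    public class Pair<T1, T2> {
        private T1 object1;
        private T2 object2;

        public Pair(T1 object1, T2 object2) {
            this.object1 = object1;
            this.object2 = object2;
        }

        public T1 getObject1() { return object1; }
        public T2 getObject2() { return object2; }
    }

With that in mind, here is processRequestCommand():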

public void processRequestCommand(final ChannelHandlerContext ctx, final RemotingCommand cmd) {
        final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
        final Pair<NettyRequestProcessor, ExecutorService> pair = null == matched ? this.defaultRequestProcessor : matched;
        final int opaque = cmd.getOpaque();
 
        if (pair != null) {
            // build the request-handling task to run asynchronously
            Runnable run = new Runnable() {
                @Override
                public void run() {
                    try {
                        RPCHook rpcHook = NettyRemotingAbstract.this.getRPCHook();
                        if (rpcHook != null) {
                            rpcHook.doBeforeRequest(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd);
                        }
                        // for PULL_MESSAGE, this dispatches to pullMessageProcessor.processRequest()
                        final RemotingCommand response = pair.getObject1().processRequest(ctx, cmd);
                        if (rpcHook != null) {
                            rpcHook.doAfterResponse(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);
                        }
 
                        if (!cmd.isOnewayRPC()) {
                            if (response != null) {
                                response.setOpaque(opaque);
                                response.markResponseType();
                                try {
                                    ctx.writeAndFlush(response);
                                } catch (Throwable e) {
                                    log.error("process request over, but response failed", e);
                                    log.error(cmd.toString());
                                    log.error(response.toString());
                                }
                            } else {
 
                            }
                        }
                    } catch (Throwable e) {
                        log.error("process request exception", e);
                        log.error(cmd.toString());
 
                        if (!cmd.isOnewayRPC()) {
                            final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_ERROR,
                                RemotingHelper.exceptionSimpleDesc(e));
                            response.setOpaque(opaque);
                            ctx.writeAndFlush(response);
                        }
                    }
                }
            };
            // if the processor reports system busy, reject the pull and apply flow control
            if (pair.getObject1().rejectRequest()) {
                final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                    "[REJECTREQUEST]system busy, start flow control for a while");
                response.setOpaque(opaque);
                ctx.writeAndFlush(response);
                return;
            }
            // submit the task to the processor's dedicated thread pool for execution
            try {
                final RequestTask requestTask = new RequestTask(run, ctx.channel(), cmd);
                pair.getObject2().submit(requestTask);
            } catch (RejectedExecutionException e) {
                if ((System.currentTimeMillis() % 10000) == 0) {
                    log.warn(RemotingHelper.parseChannelRemoteAddr(ctx.channel())
                        + ", too many requests and system thread pool busy, RejectedExecutionException "
                        + pair.getObject2().toString()
                        + " request code: " + cmd.getCode());
                }
 
                if (!cmd.isOnewayRPC()) {
                    final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                        "[OVERLOAD]system busy, start flow control for a while");
                    response.setOpaque(opaque);
                    ctx.writeAndFlush(response);
                }
            }
        } else {
            String error = " request type " + cmd.getCode() + " not supported";
            final RemotingCommand response =
                RemotingCommand.createResponseCommand(RemotingSysResponseCode.REQUEST_CODE_NOT_SUPPORTED, error);
            response.setOpaque(opaque);
            ctx.writeAndFlush(response);
            log.error(RemotingHelper.parseChannelRemoteAddr(ctx.channel()) + error);
        }
    }

pair.getObject1().processRequest(ctx, cmd) is the call that reaches pullMessageProcessor's processRequest() method.
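On the PullMessageProcessor side, that entry method is a thin wrapper. Roughly (a sketch based on the 4.x source; the brokerAllowSuspend flag is what later enables long polling when no message is available):

    // Sketch of PullMessageProcessor#processRequest (4.x). It delegates to an
    // internal overload with brokerAllowSuspend = true, allowing the broker to
    // park the request (long polling) instead of returning an empty result.
    @Override
    public RemotingCommand processRequest(ChannelHandlerContext ctx, RemotingCommand request)
        throws RemotingCommandException {
        return this.processRequest(ctx.channel(), request, true);
    }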

3. How does pullMessageProcessor pull messages?


      Many bloggers have already covered this part; I may diagram it in a future post. Here is a source-code analysis I personally recommend: RocketMQ源码(19)—Broker处理DefaultMQPushConsumer发起的拉取消息请求源码【一万字】-CSDN博客

Summary:

      1. pullMessageProcessor is created and instantiated when the broker starts, combined with pullMessageExecutor into a Pair object, and registered into NettyRemotingServer's processorTable; it is ultimately managed by NettyRemotingServer.

      2. At broker startup, NettyServerHandler is registered into Netty's ServerBootstrap and the Netty server starts listening for requests. Based on the code carried in the request body, the broker fetches the matching processor and its executor from processorTable and executes the request.

      3. The overall flow is elegantly designed: polymorphism and generics make it abstract and reusable. Pair<T1, T2> uses generics to manage a processor together with its thread pool (T1 is NettyRequestProcessor, the parent interface of all processors; T2 is ExecutorService, the parent type of the thread pools), so every processor in RocketMQ and its corresponding thread pool can be wrapped in a Pair. With that in place, processRequestCommand() in NettyRemotingAbstract becomes the broker's single, clean entry point for handling REQUEST_COMMAND requests.
