RocketMQ message consumption flow

This article covers RocketMQ's consumption flow, the ACK mechanism, and how consumption failures are handled.

1 The RocketMQ consumption flow

    public static void main(String[] args) throws InterruptedException, MQClientException {

        /*
         * Instantiate with specified consumer group name.
         */
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("please_rename_unique_group_name_4");
        consumer.setConsumeThreadMin(3);
        consumer.setConsumeThreadMax(3);

        consumer.setConsumeMessageBatchMaxSize(2);
        final Random random=new Random();

        consumer.setNamesrvAddr("127.0.0.1:9876"); // TODO add by yunai

        /*
         * Specify name server addresses.
         * <p/>
         *
         * Alternatively, you may specify name server addresses via exporting environmental variable: NAMESRV_ADDR
         * <pre>
         * {@code
         * consumer.setNamesrvAddr("name-server1-ip:9876;name-server2-ip:9876");
         * }
         * </pre>
         */

        /*
         * Specify where to start in case the specified consumer group is a brand new one.
         */
        consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);

        /*
         * Subscribe one or more topics to consume.
         */
//        consumer.subscribe("TopicRead3", "*");
//        consumer.subscribe("TopicTest", "*");
        consumer.subscribe("TopicTestjjj", "*");
     

        /*
         *  Register callback to execute on arrival of messages fetched from brokers.
         */
        consumer.registerMessageListener(new MessageListenerConcurrently() {

            @Override
            public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                ConsumeConcurrentlyContext context) {
            	
            	for (MessageExt msg : msgs) {
            		  System.out.println("----"+msg + " ----------- " +new String( msg.getBody()) + "---");
                      
                      try {
                          // simulate business-logic processing...
                         // TimeUnit.SECONDS.sleep(random.nextInt(10));
                      } catch (Exception e) {
                          e.printStackTrace();
                      }

                     
				}
            	 return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
              
              // return ConsumeConcurrentlyStatus.RECONSUME_LATER;
            }
        });

        /*
         *  Launch the consumer instance.
         */
        consumer.start();

        System.out.printf("Consumer Started.%n");
    }

The call consumer.subscribe("TopicTestjjj", "*") executes:
SubscriptionData subscriptionData = FilterAPI.buildSubscriptionData(this.defaultMQPushConsumer.getConsumerGroup(),
    topic, subExpression);
this.rebalanceImpl.getSubscriptionInner().put(topic, subscriptionData);
It builds a SubscriptionData object and stores it in rebalanceImpl.getSubscriptionInner(), keyed by topic. What does SubscriptionData contain?
SubscriptionData [classFilterMode=false, topic=TopicTestjjj, subString=*, tagsSet=[], codeSet=[], subVersion=1546866999056]
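
For comparison, here is a hedged sketch of what subscribing with a tag expression instead of "*" would produce (illustrative, not captured output; the subVersion and hash values will differ):

    // tags are parsed into tagsSet; codeSet holds the tags' hash codes, which the broker
    // uses for consume-queue-level tag filtering
    consumer.subscribe("TopicTestjjj", "TagA || TagB");
    // -> SubscriptionData [classFilterMode=false, topic=TopicTestjjj, subString=TagA || TagB,
    //      tagsSet=[TagA, TagB], codeSet=[<hash of TagA>, <hash of TagB>], subVersion=...]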

The consumer's start() method ultimately executes the following (DefaultMQPushConsumerImpl#start):

   public void start() throws MQClientException {
        switch (this.serviceState) {
            case CREATE_JUST:
                log.info("the consumer [{}] start beginning. messageModel={}, isUnitMode={}", this.defaultMQPushConsumer.getConsumerGroup(),
                    this.defaultMQPushConsumer.getMessageModel(), this.defaultMQPushConsumer.isUnitMode());
                this.serviceState = ServiceState.START_FAILED;

                // check the configuration
                this.checkConfig();

                // copy subscription data for the rebalancer; this also subscribes to the
                // %RETRY% + consumerGroup topic, i.e. the retry queue.
                // What ends up in that queue and where it comes from is covered later.
                this.copySubscription();       //@1

                // set instanceName to the stringified PID, e.g. 10072
                if (this.defaultMQPushConsumer.getMessageModel() == MessageModel.CLUSTERING) {
                    this.defaultMQPushConsumer.changeInstanceNameToPID();
                }

                // get the MQClientInstance; the clientId is ip@instanceName, e.g. 192.168.0.1@10072
                // only one MQClientInstance is created per client
                this.mQClientFactory = MQClientManager.getInstance().getAndCreateMQClientInstance(this.defaultMQPushConsumer, this.rpcHook);    //@2

                // configure the rebalancer
                this.rebalanceImpl.setConsumerGroup(this.defaultMQPushConsumer.getConsumerGroup());
                // the default message model is clustering: each message is consumed by exactly one consumer in the group;
                // broadcasting mode can also be set, where every consumer in the group consumes every message once
                this.rebalanceImpl.setMessageModel(this.defaultMQPushConsumer.getMessageModel());
                // the default strategy is AllocateMessageQueueAveragely, which splits the queues evenly
                this.rebalanceImpl.setAllocateMessageQueueStrategy(this.defaultMQPushConsumer.getAllocateMessageQueueStrategy());
                this.rebalanceImpl.setmQClientFactory(this.mQClientFactory);

                // wrapper around the pull API
                this.pullAPIWrapper = new PullAPIWrapper(mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup(), isUnitMode());
                this.pullAPIWrapper.registerFilterMessageHook(filterMessageHookList);

                // create the offset store: in clustering mode offsets live on the broker, because consumers in the same group share progress; in broadcasting mode they are kept on the consumer side  @3
                if (this.defaultMQPushConsumer.getOffsetStore() != null) {
                    this.offsetStore = this.defaultMQPushConsumer.getOffsetStore();
                } else {
                    switch (this.defaultMQPushConsumer.getMessageModel()) {
                        case BROADCASTING:
                            this.offsetStore = new LocalFileOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
                            break;
                        case CLUSTERING:
                            this.offsetStore = new RemoteBrokerOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
                            break;
                        default:
                            break;
                    }
                }
                this.offsetStore.load();   // in broadcasting mode this loads the local offset file

                // create the ConsumeMessageService matching the listener type: orderly or concurrently
                if (this.getMessageListenerInner() instanceof MessageListenerOrderly) {
                    this.consumeOrderly = true;
                    this.consumeMessageService = new ConsumeMessageOrderlyService(this, (MessageListenerOrderly)this.getMessageListenerInner());
                } else if (this.getMessageListenerInner() instanceof MessageListenerConcurrently) {
                    this.consumeOrderly = false;
                    this.consumeMessageService = new ConsumeMessageConcurrentlyService(this, (MessageListenerConcurrently)this.getMessageListenerInner());
                }
                this.consumeMessageService.start();        //@4

                // register this consumer with the MQClientInstance  @5
                boolean registerOK = mQClientFactory.registerConsumer(this.defaultMQPushConsumer.getConsumerGroup(), this);    
                if (!registerOK) {
                    this.serviceState = ServiceState.CREATE_JUST;
                    this.consumeMessageService.shutdown();
                    throw new MQClientException("The consumer group[" + this.defaultMQPushConsumer.getConsumerGroup()
                        + "] has been created before, specify another name please." + FAQUrl.suggestTodo(FAQUrl.GROUP_NAME_DUPLICATE_URL),
                        null);
                }
                mQClientFactory.start();
                log.info("the consumer [{}] start OK.", this.defaultMQPushConsumer.getConsumerGroup());

                // set the service state
                this.serviceState = ServiceState.RUNNING;
                break;
            case RUNNING:
            case START_FAILED:
            case SHUTDOWN_ALREADY:
                throw new MQClientException("The PushConsumer service state not OK, maybe started once, "//
                    + this.serviceState//
                    + FAQUrl.suggestTodo(FAQUrl.CLIENT_SERVICE_NOT_OK),
                    null);
            default:
                break;
        }

        // fetch TopicRouteData from the name server and update topic publish/subscribe info and MessageQueues (called immediately on consumer start, then periodically)
        this.updateTopicSubscribeInfoWhenSubscriptionChanged();

        // send heartbeats to all brokers in the TopicRouteData, registering this consumer/producer on them (called immediately on consumer start, then periodically)
        this.mQClientFactory.sendHeartbeatToAllBrokerWithLock();

        // wake up the rebalance service; the first message pull starts right after the rebalance
        this.mQClientFactory.rebalanceImmediately();
    }

Notes on the numbered points:
@1: subscribes to the consumer group's retry queue, a topic on the broker named %RETRY% + consumerGroup. Where does its data come from? See section 2.
@2: creates the MQClientInstance. One instance is created per client, identified by a clientId of the form 192.168.0.1@10072; its start() method is shown below.
@3: creates the offset store for the consume progress of the message queues. It is bound to the group name and internally keeps a ConcurrentHashMap<MessageQueue, AtomicLong> offsetTable holding the consume offset of every queue. In broadcasting mode offsets are stored locally, in clustering mode on the broker. Analyzed later.
@4: consumeMessageService is the consume service; it ultimately calls our custom listener and collects the returned status code.
@5: mQClientFactory.registerConsumer(this.defaultMQPushConsumer.getConsumerGroup(), this) registers this consumer in the MQClientInstance's consumer table under its group name; if the group is already registered in this client, registration fails and start() throws the exception shown above.

1.2 A look at MQClientInstance

org.apache.rocketmq.client.impl.factory.MQClientInstance
This class starts the rebalance service and the pull service, periodically fetches name server and broker information, sends heartbeats, and reports consume offsets to the broker every 5 seconds, among other scheduled tasks.
The start() method of MQClientInstance:

    public void start() throws MQClientException {

        synchronized (this) {
            switch (this.serviceState) {
                case CREATE_JUST:
                    this.serviceState = ServiceState.START_FAILED;
                    // If not specified,looking address from name server
                    if (null == this.clientConfig.getNamesrvAddr()) {
                        this.mQClientAPIImpl.fetchNameServerAddr(); // TODO: if no address was configured, it can be fetched from an HTTP endpoint
                    }
                    // Start request-response channel
                    this.mQClientAPIImpl.start();
                    // start the scheduled tasks
                    this.startScheduledTask();
                    // Start pull service
                    this.pullMessageService.start();
                    // Start Consumer rebalance service
                    this.rebalanceService.start();
                    // start the internal default producer, used by the consumer for sendMessageBack; it is started with startFactory=false, so MQClientInstance.start() (this method) is not re-entered
                    this.defaultMQProducer.getDefaultMQProducerImpl().start(false);
                    log.info("the client factory [{}] start OK", this.clientId);
                    this.serviceState = ServiceState.RUNNING;
                    break;
                case RUNNING:
                    break;
                case SHUTDOWN_ALREADY:
                    break;
                case START_FAILED:
                    throw new MQClientException("The Factory object[" + this.getClientId() + "] has been created before, and failed.", null);
                default:
                    break;
            }
        }
    }

1.3 The rebalance service RebalanceService

org.apache.rocketmq.client.impl.consumer.RebalanceService
RebalanceService is the load-balancing service. Producers send messages to one or more brokers; each broker keeps a single commit log shared by all topics, and on the consumption side each topic on a broker has multiple consume queues (4 by default). All of these consume queues (with 2 brokers that makes 8) are distributed among the consumers of the group, and at any point in time one consume queue is consumed by at most one consumer in the group. Because brokers and consumers come and go dynamically, the RebalanceService periodically rebalances the consume queues across consumers, by default every 20 seconds.

  @Override
    public void run() {
        log.info(this.getServiceName() + " service started");

        while (!this.isStopped()) {
            // wait up to 20s, then run one round of rebalancing
            this.waitForRunning(waitInterval);
            this.mqClientFactory.doRebalance();
        }

        log.info(this.getServiceName() + " service end");
    }

The method that actually performs the rebalancing is rebalanceByTopic:

/**
     * Rebalance a single topic for this consumer.
     *
     * @param topic   the topic
     * @param isOrder whether consumption is ordered
     */
    private void rebalanceByTopic(final String topic, final boolean isOrder) {
        switch (messageModel) {
            case BROADCASTING: {  // broadcasting mode: every consumer in the group consumes every message
                Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
                if (mqSet != null) {
                    boolean changed = this.updateProcessQueueTableInRebalance(topic, mqSet, isOrder);
                    if (changed) {
                        this.messageQueueChanged(topic, mqSet, mqSet);
                        log.info("messageQueueChanged {} {} {} {}", //
                            consumerGroup, //
                            topic, //
                            mqSet, //
                            mqSet);
                    }
                } else {
                    log.warn("doRebalance, {}, but the topic[{}] not exist.", consumerGroup, topic);
                }
                break;
            }
            case CLUSTERING: {     // clustering mode (the default): each message is consumed by one consumer in the group
                // get the message queues and the consumer ids for this topic
                // where does topicSubscribeInfoTable come from? (It is updated from the topic route data fetched from the name server.)
                // e.g. with two brokers and 4 MessageQueues per broker, there are 8 queues in total
                Set<MessageQueue> mqSet = this.topicSubscribeInfoTable.get(topic);
                // the consumer ids in the group, obtained by querying a broker
                List<String> cidAll = this.mQClientFactory.findConsumerIdList(topic, consumerGroup);
                if (null == mqSet) {
                    if (!topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
                        log.warn("doRebalance, {}, but the topic[{}] not exist.", consumerGroup, topic);
                    }
                }

                if (null == cidAll) {
                    log.warn("doRebalance, {} {}, get consumer id list failed", consumerGroup, topic);
                }

                if (mqSet != null && cidAll != null) {
                    // sort both the queue list and the consumer id list; allocation is done on each client, so sorting keeps every client's view consistent
                    List<MessageQueue> mqAll = new ArrayList<>();
                    mqAll.addAll(mqSet);

                    Collections.sort(mqAll);
                    Collections.sort(cidAll);

                    AllocateMessageQueueStrategy strategy = this.allocateMessageQueueStrategy;   //AllocateMessageQueueAveragely

                    // allocate consume queues according to the allocation strategy
                    List<MessageQueue> allocateResult;
                    try {
                        allocateResult = strategy.allocate(this.consumerGroup, this.mQClientFactory.getClientId(), mqAll, cidAll);
                    } catch (Throwable e) {
                        log.error("AllocateMessageQueueStrategy.allocate Exception. allocateMessageQueueStrategyName={}", strategy.getName(), e);
                        return;
                    }

                    Set<MessageQueue> allocateResultSet = new HashSet<>();
                    if (allocateResult != null) {
                        // the queues the strategy allocated to this client
                        allocateResultSet.addAll(allocateResult);
                    }

                    // update the process queues: compare the newly allocated queues with the ones currently held locally
                    boolean changed = this.updateProcessQueueTableInRebalance(topic, allocateResultSet, isOrder);
                    if (changed) {
                        log.info(
                            "rebalanced result changed. allocateMessageQueueStrategyName={}, group={}, topic={}, clientId={}, mqAllSize={}, cidAllSize={}, "
                                + "rebalanceResultSize={}, rebalanceResultSet={}",
                            strategy.getName(), consumerGroup, topic, this.mQClientFactory.getClientId(), mqSet.size(), cidAll.size(),
                            allocateResultSet.size(), allocateResultSet);
                        this.messageQueueChanged(topic, mqSet, allocateResultSet);
                    }
                }
                break;
            }
            default:
                break;
        }
    }
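
The strategy.allocate call above uses AllocateMessageQueueAveragely by default. The following is a simplified sketch of the contiguous-averaging idea (not the library source): each consumer gets one contiguous slice of the sorted queue list, so 8 queues and 3 consumers end up split 3/3/2.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class AverageAllocateSketch {

        // give each consumer a contiguous slice; the first (total % consumers) consumers get one extra queue
        static <T> List<T> allocate(List<T> queues, List<String> cids, String myCid) {
            int index = cids.indexOf(myCid);
            int mod = queues.size() % cids.size();
            int size = queues.size() / cids.size() + (index < mod ? 1 : 0);
            int start = index * (queues.size() / cids.size()) + Math.min(index, mod);
            List<T> result = new ArrayList<>();
            for (int i = start; i < start + size && i < queues.size(); i++) {
                result.add(queues.get(i));
            }
            return result;
        }

        public static void main(String[] args) {
            List<String> queues = new ArrayList<>();
            for (int i = 0; i < 8; i++) {
                queues.add("queue-" + i);
            }
            List<String> cids = Arrays.asList("cid-0", "cid-1", "cid-2");
            for (String cid : cids) {
                // prints: cid-0 -> queues 0..2, cid-1 -> queues 3..5, cid-2 -> queues 6..7
                System.out.println(cid + " -> " + allocate(queues, cids, cid));
            }
        }
    }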

In clustering mode, this.updateProcessQueueTableInRebalance(topic, allocateResultSet, isOrder) compares the message queues just allocated to this client with the ones it already holds locally: for newly added queues a pull request is started, while queues that are no longer allocated to this client have their ProcessQueue marked dropped=true.

1.4 The pull service PullMessageService

org.apache.rocketmq.client.impl.consumer.PullMessageService
The main work of the pull service happens in the method below:
1. Perform various checks, e.g. whether more than 1000 unconsumed messages are already cached locally, or whether the process queue has been dropped.
2. Set up the pullCallback that decides what to do when the broker returns messages. On a successful pull it records the offset for the next pull, resubmits the pull request so pulling continues, puts the fetched messages into the processQueue, and submits them to the consume service.
The offset for the next pull is carried back in the PullResult, i.e. the broker computes it:
nextBeginOffset = offset + (i / ConsumeQueue.CQ_STORE_UNIT_SIZE);
On the broker, i is the number of ConsumeQueue bytes read this time, and ConsumeQueue.CQ_STORE_UNIT_SIZE is 20 because each ConsumeQueue entry is 20 bytes. nextBeginOffset is therefore this pull's offset plus the bytes read divided by 20, i.e. the logical offset (an entry count, not a byte count) from which the next pull starts; a worked example follows this paragraph.
After messages are received they are stored in the local processQueue, and the client checks whether the previous batch in the processQueue has been consumed; in ordered consumption the next pull is not allowed until it has.
The messages are then handed to the consume service. Each consume queue is consumed by at most one consumer, but both the pull batch size and the number of consume threads are configurable: for example, if a pull returns 5 messages and consumeMessageBatchMaxSize is 2 (as configured at the top), the batch is split into three ConsumeRequests that the 3 consume threads execute concurrently. This is concurrent consumption.
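
A worked example of the broker-side arithmetic above, using made-up numbers:

    // Hypothetical values, only to illustrate nextBeginOffset = offset + (i / CQ_STORE_UNIT_SIZE)
    public class NextBeginOffsetExample {
        public static void main(String[] args) {
            final int CQ_STORE_UNIT_SIZE = 20;        // one ConsumeQueue entry is 20 bytes
            long offset = 100;                        // logical offset this pull started from
            int bytesRead = 32 * CQ_STORE_UNIT_SIZE;  // the broker read 32 entries this time
            long nextBeginOffset = offset + (bytesRead / CQ_STORE_UNIT_SIZE);
            System.out.println(nextBeginOffset);      // 132: the next PullRequest starts here
        }
    }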

public void pullMessage(final PullRequest pullRequest) {
        final ProcessQueue processQueue = pullRequest.getProcessQueue();
        // When is a processQueue dropped? The RebalanceService rebalances the message queues, and
        // queues no longer allocated to this consumer are marked dropped. MessageQueue and ProcessQueue map 1:1.
        if (processQueue.isDropped()) {
            log.info("the pull request[{}] is dropped.", pullRequest.toString());
            return;
        }

        // record the last pull timestamp for this queue
        pullRequest.getProcessQueue().setLastPullTimestamp(System.currentTimeMillis());

        // make sure the consumer is running; if not, retry the pull later
        try {
            this.makeSureStateOK();
        } catch (MQClientException e) {
            log.warn("pullMessage exception, consumer state not ok", e);
            this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_EXCEPTION);
            return;
        }

        // check whether the consumer is paused
        if (this.isPause()) {
            log.warn("consumer was paused, execute pull request later. instanceName={}, group={}", this.defaultMQPushConsumer.getInstanceName(),
                this.defaultMQPushConsumer.getConsumerGroup());
            this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_SUSPEND);
            return;
        }

        // flow control: check whether too many messages are held locally (default threshold 1000)
        // processQueue is the consumer-side buffer; each PullRequest has one, i.e. every MessageQueue has a local ProcessQueue
        // a PullRequest is resubmitted after each pull, looping forever; note the msgCount check here
        long size = processQueue.getMsgCount().get();
        if (size > this.defaultMQPushConsumer.getPullThresholdForQueue()) {
            this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_FLOW_CONTROL); // resubmit the pull request after a delay (50ms)
            // flowControlTimes counts how often flow control was triggered (logged every 1000 occurrences)
            if ((flowControlTimes1++ % 1000) == 0) {
                log.warn(
                    "the consumer message buffer is full, so do flow control, minOffset={}, maxOffset={}, size={}, pullRequest={}, flowControlTimes={}",
                    processQueue.getMsgTreeMap().firstKey(), processQueue.getMsgTreeMap().lastKey(), size, pullRequest, flowControlTimes1);
            }
            return;
        }
       
        // concurrent (non-ordered) consumption
        if (!this.consumeOrderly) { // check whether the offset span of cached messages is too large (> 2000)
            if (processQueue.getMaxSpan() > this.defaultMQPushConsumer.getConsumeConcurrentlyMaxSpan()) {
                this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_FLOW_CONTROL); // resubmit the pull request after a delay (50ms)
                if ((flowControlTimes2++ % 1000) == 0) {
                    log.warn(
                        "the queue's messages, span too long, so do flow control, minOffset={}, maxOffset={}, maxSpan={}, pullRequest={}, flowControlTimes={}",
                        processQueue.getMsgTreeMap().firstKey(), processQueue.getMsgTreeMap().lastKey(), processQueue.getMaxSpan(),
                        pullRequest, flowControlTimes2);
                }
                return;
            }
        } else { // ordered consumption
            if (processQueue.isLocked()) {   // the queue has been locked on the broker by this client
                if (!pullRequest.isLockedFirst()) {  // first pull since the lock was obtained: fix the offset from the broker
                    final long offset = this.rebalanceImpl.computePullFromWhere(pullRequest.getMessageQueue());
                    boolean brokerBusy = offset < pullRequest.getNextOffset();
                    log.info("the first time to pull message, so fix offset from broker. pullRequest: {} NewOffset: {} brokerBusy: {}",
                        pullRequest, offset, brokerBusy);
                    if (brokerBusy) {
                        log.info(
                            "[NOTIFYME]the first time to pull message, but pull request offset larger than broker consume offset. pullRequest: {} NewOffset: "
                                + "{}",
                            pullRequest, offset);
                    }

                    pullRequest.setLockedFirst(true);
                    pullRequest.setNextOffset(offset);
                }
            } else {
                this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_EXCEPTION);
                log.info("pull message later because not locked in broker, {}", pullRequest);
                return;
            }
        }

        // get the subscription data for the topic; if missing, retry the pull later
        final SubscriptionData subscriptionData = this.rebalanceImpl.getSubscriptionInner().get(pullRequest.getMessageQueue().getTopic());
        if (null == subscriptionData) {
            this.executePullRequestLater(pullRequest, PULL_TIME_DELAY_MILLS_WHEN_EXCEPTION);
            log.warn("find the consumer's subscription failed, {}", pullRequest);
            return;
        }

        final long beginTimestamp = System.currentTimeMillis();

        // pullCallback is passed to this.pullAPIWrapper.pullKernelImpl as the callback and runs after messages are pulled
        PullCallback pullCallback = new PullCallback() {
            @Override
            public void onSuccess(PullResult pullResult) {
                if (pullResult != null) {
                    // decode the ByteBuffer into a List<MessageExt>
                    pullResult = DefaultMQPushConsumerImpl.this.pullAPIWrapper.processPullResult(pullRequest.getMessageQueue(), pullResult, subscriptionData);
                    switch (pullResult.getPullStatus()) {
                        case FOUND:
                            // record where the next pull should start
                            long prevRequestOffset = pullRequest.getNextOffset();
                            // nextBeginOffset is carried back in the PullResult; it is where the next pull starts
                            pullRequest.setNextOffset(pullResult.getNextBeginOffset());

                            // statistics
                            long pullRT = System.currentTimeMillis() - beginTimestamp;
                            DefaultMQPushConsumerImpl.this.getConsumerStatsManager().incPullRT(pullRequest.getConsumerGroup(),
                                pullRequest.getMessageQueue().getTopic(), pullRT);

                            long firstMsgOffset = Long.MAX_VALUE;
                            if (pullResult.getMsgFoundList() == null || pullResult.getMsgFoundList().isEmpty()) {
                                DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);     // no messages in this pull: issue another pull immediately
                            } else {
                                firstMsgOffset = pullResult.getMsgFoundList().get(0).getQueueOffset();

                                // statistics
                                DefaultMQPushConsumerImpl.this.getConsumerStatsManager().incPullTPS(pullRequest.getConsumerGroup(),
                                    pullRequest.getMessageQueue().getTopic(), pullResult.getMsgFoundList().size());

                                // put the pulled messages into the ProcessQueue's TreeMap (a sorted red-black-tree map)
                                // returns true : the previous batch has already been consumed
                                // returns false: the previous batch has not been consumed yet
                                boolean dispathToConsume = processQueue.putMessage(pullResult.getMsgFoundList());

                                // in ordered mode, a consume request is submitted only when dispathToConsume=true, i.e. the previous batch is done
                                // in concurrent mode, dispathToConsume has no effect and the consume request is submitted directly
                                // "consuming" means taking messages out of the processQueue and running them through our consumeMessage(List<MessageExt> msgs, ...) listener
                                DefaultMQPushConsumerImpl.this.consumeMessageService
                                    .submitConsumeRequest(pullResult.getMsgFoundList(), processQueue, pullRequest.getMessageQueue(), dispathToConsume);

                                // submit the next pull request, i.e. put the PullRequest back into pullRequestQueue
                                if (DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval() > 0) {
                                    DefaultMQPushConsumerImpl.this.executePullRequestLater(pullRequest,
                                        DefaultMQPushConsumerImpl.this.defaultMQPushConsumer.getPullInterval());
                                } else {
                                    DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                                }
                            }

                            // if the next begin offset or the first message's offset is smaller than the previous request offset, treat it as a bug and log it
                            if (pullResult.getNextBeginOffset() < prevRequestOffset || firstMsgOffset < prevRequestOffset) {
                                log.warn(
                                    "[BUG] pull message result maybe data wrong, nextBeginOffset: {} firstMsgOffset: {} prevRequestOffset: {}",
                                    pullResult.getNextBeginOffset(),
                                    firstMsgOffset,
                                    prevRequestOffset);
                            }

                            break;
                        case NO_NEW_MSG:
                            // record where the next pull should start
                            pullRequest.setNextOffset(pullResult.getNextBeginOffset());

                            // persist the consume offset
                            DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);

                            // submit the next pull request immediately
                            DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                            break;
                        case NO_MATCHED_MSG:
                            // record where the next pull should start
                            pullRequest.setNextOffset(pullResult.getNextBeginOffset());

                            // persist the consume offset
                            DefaultMQPushConsumerImpl.this.correctTagsOffset(pullRequest);

                            // submit the next pull request immediately
                            DefaultMQPushConsumerImpl.this.executePullRequestImmediately(pullRequest);
                            break;
                        case OFFSET_ILLEGAL:
                            log.warn("the pull request offset illegal, {} {}", //
                                pullRequest.toString(), pullResult.toString());
                            // record where the next pull should start
                            pullRequest.setNextOffset(pullResult.getNextBeginOffset());

                            // mark the process queue as dropped
                            pullRequest.getProcessQueue().setDropped(true);

                            // submit a delayed task to remove the process queue // TODO question: why not remove it immediately?
                            DefaultMQPushConsumerImpl.this.executeTaskLater(new Runnable() {

                                @Override
                                public void run() {
                                    try {
                                        // update the consume offset and sync it to the broker
                                        DefaultMQPushConsumerImpl.this.offsetStore.updateOffset(pullRequest.getMessageQueue(),
                                            pullRequest.getNextOffset(), false);
                                        DefaultMQPushConsumerImpl.this.offsetStore.persist(pullRequest.getMessageQueue());

                                        // remove the process queue
                                        DefaultMQPushConsumerImpl.this.rebalanceImpl.removeProcessQueue(pullRequest.getMessageQueue());

                                        log.warn("fix the pull request offset, {}", pullRequest);
                                    } catch (Throwable e) {
                                        log.error("executeTaskLater Exception", e);
                                    }
                                }
                            }, 10000);
                            break;
                        default:
                            break;
                    }
                }
            }

1.5 The consume service ConsumeMessageConcurrentlyService

org.apache.rocketmq.client.impl.consumer.ConsumeMessageConcurrentlyService
After messages are pulled, ConsumeMessageConcurrentlyService#submitConsumeRequest (shown further below) is called. It compares the number of pulled messages with consumeMessageBatchMaxSize; if there are more messages than the batch size, the batch is split and consumed concurrently on the consume thread pool.
Below is the output with the consume thread count set to 3; three threads take part in consumption.

ConsumeMessageThread_3----MessageExt [queueId=3, storeSize=180, queueOffset=302, sysFlag=0, bornTimestamp=1546913186472, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186472, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9622, commitLogOffset=824866, bodyCRC=1887188941, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=303, CONSUME_START_TIME=1546913186473, UNIQ_KEY=C0A801691BFC73D16E932637BAA8016D, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->365---
ConsumeMessageThread_1----MessageExt [queueId=0, storeSize=180, queueOffset=203, sysFlag=0, bornTimestamp=1546913186483, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186483, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C96D6, commitLogOffset=825046, bodyCRC=1769301623, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=204, CONSUME_START_TIME=1546913186484, UNIQ_KEY=C0A801691BFC73D16E932637BAB3016E, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->366---
ConsumeMessageThread_2----MessageExt [queueId=1, storeSize=180, queueOffset=256, sysFlag=0, bornTimestamp=1546913186494, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186494, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C978A, commitLogOffset=825226, bodyCRC=510809825, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=257, CONSUME_START_TIME=1546913186496, UNIQ_KEY=C0A801691BFC73D16E932637BABE016F, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->367---
ConsumeMessageThread_3----MessageExt [queueId=2, storeSize=180, queueOffset=177, sysFlag=0, bornTimestamp=1546913186505, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186505, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C983E, commitLogOffset=825406, bodyCRC=248335216, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=178, CONSUME_START_TIME=1546913186506, UNIQ_KEY=C0A801691BFC73D16E932637BAC90170, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->368---
ConsumeMessageThread_1----MessageExt [queueId=3, storeSize=180, queueOffset=303, sysFlag=0, bornTimestamp=1546913186516, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186516, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C98F2, commitLogOffset=825586, bodyCRC=2043313126, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=304, CONSUME_START_TIME=1546913186518, UNIQ_KEY=C0A801691BFC73D16E932637BAD40171, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->369---
ConsumeMessageThread_2----MessageExt [queueId=0, storeSize=180, queueOffset=204, sysFlag=0, bornTimestamp=1546913186527, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186527, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C99A6, commitLogOffset=825766, bodyCRC=420344323, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=205, CONSUME_START_TIME=1546913186528, UNIQ_KEY=C0A801691BFC73D16E932637BADF0172, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->370---
ConsumeMessageThread_3----MessageExt [queueId=1, storeSize=180, queueOffset=257, sysFlag=0, bornTimestamp=1546913186538, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186538, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9A5A, commitLogOffset=825946, bodyCRC=1846198933, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=258, CONSUME_START_TIME=1546913186539, UNIQ_KEY=C0A801691BFC73D16E932637BAEA0173, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->371---
ConsumeMessageThread_1----MessageExt [queueId=2, storeSize=180, queueOffset=178, sysFlag=0, bornTimestamp=1546913186549, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186550, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9B0E, commitLogOffset=826126, bodyCRC=1996722991, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=179, CONSUME_START_TIME=1546913186552, UNIQ_KEY=C0A801691BFC73D16E932637BAF50174, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->372---
ConsumeMessageThread_2----MessageExt [queueId=3, storeSize=180, queueOffset=304, sysFlag=0, bornTimestamp=1546913186561, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186562, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9BC2, commitLogOffset=826306, bodyCRC=304057, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=305, CONSUME_START_TIME=1546913186564, UNIQ_KEY=C0A801691BFC73D16E932637BB010175, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->373---
ConsumeMessageThread_3----MessageExt [queueId=0, storeSize=180, queueOffset=205, sysFlag=0, bornTimestamp=1546913186573, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186575, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9C76, commitLogOffset=826486, bodyCRC=509621786, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=206, CONSUME_START_TIME=1546913186576, UNIQ_KEY=C0A801691BFC73D16E932637BB0D0176, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->374---
ConsumeMessageThread_1----MessageExt [queueId=1, storeSize=180, queueOffset=258, sysFlag=0, bornTimestamp=1546913186586, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186586, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9D2A, commitLogOffset=826666, bodyCRC=1768359564, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=259, CONSUME_START_TIME=1546913186588, UNIQ_KEY=C0A801691BFC73D16E932637BB1A0177, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->375---
ConsumeMessageThread_2----MessageExt [queueId=2, storeSize=180, queueOffset=179, sysFlag=0, bornTimestamp=1546913186597, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186597, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9DDE, commitLogOffset=826846, bodyCRC=1886279478, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=180, CONSUME_START_TIME=1546913186598, UNIQ_KEY=C0A801691BFC73D16E932637BB250178, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->376---
ConsumeMessageThread_3----MessageExt [queueId=3, storeSize=180, queueOffset=305, sysFlag=0, bornTimestamp=1546913186608, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186608, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9E92, commitLogOffset=827026, bodyCRC=124348320, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=306, CONSUME_START_TIME=1546913186611, UNIQ_KEY=C0A801691BFC73D16E932637BB300179, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->377---
ConsumeMessageThread_1----MessageExt [queueId=0, storeSize=180, queueOffset=206, sysFlag=0, bornTimestamp=1546913186620, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186621, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9F46, commitLogOffset=827206, bodyCRC=399931953, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=207, CONSUME_START_TIME=1546913186623, UNIQ_KEY=C0A801691BFC73D16E932637BB3C017A, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->378---
ConsumeMessageThread_2----MessageExt [queueId=1, storeSize=180, queueOffset=259, sysFlag=0, bornTimestamp=1546913186632, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186633, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000C9FFA, commitLogOffset=827386, bodyCRC=1624328871, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=260, CONSUME_START_TIME=1546913186636, UNIQ_KEY=C0A801691BFC73D16E932637BB48017B, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->379---
ConsumeMessageThread_3----MessageExt [queueId=2, storeSize=180, queueOffset=180, sysFlag=0, bornTimestamp=1546913186647, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186647, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA0AE, commitLogOffset=827566, bodyCRC=513142476, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=181, CONSUME_START_TIME=1546913186648, UNIQ_KEY=C0A801691BFC73D16E932637BB57017C, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->380---
ConsumeMessageThread_1----MessageExt [queueId=3, storeSize=180, queueOffset=306, sysFlag=0, bornTimestamp=1546913186658, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186658, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA162, commitLogOffset=827746, bodyCRC=1771232858, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=307, CONSUME_START_TIME=1546913186659, UNIQ_KEY=C0A801691BFC73D16E932637BB62017D, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->381---
ConsumeMessageThread_2----MessageExt [queueId=0, storeSize=180, queueOffset=207, sysFlag=0, bornTimestamp=1546913186669, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186670, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA216, commitLogOffset=827926, bodyCRC=1889243104, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=208, CONSUME_START_TIME=1546913186671, UNIQ_KEY=C0A801691BFC73D16E932637BB6D017E, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->382---
ConsumeMessageThread_3----MessageExt [queueId=1, storeSize=180, queueOffset=260, sysFlag=0, bornTimestamp=1546913186692, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186695, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA2CA, commitLogOffset=828106, bodyCRC=127713142, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=261, CONSUME_START_TIME=1546913186697, UNIQ_KEY=C0A801691BFC73D16E932637BB84017F, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->383---
ConsumeMessageThread_1----MessageExt [queueId=2, storeSize=180, queueOffset=181, sysFlag=0, bornTimestamp=1546913186706, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186706, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA37E, commitLogOffset=828286, bodyCRC=435694293, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=182, CONSUME_START_TIME=1546913186707, UNIQ_KEY=C0A801691BFC73D16E932637BB920180, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->384---
ConsumeMessageThread_2----MessageExt [queueId=3, storeSize=180, queueOffset=307, sysFlag=0, bornTimestamp=1546913186717, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186717, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA432, commitLogOffset=828466, bodyCRC=1862212163, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=308, CONSUME_START_TIME=1546913186718, UNIQ_KEY=C0A801691BFC73D16E932637BB9D0181, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->385---
ConsumeMessageThread_3----MessageExt [queueId=0, storeSize=180, queueOffset=208, sysFlag=0, bornTimestamp=1546913186728, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186728, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA4E6, commitLogOffset=828646, bodyCRC=2012630009, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=209, CONSUME_START_TIME=1546913186729, UNIQ_KEY=C0A801691BFC73D16E932637BBA80182, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->386---
ConsumeMessageThread_1----MessageExt [queueId=1, storeSize=180, queueOffset=261, sysFlag=0, bornTimestamp=1546913186739, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186739, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA59A, commitLogOffset=828826, bodyCRC=15825775, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=262, CONSUME_START_TIME=1546913186740, UNIQ_KEY=C0A801691BFC73D16E932637BBB30183, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->387---
ConsumeMessageThread_2----MessageExt [queueId=2, storeSize=180, queueOffset=182, sysFlag=0, bornTimestamp=1546913186750, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186750, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA64E, commitLogOffset=829006, bodyCRC=273573630, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=183, CONSUME_START_TIME=1546913186751, UNIQ_KEY=C0A801691BFC73D16E932637BBBE0184, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->388---
ConsumeMessageThread_3----MessageExt [queueId=3, storeSize=180, queueOffset=308, sysFlag=0, bornTimestamp=1546913186761, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186761, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA702, commitLogOffset=829186, bodyCRC=1732859496, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=309, CONSUME_START_TIME=1546913186762, UNIQ_KEY=C0A801691BFC73D16E932637BBC90185, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->389---
ConsumeMessageThread_1----MessageExt [queueId=0, storeSize=180, queueOffset=209, sysFlag=0, bornTimestamp=1546913186772, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186772, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA7B6, commitLogOffset=829366, bodyCRC=126803853, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=210, CONSUME_START_TIME=1546913186773, UNIQ_KEY=C0A801691BFC73D16E932637BBD40186, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->390---
ConsumeMessageThread_2----MessageExt [queueId=1, storeSize=180, queueOffset=262, sysFlag=0, bornTimestamp=1546913186783, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186783, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA86A, commitLogOffset=829546, bodyCRC=1888087835, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=263, CONSUME_START_TIME=1546913186784, UNIQ_KEY=C0A801691BFC73D16E932637BBDF0187, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->391---
ConsumeMessageThread_3----MessageExt [queueId=2, storeSize=180, queueOffset=183, sysFlag=0, bornTimestamp=1546913186794, bornHost=/192.168.25.1:23425, storeTimestamp=1546913186795, storeHost=/192.168.25.1:10911, msgId=C0A8190100002A9F00000000000CA91E, commitLogOffset=829726, bodyCRC=1770045089, reconsumeTimes=0, preparedTransactionOffset=0, toString()=Message [topic=TopicTestjjj, flag=0, properties={MIN_OFFSET=0, MAX_OFFSET=184, CONSUME_START_TIME=1546913186797, UNIQ_KEY=C0A801691BFC73D16E932637BBEA0188, WAIT=true, TAGS=TagA}, body=15]] ----------- producer1-->392---

 /**
     * Submit a consume request.
     * If the number of pulled messages <= the batch consume size, the batch is consumed as-is without splitting.
     * If the number of pulled messages > the batch consume size, it is split into several requests consumed concurrently.
     * If the consume thread pool is full and throws {@link RejectedExecutionException}, the request is resubmitted 5s later.
     *
     * @param msgs              the messages
     * @param processQueue      the process queue
     * @param messageQueue      the message queue
     * @param dispatchToConsume whether to dispatch for consumption; unused in this method
     */
    @Override
    public void submitConsumeRequest(final List<MessageExt> msgs,
                                     final ProcessQueue processQueue,
                                     final MessageQueue messageQueue,
                                     final boolean dispatchToConsume) {

        final int consumeBatchSize = this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
        if (msgs.size() <= consumeBatchSize) { // pulled message count <= batch consume size: consume directly
            // ConsumeRequest is a Runnable that will invoke our registered listener for these messages
            ConsumeRequest consumeRequest = new ConsumeRequest(msgs, processQueue, messageQueue);
            try {
                this.consumeExecutor.submit(consumeRequest);
            } catch (RejectedExecutionException e) {
                this.submitConsumeRequestLater(consumeRequest);
            }
        } else { // more messages than the batch size: split into several consume requests
            for (int total = 0; total < msgs.size(); ) {
                // collect the messages for this split request
                List<MessageExt> msgThis = new ArrayList<>(consumeBatchSize);
                for (int i = 0; i < consumeBatchSize; i++, total++) {
                    if (total < msgs.size()) {
                        msgThis.add(msgs.get(total));
                    } else {
                        break;
                    }
                }

                // submit the split consume request
                ConsumeRequest consumeRequest = new ConsumeRequest(msgThis, processQueue, messageQueue);
                try {
                    this.consumeExecutor.submit(consumeRequest);
                } catch (RejectedExecutionException e) {
                    // if rejected, add the remaining messages to this split and resubmit it later as a delayed consume request
                    for (; total < msgs.size(); total++) {
                        msgThis.add(msgs.get(total));
                    }
                    this.submitConsumeRequestLater(consumeRequest);
                }
            }
        }
    }

In the method above, this.consumeExecutor.submit(consumeRequest) submits a ConsumeRequest, which is a Runnable. Its run() method is shown below; the call listener.consumeMessage(Collections.unmodifiableList(msgs), context) invokes our custom consumption handler, which returns either CONSUME_SUCCESS or RECONSUME_LATER.

 @Override
        public void run() {
            // do not consume from a dropped queue
            if (this.processQueue.isDropped()) {
                log.info("the message queue not be able to consume, because it's dropped. group={} {}", ConsumeMessageConcurrentlyService.this.consumerGroup,
                    this.messageQueue);
                return;
            }

            MessageListenerConcurrently listener = ConsumeMessageConcurrentlyService.this.messageListener; // the registered listener
            ConsumeConcurrentlyContext context = new ConsumeConcurrentlyContext(messageQueue); // consume context
            ConsumeConcurrentlyStatus status = null; // consume result status

            // Hook
            ConsumeMessageContext consumeMessageContext = null;
            if (ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                consumeMessageContext = new ConsumeMessageContext();
                consumeMessageContext.setConsumerGroup(defaultMQPushConsumer.getConsumerGroup());
                consumeMessageContext.setProps(new HashMap<String, String>());
                consumeMessageContext.setMq(messageQueue);
                consumeMessageContext.setMsgList(msgs);
                consumeMessageContext.setSuccess(false);
                ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookBefore(consumeMessageContext);
            }

            long beginTimestamp = System.currentTimeMillis();
            boolean hasException = false;
            ConsumeReturnType returnType = ConsumeReturnType.SUCCESS; // consume return type
            try {
                // for retry messages, restore the topic from %RETRY% + consumerGroup back to the original topic
                ConsumeMessageConcurrentlyService.this.resetRetryTopic(msgs);

                // record the consume start time
                if (msgs != null && !msgs.isEmpty()) {
                    for (MessageExt msg : msgs) {
                        MessageAccessor.setConsumeStartTimeStamp(msg, String.valueOf(System.currentTimeMillis()));
                    }
                }
                // consume the messages
                status = listener.consumeMessage(Collections.unmodifiableList(msgs), context);
            } catch (Throwable e) {
                log.warn("consumeMessage exception: {} Group: {} Msgs: {} MQ: {}",
                    RemotingHelper.exceptionSimpleDesc(e), //
                    ConsumeMessageConcurrentlyService.this.consumerGroup,
                    msgs,
                    messageQueue);
                hasException = true;
            }

            // work out the consume return type
            long consumeRT = System.currentTimeMillis() - beginTimestamp;
            if (null == status) {
                if (hasException) {
                    returnType = ConsumeReturnType.EXCEPTION;
                } else {
                    returnType = ConsumeReturnType.RETURNNULL;
                }
            } else if (consumeRT >= defaultMQPushConsumer.getConsumeTimeout() * 60 * 1000) {  // consumption took longer than the consume timeout (15 minutes by default)
                returnType = ConsumeReturnType.TIME_OUT;
            } else if (ConsumeConcurrentlyStatus.RECONSUME_LATER == status) {
                returnType = ConsumeReturnType.FAILED;
            } else if (ConsumeConcurrentlyStatus.CONSUME_SUCCESS == status) {
                returnType = ConsumeReturnType.SUCCESS;
            }

            // Hook
            if (ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                consumeMessageContext.getProps().put(MixAll.CONSUME_CONTEXT_TYPE, returnType.name());
            }

            // if the status is null (an exception was thrown || the listener returned null), treat it as reconsume-later
            if (null == status) {
                log.warn("consumeMessage return null, Group: {} Msgs: {} MQ: {}",
                    ConsumeMessageConcurrentlyService.this.consumerGroup,
                    msgs,
                    messageQueue);
                status = ConsumeConcurrentlyStatus.RECONSUME_LATER;
            }

            // Hook
            if (ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                consumeMessageContext.setStatus(status.toString());
                consumeMessageContext.setSuccess(ConsumeConcurrentlyStatus.CONSUME_SUCCESS == status);
                ConsumeMessageConcurrentlyService.this.defaultMQPushConsumerImpl.executeHookAfter(consumeMessageContext);
            }

            // statistics
            ConsumeMessageConcurrentlyService.this.getConsumerStatsManager()
                .incConsumeRT(ConsumeMessageConcurrentlyService.this.consumerGroup, messageQueue.getTopic(), consumeRT);

            // process the consume result
            if (!processQueue.isDropped()) {
                ConsumeMessageConcurrentlyService.this.processConsumeResult(status, context, this);
            } else {
                log.warn("processQueue is dropped without process consume result. messageQueue={}, msgs={}", messageQueue, msgs);
            }
        }

2 The ACK mechanism of message consumption

As shown above, the listener returns one of two results: CONSUME_SUCCESS or RECONSUME_LATER. CONSUME_SUCCESS means the messages were consumed successfully; RECONSUME_LATER means they need to be consumed again.
Let's look at the internal mechanism.
After consumption (successful or not), the run() method above always calls
ConsumeMessageConcurrentlyService.this.processConsumeResult(status, context, this) to handle the result.
From the code below we can see that successfully consumed messages are not sent back to the broker at all. For a failed batch, every message received in it is sent back to the broker, which places them into a retry queue named %RETRY% + consumerGroup and delivers them again later. If sending a message back to the broker fails, it is simply consumed again locally a little later. A minimal listener sketch built on this mechanism follows.
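
A minimal listener sketch that uses this mechanism, reusing the consumer instance from section 1; handle() is a hypothetical business method and the retry cap of 3 is an arbitrary choice:

    consumer.registerMessageListener(new MessageListenerConcurrently() {
        @Override
        public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                        ConsumeConcurrentlyContext context) {
            for (MessageExt msg : msgs) {
                try {
                    handle(msg); // hypothetical business method
                } catch (Exception e) {
                    if (msg.getReconsumeTimes() >= 3) {
                        // already redelivered several times: record it somewhere (log / DB)
                        // and fall through so the batch is not retried forever
                        continue;
                    }
                    // the whole batch is sent back to the broker and redelivered later
                    return ConsumeConcurrentlyStatus.RECONSUME_LATER;
                }
            }
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS; // nothing is sent back to the broker
        }
    });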


        /**
     * Process the consume result.
     * The result applies to the whole batch (the List<MessageExt> of one ConsumeRequest), so returning
     * a failure status fails the entire batch.
     *
     * @param status         consume result
     * @param context        consume context
     * @param consumeRequest the submitted consume request
     */
    public void processConsumeResult(
        final ConsumeConcurrentlyStatus status,
        final ConsumeConcurrentlyContext context,
        final ConsumeRequest consumeRequest
    ) {
        // initial value is Integer.MAX_VALUE (2^31 - 1)
        int ackIndex = context.getAckIndex();

        // return immediately if the batch is empty; getMsgs() holds the messages of this ConsumeRequest
        if (consumeRequest.getMsgs().isEmpty()) {
            return;
        }

        // messages consumeRequest.msgs[0] .. consumeRequest.msgs[ackIndex] are counted as consumed successfully
        switch (status) {
            case CONSUME_SUCCESS:
                if (ackIndex >= consumeRequest.getMsgs().size()) {
                    ackIndex = consumeRequest.getMsgs().size() - 1;   // ackIndex = message count - 1
                }
                // count successes/failures
                int ok = ackIndex + 1;
                int failed = consumeRequest.getMsgs().size() - ok; // failed = 0 in this branch
                this.getConsumerStatsManager().incConsumeOKTPS(consumerGroup, consumeRequest.getMessageQueue().getTopic(), ok);
                this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, consumeRequest.getMessageQueue().getTopic(), failed);
                break;
            case RECONSUME_LATER:
                ackIndex = -1;
                // count failures
                this.getConsumerStatsManager().incConsumeFailedTPS(consumerGroup, consumeRequest.getMessageQueue().getTopic(),
                    consumeRequest.getMsgs().size());
                break;
            default:
                break;
        }

        // handle the messages that failed to be consumed
        switch (this.defaultMQPushConsumer.getMessageModel()) {
            case BROADCASTING: // broadcasting mode: failed messages are not sent back to the broker, only logged
                for (int i = ackIndex + 1; i < consumeRequest.getMsgs().size(); i++) {  // log every failed message
                    MessageExt msg = consumeRequest.getMsgs().get(i);
                    log.warn("BROADCASTING, the message consume failed, drop it, {}", msg.toString());
                }
                break;
            case CLUSTERING:
                // send the failed messages back to the broker, i.e. all messages of the failed batch
                List<MessageExt> msgBackFailed = new ArrayList<>(consumeRequest.getMsgs().size());
                for (int i = ackIndex + 1; i < consumeRequest.getMsgs().size(); i++) {
                    MessageExt msg = consumeRequest.getMsgs().get(i);
                    // send the failed message back to the broker
                    boolean result = this.sendMessageBack(msg, context);
                    // if sending back fails, consume it locally again
                    if (!result) {
                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);   // reconsume count + 1
                        msgBackFailed.add(msg);
                    }
                }

                // if some failed messages could not be sent back to the broker, resubmit them as a delayed consume request, i.e. they will be consumed again on the client a little later
                // msgBackFailed holds messages that failed to consume and also failed to be sent back
                if (!msgBackFailed.isEmpty()) {
                    consumeRequest.getMsgs().removeAll(msgBackFailed);
                    this.submitConsumeRequestLater(msgBackFailed, consumeRequest.getProcessQueue(), consumeRequest.getMessageQueue());
                }
                break;
            default:
                break;
        }

        // remove the handled messages from the ProcessQueue and return the latest consume progress:
        // when the TreeMap is empty afterwards, it returns the maxOffset recorded at putMessage time (the largest offset of the latest batch);
        // when messages remain in the TreeMap, it returns the firstKey, i.e. the smallest remaining offset, since those messages may not be consumed yet
        long offset = consumeRequest.getProcessQueue().removeMessage(consumeRequest.getMsgs());
        // update the consume progress; the offset may only move forward, never backwards
        if (offset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(consumeRequest.getMessageQueue(), offset, true);
        }
    }
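
Based on the processConsumeResult logic above, the listener can also shrink ackIndex itself to acknowledge only part of a batch. The following is a hedged sketch (handle() is a hypothetical business method; verify the partial-ack behavior against your RocketMQ version before relying on it):

    // return CONSUME_SUCCESS but set ackIndex so that msgs[0..ackIndex] count as consumed
    // and msgs[ackIndex+1..] are sent back to the broker just like a failed batch
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                    ConsumeConcurrentlyContext context) {
        for (int i = 0; i < msgs.size(); i++) {
            try {
                handle(msgs.get(i)); // hypothetical business method
            } catch (Exception e) {
                context.setAckIndex(i - 1); // messages 0..i-1 succeeded; i and later will be redelivered
                return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
            }
        }
        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
    }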
    

3 The consume offset

3.1 Obtaining the consume offset

In clustering mode, the consume offset is stored on the broker. The start() method of org.apache.rocketmq.client.impl.consumer.DefaultMQPushConsumerImpl initializes the offset store; in clustering mode it is a RemoteBrokerOffsetStore:

      // create the offset store: in clustering mode offsets live on the broker, because consumers in the same group share progress; in broadcasting mode they are kept on the consumer side
                if (this.defaultMQPushConsumer.getOffsetStore() != null) {
                    this.offsetStore = this.defaultMQPushConsumer.getOffsetStore();
                } else {
                    switch (this.defaultMQPushConsumer.getMessageModel()) {
                        case BROADCASTING:
                            this.offsetStore = new LocalFileOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
                            break;
                        case CLUSTERING:
                            this.offsetStore = new RemoteBrokerOffsetStore(this.mQClientFactory, this.defaultMQPushConsumer.getConsumerGroup());
                            break;
                        default:
                            break;
                    }
                }
                this.offsetStore.load();   // in broadcasting mode this loads the local offset file

In the rebalance service, every queue newly allocated to this consumer gets its consume offset from the broker:
long nextOffset = this.computePullFromWhere(mq); // read the MessageQueue's consume offset from the broker
Where consumption starts can be configured by the user: continue from the last consumed position, start from the beginning of the queue, or start from a given point in time, as in the sketch below.
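
A small sketch of the three starting-point options (as far as I know they only take effect when the broker has no stored offset yet for this consumer group on the queue; the timestamp value is illustrative):

    // continue from where this group last stopped (the usual choice)
    consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
    // or start from the earliest available offset of the queue
    // consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
    // or start from a point in time, paired with consumeTimestamp (format yyyyMMddHHmmss)
    // consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_TIMESTAMP);
    // consumer.setConsumeTimestamp("20190108000000");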

3.2 Persisting the consume offset

In clustering mode, the consumer sends its local consume-queue offsets to the broker every 5 seconds.
org.apache.rocketmq.client.impl.factory.MQClientInstance

this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {

            @Override
            public void run() {
                try {
                    MQClientInstance.this.persistAllConsumerOffset();
                } catch (Exception e) {
                    log.error("ScheduledTask persistAllConsumerOffset exception", e);
                }
            }
        }, 1000 * 10, this.clientConfig.getPersistConsumerOffsetInterval(), TimeUnit.MILLISECONDS);
   @Override
    public void persistConsumerOffset() {
        try {
            this.makeSureStateOK();
            Set<MessageQueue> mqs = new HashSet<MessageQueue>();
            Set<MessageQueue> allocateMq = this.rebalanceImpl.getProcessQueueTable().keySet();
            mqs.addAll(allocateMq);

            this.offsetStore.persistAll(mqs);
        } catch (Exception e) {
            log.error("group: " + this.defaultMQPushConsumer.getConsumerGroup() + " persistConsumerOffset exception", e);
        }
    }

   /**
     * Persist the consume offsets of the given message queues to the broker, and drop queues not in the given set.
     *
     * @param mqs the message queues to persist
     */
    @Override
    public void persistAll(Set<MessageQueue> mqs) {
        if (null == mqs || mqs.isEmpty()) { return; }

        // persist the offsets of the given message queues
        final HashSet<MessageQueue> unusedMQ = new HashSet<>();
        if (!mqs.isEmpty()) {
            for (Map.Entry<MessageQueue, AtomicLong> entry : this.offsetTable.entrySet()) {
                MessageQueue mq = entry.getKey();
                AtomicLong offset = entry.getValue();
                if (offset != null) {
                    if (mqs.contains(mq)) {
                        try {
                            this.updateConsumeOffsetToBroker(mq, offset.get());
                            log.info("[persistAll] Group: {} ClientId: {} updateConsumeOffsetToBroker {} {}",
                                this.groupName,
                                this.mQClientFactory.getClientId(),
                                mq,
                                offset.get());
                        } catch (Exception e) {
                            log.error("updateConsumeOffsetToBroker exception, " + mq.toString(), e);
                        }
                    } else {
                        unusedMQ.add(mq);
                    }
                }
            }
        }

        // remove message queues that are no longer in use
        if (!unusedMQ.isEmpty()) {
            for (MessageQueue mq : unusedMQ) {
                this.offsetTable.remove(mq);
                log.info("remove unused mq, {}, {}", mq, this.groupName);
            }
        }
    }

So where is the local offset updated?
As shown in the previous section, after a batch is consumed its messages are removed from the ProcessQueue. If the ProcessQueue's TreeMap of cached messages ends up empty, the offset becomes the largest offset seen so far for that queue; otherwise it is the offset of the first (smallest-offset) remaining message.

 // remove the handled messages from the ProcessQueue and return the latest consume progress:
        // when the TreeMap is empty afterwards, it returns the maxOffset recorded at putMessage time (the largest offset of the latest batch);
        // when messages remain in the TreeMap, it returns the firstKey, i.e. the smallest remaining offset, since those messages may not be consumed yet
        long offset = consumeRequest.getProcessQueue().removeMessage(consumeRequest.getMsgs());
        // update the consume progress; the offset may only move forward, never backwards
        if (offset >= 0 && !consumeRequest.getProcessQueue().isDropped()) {
            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(consumeRequest.getMessageQueue(), offset, true);
        }

From the analysis above, the queue offset is only persisted every 5 seconds. If the client stops within that window, the broker still holds an offset from up to 5 seconds earlier, so when the consumer restarts and reads the offset it resumes from that older position and some messages are consumed again. Consumption is therefore at-least-once, and the business side should be made idempotent, as in the sketch below.
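
A minimal idempotency sketch (in-memory only, for illustration; a real system would record processed keys in a database or Redis, and handle() is a hypothetical business method):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    import org.apache.rocketmq.common.message.MessageExt;

    public class IdempotentHandler {
        // keys of messages that have already been handled
        private final Set<String> processed = ConcurrentHashMap.newKeySet();

        public void handleOnce(MessageExt msg) {
            String key = msg.getMsgId(); // or a business key carried in the message properties
            if (!processed.add(key)) {
                return; // duplicate delivery (e.g. a restart inside the 5s window): skip it
            }
            handle(msg); // hypothetical business method
        }

        private void handle(MessageExt msg) {
            // business logic goes here
        }
    }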
