RocketMQ Source Code Study, Part 7: How the Broker Handles Messages

This article walks through how the RocketMQ broker handles messages sent by producers, focusing on the source code of the SendMessageProcessor class and covering both synchronous and asynchronous disk flushing. Synchronous flushing uses GroupCommitService, while asynchronous flushing involves FlushRealTimeService and CommitRealTimeService. It also covers the roles of ConsumeQueue and IndexFile in message storage.

The previous article covered how the producer sends messages; this time we'll look at how the broker handles the messages it receives from producers.

Entry Point

On the sending side there is this piece of code:

```java
        if (isReply) {
            if (sendSmartMsg) {
                SendMessageRequestHeaderV2 requestHeaderV2 = SendMessageRequestHeaderV2.createSendMessageRequestHeaderV2(requestHeader);
                request = RemotingCommand.createRequestCommand(RequestCode.SEND_REPLY_MESSAGE_V2, requestHeaderV2);
            } else {
                request = RemotingCommand.createRequestCommand(RequestCode.SEND_REPLY_MESSAGE, requestHeader);
            }
        } else {
            if (sendSmartMsg || msg instanceof MessageBatch) {
                SendMessageRequestHeaderV2 requestHeaderV2 = SendMessageRequestHeaderV2.createSendMessageRequestHeaderV2(requestHeader);
                request = RemotingCommand.createRequestCommand(msg instanceof MessageBatch ? RequestCode.SEND_BATCH_MESSAGE : RequestCode.SEND_MESSAGE_V2, requestHeaderV2);
            } else {
                request = RemotingCommand.createRequestCommand(RequestCode.SEND_MESSAGE, requestHeader);
            }
        }
```
Since this article is about ordinary messages, we only need the last branch. RequestCode.SEND_MESSAGE = 10, so on the broker side request code 10 is routed to the corresponding processor. Let's look at where the broker registers its processors.

```java
        SendMessageProcessor sendProcessor = new SendMessageProcessor(this);
        sendProcessor.registerSendMessageHook(sendMessageHookList);
        sendProcessor.registerConsumeMessageHook(consumeMessageHookList);

        /** {@link NettyRemotingServer#registerProcessor(int, NettyRequestProcessor, ExecutorService)} */
        //Register a different processor for each request code
        //The first two handle messages sent by producers to the broker
        this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE, sendProcessor, this.sendMessageExecutor);
        this.remotingServer.registerProcessor(RequestCode.SEND_MESSAGE_V2, sendProcessor, this.sendMessageExecutor);
```

We can see that code 10 maps to the SendMessageProcessor class. Below we'll focus on how this class processes messages.

SendMessageProcessor Source Code

    @Override
    public RemotingCommand processRequest(ChannelHandlerContext ctx,
                                          RemotingCommand request) throws RemotingCommandException {
        RemotingCommand response = null;
        try {
            //The synchronous path simply calls get() on the asynchronous API
            response = asyncProcessRequest(ctx, request).get();
        } catch (InterruptedException | ExecutionException e) {
            log.error("process SendMessage error, request : " + request.toString(), e);
        }
        return response;
    }

As you can see, the synchronous handler just calls get() on the asynchronous handler's future, turning it into a blocking call.

    /** Asynchronous handling */
    public CompletableFuture<RemotingCommand> asyncProcessRequest(ChannelHandlerContext ctx,
                                                                  RemotingCommand request) throws RemotingCommandException {
        final SendMessageContext mqtraceContext;
        switch (request.getCode()) {
            case RequestCode.CONSUMER_SEND_MSG_BACK:
                return this.asyncConsumerSendMsgBack(ctx, request);
            default: //Handle messages sent by producers
                //Parse and build the request header
                SendMessageRequestHeader requestHeader = parseRequestHeader(request);
                if (requestHeader == null) {
                    return CompletableFuture.completedFuture(null);
                }
                mqtraceContext = buildMsgContext(ctx, requestHeader);
                this.executeSendMessageHookBefore(ctx, request, mqtraceContext);
                if (requestHeader.isBatch()) {
                    //Batch messages
                    return this.asyncSendBatchMessage(ctx, request, mqtraceContext, requestHeader);
                } else {
                    return this.asyncSendMessage(ctx, request, mqtraceContext, requestHeader);
                }
        }
    }

This code has two branches: the first handles the consumer send-back (CONSUMER_SEND_MSG_BACK) logic, and the second handles messages sent by producers, split into a batch path and a single-message path. Batch and single messages share the same underlying logic, so here we focus on the single-message path.

    private CompletableFuture<RemotingCommand> asyncSendMessage(ChannelHandlerContext ctx, RemotingCommand request,
                                                                SendMessageContext mqtraceContext,
                                                                SendMessageRequestHeader requestHeader) {
        //Initialize the response object
        final RemotingCommand response = preSend(ctx, request, requestHeader);
        final SendMessageResponseHeader responseHeader = (SendMessageResponseHeader)response.readCustomHeader();

        //The broker cannot accept requests right now; return immediately
        if (response.getCode() != -1) {
            return CompletableFuture.completedFuture(response);
        }

        final byte[] body = request.getBody();

        //Get the queue id
        int queueIdInt = requestHeader.getQueueId();
        //Get the topic configuration
        TopicConfig topicConfig = this.brokerController.getTopicConfigManager().selectTopicConfig(requestHeader.getTopic());

        if (queueIdInt < 0) {
            queueIdInt = randomQueueId(topicConfig.getWriteQueueNums());
        }

        MessageExtBrokerInner msgInner = new MessageExtBrokerInner();
        msgInner.setTopic(requestHeader.getTopic());
        msgInner.setQueueId(queueIdInt);

        //Handle retry messages and dead-letter-queue (DLQ) messages
        if (!handleRetryAndDLQ(requestHeader, response, request, msgInner, topicConfig)) {
            return CompletableFuture.completedFuture(response);
        }

        msgInner.setBody(body);
        msgInner.setFlag(requestHeader.getFlag());
        MessageAccessor.setProperties(msgInner, MessageDecoder.string2messageProperties(requestHeader.getProperties()));
        msgInner.setPropertiesString(requestHeader.getProperties());
        msgInner.setBornTimestamp(requestHeader.getBornTimestamp());
        msgInner.setBornHost(ctx.channel().remoteAddress());
        msgInner.setStoreHost(this.getStoreHost());
        msgInner.setReconsumeTimes(requestHeader.getReconsumeTimes() == null ? 0 : requestHeader.getReconsumeTimes());
        String clusterName = this.brokerController.getBrokerConfig().getBrokerClusterName();
        MessageAccessor.putProperty(msgInner, MessageConst.PROPERTY_CLUSTER, clusterName);
        msgInner.setPropertiesString(MessageDecoder.messageProperties2String(msgInner.getProperties()));

        CompletableFuture<PutMessageResult> putMessageResult = null;
        //Convert the property string into a map
        Map<String, String> origProps = MessageDecoder.string2messageProperties(requestHeader.getProperties());
        //Is this a transactional message?
        String transFlag = origProps.get(MessageConst.PROPERTY_TRANSACTION_PREPARED);
        if (transFlag != null && Boolean.parseBoolean(transFlag)) {
            if (this.brokerController.getBrokerConfig().isRejectTransactionMessage()) {
                response.setCode(ResponseCode.NO_PERMISSION);
                response.setRemark(
                        "the broker[" + this.brokerController.getBrokerConfig().getBrokerIP1()
                                + "] sending transaction message is forbidden");
                return CompletableFuture.completedFuture(response);
            }
            //Handle the transactional (prepare) message
            putMessageResult = this.brokerController.getTransactionalMessageService().asyncPrepareMessage(msgInner);
        } else {
            //Handle an ordinary message
            putMessageResult = this.brokerController.getMessageStore().asyncPutMessage(msgInner);
        }
        return handlePutMessageResultFuture(putMessageResult, response, request, msgInner, responseHeader, mqtraceContext, ctx, queueIdInt);
    }

This code does the following:

  1. Check the broker's current state to decide whether the request can be handled; if not, return immediately.
  2. Handle retry and dead-letter-queue messages, validating them when the message comes from a retry topic (a rough sketch of this step follows the list).
  3. Check whether the message is transactional; transactional messages are validated and prepared separately.
  4. Assemble the internal message object and call DefaultMessageStore.asyncPutMessage() to store it.
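
Step 2 deserves a quick illustration. The following is only a simplified sketch of the idea behind handleRetryAndDLQ(), not the broker's exact source: if the message comes from a retry topic and has already been redelivered more times than the consumer group allows, it is rerouted to that group's dead-letter topic.

```java
    //Simplified sketch (not the exact RocketMQ source) of the retry/DLQ decision.
    //The maxReconsumeTimes value is hard-coded here for illustration; the real code
    //reads it from the SubscriptionGroupConfig or the request header.
    private boolean handleRetryAndDLQSketch(SendMessageRequestHeader requestHeader, MessageExtBrokerInner msgInner) {
        String topic = requestHeader.getTopic();
        //Only retry topics (%RETRY%<consumerGroup>) need this treatment
        if (!topic.startsWith(MixAll.RETRY_GROUP_TOPIC_PREFIX)) {
            return true;
        }
        String groupName = topic.substring(MixAll.RETRY_GROUP_TOPIC_PREFIX.length());
        int maxReconsumeTimes = 16; //illustrative default
        int reconsumeTimes = requestHeader.getReconsumeTimes() == null ? 0 : requestHeader.getReconsumeTimes();
        if (reconsumeTimes >= maxReconsumeTimes) {
            //Too many failed deliveries: redirect the message to the dead-letter topic %DLQ%<consumerGroup>
            msgInner.setTopic(MixAll.getDLQTopic(groupName));
            msgInner.setQueueId(0); //DLQ topics only have a small, fixed number of queues
        }
        return true; //false would mean the message is rejected outright
    }
```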

The code above shows that transactional messages receive special handling; in this article we only study how ordinary messages are processed.

    //Store the message
    @Override
    public CompletableFuture<PutMessageResult> asyncPutMessage(MessageExtBrokerInner msg) {
        //Check the current state of DefaultMessageStore
        PutMessageStatus checkStoreStatus = this.checkStoreStatus();
        if (checkStoreStatus != PutMessageStatus.PUT_OK) {
            return CompletableFuture.completedFuture(new PutMessageResult(checkStoreStatus, null));
        }

        //Validate the message content
        PutMessageStatus msgCheckStatus = this.checkMessage(msg);
        if (msgCheckStatus == PutMessageStatus.MESSAGE_ILLEGAL) {
            return CompletableFuture.completedFuture(new PutMessageResult(msgCheckStatus, null));
        }

        long beginTime = this.getSystemClock().now();
        //Store the message in the CommitLog
        CompletableFuture<PutMessageResult> putResultFuture = this.commitLog.asyncPutMessage(msg);

        putResultFuture.thenAccept((result) -> {
            long elapsedTime = this.getSystemClock().now() - beginTime;
            //Storing the message took too long
            if (elapsedTime > 500) {
                log.warn("putMessage not in lock elapsed time(ms)={}, bodyLength={}", elapsedTime, msg.getBody().length);
            }
            this.storeStatsService.setPutMessageEntireTimeMax(elapsedTime);

            //If storing failed, increment the failure counter
            if (null == result || !result.isOk()) {
                this.storeStatsService.getPutMessageFailedTimes().incrementAndGet();
            }
        });

        return putResultFuture;
    }

  1. Check the state of DefaultMessageStore to decide whether it can handle the message.
  2. Validate the message: the topic name length and the properties length must be within limits (a rough sketch of this check follows the list).
  3. If storing fails, the failure counter is incremented; if it takes too long, a warning is logged.
  4. The actual storage is delegated to the asyncPutMessage() method of the CommitLog class.
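
For reference, the message check in step 2 is very small. The sketch below reflects roughly what DefaultMessageStore.checkMessage() looks like in the 4.x code line; exact limits may vary by version.

```java
    private PutMessageStatus checkMessage(MessageExtBrokerInner msg) {
        //Topic names are stored with a single length byte, so they must fit in Byte.MAX_VALUE (127)
        if (msg.getTopic().length() > Byte.MAX_VALUE) {
            log.warn("putMessage message topic length too long " + msg.getTopic().length());
            return PutMessageStatus.MESSAGE_ILLEGAL;
        }
        //The serialized properties string is length-prefixed with a short
        if (msg.getPropertiesString() != null && msg.getPropertiesString().length() > Short.MAX_VALUE) {
            log.warn("putMessage message properties length too long " + msg.getPropertiesString().length());
            return PutMessageStatus.MESSAGE_ILLEGAL;
        }
        return PutMessageStatus.PUT_OK;
    }
```

The overall message size limit (maxMessageSize, 4MB by default) is enforced later, inside the append callback, which is why the append step can still return MESSAGE_SIZE_EXCEEDED.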

Next, let's look at the asyncPutMessage() method in the CommitLog class.

    //Store the message into the CommitLog
    public CompletableFuture<PutMessageResult> asyncPutMessage(final MessageExtBrokerInner msg) {
        // Set the storage time
        msg.setStoreTimestamp(System.currentTimeMillis());
        // Set the message body BODY CRC (consider the most appropriate setting
        // on the client)
        //Calculate the CRC of the message body
        msg.setBodyCRC(UtilAll.crc32(msg.getBody()));
        // Back to Results
        AppendMessageResult result = null;

        StoreStatsService storeStatsService = this.defaultMessageStore.getStoreStatsService();

        String topic = msg.getTopic();
        int queueId = msg.getQueueId();

        final int tranType = MessageSysFlag.getTransactionValue(msg.getSysFlag());
        if (tranType == MessageSysFlag.TRANSACTION_NOT_TYPE
                || tranType == MessageSysFlag.TRANSACTION_COMMIT_TYPE) {
            // Delay Delivery: handle delayed messages
            if (msg.getDelayTimeLevel() > 0) {
                if (msg.getDelayTimeLevel() > this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel()) {
                    msg.setDelayTimeLevel(this.defaultMessageStore.getScheduleMessageService().getMaxDelayLevel());
                }

                topic = TopicValidator.RMQ_SYS_SCHEDULE_TOPIC;
                queueId = ScheduleMessageService.delayLevel2QueueId(msg.getDelayTimeLevel());

                // Backup real topic, queueId
                MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_TOPIC, msg.getTopic());
                MessageAccessor.putProperty(msg, MessageConst.PROPERTY_REAL_QUEUE_ID, String.valueOf(msg.getQueueId()));
                msg.setPropertiesString(MessageDecoder.messageProperties2String(msg.getProperties()));

                msg.setTopic(topic);
                msg.setQueueId(queueId);
            }
        }

        long elapsedTimeInLock = 0;
        MappedFile unlockMappedFile = null;
        //Get the last MappedFile
        MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile();

        //Acquire the put-message lock: reentrant lock or CAS spin lock, depending on the store config
        putMessageLock.lock(); //spin or ReentrantLock ,depending on store config
        try {
            long beginLockTimestamp = this.defaultMessageStore.getSystemClock().now();
            this.beginTimeInLock = beginLockTimestamp;

            // Here settings are stored timestamp, in order to ensure an orderly
            // global
            msg.setStoreTimestamp(beginLockTimestamp);

            if (null == mappedFile || mappedFile.isFull()) {
                mappedFile = this.mappedFileQueue.getLastMappedFile(0); // Mark: NewFile may be cause noise
            }
            if (null == mappedFile) {
                log.error("create mapped file1 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
                beginTimeInLock = 0;
                return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, null));
            }

            //Append the message to the end of the MappedFile
            result = mappedFile.appendMessage(msg, this.appendMessageCallback);
            switch (result.getStatus()) {
                case PUT_OK:
                    break;
                case END_OF_FILE:  //Not enough space left in this file; create a new one and retry the write
                    unlockMappedFile = mappedFile;
                    // Create a new file, re-write the message
                    mappedFile = this.mappedFileQueue.getLastMappedFile(0);
                    if (null == mappedFile) {
                        // XXX: warn and notify me
                        log.error("create mapped file2 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
                        beginTimeInLock = 0;
                        return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, result));
                    }
                    result = mappedFile.appendMessage(msg, this.appendMessageCallback);
                    break;
                case MESSAGE_SIZE_EXCEEDED:
                case PROPERTIES_SIZE_EXCEEDED:
                    beginTimeInLock = 0;
                    return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, result));
                case UNKNOWN_ERROR:
                    beginTimeInLock = 0;
                    return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result));
                default:
                    beginTimeInLock = 0;
                    return CompletableFuture.completedFuture(new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result));
            }

            elapsedTimeInLock = this.defaultMessageStore.getSystemClock().now() - beginLockTimestamp;
            beginTimeInLock = 0;
        } finally {
            putMessageLock.unlock();
        }

        if (elapsedTimeInLock > 500) {
            log.warn("[NOTIFYME]putMessage in lock cost time(ms)={}, bodyLength={} AppendMessageResult={}", elapsedTimeInLock, msg.getBody().length, result);
        }

        if (null != unlockMappedFile && this.defaultMessageStore.getMessageStoreConfig().isWarmMapedFileEnable()) {
            this.defaultMessageStore.unlockMappedFile(unlockMappedFile);
        }

        PutMessageResult putMessageResult = new PutMessageResult(PutMessageStatus.PUT_OK, result);

        // Statistics
        storeStatsService.getSinglePutMessageTopicTimesTotal(msg.getTopic()).incrementAndGet();
        storeStatsService.getSinglePutMessageTopicSizeTotal(topic).addAndGet(result.getWroteBytes());

        // Flush the message from memory to disk
        CompletableFuture<PutMessageStatus> flushResultFuture = submitFlushRequest(result, msg);
        // Use HAService to replicate the message to the slave
        CompletableFuture<PutMessageStatus> replicaResultFuture = submitReplicaRequest(result, msg);
        // Combine the results of the two async tasks
        return flushResultFuture.thenCombine(replicaResultFuture, (flushStatus, replicaStatus) -> {
            if (flushStatus != PutMessageStatus.PUT_OK) {
                putMessageResult.setPutMessageStatus(flushStatus);
            }
            if (replicaStatus != PutMessageStatus.PUT_OK) {
                putMessageResult.setPutMessageStatus(replicaStatus);
                if (replicaStatus == PutMessageStatus.FLUSH_SLAVE_TIMEOUT) {
                    log.error("do sync transfer other node, wait return, but failed, topic: {} tags: {} client address: {}",
                            msg.getTopic(), msg.getTags(), msg.getBornHostNameString());
                }
            }
            return putMessageResult;
        });
    }

  1. Delayed messages get special handling here: their real topic and queue id are backed up as properties and the message is rerouted to the system schedule topic.
  2. The lock has two implementations, a reentrant lock and a CAS spin lock, selected by configuration (a rough sketch of the spin lock follows this list).
  3. The message is first appended to the tail of a MappedFile, i.e. into memory; it is not written straight to disk.
  4. While the message is written, per-topic statistics are also updated.
  5. Flushing to disk is handled by a separate service, doing synchronous or asynchronous flushing depending on configuration.
  6. Master-slave replication is also delegated to a separate service, again driven by configuration.
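
Point 2 is worth a small illustration. The CAS option is essentially a spin lock built on an AtomicBoolean; the sketch below captures the idea (RocketMQ's own class is PutMessageSpinLock, and the real code may differ in detail).

```java
import java.util.concurrent.atomic.AtomicBoolean;

//Sketch of a CAS-based put-message spin lock, in the spirit of RocketMQ's PutMessageSpinLock.
public class PutMessageSpinLockSketch {
    //true means the lock is currently free
    private final AtomicBoolean putMessageSpinLock = new AtomicBoolean(true);

    public void lock() {
        //Spin until this thread wins the CAS from "free" to "held"
        boolean acquired;
        do {
            acquired = this.putMessageSpinLock.compareAndSet(true, false);
        } while (!acquired);
    }

    public void unlock() {
        //Release the lock so another appending thread can win the CAS
        this.putMessageSpinLock.compareAndSet(false, true);
    }
}
```

Spinning avoids park/unpark overhead when the critical section (a single in-memory append) is very short; a reentrant lock tends to behave better under heavy contention or slower appends, which is why the choice is exposed through useReentrantLockWhenPutMessage.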

Let's study the disk-flushing logic here; master-slave replication will be covered in a later article.

Flushing Data to Disk

    public CompletableFuture<PutMessageStatus> submitFlushRequest(AppendMessageResult result, MessageExt messageExt) {
        // Synchronization flush (synchronous flush)
        if (FlushDiskType.SYNC_FLUSH == this.defaultMessageStore.getMessageStoreConfig().getFlushDiskType()) {
            final GroupCommitService service = (GroupCommitService) this.flushCommitLogService;
            if (messageExt.isWaitStoreMsgOK()) {
                //Wait for the flush: add a request for this data to the write queue
                GroupCommitRequest request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes(),
                        this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
                service.putRequest(request);
                return request.future();
            } else {
                //Just wake up the flush thread and return success immediately
                service.wakeup();
                return CompletableFuture.completedFuture(PutMessageStatus.PUT_OK);
            }
        }
        // Asynchronous flush
        else {
            if (!this.defaultMessageStore.getMessageStoreConfig().isTransientStorePoolEnable()) {
                /** {@link FlushRealTimeService#wakeup()} */
                flushCommitLogService.wakeup();
            } else  {
                /** {@link CommitRealTimeService#wakeup()} */
                commitLogService.wakeup();
            }
            return CompletableFuture.completedFuture(PutMessageStatus.PUT_OK);
        }
    }

There are three places in this code that trigger flushing, and they do not all run the same logic. To see which implementation each one hits, look at the CommitLog constructor:

    public CommitLog(final DefaultMessageStore defaultMessageStore) {
        this.mappedFileQueue = new MappedFileQueue(defaultMessageStore.getMessageStoreConfig().getStorePathCommitLog(),
            defaultMessageStore.getMessageStoreConfig().getMappedFileSizeCommitLog(), defaultMessageStore.getAllocateMappedFileService());
        this.defaultMessageStore = defaultMessageStore;

        if (FlushDiskType.SYNC_FLUSH == defaultMessageStore.getMessageStoreConfig().getFlushDiskType()) {
            this.flushCommitLogService = new GroupCommitService();
        } else {
            this.flushCommitLogService = new FlushRealTimeService();
        }

        this.commitLogService = new CommitRealTimeService();

        this.appendMessageCallback = new DefaultAppendMessageCallback(defaultMessageStore.getMessageStoreConfig().getMaxMessageSize());
        batchEncoderThreadLocal = new ThreadLocal<MessageExtBatchEncoder>() {
            @Override
            protected MessageExtBatchEncoder initialValue() {
                return new MessageExtBatchEncoder(defaultMessageStore.getMessageStoreConfig().getMaxMessageSize());
            }
        };
        this.putMessageLock = defaultMessageStore.getMessageStoreConfig().isUseReentrantLockWhenPutMessage() ? new PutMessageReentrantLock() : new PutMessageSpinLock();

    }

Here we can see that flushCommitLogService is assigned either a GroupCommitService (synchronous flush) or a FlushRealTimeService (asynchronous flush), while commitLogService always gets a CommitRealTimeService. With that, we know exactly which class each wakeup() call above ends up in.

Synchronous Flush

        if (FlushDiskType.SYNC_FLUSH == this.defaultMessageStore.getMessageStoreConfig().getFlushDiskType()) {
            final GroupCommitService service = (GroupCommitService) this.flushCommitLogService;
            if (messageExt.isWaitStoreMsgOK()) {
                //Wait for the flush: add a request for this data to the write queue
                GroupCommitRequest request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes(),
                        this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
                service.putRequest(request);
                return request.future();
            } else {
                //Just wake up the flush thread and return success immediately
                service.wakeup();
                return CompletableFuture.completedFuture(PutMessageStatus.PUT_OK);
            }
        }

Synchronous flushing first checks whether the producer wants to wait for the flush to complete; if so, a request describing the current data is added to the write queue. The work is done by the GroupCommitService class, which has two fields:

    class GroupCommitService extends FlushCommitLogService {
        private volatile List<GroupCommitRequest> requestsWrite = new ArrayList<GroupCommitRequest>();
        private volatile List<GroupCommitRequest> requestsRead = new ArrayList<GroupCommitRequest>();

When flushing, the requests accumulated in the write queue are swapped into the read queue and then processed, so producer threads appending new requests and the flush thread draining old ones never touch the same list and can run in parallel.
If the producer does not wait for the flush, the GroupCommitService thread is simply woken up. GroupCommitService ultimately extends the common service thread class ServiceThread, so let's look at its run() method.

        public void run() {
            CommitLog.log.info(this.getServiceName() + " service started");

            while (!this.isStopped()) {
                try {
                    //Every 10ms (or when woken up), try to flush to disk
                    this.waitForRunning(10);
                    //Do the actual flushing
                    this.doCommit();
                } catch (Exception e) {
                    CommitLog.log.warn(this.getServiceName() + " service has exception. ", e);
                }
            }

            // Under normal circumstances shutdown, wait for the arrival of the
            // request, and then flush
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                CommitLog.log.warn("GroupCommitService Exception, ", e);
            }

            //Lock and swap the read and write queues one last time
            synchronized (this) {
                this.swapRequests();
            }

            this.doCommit();

            CommitLog.log.info(this.getServiceName() + " service end");
        }

The core of this logic is the doCommit() method; the rest is just the outer loop: every 10ms (or whenever it is woken up) the thread swaps the read and write queues and tries to flush. A sketch of how requests enter the queues and how the swap works is shown below.
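
To make the queue handling concrete, here is a simplified sketch of the pieces involved: how a GroupCommitRequest carries a future back to the producer thread, how putRequest() appends to the write list, and how swapRequests() exchanges the two lists. It follows the shape of the RocketMQ code but trims fields and error handling (it assumes java.util.List/ArrayList and java.util.concurrent.CompletableFuture are imported).

```java
    //Simplified sketch of the sync-flush request and queue-swap machinery (details trimmed).
    public static class GroupCommitRequest {
        private final long nextOffset; //the flush is done once flushedWhere reaches this offset
        private final CompletableFuture<PutMessageStatus> flushOKFuture = new CompletableFuture<>();

        public GroupCommitRequest(long nextOffset) {
            this.nextOffset = nextOffset;
        }

        public long getNextOffset() {
            return nextOffset;
        }

        public CompletableFuture<PutMessageStatus> future() {
            return flushOKFuture;
        }

        //Called by the flush thread in doCommit(): unblocks the producer waiting on future()
        public void wakeupCustomer(PutMessageStatus status) {
            this.flushOKFuture.complete(status);
        }
    }

    private volatile List<GroupCommitRequest> requestsWrite = new ArrayList<>();
    private volatile List<GroupCommitRequest> requestsRead = new ArrayList<>();

    //Producer threads append to the write list (and, in the real service, wake the flush thread)
    public synchronized void putRequest(final GroupCommitRequest request) {
        this.requestsWrite.add(request);
    }

    //The flush thread swaps the lists: producers keep appending to a fresh write list
    //while doCommit() drains the (now) read list, so the two sides never contend
    private void swapRequests() {
        List<GroupCommitRequest> tmp = this.requestsWrite;
        this.requestsWrite = this.requestsRead;
        this.requestsRead = tmp;
    }
```

After the swap, doCommit() (below) walks requestsRead, flushes, and completes each request's future via wakeupCustomer(); that future is exactly what request.future() in submitFlushRequest() hands back to the producer.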

        private void doCommit() {
            synchronized (this.requestsRead) {
                //The read queue is not empty
                if (!this.requestsRead.isEmpty()) {
                    for (GroupCommitRequest req : this.requestsRead) {
                        // There may be a message in the next file, so a maximum of
                        // two times the flush
                        boolean flushOK = CommitLog.this.mappedFileQueue.getFlushedWhere() >= req.getNextOffset();
                        //Not flushed far enough yet: try at most two times
                        for (int i = 0; i < 2 && !flushOK; i++) {
                            // ☆☆☆ Flush the in-memory data to disk
                            CommitLog.this.mappedFileQueue.flush(0);
                            flushOK = CommitLog.this.mappedFileQueue.getFlushedWhere() >= req.getNextOffset();
                        }

                        req.wakeupCustomer(flushOK ? PutMessageStatus.PUT_OK : PutMessageStatus.FLUSH_DISK_TIMEOUT);
                    }

                    long storeTimestamp = CommitLog.this.mappedFileQueue.getStoreTimestamp();
                    if (storeTimestamp > 0) {
                        CommitLog.this.defaultMessageStore.getStoreCheckpoint().setPhysicMsgTimestamp(storeTimestamp);
                    }

                    this.requestsRead.clear();
                } else {
                    // Because of individual messages is set to not sync flush, it
                    // will come to this process
                    CommitLog.this.mappedFileQueue.flush(0);
                }
            }
        }

This is the core of persisting data to disk; the lower layer is MappedFile and MappedFileQueue, which you can explore in the source yourself. A rough sketch of what the flush ultimately does follows.
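
At the bottom of that call chain, mappedFileQueue.flush(0) finds the MappedFile containing the current flush position and calls its flush() method. The sketch below is a simplified approximation of MappedFile.flush(), not the exact source: it forces the not-yet-flushed bytes out of the page cache and advances the flushed position.

```java
    //Simplified sketch of MappedFile.flush(); the real code adds reference counting and more checks.
    public int flush(final int flushLeastPages) {
        int writePos = this.wrotePosition.get();   //how far messages have been appended
        int flushPos = this.flushedPosition.get(); //how far data has already been flushed

        //Only flush once enough dirty pages have accumulated (flushLeastPages == 0 forces a flush)
        boolean enoughData = flushLeastPages > 0
                ? (writePos - flushPos) / OS_PAGE_SIZE >= flushLeastPages
                : writePos > flushPos;

        if (enoughData) {
            try {
                if (this.writeBuffer != null) {
                    //Transient-store-pool path: data was committed into the fileChannel, force it to disk
                    this.fileChannel.force(false);
                } else {
                    //Plain mmap path: force the mapped byte buffer to disk
                    this.mappedByteBuffer.force();
                }
            } catch (Throwable e) {
                log.error("Error occurred when force data to disk.", e);
            }
            this.flushedPosition.set(writePos);
        }
        return this.flushedPosition.get();
    }
```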

Asynchronous Flush (Transient Store Pool Disabled)

Asynchronous flushing covers two cases. In the first, the flush type configured in defaultMessageStore is asynchronous, but transientStorePoolEnable is false (or the node has become a slave, in which case the transient store pool is not used either).

        // Asynchronous flush
        else {
            if (!this.defaultMessageStore.getMessageStoreConfig().isTransientStorePoolEnable()) {
                /** {@link FlushRealTimeService#wakeup()} */
                flushCommitLogService.wakeup();
            } else  {
                /** {@link CommitRealTimeService#wakeup()} */
                commitLogService.wakeup();
            }
            return CompletableFuture.completedFuture(PutMessageStatus.PUT_OK);
        }

The first branch in the snippet handles this case: it calls FlushRealTimeService#wakeup(). FlushRealTimeService also extends ServiceThread, so next let's look at FlushRealTimeService#run().

        public void run() {
            CommitLog.log.info(this.getServiceName() + " service started");

            //Again, a loop
            while (!this.isStopped()) {
                //Controls how the thread waits between iterations below
                boolean flushCommitLogTimed = CommitLog.this.defaultMessageStore.getMessageStoreConfig().isFlushCommitLogTimed();

                int interval = CommitLog.this.defaultMessageStore.getMessageStoreConfig().getFlushIntervalCommitLog();
                int flushPhysicQueueLeastPages = CommitLog.this.defaultMessageStore.getMessageStoreConfig().getFlushCommitLogLeastPages();

                int flushPhysicQueueThoroughInterval =
                    CommitLog.this.defaultMessageStore.getMessageStoreConfig().getFlushCommitLogThoroughInterval();

                //Whether to print flush progress this round
                boolean printFlushProgress = false;

                // Print flush progress
                long currentTimeMillis = System.currentTimeMillis();
                if (currentTimeMillis >= (this.lastFlushTimestamp + flushPhysicQueueThoroughInterval)) {
                    this.lastFlushTimestamp = currentTimeMillis;
                    flushPhysicQueueLeastPages = 0;
                    printFlushProgress = (printTimes++ % 10) == 0;
                }

                try {
                    //Wait for the flush interval (500ms by default) before running again
                    if (flushCommitLogTimed) {
                        Thread.sleep(interval);
                    } else {
                        this.waitForRunning(interval);
                    }

                    if (printFlushProgress) {
                        this.printFlushProgress();
                    }

                    long begin = System.currentTimeMillis();
                    //Flush to disk
                    CommitLog.this.mappedFileQueue.flush(flushPhysicQueueLeastPages);
                    //Record the store timestamp of the flushed data for the checkpoint
                    long storeTimestamp = CommitLog.this.mappedFileQueue.getStoreTimestamp();
                    if (storeTimestamp > 0) {
                        CommitLog.this.defaultMessageStore.getStoreCheckpoint().setPhysicMsgTimestamp(storeTimestamp);
                    }
                    long past = System.currentTimeMillis() - begin;
                    if (past > 500) {
                        log.info("Flush data to disk costs {} ms", past);
                    }
                } catch (Throwable e) {
                    CommitLog.log.warn(this.getServiceName() + " service has exception. ", e);
                    this.printFlushProgress();
                }
            }

            // Normal shutdown, to ensure that all the flush before exit
            //On shutdown, make sure all in-memory data is flushed; retry up to 10 times
            boolean result = false;
            for (int i = 0; i < RETRY_TIMES_OVER && !result; i++) {
                result = CommitLog.this.mappedFileQueue.flush(0);
                CommitLog.log.info(this.getServiceName() + " service shutdown, retry " + (i + 1) + " times " + (result ? "OK" : "Not OK"));
            }

            this.printFlushProgress();

            CommitLog.log.info(this.getServiceName() + " service end");
        }

  1. This logic runs in a loop; the default interval is 500ms.
  2. The actual flushing is done by MappedFileQueue.flush(), which ends up in MappedFile.flush().
  3. Along the way it prints progress logs and tracks how long each flush took.
  4. To avoid losing in-memory data on shutdown, the final flush is retried up to 10 times.

Asynchronous Flush (Transient Store Pool Enabled)

The second async case, with the transient store pool enabled, goes through the run() method of the CommitRealTimeService class, which also extends the common thread class ServiceThread.

        /**
         * Every 200ms ({@link MessageStoreConfig#commitIntervalCommitLog}), copy messages from the
         * transient store pool's DirectByteBuffer into the FileChannel (OS page cache)
         * Woken up by {@link CommitLog#submitFlushRequest(AppendMessageResult, MessageExt)}
         * Woken up by {@link CommitLog#handleDiskFlush(AppendMessageResult, PutMessageResult, MessageExt)}
         * Started by {@link CommitLog#start()}
         */
        @Override
        public void run() {
            CommitLog.log.info(this.getServiceName() + " service started");
            while (!this.isStopped()) {
                //Interval between commit attempts
                int interval = CommitLog.this.defaultMessageStore.getMessageStoreConfig().getCommitIntervalCommitLog();

                //Minimum number of dirty pages required before committing
                int commitDataLeastPages = CommitLog.this.defaultMessageStore.getMessageStoreConfig().getCommitCommitLogLeastPages();

                int commitDataThoroughInterval =
                    CommitLog.this.defaultMessageStore.getMessageStoreConfig().getCommitCommitLogThoroughInterval();

                long begin = System.currentTimeMillis();
                if (begin >= (this.lastCommitTimestamp + commitDataThoroughInterval)) {
                    this.lastCommitTimestamp = begin;
                    commitDataLeastPages = 0;
                }

                try {
                    boolean result = CommitLog.this.mappedFileQueue.commit(commitDataLeastPages);
                    long end = System.currentTimeMillis();
                    if (!result) {
                        this.lastCommitTimestamp = end; // result = false means some data committed.
                        //now wake up flush thread.
                        /**
                         * {@link FlushRealTimeService#run()}: every 500ms, flush the data in the page cache to disk
                         */
                        flushCommitLogService.wakeup();
                    }

                    if (end - begin > 500) {
                        log.info("Commit data to file costs {} ms", end - begin);
                    }
                    //Wait for the commit interval (200ms by default) before the next attempt
                    this.waitForRunning(interval);
                } catch (Throwable e) {
                    CommitLog.log.error(this.getServiceName() + " service has exception. ", e);
                }
            }

            boolean result = false;
            for (int i = 0; i < RETRY_TIMES_OVER && !result; i++) {
                result = CommitLog.this.mappedFileQueue.commit(0);
                CommitLog.log.info(this.getServiceName() + " service shutdown, retry " + (i + 1) + " times " + (result ? "OK" : "Not OK"));
            }
            CommitLog.log.info(this.getServiceName() + " service end");
        }

The logic above tries a commit roughly every 200ms, and here the method called is mappedFileQueue.commit(); the other services call mappedFileQueue.flush(). The two operations are different.

In RocketMQ, MappedFile is the memory-mapped file used to store messages, and flush and commit play different roles:

  1. commit: only meaningful when the transient store pool is enabled. In that mode messages are first appended to a writeBuffer borrowed from the pool (an off-heap DirectByteBuffer). commit() copies the newly written bytes from that writeBuffer into the FileChannel, i.e. into the OS page cache. After committing, CommitRealTimeService wakes the flush service (the flushCommitLogService.wakeup() call above) to do the real flush.

  2. flush: forces the data sitting in the page cache onto the disk, via FileChannel.force() or MappedByteBuffer.force(), so it survives a crash or power loss. This is what GroupCommitService and FlushRealTimeService call.

In short, commit() moves data from the transient write buffer into the page cache, and flush() pushes the page cache onto the physical disk. A rough sketch of the commit step is shown below.
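
With the transient store pool enabled, the commit step might look roughly like the sketch below (simplified from MappedFile; the real method tracks positions with AtomicIntegers and handles more edge cases): it copies the newly appended region of the off-heap writeBuffer into the FileChannel, i.e. into the OS page cache.

```java
    //Simplified sketch of MappedFile's commit step with the transient store pool enabled.
    protected void commit0() {
        int writePos = this.wrotePosition.get();
        int lastCommittedPosition = this.committedPosition.get();

        if (writePos > lastCommittedPosition) {
            try {
                //Slice only the region that has been written but not yet committed
                ByteBuffer byteBuffer = this.writeBuffer.slice();
                byteBuffer.position(lastCommittedPosition);
                byteBuffer.limit(writePos);
                //Write it into the file channel at the same offset: the data is now in the OS page cache
                this.fileChannel.position(lastCommittedPosition);
                this.fileChannel.write(byteBuffer);
                this.committedPosition.set(writePos);
            } catch (Throwable e) {
                log.error("Error occurred when commit data to FileChannel.", e);
            }
        }
    }
```

flush() (sketched in the synchronous-flush section above) then takes what commit put into the page cache and forces it onto disk, which is why CommitRealTimeService wakes up flushCommitLogService after committing.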

ConsumeQueue and IndexFile

All of the logic above belongs to the CommitLog module. The corresponding ConsumeQueue and IndexFile are built by a service started during broker startup: the startup logic calls ReputMessageService.start(), another thread that extends ServiceThread. Let's look at its run() method:

        @Override
        public void run() {
            DefaultMessageStore.log.info(this.getServiceName() + " service started");

            //Loop with a 1ms interval
            while (!this.isStopped()) {
                try {
                    Thread.sleep(1);
                    this.doReput();
                } catch (Exception e) {
                    DefaultMessageStore.log.warn(this.getServiceName() + " service has exception. ", e);
                }
            }

            DefaultMessageStore.log.info(this.getServiceName() + " service end");
        }

Every 1ms it calls this.doReput(), whose core dispatching entry point is:

                                    /**
                                     * Dispatch the request to the ConsumeQueue and IndexFile builders
                                     * @see CommitLogDispatcherBuildConsumeQueue#dispatch(DispatchRequest)
                                     * @see CommitLogDispatcherBuildIndex#dispatch(DispatchRequest)
                                     */
                                    DefaultMessageStore.this.doDispatch(dispatchRequest);

This method uses the CommitLog data to build the ConsumeQueue and IndexFile. Roughly speaking, doDispatch() just hands the request to every registered dispatcher, as sketched below.
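
doDispatch() itself is tiny: the message store keeps a list of CommitLogDispatcher implementations and forwards every DispatchRequest to all of them. A rough sketch, consistent with how the 4.x code is organized:

```java
    //Sketch of the dispatch fan-out inside DefaultMessageStore (simplified).
    private final List<CommitLogDispatcher> dispatcherList = new LinkedList<>();

    public void doDispatch(DispatchRequest req) {
        //Every dispatcher sees the same request:
        // - CommitLogDispatcherBuildConsumeQueue appends a 20-byte entry (commitlog offset, size, tag hash) to the ConsumeQueue
        // - CommitLogDispatcherBuildIndex adds key -> commitlog-offset entries to the IndexFile
        for (CommitLogDispatcher dispatcher : this.dispatcherList) {
            dispatcher.dispatch(req);
        }
    }
```

So the ConsumeQueue and IndexFile are never written on the producer's request path; they are rebuilt asynchronously from the CommitLog by this reput thread, which is also what makes them recoverable from the CommitLog after a crash.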

Summary

  1. Storing a message that takes more than 500ms is considered slow, but no error is raised; only a warn log is printed.
  2. Appending to the CommitLog is done under a lock, with a choice between a reentrant lock and a CAS spin lock.
  3. Messages are not written to disk immediately; they are first appended to the tail of a MappedFile in memory.
  4. Whether flushing is synchronous or asynchronous depends on configuration, and the flushing work itself is split into flush and commit.
  5. Master-slave replication is done by calling the HAService.

Previous: RocketMQ Source Code Study, Part 6: Message Sending on the Producer Side
Next: RocketMQ Source Code Study, Part 8: Master-Slave Synchronization of Messages on the Broker
