RocketMQ: BrokerStartup Source Code Analysis

Summary

  1. The broker listens on port 10911 by default; master-slave (HA) synchronization uses port 10912 by default (service port + 1); the VIP channel listens on port 10909 (service port - 2) and handles only message sends from producers
  2. The various files are stored under the ${storePathRootDir} directory; see the file description section below for each file's content, format, and meaning

Source Code Analysis

Method Execution Order

org.apache.rocketmq.broker.BrokerStartup.main(String[])

---org.apache.rocketmq.broker.BrokerStartup.createBrokerController(String[]) wraps the startup parameters. All four configuration objects can be populated by reflection from same-named properties in a properties file, via org.apache.rocketmq.common.MixAll.properties2Object(Properties, Object).
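The reflection-based injection can be sketched as follows. This is a simplified analog of what properties2Object does, not the original implementation; the class name PropertiesInjector and the DemoConfig class are illustrative:

```java
import java.lang.reflect.Method;
import java.util.Properties;

// Simplified sketch of reflection-based property injection: for each
// property "foo=bar", find a one-argument setter "setFoo" on the target
// object and invoke it with the value converted to the parameter type.
public class PropertiesInjector {
    public static void inject(Properties props, Object target) {
        try {
            for (String name : props.stringPropertyNames()) {
                String setter = "set" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
                for (Method m : target.getClass().getMethods()) {
                    if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                        Class<?> type = m.getParameterTypes()[0];
                        String v = props.getProperty(name);
                        Object arg = v; // String parameters pass through unchanged
                        if (type == int.class) arg = Integer.parseInt(v);
                        else if (type == long.class) arg = Long.parseLong(v);
                        else if (type == boolean.class) arg = Boolean.parseBoolean(v);
                        m.invoke(target, arg);
                        break;
                    }
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Hypothetical config object standing in for BrokerConfig etc.
    public static class DemoConfig {
        private int listenPort = 8888;
        public int getListenPort() { return listenPort; }
        public void setListenPort(int p) { listenPort = p; }
    }
}
```

This is why any field of BrokerConfig, NettyServerConfig, NettyClientConfig, or MessageStoreConfig can be overridden simply by adding a same-named line to the broker's properties file.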

------ org.apache.rocketmq.broker.BrokerController.initialize() all instantiation happens here, including the Netty server and client, and the message store

---------org.apache.rocketmq.broker.topic.TopicConfigManager.load() loads the topic configuration from /data/rocketmq/store/config/topics.json

---------org.apache.rocketmq.broker.offset.ConsumerOffsetManager.load() loads the consumer offsets, i.e. how far each topic's consumers have consumed in each queue

---------org.apache.rocketmq.broker.subscription.SubscriptionGroupManager.load() loads the subscription relationships

---------org.apache.rocketmq.broker.filter.ConsumerFilterManager.load() loads the consumer filter data; all of the loads above read from the config directory

---------org.apache.rocketmq.store.DefaultMessageStore once the steps above succeed, the message store is instantiated

---------org.apache.rocketmq.store.DefaultMessageStore.load()

------------org.apache.rocketmq.store.DefaultMessageStore.isTempFileExist() checks whether the file /data/rocketmq/store/abort exists. Its presence is abnormal (the file is created at startup and deleted on shutdown; if it still exists at the next startup, the previous shutdown was not clean). If absent, the normal startup path is taken; if present, the abnormal-recovery path is taken.
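The abort-file mechanism reduces to a file-existence test. A minimal sketch, with an illustrative class name (not the actual DefaultMessageStore code):

```java
import java.io.File;
import java.io.IOException;

// Sketch of the abort-file lifecycle: the file is created on startup and
// deleted on clean shutdown. If it already exists when the broker starts,
// the previous shutdown was unclean and recovery must take the abnormal path.
public class AbortFileCheck {
    public static boolean lastExitAbnormal(String storeRoot) {
        return new File(storeRoot + File.separator + "abort").exists();
    }

    public static void markStarted(String storeRoot) {
        try {
            new File(storeRoot + File.separator + "abort").createNewFile();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void markCleanShutdown(String storeRoot) {
        new File(storeRoot + File.separator + "abort").delete();
    }
}
```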

------------org.apache.rocketmq.store.CommitLog.load() loads the message files under the commitlog directory

------------org.apache.rocketmq.store.DefaultMessageStore.loadConsumeQueue() loads the consume queue files

------------org.apache.rocketmq.store.StoreCheckpoint.StoreCheckpoint(String) loads the checkpoint file

------------org.apache.rocketmq.store.index.IndexService.load(boolean) loads the index files

------------org.apache.rocketmq.store.DefaultMessageStore.recover(boolean) recovers the commitlog; depending on whether the abort file exists, either normal recovery (abort absent) or abnormal recovery (abort present) is chosen

---------org.apache.rocketmq.remoting.netty.NettyRemotingServer.NettyRemotingServer(NettyServerConfig, ChannelEventListener) initializes the ordinary remoting server NettyRemotingServer, listening on port 10911

---------org.apache.rocketmq.remoting.netty.NettyRemotingServer.NettyRemotingServer(NettyServerConfig, ChannelEventListener) initializes the fast (VIP) remoting server NettyRemotingServer, listening on port 10909

---------org.apache.rocketmq.broker.latency.BrokerFixedThreadPoolExecutor.BrokerFixedThreadPoolExecutor(int, int, long, TimeUnit, BlockingQueue<Runnable>, ThreadFactory) initializes the fixed send-message thread pool; the default is 1 (the commented-out alternative in the source is 16 + Runtime.getRuntime().availableProcessors() * 4)

---------org.apache.rocketmq.broker.latency.BrokerFixedThreadPoolExecutor.BrokerFixedThreadPoolExecutor(int, int, long, TimeUnit, BlockingQueue<Runnable>, ThreadFactory) initializes the pull-message thread pool, 16 + Runtime.getRuntime().availableProcessors() * 2

---------org.apache.rocketmq.broker.latency.BrokerFixedThreadPoolExecutor.BrokerFixedThreadPoolExecutor(int, int, long, TimeUnit, BlockingQueue<Runnable>, ThreadFactory) initializes the query thread pool, 8 + Runtime.getRuntime().availableProcessors()

---------org.apache.rocketmq.broker.BrokerController.adminBrokerExecutor initializes the broker admin thread pool, 16 threads by default

---------org.apache.rocketmq.broker.BrokerController.clientManageExecutor initializes the client-management thread pool, 32 by default

---------org.apache.rocketmq.broker.BrokerController.heartbeatExecutor the heartbeat thread pool, Math.min(32, Runtime.getRuntime().availableProcessors())

---------org.apache.rocketmq.broker.BrokerController.endTransactionExecutor the thread pool that finalizes transactional messages, 8 + Runtime.getRuntime().availableProcessors() * 2

---------org.apache.rocketmq.broker.BrokerController.consumerManageExecutor the consumer-management thread pool, 32

---------org.apache.rocketmq.broker.BrokerController.registerProcessor() registers the processors: the services instantiated above are put into a map so that incoming requests can be dispatched promptly; key = requestCode (see RequestCode), value = the processor class that handles it
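The registration idea can be sketched as a request-code-to-processor map; dispatch then becomes a single lookup. The Processor interface and the codes used here are illustrative, not the actual RemotingCommand machinery:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of registerProcessor(): each request code maps to the processor
// that handles it, so an incoming request is dispatched by one map lookup.
public class ProcessorTable {
    public interface Processor { String process(String request); }

    private final Map<Integer, Processor> table = new HashMap<>();

    public void register(int requestCode, Processor p) { table.put(requestCode, p); }

    public String dispatch(int requestCode, String request) {
        Processor p = table.get(requestCode);
        return p == null ? "NO_PROCESSOR" : p.process(request);
    }
}
```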

---------org.apache.rocketmq.store.stats.BrokerStats.record() at a fixed interval, logs how many messages have been processed today

---------org.apache.rocketmq.common.ConfigManager.persist() at a fixed interval, persists the consumer offsets

---------org.apache.rocketmq.common.ConfigManager.persist() at a fixed interval, persists the consumer filter data

---------org.apache.rocketmq.broker.BrokerController.protectBroker() at a fixed interval, checks which groups are consuming slowly; any group that falls behind the threshold is disabled, which protects the broker

---------org.apache.rocketmq.broker.BrokerController.printWaterMark() at a fixed interval, logs information about the slow queues

---------org.apache.rocketmq.store.MessageStore.dispatchBehindBytes() at a fixed interval, logs how many bytes of the commitlog have not yet been dispatched

---------org.apache.rocketmq.broker.out.BrokerOuterAPI.fetchNameServerAddr() at a fixed interval, refreshes the name server list; if it is empty, fetches it remotely

---------org.apache.rocketmq.broker.slave.SlaveSynchronize.syncAll() master and slave synchronize at a fixed interval; if the current broker is the master, only the replication gap is logged; if it is a slave, the data is synchronized

---------org.apache.rocketmq.broker.BrokerController.initialTransaction() initializes transactional message handling

---org.apache.rocketmq.broker.BrokerStartup.start(BrokerController)

Selected Classes and Methods

BrokerConfig properties

private String rocketmqHome = System.getProperty(MixAll.ROCKETMQ_HOME_PROPERTY, System.getenv(MixAll.ROCKETMQ_HOME_ENV));
    @ImportantField
    private String namesrvAddr = System.getProperty(MixAll.NAMESRV_ADDR_PROPERTY, System.getenv(MixAll.NAMESRV_ADDR_ENV));
    @ImportantField
    private String brokerIP1 = RemotingUtil.getLocalAddress();
    private String brokerIP2 = RemotingUtil.getLocalAddress();
    @ImportantField
    private String brokerName = localHostName();
    @ImportantField
    private String brokerClusterName = "DefaultCluster";
    @ImportantField
    private long brokerId = MixAll.MASTER_ID;
    private int brokerPermission = PermName.PERM_READ | PermName.PERM_WRITE;
    private int defaultTopicQueueNums = 8;
    @ImportantField
    private boolean autoCreateTopicEnable = true;

    private boolean clusterTopicEnable = true;

    private boolean brokerTopicEnable = true;
    @ImportantField
    private boolean autoCreateSubscriptionGroup = true;
    private String messageStorePlugIn = "";

    /**
     * thread numbers for send message thread pool, since spin lock will be used by default since 4.0.x, the default
     * value is 1.
     */
    private int sendMessageThreadPoolNums = 1; //16 + Runtime.getRuntime().availableProcessors() * 4;
    private int pullMessageThreadPoolNums = 16 + Runtime.getRuntime().availableProcessors() * 2;
    private int queryMessageThreadPoolNums = 8 + Runtime.getRuntime().availableProcessors();

    private int adminBrokerThreadPoolNums = 16;
    private int clientManageThreadPoolNums = 32;
    private int consumerManageThreadPoolNums = 32;
    private int heartbeatThreadPoolNums = Math.min(32, Runtime.getRuntime().availableProcessors());

    /**
     * Thread numbers for EndTransactionProcessor
     */
    private int endTransactionThreadPoolNums = 8 + Runtime.getRuntime().availableProcessors() * 2;

    private int flushConsumerOffsetInterval = 1000 * 5;

    private int flushConsumerOffsetHistoryInterval = 1000 * 60;

    @ImportantField
    private boolean rejectTransactionMessage = false;
    @ImportantField
    private boolean fetchNamesrvAddrByAddressServer = false;
    private int sendThreadPoolQueueCapacity = 10000;
    private int pullThreadPoolQueueCapacity = 100000;
    private int queryThreadPoolQueueCapacity = 20000;
    private int clientManagerThreadPoolQueueCapacity = 1000000;
    private int consumerManagerThreadPoolQueueCapacity = 1000000;
    private int heartbeatThreadPoolQueueCapacity = 50000;
    private int endTransactionPoolQueueCapacity = 100000;

    private int filterServerNums = 0;

    private boolean longPollingEnable = true;

    private long shortPollingTimeMills = 1000;

    private boolean notifyConsumerIdsChangedEnable = true;

    private boolean highSpeedMode = false;

    private boolean commercialEnable = true;
    private int commercialTimerCount = 1;
    private int commercialTransCount = 1;
    private int commercialBigCount = 1;
    private int commercialBaseCount = 1;

    private boolean transferMsgByHeap = true;
    private int maxDelayTime = 40;

    private String regionId = MixAll.DEFAULT_TRACE_REGION_ID;
    private int registerBrokerTimeoutMills = 6000;

    private boolean slaveReadEnable = false;

    private boolean disableConsumeIfConsumerReadSlowly = false;
    private long consumerFallbehindThreshold = 1024L * 1024 * 1024 * 16;

    private boolean brokerFastFailureEnable = true;
    private long waitTimeMillsInSendQueue = 200;
    private long waitTimeMillsInPullQueue = 5 * 1000;
    private long waitTimeMillsInHeartbeatQueue = 31 * 1000;
    private long waitTimeMillsInTransactionQueue = 3 * 1000;

    private long startAcceptSendRequestTimeStamp = 0L;

    private boolean traceOn = true;

    // Switch of filter bit map calculation.
    // If switch on:
    // 1. Calculate filter bit map when construct queue.
    // 2. Filter bit map will be saved to consume queue extend file if allowed.
    private boolean enableCalcFilterBitMap = false;

    // Expect num of consumers will use filter.
    private int expectConsumerNumUseFilter = 32;

    // Error rate of bloom filter, 1~100.
    private int maxErrorRateOfBloomFilter = 20;

    //how long to clean filter data after dead.Default: 24h
    private long filterDataCleanTimeSpan = 24 * 3600 * 1000;

    // whether do filter when retry.
    private boolean filterSupportRetry = false;
    private boolean enablePropertyFilter = false;

    private boolean compressedRegister = false;

    private boolean forceRegister = true;

    /**
     * This configurable item defines the interval of topic registration from broker to name server. Allowed values are
     * between 10,000 and 60,000 milliseconds.
     */
    private int registerNameServerPeriod = 1000 * 30;

    /**
     * The minimum time before a transactional message is checked for the first time; a message can be
     * checked only once it has existed longer than this interval.
     */
    @ImportantField
    private long transactionTimeOut = 6 * 1000;

    /**
     * The maximum number of times the message was checked, if exceed this value, this message will be discarded.
     */
    @ImportantField
    private int transactionCheckMax = 15;

    /**
     * Transaction message check interval.
     */
    @ImportantField
    private long transactionCheckInterval = 60 * 1000;

NettyServerConfig

The broker and the name server use the same class; it lives in the remoting module. The properties are:

private int listenPort = 8888;// the netty server listen port is overwritten to 10911 (hard-coded); netty clients connect to the server on this port
    private int serverWorkerThreads = 8;
    private int serverCallbackExecutorThreads = 0;
    private int serverSelectorThreads = 3;
    private int serverOnewaySemaphoreValue = 256;
    private int serverAsyncSemaphoreValue = 64;
    private int serverChannelMaxIdleTimeSeconds = 120;

    private int serverSocketSndBufSize = NettySystemConfig.socketSndbufSize;
    private int serverSocketRcvBufSize = NettySystemConfig.socketRcvbufSize;
    private boolean serverPooledByteBufAllocatorEnable = true;

    /**
     * make make install
     *
     *
     * ../glibc-2.10.1/configure \ --prefix=/usr \ --with-headers=/usr/include \
     * --host=x86_64-linux-gnu \ --build=x86_64-pc-linux-gnu \ --without-gd
     */
    private boolean useEpollNativeSelector = false;

NettyClientConfig

The broker and the name server use the same class; it lives in the remoting module. The properties are:

 private int clientWorkerThreads = 4;
    private int clientCallbackExecutorThreads = Runtime.getRuntime().availableProcessors();
    private int clientOnewaySemaphoreValue = NettySystemConfig.CLIENT_ONEWAY_SEMAPHORE_VALUE;
    private int clientAsyncSemaphoreValue = NettySystemConfig.CLIENT_ASYNC_SEMAPHORE_VALUE;
    private int connectTimeoutMillis = 3000;
    private long channelNotActiveInterval = 1000 * 60;

    private int clientChannelMaxIdleTimeSeconds = 120;

    private int clientSocketSndBufSize = NettySystemConfig.socketSndbufSize;
    private int clientSocketRcvBufSize = NettySystemConfig.socketRcvbufSize;
    private boolean clientPooledByteBufAllocatorEnable = false;
    private boolean clientCloseSocketIfTimeout = false;

    private boolean useTLS;

MessageStoreConfig storage properties

Source:

@ImportantField
    private String storePathRootDir = System.getProperty("user.home") + File.separator + "store";

    //The directory in which the commitlog is kept
    @ImportantField
    private String storePathCommitLog = System.getProperty("user.home") + File.separator + "store"
        + File.separator + "commitlog";

    // CommitLog file size,default is 1G
    private int mapedFileSizeCommitLog = 1024 * 1024 * 1024;
    // ConsumeQueue file size,default is 30W
    private int mapedFileSizeConsumeQueue = 300000 * ConsumeQueue.CQ_STORE_UNIT_SIZE;
    // enable consume queue ext
    private boolean enableConsumeQueueExt = false;
    // ConsumeQueue extend file size, 48M
    private int mappedFileSizeConsumeQueueExt = 48 * 1024 * 1024;
    // Bit count of filter bit map.
    // this will be set by pipe of calculate filter bit map.
    private int bitMapLengthConsumeQueueExt = 64;

    // CommitLog flush interval
    // flush data to disk
    @ImportantField
    private int flushIntervalCommitLog = 500;

    // Only used if TransientStorePool enabled
    // flush data to FileChannel
    @ImportantField
    private int commitIntervalCommitLog = 200;

    /**
     * introduced since 4.0.x. Determine whether to use mutex reentrantLock when putting message.<br/>
     * By default it is set to false indicating using spin lock when putting message.
     */
    private boolean useReentrantLockWhenPutMessage = false;

    // Whether schedule flush,default is real-time
    @ImportantField
    private boolean flushCommitLogTimed = false;
    // ConsumeQueue flush interval
    private int flushIntervalConsumeQueue = 1000;
    // Resource reclaim interval
    private int cleanResourceInterval = 10000;
    // CommitLog removal interval
    private int deleteCommitLogFilesInterval = 100;
    // ConsumeQueue removal interval
    private int deleteConsumeQueueFilesInterval = 100;
    private int destroyMapedFileIntervalForcibly = 1000 * 120;
    private int redeleteHangedFileInterval = 1000 * 120;
    // When to delete,default is at 4 am
    @ImportantField
    private String deleteWhen = "04";
    private int diskMaxUsedSpaceRatio = 75;
    // The number of hours to keep a log file before deleting it (in hours)
    @ImportantField
    private int fileReservedTime = 72;
    // Flow control for ConsumeQueue
    private int putMsgIndexHightWater = 600000;
    // The maximum size of a single message, default is 4M
    private int maxMessageSize = 1024 * 1024 * 4;
    // Whether check the CRC32 of the records consumed.
    // This ensures no on-the-wire or on-disk corruption to the messages occurred.
    // This check adds some overhead,so it may be disabled in cases seeking extreme performance.
    private boolean checkCRCOnRecover = true;
    // How many pages are to be flushed when flush CommitLog
    private int flushCommitLogLeastPages = 4;
    // How many pages are to be committed when commit data to file
    private int commitCommitLogLeastPages = 4;
    // Flush page size when the disk in warming state
    private int flushLeastPagesWhenWarmMapedFile = 1024 / 4 * 16;
    // How many pages are to be flushed when flush ConsumeQueue
    private int flushConsumeQueueLeastPages = 2;
    private int flushCommitLogThoroughInterval = 1000 * 10;
    private int commitCommitLogThoroughInterval = 200;
    private int flushConsumeQueueThoroughInterval = 1000 * 60;
    @ImportantField
    private int maxTransferBytesOnMessageInMemory = 1024 * 256;
    @ImportantField
    private int maxTransferCountOnMessageInMemory = 32;
    @ImportantField
    private int maxTransferBytesOnMessageInDisk = 1024 * 64;
    @ImportantField
    private int maxTransferCountOnMessageInDisk = 8;
    @ImportantField
    private int accessMessageInMemoryMaxRatio = 40;// ratio of messages cached in memory; a slave caches 30%, a master 40%
    @ImportantField
    private boolean messageIndexEnable = true;
    private int maxHashSlotNum = 5000000;
    private int maxIndexNum = 5000000 * 4;
    private int maxMsgsNumBatch = 64;
    @ImportantField
    private boolean messageIndexSafe = false;
    private int haListenPort = 10912;
    private int haSendHeartbeatInterval = 1000 * 5;
    private int haHousekeepingInterval = 1000 * 20;
    private int haTransferBatchSize = 1024 * 32;
    @ImportantField
    private String haMasterAddress = null;
    private int haSlaveFallbehindMax = 1024 * 1024 * 256;
    @ImportantField
    private BrokerRole brokerRole = BrokerRole.ASYNC_MASTER;
    @ImportantField
    private FlushDiskType flushDiskType = FlushDiskType.ASYNC_FLUSH;
    private int syncFlushTimeout = 1000 * 5;
	/**
	 * Can be configured in the properties file; injected by reflection by property name. The value is
	 * parsed with levelString.split(" ") by
	 * org.apache.rocketmq.store.schedule.ScheduleMessageService.parseDelayLevel()
	 */
    private String messageDelayLevel = "1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h";
    private long flushDelayOffsetInterval = 1000 * 10;
    @ImportantField
    private boolean cleanFileForciblyEnable = true;
    private boolean warmMapedFileEnable = false;
    private boolean offsetCheckInSlave = false;
    private boolean debugLockEnable = false;
    private boolean duplicationEnable = false;
    private boolean diskFallRecorded = true;
    private long osPageCacheBusyTimeOutMills = 1000;
    private int defaultQueryMaxNum = 32;

    @ImportantField
    private boolean transientStorePoolEnable = false;
    private int transientStorePoolSize = 5;
    private boolean fastFailIfNoBufferInStorePool = false;

File Descriptions

The files are stored under the ${storePathRootDir} directory: mainly the config, commitlog, consumequeue, and index directories, plus the abort, checkpoint, and lock files.

The config directory

This directory holds the topic, subscription-relationship, and consumer-offset files.

topics.json config file

Purpose: stores each topic's read/write queue counts, permissions, ordering flag, and similar information. Location:

Config file location: ${storePathRootDir}/config/topics.json (on the test machine, /data/rocketmq/store/config/topics.json)

The .json file is read first by default; if it is empty, the .json.bak file is read instead. Corresponding class:

org.apache.rocketmq.common.TopicConfig

Content:

{
        "dataVersion":{
                "counter":53,
                "timestamp":1548383640964
        },
        "topicConfigTable":{
                "%RETRY%IMASS_RISK_LOAN_APPLY_CONSUMER":{//key为topic
                        "order":false,//是否是有序的topic
                        "perm":6,// 对应 PermName,1是可继承,2 是可写,4 是可读,8是有优先级,6(2+4)可读也可写
                        "readQueueNums":1,//读的队列数
                        "topicFilterType":"SINGLE_TAG",
                        "topicName":"%RETRY%IMASS_RISK_LOAN_APPLY_CONSUMER",// topic名称
                        "topicSysFlag":0,// 0 是单个tag,1是多个tag TopicFilterType
                        "writeQueueNums":1//写队列数
                },
				 "REPAY_STATUS_TOPIC":{
                        "order":false,
                        "perm":6,
                        "readQueueNums":16,
                        "topicFilterType":"SINGLE_TAG",
                        "topicName":"REPAY_STATUS_TOPIC",
                        "topicSysFlag":0,
                        "writeQueueNums":16
                },

……
subscriptionGroup.json config file

Purpose: stores each consumer's subscription information. Location:

Config file location: ${storePathRootDir}/config/subscriptionGroup.json (on the test machine, /data/rocketmq/store/config/subscriptionGroup.json)

Corresponding class:

org.apache.rocketmq.common.subscription.SubscriptionGroupConfig

File content:

{
        "dataVersion":{
                "counter":26,
                "timestamp":1548380448037
        },
        "subscriptionGroupTable":{
                "IMASS_RISK_LOAN_APPLY_CONSUMER":{//key为topic
                        "brokerId":0,//0表示是master,其它值表示是slave
                        "consumeBroadcastEnable":true,//是否开启广播消息
                        "consumeEnable":true,//是否允许消费
                        "consumeFromMinEnable":true,
                        "groupName":"IMASS_RISK_LOAN_APPLY_CONSUMER",
                        "notifyConsumerIdsChangedEnable":true,
                        "retryMaxTimes":16,//最大重试次数
                        "retryQueueNums":1,//重试队列的个数
                        "whichBrokerWhenConsumeSlowly":1//默认值是1,TODO
                },
				"SELF_TEST_C_GROUP":{
                        "brokerId":0,
                        "consumeBroadcastEnable":true,
                        "consumeEnable":true,
                        "consumeFromMinEnable":true,
                        "groupName":"SELF_TEST_C_GROUP",
                        "notifyConsumerIdsChangedEnable":true,
                        "retryMaxTimes":16,
                        "retryQueueNums":1,
                        "whichBrokerWhenConsumeSlowly":1
                },
				"IMASSBANK_DEMO_WEB_GROUP":{
                        "brokerId":0,
                        "consumeBroadcastEnable":true,
                        "consumeEnable":true,
                        "consumeFromMinEnable":true,
                        "groupName":"IMASSBANK_DEMO_WEB_GROUP",
                        "notifyConsumerIdsChangedEnable":true,
                        "retryMaxTimes":16,
                        "retryQueueNums":1,
                        "whichBrokerWhenConsumeSlowly":1
                },
                "REPAY_STATUS_TXGROUP":{
                        "brokerId":0,
                        "consumeBroadcastEnable":true,
                        "consumeEnable":true,
                        "consumeFromMinEnable":true,
                        "groupName":"REPAY_STATUS_TXGROUP",
                        "notifyConsumerIdsChangedEnable":true,
                        "retryMaxTimes":16,
                        "retryQueueNums":1,
                        "whichBrokerWhenConsumeSlowly":1
                },
				……
consumerOffset.json config file

Purpose: stores each consumer's consumption progress in each topic's consumequeue queues. Location:

Config file location: ${storePathRootDir}/config/consumerOffset.json (on the test machine, /data/rocketmq/store/config/consumerOffset.json)

There is no dedicated class; the data structure is:

ConcurrentMap<String/* topic@group */, ConcurrentMap<Integer, Long>> offsetTable =
        new ConcurrentHashMap<String, ConcurrentMap<Integer, Long>>(512);

File content:

{
        "offsetTable":{ // key 为 topic@group value为map(key=队列id,value=offset位置)
                "%RETRY%IMASS_RISK_QG_EXAMINE_CONSUMER@IMASS_RISK_QG_EXAMINE_CONSUMER":{0:0
                },
                "%RETRY%imassbank-risk-consumer@imassbank-risk-consumer":{0:0
                },
                "message-ext-topic@rocketmq-consume-demo-message-ext-consumer":{0:0,1:0,2:0,3:0,4:0,5:0,6:1,7:1,8:1,9:1,10:0,11:0,12:0,13:0,14:1,15:1
                },
                "spring-transaction-topic@string_trans_consumer":{0:10,1:8,2:10,3:9
                },
                "%RETRY%IMASS_RISK_SMY_CREDIT_QUOTA_CONSUMER@IMASS_RISK_SMY_CREDIT_QUOTA_CONSUMER":{0:0
                },
                "IMASS_TOPIC_RISK_QG_EXAMINE_PUSH@IMASS_RISK_QG_EXAMINE_PUSH_CONSUMER":{0:3,1:1,2:2,3:6
                },
                "IMASS_TOPIC_RISK_QG_EXAMINE@IMASS_RISK_QG_EXAMINE_CONSUMER":{0:0,1:1,2:0,3:0,4:1,5:2,6:0,7:0,8:0,9:0,10:0,11:1,12:1,13:2,14:1,15:0
                },
delayOffset.json config file

Purpose: stores the consumption progress of each consumequeue of the delayed-message topic SCHEDULE_TOPIC_XXXX. Location:

Config file location: ${storePathRootDir}/config/delayOffset.json (on the test machine, /data/rocketmq/store/config/delayOffset.json)

There is no dedicated class; the data structure is:

ConcurrentMap<Integer /* level */, Long/* offset */> offsetTable =
        new ConcurrentHashMap<Integer, Long>(32);

File content:

{
        "offsetTable":{3:525,4:525,5:525,6:525,7:525,8:525,9:525,10:524,11:523,12:523,13:523,14:523,15:523,16:523,17:523,18:1046
        }// key is the delay level, value is the offset
}

Delay level example

Delay levels = 1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h
The parsed map (key = level, value = delay in milliseconds; the values are absolute, not cumulative) is
{1=1000, 2=5000, 3=10000, 4=30000, 5=60000, 6=120000, 7=180000, 8=240000, 9=300000, 10=360000, 11=420000, 12=480000, 13=540000, 14=600000, 15=1200000, 16=1800000, 17=3600000, 18=7200000}
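The parsing above can be sketched as follows. This is a simplified analog of ScheduleMessageService.parseDelayLevel() (error handling omitted; the class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of delay-level parsing: split the level string on
// spaces, convert each "<number><unit>" token to milliseconds, and number
// the levels starting from 1.
public class DelayLevelParser {
    public static Map<Integer, Long> parse(String levelString) {
        Map<String, Long> unit = new HashMap<>();
        unit.put("s", 1000L);
        unit.put("m", 1000L * 60);
        unit.put("h", 1000L * 60 * 60);
        unit.put("d", 1000L * 60 * 60 * 24);

        Map<Integer, Long> table = new HashMap<>();
        String[] levels = levelString.split(" ");
        for (int i = 0; i < levels.length; i++) {
            String value = levels[i];
            String suffix = value.substring(value.length() - 1);
            long num = Long.parseLong(value.substring(0, value.length() - 1));
            table.put(i + 1, num * unit.get(suffix)); // levels are 1-based
        }
        return table;
    }
}
```

Running it on the default level string produces exactly the map shown above (e.g. level 5 = 1m = 60000 ms).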
consumerFilter.json config file sample

The commitlog directory

This directory holds the actual message content files.

Default file size:

Each file is 1 GB by default.

File naming:

The file name is the starting offset, left-padded with zeros to a fixed length of 20 digits. (For example, 00000000000000000000 is the first file, with starting offset 0 and file size 1G = 1073741824 bytes; when this file is full, the second file is named 00000000001073741824.)
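The naming rule is simple enough to sketch directly; the class and method names here are illustrative, not the actual MappedFileQueue code:

```java
// Sketch of the commitlog naming rule: the file name is the starting byte
// offset, zero-padded on the left to 20 digits.
public class CommitLogNaming {
    static final long FILE_SIZE = 1024L * 1024 * 1024; // 1 GB per file

    public static String fileName(long startOffset) {
        return String.format("%020d", startOffset);
    }

    // The file containing a given physical offset starts at the largest
    // multiple of FILE_SIZE not exceeding that offset.
    public static String fileNameForOffset(long physicalOffset) {
        return fileName(physicalOffset - physicalOffset % FILE_SIZE);
    }
}
```

This is why locating the file for any physical offset is O(1) arithmetic rather than a directory scan.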

File deletion logic:
  1. A message file has expired (72 hours by default) and the cleanup time of day has arrived (4 a.m. by default): expired files are deleted.
  2. A message file has expired (72 hours by default) and disk usage has reached the watermark (75% by default): expired files are deleted.
  3. When disk usage reaches the hard limit (85% watermark), files are deleted in batches regardless of expiry until enough space is free. Note: if disk usage reaches the danger watermark (90% by default), the broker rejects write requests to protect itself.
Message format inside a commitlog file

Field (size in bytes): meaning

msgSize (4): total size of this message entry
MAGICCODE (4): MAGICCODE = daa320a7
BODY CRC (4): CRC of the message body
queueId (4): queue id
flag (4): message flag
QUEUEOFFSET (8): an auto-increment counter, not the real consume queue offset; it represents the number of messages in the corresponding consumeQueue or tranStateTable queue
SYSFLAG (4): flags describing the message (transaction state and other features). Counting bits from the low end: all relevant bits 0 (value 0) means a non-transactional message; bit 1 set (value 1) means the body is compressed (Compressed); bit 2 set (value 2) means MultiTags; bit 3 set (value 4) means a prepared message; bit 4 set (value 8) means a commit message; bits 3 and 4 both set (value 12) means a rollback message
BORNTIMESTAMP (8): timestamp on the producer side
BORNHOST (8): producer address (address:port)
STORETIMESTAMP (8): time the message was stored on the broker
STOREHOSTADDRESS (8): address of the broker that stored the message (address:port)
RECONSUMETIMES (8): how many times the message has been re-consumed by a subscription group (each group counts independently); retry messages are sent to queueId=0 of the topic %RETRY%groupName, and a message consumed successfully on the first attempt records 0
PreparedTransactionOffset (8): offset of the prepared-state transaction message
messagebodyLength (4): size of the message body
messagebody (bodyLength): message body content
topic (topicLength): topic value
propertiesLength (2): size of the properties
properties (propertiesLength): the property data

The consumequeue directory

File path: ${storePathRootDir}/consumequeue/{topic}/{queueId}/{fileName}. This directory holds each topic's offsets into the commitlog files, i.e. logical positions. The first-level subdirectory name is the topic, the next level is the queue id, and the offset files sit inside. The SCHEDULE_TOPIC_XXXX directory holds the delayed-consumption topics; RMQ_SYS_TRANS_HALF_TOPIC holds transactional messages in the prepare phase; RMQ_SYS_TRANS_OP_HALF_TOPIC holds messages that have already been committed or rolled back.

File naming:

The file name is the starting offset, left-padded with zeros to a fixed length of 20 digits. Each file defaults to 6,000,000 bytes (300,000 entries of 20 bytes each). For example, 00000000000000000000 is the first file, with starting offset 0; when it is full, the second file is named 00000000000006000000, with starting offset 6000000; the third is 00000000000012000000, with starting offset 12000000, and so on. Messages are written to the files sequentially; when a file is full, writing moves on to the next one.

Entry structure

Each entry in the file has the following structure:

**commitlog physical offset (long, 8 bytes) + message size (int, 4 bytes) + tagsCode (long, 8 bytes)**
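The fixed 20-byte entry can be sketched with a ByteBuffer; a minimal sketch mirroring ConsumeQueue.CQ_STORE_UNIT_SIZE = 20 (the class name is illustrative):

```java
import java.nio.ByteBuffer;

// Sketch of one 20-byte consumequeue entry: commitlog physical offset (8) +
// message size (4) + tagsCode (8). Because entries are fixed-size, a logical
// index maps to a byte position by simple multiplication.
public class ConsumeQueueEntry {
    static final int CQ_STORE_UNIT_SIZE = 20;

    public static byte[] encode(long phyOffset, int size, long tagsCode) {
        ByteBuffer buf = ByteBuffer.allocate(CQ_STORE_UNIT_SIZE);
        buf.putLong(phyOffset);
        buf.putInt(size);
        buf.putLong(tagsCode);
        return buf.array();
    }

    // Byte position of the n-th entry (logical index) within the queue.
    public static long bytePosition(long logicalIndex) {
        return logicalIndex * CQ_STORE_UNIT_SIZE;
    }
}
```

Note that 300,000 entries × 20 bytes gives exactly the 6,000,000-byte default file size mentioned above.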

File functions
  1. Delete the logical files beyond a given offset (truncateDirtyLogicFiles)
  2. Recover the ConsumeQueue in-memory data (recover)
  3. Find the logical-queue offset whose message store time is closest to a given timestamp
  4. Get the next offset after the last message in the corresponding physical queue (getLastOffset)
  5. Flush entries to disk (commit)
  6. Write the commitlog physical offset, message size, and related fields directly into the consumequeue
  7. Get consumequeue data by message sequence index (getIndexBuffer)
  8. Correct the logical queue's minimum offset from the physical queue's minimum offset (correctMinOffset)
  9. Get the starting offset of the file following the one containing a given position (rollNextFile)

The index directory

File path: ${storePathRootDir}/index/${fileName}. The file name fileName is the creation time, e.g. 20190130114730462 (year, month, day, hour, minute, second, millisecond); the file size is fixed. The layout is as follows:

Field descriptions

Meaning of the Index Header fields:

  1. beginTimestamp: broker store timestamp of the first indexed message;
  2. endTimestamp: broker store timestamp of the last indexed message;
  3. beginPhyOffset: commitlog offset of the first indexed message;
  4. endPhyOffset: commitlog offset of the last indexed message;
  5. hashSlotCount: number of hash slots occupied by the index;
  6. indexCount: number of index entries built;

Each Slot Table entry stores the sequence number of the latest index built for that topic-key. The slot is selected by taking the topic-key hash modulo 5,000,000, and the index's sequence number is stored in that slot. The slot position is computed as absSlotPos = 40 + (keyHash % 5,000,000) * 4.

Meaning of the Index Linked List fields:

  1. keyHash: hash of topic-key (key is the message key);
  2. phyOffset: the real physical offset in the commitLog;
  3. timeOffset: time delta between the message's store time and the beginTimestamp in the Index Header;
  4. slotValue: when the Slot Table slot selected by keyHash % 5,000,000 already holds a value (i.e. a hash collision), the newest index sequence number overwrites the slot, and the previous value is written into the new index entry's slotValue, forming a linked list. The index entry position is computed as absIndexPos = 40 + 5,000,000 * 4 + indexSequence * 20.
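The position arithmetic described above can be sketched as follows; this mirrors the formulas in the text (40-byte header, 5,000,000 slots of 4 bytes, 20-byte index entries) and is not the actual IndexFile implementation:

```java
// Sketch of the index file position arithmetic: header 40 bytes, then the
// Slot Table (5,000,000 slots x 4 bytes), then the Index Linked List
// entries (keyHash 4 + phyOffset 8 + timeOffset 4 + slotValue 4 = 20 bytes).
public class IndexFilePositions {
    static final int HEADER_SIZE = 40;
    static final int HASH_SLOT_NUM = 5_000_000;
    static final int HASH_SLOT_SIZE = 4;
    static final int INDEX_SIZE = 20;

    public static int slotPos(int keyHash) {
        int slot = Math.abs(keyHash) % HASH_SLOT_NUM; // guard against negative hashes
        return HEADER_SIZE + slot * HASH_SLOT_SIZE;
    }

    public static long indexPos(int indexSequence) {
        return HEADER_SIZE + (long) HASH_SLOT_NUM * HASH_SLOT_SIZE
            + (long) indexSequence * INDEX_SIZE;
    }
}
```

So the first index entry always starts at byte 20,000,040 (40 + 5,000,000 × 4), and each subsequent entry is 20 bytes further on.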

The abort file

When the broker starts, it creates the file ${storePathRootDir}/abort, which has no content; it merely marks whether the broker shut down cleanly. On a clean shutdown the file is deleted on exit; after an unclean shutdown the file remains, and the next startup chooses between the two in-memory data recovery strategies based on whether the file exists.

The checkpoint file

The checkpoint file is stored at ${storePathRootDir}/checkpoint by default; the file name is checkpoint and the data layout is: physicMsgTimestamp (8) + logicsMsgTimestamp (8) + indexMsgTimestamp (8)

The checkpoint file is parsed and maintained by the StoreCheckpoint class. After a message is written to the commitlog, after its physical offset and size are written to the consumequeue, and after its Index entry is created, the physicMsgTimestamp, logicsMsgTimestamp, and indexMsgTimestamp fields are updated respectively; these three timestamps are used when recovering the commitlog in-memory data after an abnormal shutdown.

The lock file

A file named lock under ${storePathRootDir}; it serves as a file lock so that only one broker instance can use the store directory at a time.


Reposted from: https://my.oschina.net/liangxiao/blog/3002888
