RocketMQ Source Code Analysis: the Store Module

In this post we walk through how `RocketMQ` stores messages; this logic lives mainly in the `rocketmq-store` module.


We explore this module mainly by debugging its unit tests, focusing on the MappedFileQueue, MappedFile, CommitLog, MessageStore, ConsumeQueue and IndexFile classes. The goal is to trace the main logic these classes use to store and retrieve messages and to sketch out the overall flow.

I. MappedFile

This class wraps the file that our messages are ultimately written to.

1. Initialization

public class MappedFileTest {
    private final String storeMessage = "Once, there was a chance for me!";

    @Test
    public void testSelectMappedBuffer() throws IOException {
        MappedFile mappedFile = new MappedFile("target/unit_test_store/MappedFileTest/000", 1024 * 64);
        boolean result = mappedFile.appendMessage(storeMessage.getBytes());
        assertThat(result).isTrue();

        SelectMappedBufferResult selectMappedBufferResult = mappedFile.selectMappedBuffer(0);
        byte[] data = new byte[storeMessage.length()];
        selectMappedBufferResult.getByteBuffer().get(data);
        String readString = new String(data);

        assertThat(readString).isEqualTo(storeMessage);
        ........
    }
}

This test case creates a file named 000 with a size of 1024 * 64 bytes, appends a message to it and reads it back:

private MappedByteBuffer mappedByteBuffer;

private void init(final String fileName, final int fileSize) throws IOException {
    this.fileName = fileName;
    this.fileSize = fileSize;
    this.file = new File(fileName);
    this.fileFromOffset = Long.parseLong(this.file.getName());
    boolean ok = false;

    ensureDirOK(this.file.getParent());

    try {
        this.fileChannel = new RandomAccessFile(this.file, "rw").getChannel();
        this.mappedByteBuffer = this.fileChannel.map(MapMode.READ_WRITE, 0, fileSize);
        TOTAL_MAPPED_VIRTUAL_MEMORY.addAndGet(fileSize);
        TOTAL_MAPPED_FILES.incrementAndGet();
        ok = true;
    		..........
}

During init the file is memory-mapped into mappedByteBuffer, and subsequent writes and reads of the file go through this mappedByteBuffer (these are Java NIO classes, so we will not dig into them here). Our messages end up in this file.
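To keep the NIO part intuitive, here is a small self-contained sketch (plain JDK only, unrelated to the RocketMQ classes) that memory-maps a temporary file and writes and reads it through a MappedByteBuffer, which is roughly what MappedFile#init plus appendMessage/selectMappedBuffer do under the hood:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class MmapSketch {
    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("mapped_file_demo", ".data");
        int fileSize = 1024 * 64;
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel channel = raf.getChannel()) {
            // Map the whole file into memory, the same call MappedFile#init makes
            MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_WRITE, 0, fileSize);

            byte[] msg = "Once, there was a chance for me!".getBytes(StandardCharsets.UTF_8);
            mapped.put(msg);             // the write lands in the page cache, not yet forced to disk

            // Read it back through an independent view, similar in spirit to selectMappedBuffer(0)
            ByteBuffer view = mapped.duplicate();
            view.flip();                 // limit = bytes written so far, position = 0
            byte[] read = new byte[msg.length];
            view.get(read);
            System.out.println(new String(read, StandardCharsets.UTF_8));

            mapped.force();              // flush dirty pages to disk, like MappedFile#flush does
        }
    }
}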

2. Writing messages


There are several overloads for writing a message: appending a raw byte[], passing a MessageExt, or appending a batch.

1) appendMessage(final byte[] data)

public boolean appendMessage(final byte[] data) {
    int currentPos = this.wrotePosition.get();

    if ((currentPos + data.length) <= this.fileSize) {
        try {
            this.fileChannel.position(currentPos);
            this.fileChannel.write(ByteBuffer.wrap(data));
        } catch (Throwable e) {
            log.error("Error occurred when append message to mappedFile.", e);
        }
        this.wrotePosition.addAndGet(data.length);
        return true;
    }

    return false;
}

Writing a raw byte[] is straightforward because the data is already serialized: it is written directly with fileChannel.write(ByteBuffer.wrap(data)), and this.wrotePosition.addAndGet(data.length) accumulates how many bytes have been written so far.

2) MessageExt

This class carries the metadata of a sent message, and this information is what ultimately gets written to the file:

public class MessageExt extends Message {
    private static final long serialVersionUID = 5720810158625748049L;

    private int queueId;

    private int storeSize;

    private long queueOffset;
    private int sysFlag;
    private long bornTimestamp;
    private SocketAddress bornHost;

    private long storeTimestamp;
    private SocketAddress storeHost;
    private String msgId;
    private long commitLogOffset;
    private int bodyCRC;
    private int reconsumeTimes;

    private long preparedTransactionOffset;

    public MessageExt() {
    }
public class Message implements Serializable {
    private static final long serialVersionUID = 8445773977080406428L;

    private String topic;
    private int flag;
    private Map<String, String> properties;
    private byte[] body;
    private String transactionId;

3) appendMessagesInner(final MessageExt messageExt, …)

public AppendMessageResult appendMessagesInner(final MessageExt messageExt, final AppendMessageCallback cb) {
    assert messageExt != null;
    assert cb != null;

    int currentPos = this.wrotePosition.get();

    if (currentPos < this.fileSize) {
        ByteBuffer byteBuffer = writeBuffer != null ? writeBuffer.slice() : this.mappedByteBuffer.slice();
        byteBuffer.position(currentPos);
        AppendMessageResult result = null;
        if (messageExt instanceof MessageExtBrokerInner) {
            result = cb.doAppend(this.getFileFromOffset(), byteBuffer, this.fileSize - currentPos, (MessageExtBrokerInner) messageExt);
        } else if (messageExt instanceof MessageExtBatch) {
            result = cb.doAppend(this.getFileFromOffset(), byteBuffer, this.fileSize - currentPos, (MessageExtBatch) messageExt);
        } else {
            return new AppendMessageResult(AppendMessageStatus.UNKNOWN_ERROR);
        }
        this.wrotePosition.addAndGet(result.getWroteBytes());
        this.storeTimestamp = result.getStoreTimestamp();
        return result;
    }
    log.error("MappedFile.appendMessage return null, wrotePosition: {} fileSize: {}", currentPos, this.fileSize);
    return new AppendMessageResult(AppendMessageStatus.UNKNOWN_ERROR);
}

Here currentPos < this.fileSize first checks whether the current file still has free space; then a ByteBuffer view of the file is obtained via slice() and positioned at currentPos for the subsequent write. MessageExtBrokerInner (shown below) is the broker-side subclass of MessageExt used for the single-message case.
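The reason for the slice() is that the mapped buffer is shared: each writer takes an independent view with its own position instead of moving the original buffer's position. A small standalone sketch of this buffer behavior (a heap ByteBuffer stands in for the mapped buffer; the numbers are made up):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SliceSketch {
    public static void main(String[] args) {
        // Stand-in for the mappedByteBuffer: its own position is never advanced;
        // every writer takes a slice() and positions that slice independently.
        ByteBuffer mapped = ByteBuffer.allocate(64);
        int wrotePosition = 16;                  // pretend 16 bytes were already written

        ByteBuffer writerView = mapped.slice();  // shares the same memory, has its own position/limit
        writerView.position(wrotePosition);
        writerView.put("new message".getBytes(StandardCharsets.UTF_8));

        // The backing buffer exposes the bytes at offset 16 even though its position is still 0
        ByteBuffer readerView = mapped.slice();
        readerView.position(wrotePosition);
        byte[] readBack = new byte[11];
        readerView.get(readBack);
        System.out.println(new String(readBack, StandardCharsets.UTF_8)); // new message
        System.out.println(mapped.position());                            // 0
    }
}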

public class MessageExtBrokerInner extends MessageExt {

If it is not a batch write, the callback's doAppend serializes the single message:

public AppendMessageResult doAppend(final long fileFromOffset, final ByteBuffer byteBuffer, final int maxBlank,
    final MessageExtBrokerInner msgInner) {
    // STORETIMESTAMP + STOREHOSTADDRESS + OFFSET <br>

    // PHY OFFSET
    long wroteOffset = fileFromOffset + byteBuffer.position();

    this.resetByteBuffer(hostHolder, 8);
    String msgId = MessageDecoder.createMessageId(this.msgIdMemory, msgInner.getStoreHostBytes(hostHolder), wroteOffset);

    // Record ConsumeQueue information
    keyBuilder.setLength(0);
    keyBuilder.append(msgInner.getTopic());
    keyBuilder.append('-');
    keyBuilder.append(msgInner.getQueueId());
    String key = keyBuilder.toString();
    //private HashMap<String/* topic-queueid */, Long/* offset */> topicQueueTable
    //First look up topic+queueId in topicQueueTable to see how far this queue has been written,
    //i.e. the logical queue offset that this message will be assigned
    Long queueOffset = CommitLog.this.topicQueueTable.get(key);
    if (null == queueOffset) {
        queueOffset = 0L;
        CommitLog.this.topicQueueTable.put(key, queueOffset);
    }
		............
    final byte[] topicData = msgInner.getTopic().getBytes(MessageDecoder.CHARSET_UTF8);
    final int topicLength = topicData.length;

    final int bodyLength = msgInner.getBody() == null ? 0 : msgInner.getBody().length;

    final int msgLen = calMsgLength(bodyLength, topicLength, propertiesLength);

    // Exceeds the maximum message
    //If the message exceeds the maximum allowed size, do not write it
    if (msgLen > this.maxMessageSize) {
        return new AppendMessageResult(AppendMessageStatus.MESSAGE_SIZE_EXCEEDED);
    }
    // Determines whether there is sufficient free space
    //If the current file does not have enough free space left, return `END_OF_FILE`
    if ((msgLen + END_FILE_MIN_BLANK_LENGTH) > maxBlank) {
        	..........
        return new AppendMessageResult(AppendMessageStatus.END_OF_FILE, wroteOffset, maxBlank, msgId, msgInner.getStoreTimestamp(),
            queueOffset, CommitLog.this.defaultMessageStore.now() - beginTimeMills);
    }

    //private final ByteBuffer msgStoreItemMemory
    //Below, each field of the message is serialized into msgStoreItemMemory
    // Initialization of storage space
    this.resetByteBuffer(msgStoreItemMemory, msgLen);
    // 1 TOTALSIZE
    this.msgStoreItemMemory.putInt(msgLen);
    // 2 MAGICCODE
    this.msgStoreItemMemory.putInt(CommitLog.MESSAGE_MAGIC_CODE);
    // 3 BODYCRC
    this.msgStoreItemMemory.putInt(msgInner.getBodyCRC());
    // 4 QUEUEID
    this.msgStoreItemMemory.putInt(msgInner.getQueueId());
    // 5 FLAG
    this.msgStoreItemMemory.putInt(msgInner.getFlag());
    // 6 QUEUEOFFSET
    this.msgStoreItemMemory.putLong(queueOffset);
    // 7 PHYSICALOFFSET
    this.msgStoreItemMemory.putLong(fileFromOffset + byteBuffer.position());
    // 8 SYSFLAG
    this.msgStoreItemMemory.putInt(msgInner.getSysFlag());
    // 9 BORNTIMESTAMP
    this.msgStoreItemMemory.putLong(msgInner.getBornTimestamp());
    // 10 BORNHOST
    this.resetByteBuffer(hostHolder, 8);
    this.msgStoreItemMemory.put(msgInner.getBornHostBytes(hostHolder));
    // 11 STORETIMESTAMP
    this.msgStoreItemMemory.putLong(msgInner.getStoreTimestamp());
    // 12 STOREHOSTADDRESS
    this.resetByteBuffer(hostHolder, 8);
    this.msgStoreItemMemory.put(msgInner.getStoreHostBytes(hostHolder));
    //this.msgBatchMemory.put(msgInner.getStoreHostBytes());
    // 13 RECONSUMETIMES
    this.msgStoreItemMemory.putInt(msgInner.getReconsumeTimes());
    // 14 Prepared Transaction Offset
    this.msgStoreItemMemory.putLong(msgInner.getPreparedTransactionOffset());
    // 15 BODY
    this.msgStoreItemMemory.putInt(bodyLength);
    if (bodyLength > 0)
        this.msgStoreItemMemory.put(msgInner.getBody());
    // 16 TOPIC
    this.msgStoreItemMemory.put((byte) topicLength);
    this.msgStoreItemMemory.put(topicData);
    // 17 PROPERTIES
    this.msgStoreItemMemory.putShort((short) propertiesLength);
    if (propertiesLength > 0)
        this.msgStoreItemMemory.put(propertiesData);

    final long beginTimeMills = CommitLog.this.defaultMessageStore.now();
    // Write messages to the queue buffer
    // Copy the encoded message from msgStoreItemMemory into byteBuffer, i.e. the mapped
    // file buffer (or writeBuffer) that was passed in as a parameter
    byteBuffer.put(this.msgStoreItemMemory.array(), 0, msgLen);

    AppendMessageResult result = new AppendMessageResult(AppendMessageStatus.PUT_OK, wroteOffset, msgLen, msgId,
 		.......
    return result;
}


At this point a single message has been completely written.
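For reference, the on-disk length of one entry can be added up from the numbered fields above. Below is a back-of-the-envelope reconstruction of what calMsgLength computes for the code version shown here (treat it as a sketch; newer RocketMQ releases use larger host fields for IPv6):

public class MsgLengthSketch {
    // Field sizes taken from the putInt/putLong calls in doAppend above
    static int estimateMsgLength(int bodyLength, int topicLength, int propertiesLength) {
        return 4                    // 1 TOTALSIZE
            + 4                     // 2 MAGICCODE
            + 4                     // 3 BODYCRC
            + 4                     // 4 QUEUEID
            + 4                     // 5 FLAG
            + 8                     // 6 QUEUEOFFSET
            + 8                     // 7 PHYSICALOFFSET
            + 4                     // 8 SYSFLAG
            + 8                     // 9 BORNTIMESTAMP
            + 8                     // 10 BORNHOST (4-byte IP + 4-byte port)
            + 8                     // 11 STORETIMESTAMP
            + 8                     // 12 STOREHOSTADDRESS (4-byte IP + 4-byte port)
            + 4                     // 13 RECONSUMETIMES
            + 8                     // 14 PREPARED TRANSACTION OFFSET
            + 4 + bodyLength        // 15 BODY: 4-byte length prefix + body
            + 1 + topicLength       // 16 TOPIC: 1-byte length prefix + topic
            + 2 + propertiesLength; // 17 PROPERTIES: 2-byte length prefix + properties
    }

    public static void main(String[] args) {
        // e.g. a 32-byte body, a 6-byte topic and empty properties -> 91 + 32 + 6 = 129 bytes
        System.out.println(estimateMsgLength(32, 6, 0));
    }
}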

II. MappedFileQueue

This class manages the MappedFile instances. As we saw, each MappedFile is created with a fixed size, so once a file is full a new one must be created. Logically these files form one continuous stream, i.e. the position keeps increasing: if the first file holds offsets 0-1023, the second continues with 1024-2047, and so on. Its test illustrates this:

public class MappedFileQueue {
		.........
    private final CopyOnWriteArrayList<MappedFile> mappedFiles = new CopyOnWriteArrayList<MappedFile>();
@Test
public void testGetLastMappedFile() {
    final String fixedMsg = "0123456789abcdef";

    MappedFileQueue mappedFileQueue =
        new MappedFileQueue("target/unit_test_store/a/", 1024, null);

    for (int i = 0; i < 1024; i++) {
        MappedFile mappedFile = mappedFileQueue.getLastMappedFile(0);
        assertThat(mappedFile).isNotNull();
        assertThat(mappedFile.appendMessage(fixedMsg.getBytes())).isTrue();
    }

    mappedFileQueue.shutdown(1000);
    mappedFileQueue.destroy();
}

This creates files of size 1024 and writes content in a loop, so multiple files are guaranteed to be created.


The files are named after their starting offsets: 00000000000000000000, 00000000000000001024, and so on.


When looking a file up, the offset is used to compute an index, and this.mappedFiles.get(index) then fetches the corresponding MappedFile from the list (a small worked example follows the code below):

public MappedFile findMappedFileByOffset(final long offset, final boolean returnFirstOnNotFound) {
    try {
        MappedFile firstMappedFile = this.getFirstMappedFile();
        MappedFile lastMappedFile = this.getLastMappedFile();
        if (firstMappedFile != null && lastMappedFile != null) {
            if (offset < firstMappedFile.getFileFromOffset() || offset >= lastMappedFile.getFileFromOffset() + this.mappedFileSize) {
                ........
            } else {
                int index = (int) ((offset / this.mappedFileSize) - (firstMappedFile.getFileFromOffset() / this.mappedFileSize));
                MappedFile targetFile = null;
                try {
                    targetFile = this.mappedFiles.get(index);
                } catch (Exception ignored) {
                }
                if (targetFile != null && offset >= targetFile.getFileFromOffset()
                    && offset < targetFile.getFileFromOffset() + this.mappedFileSize) {
                    return targetFile;
                }
				..........
            }
        }
    } catch (Exception e) {
        log.error("findMappedFileByOffset Exception", e);
    }
    return null;
}
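To make the index arithmetic concrete, here is a tiny worked example with hypothetical numbers: a mappedFileSize of 1024 (as in the test above) and a first surviving file starting at offset 2048, because the two older files were already cleaned up:

public class OffsetToFileIndex {
    public static void main(String[] args) {
        long mappedFileSize = 1024;        // hypothetical file size
        long firstFileFromOffset = 2048;   // first remaining file is 00000000000000002048
        long offset = 5000;                // the physical offset we are looking for

        int index = (int) ((offset / mappedFileSize) - (firstFileFromOffset / mappedFileSize));
        // 5000 / 1024 = 4 and 2048 / 1024 = 2, so index = 2: the third file still in the list,
        // i.e. 00000000000000004096, which covers offsets [4096, 5120) and so contains 5000.
        System.out.println("index = " + index); // index = 2
    }
}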

III. CommitLog

This class contains the logic for writing and reading messages; it mainly pulls the actual bytes from MappedFile.

public class CommitLog {
    // Message's MAGIC CODE daa320a7
    public final static int MESSAGE_MAGIC_CODE = -626843481;
    private static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.STORE_LOGGER_NAME);
    // End of file empty MAGIC CODE cbd43194
    private final static int BLANK_MAGIC_CODE = -875286124;
    private final MappedFileQueue mappedFileQueue;
    private final DefaultMessageStore defaultMessageStore;
    private final FlushCommitLogService flushCommitLogService;

    //If TransientStorePool enabled, we must flush message to FileChannel at fixed periods
    private final FlushCommitLogService commitLogService;

    private final AppendMessageCallback appendMessageCallback;
    private final ThreadLocal<MessageExtBatchEncoder> batchEncoderThreadLocal;
    private HashMap<String/* topic-queueid */, Long/* offset */> topicQueueTable = new HashMap<String, Long>(1024);

1. getData(final long offset, …)

public SelectMappedBufferResult getData(final long offset, final boolean returnFirstOnNotFound) {
    int mappedFileSize = this.defaultMessageStore.getMessageStoreConfig().getMapedFileSizeCommitLog();
    MappedFile mappedFile = this.mappedFileQueue.findMappedFileByOffset(offset, returnFirstOnNotFound);
    if (mappedFile != null) {
        int pos = (int) (offset % mappedFileSize);
        SelectMappedBufferResult result = mappedFile.selectMappedBuffer(pos);
        return result;
    }

    return null;
}
public SelectMappedBufferResult selectMappedBuffer(int pos) {
    int readPosition = getReadPosition();
    if (pos < readPosition && pos >= 0) {
        if (this.hold()) {
            ByteBuffer byteBuffer = this.mappedByteBuffer.slice();
            byteBuffer.position(pos);
            int size = readPosition - pos;
            ByteBuffer byteBufferNew = byteBuffer.slice();
            byteBufferNew.limit(size);
            return new SelectMappedBufferResult(this.fileFromOffset + pos, byteBufferNew, size, this);
        }
    }

    return null;
}
public class SelectMappedBufferResult {

    private final long startOffset;
    private final ByteBuffer byteBuffer;
    private int size;
    private MappedFile mappedFile;
    public SelectMappedBufferResult(long startOffset, ByteBuffer byteBuffer, int size, MappedFile mappedFile) {
        this.startOffset = startOffset;
        this.byteBuffer = byteBuffer;
        this.size = size;
        this.mappedFile = mappedFile;
    }

Here the offset is used to fetch the corresponding content from a MappedFile.

First this.mappedFileQueue.findMappedFileByOffset works out which MappedFile the offset falls into, then a ByteBuffer positioned at that spot is returned.

2. getMessage(final long offset, final int size)

public SelectMappedBufferResult getMessage(final long offset, final int size) {
    int mappedFileSize = this.defaultMessageStore.getMessageStoreConfig().getMapedFileSizeCommitLog();
    MappedFile mappedFile = this.mappedFileQueue.findMappedFileByOffset(offset, offset == 0);
    if (mappedFile != null) {
        int pos = (int) (offset % mappedFileSize);
        return mappedFile.selectMappedBuffer(pos, size);
    }
    return null;
}

This fetches one specific message, and the size parameter says how large that message is. Again the MappedFile is located via the offset, but here only size bytes starting at offset are read, whereas getData above returns everything written after that position (int size = readPosition - pos).
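A quick illustration of the part both methods share, with hypothetical numbers and assuming the commitlog file size is the usual 1 GB returned by getMapedFileSizeCommitLog():

public class GetDataVsGetMessage {
    public static void main(String[] args) {
        long mappedFileSize = 1024L * 1024 * 1024;  // assumed 1 GB commitlog file size
        long offset = 3L * mappedFileSize + 123;    // some physical offset inside the 4th file
        int pos = (int) (offset % mappedFileSize);  // = 123, the position inside that file

        // getMessage(offset, size) reads exactly `size` bytes starting at pos, while
        // getData(offset) reads from pos up to the file's current read position
        // (size = readPosition - pos), i.e. everything written after that offset.
        System.out.println("pos = " + pos);
    }
}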

3. putMessage(MessageExtBrokerInner msg)

This method contains assorted logic. It first validates the message content (a CRC check on the body, much like the checksum used for TCP transmission), then writes the message via mappedFile.appendMessage(msg, this.appendMessageCallback), which we walked through in the MappedFile section; if the current file is already full, a new one is created.

public PutMessageResult putMessage(final MessageExtBrokerInner msg) {
    // Set the storage time
    msg.setStoreTimestamp(System.currentTimeMillis());
    // Set the message body BODY CRC (consider the most appropriate setting
    // on the client)
    //Compute a CRC32 checksum over the body, similar to the checksum of a TCP segment; not covered further here
    msg.setBodyCRC(UtilAll.crc32(msg.getBody()));
    // Back to Results
    AppendMessageResult result = null;

    StoreStatsService storeStatsService = this.defaultMessageStore.getStoreStatsService();

    String topic = msg.getTopic();
    int queueId = msg.getQueueId();
		............
    MappedFile mappedFile = this.mappedFileQueue.getLastMappedFile();

    putMessageLock.lock(); //spin or ReentrantLock ,depending on store config
    try {
        long beginLockTimestamp = this.defaultMessageStore.getSystemClock().now();
        this.beginTimeInLock = beginLockTimestamp;

        // Here settings are stored timestamp, in order to ensure an orderly
        // global
        msg.setStoreTimestamp(beginLockTimestamp);

        if (null == mappedFile || mappedFile.isFull()) {
            mappedFile = this.mappedFileQueue.getLastMappedFile(0); // Mark: NewFile may be cause noise
        }
        //If no mapped file is available, return CREATE_MAPEDFILE_FAILED
        if (null == mappedFile) {
            log.error("create mapped file1 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
            beginTimeInLock = 0;
            return new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, null);
        }
        result = mappedFile.appendMessage(msg, this.appendMessageCallback);
        switch (result.getStatus()) {
            case PUT_OK:
                break;
            case END_OF_FILE:
                unlockMappedFile = mappedFile;
                // Create a new file and re-write the message (the previous file is full)
                mappedFile = this.mappedFileQueue.getLastMappedFile(0);
                //If creation fails, return CREATE_MAPEDFILE_FAILED
                if (null == mappedFile) {
                    // XXX: warn and notify me
                    log.error("create mapped file2 error, topic: " + msg.getTopic() + " clientAddr: " + msg.getBornHostString());
                    beginTimeInLock = 0;
                    return new PutMessageResult(PutMessageStatus.CREATE_MAPEDFILE_FAILED, result);
                }
                //Creation succeeded, append the message again
                result = mappedFile.appendMessage(msg, this.appendMessageCallback);
                break;
            case MESSAGE_SIZE_EXCEEDED:
            case PROPERTIES_SIZE_EXCEEDED:
                beginTimeInLock = 0;
                return new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, result);
            case UNKNOWN_ERROR:
                beginTimeInLock = 0;
                return new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result);
            default:
                beginTimeInLock = 0;
                return new PutMessageResult(PutMessageStatus.UNKNOWN_ERROR, result);
        }
		.......
    PutMessageResult putMessageResult = new PutMessageResult(PutMessageStatus.PUT_OK, result);
    // Statistics
    storeStatsService.getSinglePutMessageTopicTimesTotal(msg.getTopic()).incrementAndGet();
    storeStatsService.getSinglePutMessageTopicSizeTotal(topic).addAndGet(result.getWroteBytes());

    handleDiskFlush(result, putMessageResult, msg);
    handleHA(result, putMessageResult, msg);

    return putMessageResult;
}

Besides writing the message itself, there are also handleDiskFlush() and handleHA(): the former handles flushing messages to disk, the latter handles high availability, i.e. master-slave replication.

1) handleDiskFlush

public void handleDiskFlush(AppendMessageResult result, PutMessageResult putMessageResult, MessageExt messageExt) {
    // Synchronization flush
    if (FlushDiskType.SYNC_FLUSH == this.defaultMessageStore.getMessageStoreConfig().getFlushDiskType()) {
        final GroupCommitService service = (GroupCommitService) this.flushCommitLogService;
        if (messageExt.isWaitStoreMsgOK()) {
            GroupCommitRequest request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes());
            service.putRequest(request);
            boolean flushOK = request.waitForFlush(this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
            if (!flushOK) {
                putMessageResult.setPutMessageStatus(PutMessageStatus.FLUSH_DISK_TIMEOUT);
            }
        } else {
            service.wakeup();
        }
    }
    // Asynchronous flush
    else {
        if (!this.defaultMessageStore.getMessageStoreConfig().isTransientStorePoolEnable()) {
            flushCommitLogService.wakeup();
        } else {
            commitLogService.wakeup();
        }
    }
}

With synchronous flush (SYNC_FLUSH), flushOK (the result of request.waitForFlush) indicates whether the flush completed within the timeout; if not, the status is set to FLUSH_DISK_TIMEOUT. With asynchronous flush, the corresponding flush service is simply woken up.
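The synchronous path is essentially a hand-off: the writer thread submits a GroupCommitRequest and blocks in waitForFlush until the flush thread has flushed up to its offset or the timeout fires. A minimal CountDownLatch-based sketch of that idea (an illustrative stand-in, not the actual GroupCommitRequest implementation, which uses its own notification mechanism):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for GroupCommitRequest: the writer thread waits,
// the flush thread counts down once data up to nextOffset has been forced to disk.
class FlushRequestSketch {
    private final long nextOffset;
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile boolean flushOK = false;

    FlushRequestSketch(long nextOffset) { this.nextOffset = nextOffset; }

    long getNextOffset() { return nextOffset; }

    // called by the flush service after the flush has covered nextOffset
    void wakeupCustomer(boolean success) {
        this.flushOK = success;
        done.countDown();
    }

    // called by the writer thread; false maps to FLUSH_DISK_TIMEOUT in handleDiskFlush
    boolean waitForFlush(long timeoutMillis) {
        try {
            return done.await(timeoutMillis, TimeUnit.MILLISECONDS) && flushOK;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}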

2) handleHA

public void handleHA(AppendMessageResult result, PutMessageResult putMessageResult, MessageExt messageExt) {
    if (BrokerRole.SYNC_MASTER == this.defaultMessageStore.getMessageStoreConfig().getBrokerRole()) {
        HAService service = this.defaultMessageStore.getHaService();
        if (messageExt.isWaitStoreMsgOK()) {
            // Determine whether to wait
            if (service.isSlaveOK(result.getWroteOffset() + result.getWroteBytes())) {
                GroupCommitRequest request = new GroupCommitRequest(result.getWroteOffset() + result.getWroteBytes());
                service.putRequest(request);
                service.getWaitNotifyObject().wakeupAll();
                boolean flushOK =
                    request.waitForFlush(this.defaultMessageStore.getMessageStoreConfig().getSyncFlushTimeout());
                if (!flushOK) {
                    log.error("do sync transfer other node, wait return, but failed, topic: " + messageExt.getTopic() + " tags: "
                        + messageExt.getTags() + " client address: " + messageExt.getBornHostNameString());
                    putMessageResult.setPutMessageStatus(PutMessageStatus.FLUSH_SLAVE_TIMEOUT);
                }
            }
            // Slave problem
            else {
                // Tell the producer, slave not available
                putMessageResult.setPutMessageStatus(PutMessageStatus.SLAVE_NOT_AVAILABLE);
            }
        }
    }

}

Two failure states can be set here. service.isSlaveOK first checks whether the slave is available and caught up; if not, SLAVE_NOT_AVAILABLE is returned. If it is available, the master waits for the slave to store the data, and if that wait times out the status is set to FLUSH_SLAVE_TIMEOUT.

IV. ConsumeQueue

Messages are stored in the commitlog files. Because messages have variable length and a single commitlog file mixes messages of all topics, another structure is needed to index them by topic and queueId. ConsumeQueue records, per Topic and QueueId, where each message sits and how long it is. (The separate index directory contains lookup indexes, e.g. by message key.) Message consumption normally uses the commitlog and the consumequeue together.


The directory hierarchy is Topic -> one directory per queue -> the queue's entry files. Unlike the commitlog, every write to these files has a fixed length of CQ_STORE_UNIT_SIZE bytes, namely two longs and one int:

public static final int CQ_STORE_UNIT_SIZE = 20;
private boolean putMessagePositionInfo(final long offset, final int size, final long tagsCode,
    final long cqOffset) {

    if (offset <= this.maxPhysicOffset) {
        return true;
    }

    this.byteBufferIndex.flip();
    this.byteBufferIndex.limit(CQ_STORE_UNIT_SIZE);
    this.byteBufferIndex.putLong(offset);
    this.byteBufferIndex.putInt(size);
    this.byteBufferIndex.putLong(tagsCode);
		..........
boolean result = this.putMessagePositionInfo(request.getCommitLogOffset(),
    request.getMsgSize(), tagsCode, request.getConsumeQueueOffset());

So each unit stores the message's offset in the commitlog (the file where message bodies live), the message's size, and its tagsCode; the cqOffset argument determines where the unit itself is placed inside the ConsumeQueue. To consume a message, the consumer therefore first reads the commitlog offset and size from the ConsumeQueue, and then uses that offset to fetch the message content from the commitlog.
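A minimal sketch of how one such 20-byte unit is located and decoded during a lookup, assuming the offset/size/tagsCode layout written by putMessagePositionInfo above (the method name and buffer parameter here are illustrative):

import java.nio.ByteBuffer;

public class ConsumeQueueUnitSketch {
    static final int CQ_STORE_UNIT_SIZE = 20;

    // consumeQueueBuffer stands in for the mapped ConsumeQueue content,
    // consumeQueueOffset is the logical index of the message within this queue
    static long[] readOneUnit(ByteBuffer consumeQueueBuffer, long consumeQueueOffset) {
        // Because every unit is exactly 20 bytes, the Nth unit starts at N * 20
        consumeQueueBuffer.position((int) (consumeQueueOffset * CQ_STORE_UNIT_SIZE));

        long commitLogOffset = consumeQueueBuffer.getLong(); // where the message sits in the commitlog
        int size = consumeQueueBuffer.getInt();              // how many bytes that message occupies
        long tagsCode = consumeQueueBuffer.getLong();        // tag hash, used for server-side filtering

        // With (commitLogOffset, size) in hand, commitLog.getMessage(commitLogOffset, size)
        // returns exactly this message's bytes.
        return new long[] {commitLogOffset, size, tagsCode};
    }
}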

V. DefaultMessageStore

The classes above are the building blocks of message handling, each covering a different part. Message operations, such as storing a message, normally go through DefaultMessageStore, which stitches the complete message-handling logic together.

1. putMessage(MessageExtBrokerInner msg)

This is the message-writing entry point:

public PutMessageResult putMessage(MessageExtBrokerInner msg) {
   		.....
    //Reject the write if the `Topic` name exceeds the length limit
    if (msg.getTopic().length() > Byte.MAX_VALUE) {
        log.warn("putMessage message topic length too long " + msg.getTopic().length());
        return new PutMessageResult(PutMessageStatus.MESSAGE_ILLEGAL, null);
    }

    if (msg.getPropertiesString() != null && msg.getPropertiesString().length() > Short.MAX_VALUE) {
        log.warn("putMessage message properties length too long " + msg.getPropertiesString().length());
        return new PutMessageResult(PutMessageStatus.PROPERTIES_SIZE_EXCEEDED, null);
    }

    if (this.isOSPageCacheBusy()) {
        return new PutMessageResult(PutMessageStatus.OS_PAGECACHE_BUSY, null);
    }

    long beginTime = this.getSystemClock().now();
    PutMessageResult result = this.commitLog.putMessage(msg);
		.........
    return result;
}

Here we can see that the actual write is delegated to CommitLog via this.commitLog.putMessage(msg).

2. getMessage

This is how messages are retrieved.

1) Simple demo

private final String StoreMessage = "Once, there was a chance for me!";
@Test
    public void testWriteAndRead() throws UnsupportedEncodingException {
        long totalMsgs = 10;
        QUEUE_TOTAL = 1;
        MessageBody = StoreMessage.getBytes();
        for (long i = 0; i < totalMsgs; i++) {
            messageStore.putMessage(buildMessage());
        }

        for (long i = 0; i < totalMsgs; i++) {
            GetMessageResult result = messageStore.getMessage("GROUP_A", "FooBar", 0, i, 1024 * 1024, null);
            for (int j = 0; j < result.getMessageBufferList().size(); j++) {
                byte[] bytes = new byte[result.getMessageBufferList().get(0).limit()];
                ByteBuffer byteBuffer1 = result.getMessageBufferList().get(j).get(bytes);
                byteBuffer1.flip();
//                int lengthA = byteBuffer1.getInt();
//                byteBuffer1.getInt();
                // 3 BODYCRC
//                 byteBuffer1.getInt();
                // 4 QUEUEID
//                int queueId = byteBuffer1.getInt();
                // 5 FLAG
//                int flagA = byteBuffer1.getInt();
                MessageExt decode = MessageDecoder.decode(byteBuffer1);
                System.out.println("msgStr");
            }
            assertThat(result).isNotNull();
            result.release();
        }
        verifyThatMasterIsFunctional(totalMsgs, messageStore);
    }
private MessageExtBrokerInner buildMessage() {
    MessageExtBrokerInner msg = new MessageExtBrokerInner();
    msg.setTopic("FooBar");
    msg.setTags("TAG1");
    msg.setKeys("Hello");
    msg.setBody(MessageBody);
    msg.setKeys(String.valueOf(System.currentTimeMillis()));
    msg.setQueueId(Math.abs(QueueId.getAndIncrement()) % QUEUE_TOTAL);
    msg.setSysFlag(0);
    msg.setBornTimestamp(System.currentTimeMillis());
    msg.setStoreHost(StoreHost);
    msg.setBornHost(BornHost);
    return msg;
}

This demo simply puts and then gets messages; after fetching them we decode the bytes back into MessageExt objects.


2) Logic analysis

public GetMessageResult getMessage(final String group, final String topic, final int queueId, final long offset,
    final int maxMsgNums,
    final MessageFilter messageFilter) {
    long beginTime = this.getSystemClock().now();

    GetMessageStatus status = GetMessageStatus.NO_MESSAGE_IN_QUEUE;
    long nextBeginOffset = offset;
    long minOffset = 0;
    long maxOffset = 0;

    GetMessageResult getResult = new GetMessageResult();

    final long maxOffsetPy = this.commitLog.getMaxOffset();

    ConsumeQueue consumeQueue = findConsumeQueue(topic, queueId);
    if (consumeQueue != null) {
        minOffset = consumeQueue.getMinOffsetInQueue();
        maxOffset = consumeQueue.getMaxOffsetInQueue();
        	.......
        } else {
            SelectMappedBufferResult bufferConsumeQueue = consumeQueue.getIndexBuffer(offset);
            if (bufferConsumeQueue != null) {
                try {
                    status = GetMessageStatus.NO_MATCHED_MESSAGE;

                    long nextPhyFileStartOffset = Long.MIN_VALUE;
                    long maxPhyOffsetPulling = 0;

                    int i = 0;
                    final int maxFilterMessageCount = Math.max(16000, maxMsgNums * ConsumeQueue.CQ_STORE_UNIT_SIZE);
                    final boolean diskFallRecorded = this.messageStoreConfig.isDiskFallRecorded();
                    ConsumeQueueExt.CqExtUnit cqExtUnit = new ConsumeQueueExt.CqExtUnit();
                    for (; i < bufferConsumeQueue.getSize() && i < maxFilterMessageCount; i += ConsumeQueue.CQ_STORE_UNIT_SIZE) {
                        long offsetPy = bufferConsumeQueue.getByteBuffer().getLong();
                        int sizePy = bufferConsumeQueue.getByteBuffer().getInt();
                        long tagsCode = bufferConsumeQueue.getByteBuffer().getLong();

                        maxPhyOffsetPulling = offsetPy;
						...........

                        SelectMappedBufferResult selectResult = this.commitLog.getMessage(offsetPy, sizePy);
                     		.............
                        this.storeStatsService.getGetMessageTransferedMsgCount().incrementAndGet();
                        getResult.addMessage(selectResult);
                        status = GetMessageStatus.FOUND;
                        nextPhyFileStartOffset = Long.MIN_VALUE;
                    }
						.........
    return getResult;
}

The first step is to locate the ConsumeQueue:

ConsumeQueue consumeQueue = findConsumeQueue(topic, queueId);
public ConsumeQueue findConsumeQueue(String topic, int queueId) {
    ConcurrentMap<Integer, ConsumeQueue> map = consumeQueueTable.get(topic);
    if (null == map) {
        ConcurrentMap<Integer, ConsumeQueue> newMap = new ConcurrentHashMap<Integer, ConsumeQueue>(128);
        ConcurrentMap<Integer, ConsumeQueue> oldMap = consumeQueueTable.putIfAbsent(topic, newMap);
        if (oldMap != null) {
            map = oldMap;
        } else {
            map = newMap;
        }
    }

    ConsumeQueue logic = map.get(queueId);
    if (null == logic) {
        ConsumeQueue newLogic = new ConsumeQueue(
            topic,
            queueId,
            StorePathConfigHelper.getStorePathConsumeQueue(this.messageStoreConfig.getStorePathRootDir()),
            this.getMessageStoreConfig().getMapedFileSizeConsumeQueue(),
            this);
        ConsumeQueue oldLogic = map.putIfAbsent(queueId, newLogic);
        if (oldLogic != null) {
            logic = oldLogic;
        } else {
            logic = newLogic;
        }
    }

    return logic;
}

Then consumeQueue.getIndexBuffer(offset) yields the message's commitlog offset, size and tagsCode, and with those values this.commitLog.getMessage(offsetPy, sizePy) fetches the message from the commitlog (which in turn locates the right MappedFile again). Each result is added to the response via getResult.addMessage(selectResult).

public void addMessage(final SelectMappedBufferResult mapedBuffer) {
    this.messageMapedList.add(mapedBuffer);
    this.messageBufferList.add(mapedBuffer.getByteBuffer());
    this.bufferTotalSize += mapedBuffer.getSize();
    this.msgCount4Commercial += (int) Math.ceil(
        mapedBuffer.getSize() / BrokerStatsManager.SIZE_PER_COUNT);
}

3) Message decoding (decode)

public static MessageExt decode(
    java.nio.ByteBuffer byteBuffer, final boolean readBody, final boolean deCompressBody, final boolean isClient) {
    try {
        MessageExt msgExt;
        if (isClient) {
            msgExt = new MessageClientExt();
        } else {
            msgExt = new MessageExt();
        }
        // 1 TOTALSIZE
        int storeSize = byteBuffer.getInt();
        msgExt.setStoreSize(storeSize);
        // 2 MAGICCODE
        byteBuffer.getInt();

        // 3 BODYCRC
        int bodyCRC = byteBuffer.getInt();
        msgExt.setBodyCRC(bodyCRC);

        // 4 QUEUEID
        int queueId = byteBuffer.getInt();
        msgExt.setQueueId(queueId);

        // 5 FLAG
        int flag = byteBuffer.getInt();
        msgExt.setFlag(flag);
		..........
        // 15 BODY
        int bodyLen = byteBuffer.getInt();
        if (bodyLen > 0) {
            if (readBody) {
                byte[] body = new byte[bodyLen];
                byteBuffer.get(body);

                // uncompress body
                if (deCompressBody && (sysFlag & MessageSysFlag.COMPRESSED_FLAG) == MessageSysFlag.COMPRESSED_FLAG) {
                    body = UtilAll.uncompress(body);
                }

                msgExt.setBody(body);
            } else {
                byteBuffer.position(byteBuffer.position() + bodyLen);
            }
        }

        // 16 TOPIC
        byte topicLen = byteBuffer.get();
        byte[] topic = new byte[(int) topicLen];
        byteBuffer.get(topic);
        msgExt.setTopic(new String(topic, CHARSET_UTF8));
		............
        return msgExt;
    } catch (Exception e) {
        byteBuffer.position(byteBuffer.limit());
    }

    return null;
}

The decode logic simply reads the fields back from the byteBuffer in the same order they were written in doAppend and uses them to fill a MessageExt.
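As a tiny self-contained illustration of that write/read symmetry, here is the 1-byte-length-prefixed TOPIC field written and then read back the same way doAppend and decode handle it:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LengthPrefixSketch {
    public static void main(String[] args) {
        // "16 TOPIC" is written as a 1-byte length followed by the UTF-8 bytes...
        ByteBuffer buf = ByteBuffer.allocate(64);
        byte[] topicData = "FooBar".getBytes(StandardCharsets.UTF_8);
        buf.put((byte) topicData.length);
        buf.put(topicData);
        buf.flip();

        // ...so decode reads the length first and then exactly that many bytes
        byte topicLen = buf.get();
        byte[] topic = new byte[topicLen];
        buf.get(topic);
        System.out.println(new String(topic, StandardCharsets.UTF_8)); // FooBar
    }
}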
