This topic is about making messages be consumed in the order they were produced. The classic scenario is e-commerce: an order must be created before it can be paid or the payment cancelled, and only after that can it be confirmed as received or refunded. The flow has to be first-in, first-out. So how do we make messages be consumed strictly in order?
Let's build a small demo around that scenario.
Sending and Consuming Ordered Messages
Producer:
1. Add the configuration:

```properties
# Use synchronous sending (the default is asynchronous)
spring.cloud.stream.rocketmq.bindings.output.producer.sync=true
```

2. Write a mock payment endpoint:
```java
@RestController
public class HelloController {

    @Autowired
    private Source source;

    @GetMapping("/orderly")
    public String orderly() {
        // Simulate an order's life cycle
        List<String> list = Arrays.asList("create order", "pay order", "refund order");
        for (String s : list) {
            // Set a header marking this as an ordered message,
            // pinning it to message queue 0
            MessageBuilder builder = MessageBuilder.withPayload(s)
                    .setHeader(BinderHeaders.PARTITION_HEADER, 0);
            Message message = builder.build();
            source.output().send(message);
        }
        return "rocketmq orderly: OK!";
    }
}
```
3. Start the service and call the endpoint.
If you get errors or the endpoint seems stale, try running mvn clean first; IDEA is suspected of caching build artifacts.
Consumer:
1. Add the configuration:

```properties
# Consume in order (the default is concurrent consumption)
spring.cloud.stream.rocketmq.bindings.input.consumer.orderly=true
```

2. The startup class stays unchanged.
3. Run the service and watch the console log output.
As the figure shows, the messages were consumed in the order defined by the List.
The Underlying Principle of Ordered Sending
RocketMQ's ordered messages come in two flavors, partially ordered and globally ordered; the payment example above is partially ordered.
- Partially ordered: messages sent to the same queue are ordered. You pick a queue when sending and consume that queue in order. Think of WeChat chats or order payments: your chat with A never bleeds into your chat history with B, and the order ID you pay never affects other orders. Your chats with A and B, or your orders for shoes and for clothes, can proceed in parallel without interfering with each other.
- Globally ordered: give the Topic a single queue. This must be set manually when the Topic is created (it is normally configured automatically), and is generally not recommended.
There are also three ways to send a message: synchronous, asynchronous, and one-way.
- Synchronous: after issuing the network request, the sender blocks until the Broker returns a result. It supports retry on failure and fits the vast majority of scenarios.
- Asynchronous: the network request is issued asynchronously and does not block the current thread, but it does not support retry on failure. Suitable for scenarios with demanding response-time requirements.
- One-way: works much like asynchronous sending, but there is no callback at all. Best for scenarios with very short response times and modest reliability requirements, such as log collection.
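The ordering consequence of asynchronous sending can be shown with a toy simulation in plain Java, with no RocketMQ involved. The class name `AsyncOrderDemo` and the latency numbers are made up for illustration: each asynchronous "send" races across a network with its own latency, so the broker may append messages in a different order than they were sent.

```java
import java.util.List;
import java.util.concurrent.*;

// Toy simulation of why asynchronous sending can break ordering:
// each "send" travels over a network with a different latency, so the
// broker may append messages in a different order than they were sent.
class AsyncOrderDemo {
    static List<String> sendAsync(List<String> messages, long[] latenciesMs) throws Exception {
        List<String> brokerLog = new CopyOnWriteArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(messages.size());
        CountDownLatch done = new CountDownLatch(messages.size());
        for (int i = 0; i < messages.size(); i++) {
            final int idx = i;
            pool.submit(() -> {
                try {
                    Thread.sleep(latenciesMs[idx]); // simulated network delay
                    brokerLog.add(messages.get(idx)); // broker appends on arrival
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return brokerLog;
    }

    public static void main(String[] args) throws Exception {
        // "create" is sent first but hits 200 ms of latency; "pay" overtakes it.
        List<String> arrived = sendAsync(List.of("create", "pay"), new long[]{200, 0});
        System.out.println(arrived); // almost certainly [pay, create] - out of order
    }
}
```

Synchronous sending avoids this by not issuing the next send until the previous one has been acknowledged.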
The principle behind ordered sending is actually simple: send all messages of the same kind to the same queue. To guarantee that messages sent first are also stored first, you must use synchronous sending; otherwise messages can arrive out of order (think for a moment about why). Next, let's walk through the source code against the logic above, starting with the RocketMQTemplate class:
```java
public class RocketMQTemplate extends AbstractMessageSendingTemplate<String>
        implements InitializingBean, DisposableBean {

    private static final Logger log = LoggerFactory.getLogger(RocketMQTemplate.class);
    private DefaultMQProducer producer;
    private ObjectMapper objectMapper;
    private String charset = "UTF-8";
    private MessageQueueSelector messageQueueSelector = new SelectMessageQueueByHash();
    private final Map<String, TransactionMQProducer> cache = new ConcurrentHashMap();

    // some code omitted

    public SendResult syncSendOrderly(String destination, Message<?> message, String hashKey) {
        return this.syncSendOrderly(destination, message, hashKey,
                (long) this.producer.getSendMsgTimeout());
    }

    public SendResult syncSendOrderly(String destination, Message<?> message,
            String hashKey, long timeout) {
        if (!Objects.isNull(message) && !Objects.isNull(message.getPayload())) {
            try {
                long now = System.currentTimeMillis();
                // Convert to the Message type of the RocketMQ API
                org.apache.rocketmq.common.message.Message rocketMsg =
                        RocketMQUtil.convertToRocketMessage(this.objectMapper,
                                this.charset, destination, message);
                // Call the send API
                SendResult sendResult = this.producer.send(rocketMsg,
                        this.messageQueueSelector, hashKey, timeout);
                // Measure the elapsed time
                long costTime = System.currentTimeMillis() - now;
                log.debug("send message cost: {} ms, msgId:{}", costTime, sendResult.getMsgId());
                return sendResult;
            } catch (Exception var12) {
                log.error("syncSendOrderly failed. destination:{}, message:{} ", destination, message);
                throw new MessagingException(var12.getMessage(), var12);
            }
        } else {
            log.error("syncSendOrderly failed. destination:{}, message is null ", destination);
            throw new IllegalArgumentException("`message` and `message.payload` cannot be null");
        }
    }

    // more code omitted...
}
```
The queue is chosen by messageQueueSelector together with hashKey, in the implementation class SelectMessageQueueByHash. Without further ado, the source:
```java
public class SelectMessageQueueByHash implements MessageQueueSelector {

    public SelectMessageQueueByHash() {
    }

    public MessageQueue select(List<MessageQueue> mqs, Message msg, Object arg) {
        // Compute the hash of the key (e.g. an order ID)
        int value = arg.hashCode();
        if (value < 0) {
            value = Math.abs(value);
        }
        // Modulo the queue count to get an index
        value %= mqs.size();
        // Return the queue at that index
        return (MessageQueue) mqs.get(value);
    }
}
```
As for how the queue list is obtained: the Producer queries the NameServer for the Broker list by Topic and caches it in local memory, so subsequent lookups hit the cache.
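The hash-and-modulo idea above can be reproduced in a few lines of plain Java. `HashQueueSelector` is a made-up name for this sketch, not the RocketMQ class itself; the point is only that the same hashKey always maps to the same queue index.

```java
// Minimal re-implementation of the SelectMessageQueueByHash idea:
// the same hashKey (e.g. an order ID) always maps to the same queue index,
// so all messages of one order land in one queue and stay ordered.
class HashQueueSelector {
    static int select(Object hashKey, int queueCount) {
        int value = hashKey.hashCode();
        if (value < 0) {
            value = Math.abs(value); // mirror the source: fold negative hashes
        }
        return value % queueCount;   // modulo the queue count to pick an index
    }

    public static void main(String[] args) {
        int q1 = select("order-10001", 4);
        int q2 = select("order-10001", 4);
        System.out.println(q1 == q2); // true: same order ID, same queue
    }
}
```

Different orders may share a queue (hash collisions are fine for ordering); what matters is that one order never spans two queues.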
The Underlying Principle of Ordinary Messages
So what counts as "ordinary"? The ordered sending above is a special kind of message; besides ordered messages there are also transactional and delayed messages, and everything outside those three is an ordinary message. In day-to-day development, ordinary messages are by far the most common, used for things like peak shaving and asynchronous decoupling.
How do ordinary messages differ from ordered ones? They differ in how the message queue is chosen at send time. Ordinary messages pick a queue in one of two ways, round-robin or fault avoidance, with round-robin as the default: a Topic has multiple queues, and round-robin picks one of them in turn. Briefly, the round-robin works like this:
The route information in TopicPublishInfo maintains a counter, sendWhichQueue. Every message send looks up the route and increments the counter, and the round-robin is implemented by taking the counter value (index) modulo the number of queues. The source:
```java
public class TopicPublishInfo {

    // some code omitted...

    // Select a queue
    public MessageQueue selectOneMessageQueue(String lastBrokerName) {
        // On the first call, lastBrokerName is null
        if (lastBrokerName == null) {
            return this.selectOneMessageQueue();
        } else {
            int index = this.sendWhichQueue.getAndIncrement();
            for (int i = 0; i < this.messageQueueList.size(); ++i) {
                // Compute the index
                int pos = Math.abs(index++) % this.messageQueueList.size();
                if (pos < 0) {
                    pos = 0;
                }
                MessageQueue mq = (MessageQueue) this.messageQueueList.get(pos);
                // If the selected queue's Broker is not the one used by the
                // previous send, take it, keeping the round-robin as fair as possible
                if (!mq.getBrokerName().equals(lastBrokerName)) {
                    return mq;
                }
            }
            return this.selectOneMessageQueue();
        }
    }

    public MessageQueue selectOneMessageQueue() {
        int index = this.sendWhichQueue.getAndIncrement();
        int pos = Math.abs(index) % this.messageQueueList.size();
        if (pos < 0) {
            pos = 0;
        }
        return (MessageQueue) this.messageQueueList.get(pos);
    }

    // some code omitted...
}
```
Compared with ordered messages, ordinary messages are fairly simple to implement, but there is a flaw: if round-robin selects a queue that lives on a Broker that has gone down, the send fails, and even with retries the next attempt may land on that same dead Broker, so message loss cannot be ruled out. That is where the fault-avoidance mechanism comes in, which we'll leave for another time.
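The sendWhichQueue round-robin, including the refinement of skipping the Broker used by the previous (failed) send, can be sketched in isolation. `RoundRobinSelector` below is a hypothetical stand-in for illustration, not RocketMQ code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the sendWhichQueue round-robin in TopicPublishInfo:
// a counter is incremented per send and taken modulo the queue count,
// optionally skipping the broker used by the previous (failed) send.
class RoundRobinSelector {
    private final AtomicInteger sendWhichQueue = new AtomicInteger(0);
    private final String[] queueBrokers; // broker name that owns each queue

    RoundRobinSelector(String[] queueBrokers) {
        this.queueBrokers = queueBrokers;
    }

    int selectOneQueue(String lastBrokerName) {
        if (lastBrokerName == null) {
            // first send: plain counter-modulo round-robin
            return Math.abs(sendWhichQueue.getAndIncrement()) % queueBrokers.length;
        }
        int index = sendWhichQueue.getAndIncrement();
        for (int i = 0; i < queueBrokers.length; i++) {
            int pos = Math.abs(index++) % queueBrokers.length;
            if (!queueBrokers[pos].equals(lastBrokerName)) {
                return pos; // avoid the broker the last send used
            }
        }
        // every queue is on lastBrokerName: fall back to plain round-robin
        return Math.abs(sendWhichQueue.getAndIncrement()) % queueBrokers.length;
    }

    public static void main(String[] args) {
        RoundRobinSelector s = new RoundRobinSelector(
                new String[]{"brokerA", "brokerA", "brokerB", "brokerB"});
        System.out.println(s.selectOneQueue(null));      // 0
        System.out.println(s.selectOneQueue(null));      // 1
        System.out.println(s.selectOneQueue("brokerA")); // 2: skips brokerA's queues
    }
}
```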
The Technical Principle of Ordered Consumption
RocketMQ supports two consumption models: clustering and broadcasting. The difference is that in clustering mode each message is consumed by exactly one consumer, while in broadcasting mode every consumer consumes it. Clustering is the norm, because a message is usually positioned as one piece of business: consuming it once means the business was handled once. Since clustering is the common case, let's analyze that mode:
```java
public class ConsumeMessageOrderlyService implements ConsumeMessageService {

    private static final InternalLogger log = ClientLogger.getLog();
    private static final long MAX_TIME_CONSUME_CONTINUOUSLY =
            Long.parseLong(System.getProperty("rocketmq.client.maxTimeConsumeContinuously", "60000"));
    private final DefaultMQPushConsumerImpl defaultMQPushConsumerImpl;
    private final DefaultMQPushConsumer defaultMQPushConsumer;
    private final MessageListenerOrderly messageListener;
    private final BlockingQueue<Runnable> consumeRequestQueue;
    private final ThreadPoolExecutor consumeExecutor;
    private final String consumerGroup;
    private final MessageQueueLock messageQueueLock = new MessageQueueLock();
    private final ScheduledExecutorService scheduledExecutorService;
    private volatile boolean stopped = false;

    public ConsumeMessageOrderlyService(DefaultMQPushConsumerImpl defaultMQPushConsumerImpl,
            MessageListenerOrderly messageListener) {
        this.defaultMQPushConsumerImpl = defaultMQPushConsumerImpl;
        this.messageListener = messageListener;
        this.defaultMQPushConsumer = this.defaultMQPushConsumerImpl.getDefaultMQPushConsumer();
        this.consumerGroup = this.defaultMQPushConsumer.getConsumerGroup();
        this.consumeRequestQueue = new LinkedBlockingQueue();
        this.consumeExecutor = new ThreadPoolExecutor(this.defaultMQPushConsumer.getConsumeThreadMin(),
                this.defaultMQPushConsumer.getConsumeThreadMax(), 60000L, TimeUnit.MILLISECONDS,
                this.consumeRequestQueue, new ThreadFactoryImpl("ConsumeMessageThread_"));
        this.scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(
                new ThreadFactoryImpl("ConsumeMessageScheduledThread_"));
    }

    public void start() {
        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
            this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    ConsumeMessageOrderlyService.this.lockMQPeriodically();
                }
            }, 1000L, ProcessQueue.REBALANCE_LOCK_INTERVAL, TimeUnit.MILLISECONDS);
        }
    }

    public void shutdown() {
        this.stopped = true;
        this.scheduledExecutorService.shutdown();
        this.consumeExecutor.shutdown();
        if (MessageModel.CLUSTERING.equals(this.defaultMQPushConsumerImpl.messageModel())) {
            this.unlockAllMQ();
        }
    }

    public synchronized void unlockAllMQ() {
        this.defaultMQPushConsumerImpl.getRebalanceImpl().unlockAll(false);
    }

    public void updateCorePoolSize(int corePoolSize) {
        if (corePoolSize > 0 && corePoolSize <= 32767
                && corePoolSize < this.defaultMQPushConsumer.getConsumeThreadMax()) {
            this.consumeExecutor.setCorePoolSize(corePoolSize);
        }
    }

    public void incCorePoolSize() {
    }

    public void decCorePoolSize() {
    }

    public int getCorePoolSize() {
        return this.consumeExecutor.getCorePoolSize();
    }

    public ConsumeMessageDirectlyResult consumeMessageDirectly(MessageExt msg, String brokerName) {
        ConsumeMessageDirectlyResult result = new ConsumeMessageDirectlyResult();
        result.setOrder(true);
        List<MessageExt> msgs = new ArrayList();
        msgs.add(msg);
        MessageQueue mq = new MessageQueue();
        mq.setBrokerName(brokerName);
        mq.setTopic(msg.getTopic());
        mq.setQueueId(msg.getQueueId());
        ConsumeOrderlyContext context = new ConsumeOrderlyContext(mq);
        long beginTime = System.currentTimeMillis();
        log.info("consumeMessageDirectly receive new message: {}", msg);

        try {
            ConsumeOrderlyStatus status = this.messageListener.consumeMessage(msgs, context);
            if (status != null) {
                switch(status) {
                case COMMIT:
                    result.setConsumeResult(CMResult.CR_COMMIT);
                    break;
                case ROLLBACK:
                    result.setConsumeResult(CMResult.CR_ROLLBACK);
                    break;
                case SUCCESS:
                    result.setConsumeResult(CMResult.CR_SUCCESS);
                    break;
                case SUSPEND_CURRENT_QUEUE_A_MOMENT:
                    result.setConsumeResult(CMResult.CR_LATER);
                }
            } else {
                result.setConsumeResult(CMResult.CR_RETURN_NULL);
            }
        } catch (Throwable var10) {
            result.setConsumeResult(CMResult.CR_THROW_EXCEPTION);
            result.setRemark(RemotingHelper.exceptionSimpleDesc(var10));
            log.warn(String.format("consumeMessageDirectly exception: %s Group: %s Msgs: %s MQ: %s",
                    RemotingHelper.exceptionSimpleDesc(var10), this.consumerGroup, msgs, mq), var10);
        }

        result.setAutoCommit(context.isAutoCommit());
        result.setSpentTimeMills(System.currentTimeMillis() - beginTime);
        log.info("consumeMessageDirectly Result: {}", result);
        return result;
    }

    public void submitConsumeRequest(List<MessageExt> msgs, ProcessQueue processQueue,
            MessageQueue messageQueue, boolean dispathToConsume) {
        if (dispathToConsume) {
            ConsumeMessageOrderlyService.ConsumeRequest consumeRequest =
                    new ConsumeMessageOrderlyService.ConsumeRequest(processQueue, messageQueue);
            this.consumeExecutor.submit(consumeRequest);
        }
    }

    public synchronized void lockMQPeriodically() {
        if (!this.stopped) {
            this.defaultMQPushConsumerImpl.getRebalanceImpl().lockAll();
        }
    }

    public void tryLockLaterAndReconsume(final MessageQueue mq, final ProcessQueue processQueue,
            long delayMills) {
        this.scheduledExecutorService.schedule(new Runnable() {
            public void run() {
                boolean lockOK = ConsumeMessageOrderlyService.this.lockOneMQ(mq);
                if (lockOK) {
                    ConsumeMessageOrderlyService.this.submitConsumeRequestLater(processQueue, mq, 10L);
                } else {
                    ConsumeMessageOrderlyService.this.submitConsumeRequestLater(processQueue, mq, 3000L);
                }
            }
        }, delayMills, TimeUnit.MILLISECONDS);
    }

    public synchronized boolean lockOneMQ(MessageQueue mq) {
        return !this.stopped ? this.defaultMQPushConsumerImpl.getRebalanceImpl().lock(mq) : false;
    }

    private void submitConsumeRequestLater(final ProcessQueue processQueue,
            final MessageQueue messageQueue, long suspendTimeMillis) {
        long timeMillis = suspendTimeMillis;
        if (suspendTimeMillis == -1L) {
            timeMillis = this.defaultMQPushConsumer.getSuspendCurrentQueueTimeMillis();
        }
        if (timeMillis < 10L) {
            timeMillis = 10L;
        } else if (timeMillis > 30000L) {
            timeMillis = 30000L;
        }
        this.scheduledExecutorService.schedule(new Runnable() {
            public void run() {
                ConsumeMessageOrderlyService.this.submitConsumeRequest((List)null, processQueue,
                        messageQueue, true);
            }
        }, timeMillis, TimeUnit.MILLISECONDS);
    }

    public boolean processConsumeResult(List<MessageExt> msgs, ConsumeOrderlyStatus status,
            ConsumeOrderlyContext context, ConsumeMessageOrderlyService.ConsumeRequest consumeRequest) {
        boolean continueConsume = true;
        long commitOffset = -1L;
        if (context.isAutoCommit()) {
            switch(status) {
            case COMMIT:
            case ROLLBACK:
                log.warn("the message queue consume result is illegal, we think you want to ack these message {}",
                        consumeRequest.getMessageQueue());
            case SUCCESS:
                commitOffset = consumeRequest.getProcessQueue().commit();
                this.getConsumerStatsManager().incConsumeOKTPS(this.consumerGroup,
                        consumeRequest.getMessageQueue().getTopic(), (long)msgs.size());
                break;
            case SUSPEND_CURRENT_QUEUE_A_MOMENT:
                this.getConsumerStatsManager().incConsumeFailedTPS(this.consumerGroup,
                        consumeRequest.getMessageQueue().getTopic(), (long)msgs.size());
                if (this.checkReconsumeTimes(msgs)) {
                    consumeRequest.getProcessQueue().makeMessageToCosumeAgain(msgs);
                    this.submitConsumeRequestLater(consumeRequest.getProcessQueue(),
                            consumeRequest.getMessageQueue(), context.getSuspendCurrentQueueTimeMillis());
                    continueConsume = false;
                } else {
                    commitOffset = consumeRequest.getProcessQueue().commit();
                }
            }
        } else {
            switch(status) {
            case COMMIT:
                commitOffset = consumeRequest.getProcessQueue().commit();
                break;
            case ROLLBACK:
                consumeRequest.getProcessQueue().rollback();
                this.submitConsumeRequestLater(consumeRequest.getProcessQueue(),
                        consumeRequest.getMessageQueue(), context.getSuspendCurrentQueueTimeMillis());
                continueConsume = false;
                break;
            case SUCCESS:
                this.getConsumerStatsManager().incConsumeOKTPS(this.consumerGroup,
                        consumeRequest.getMessageQueue().getTopic(), (long)msgs.size());
                break;
            case SUSPEND_CURRENT_QUEUE_A_MOMENT:
                this.getConsumerStatsManager().incConsumeFailedTPS(this.consumerGroup,
                        consumeRequest.getMessageQueue().getTopic(), (long)msgs.size());
                if (this.checkReconsumeTimes(msgs)) {
                    consumeRequest.getProcessQueue().makeMessageToCosumeAgain(msgs);
                    this.submitConsumeRequestLater(consumeRequest.getProcessQueue(),
                            consumeRequest.getMessageQueue(), context.getSuspendCurrentQueueTimeMillis());
                    continueConsume = false;
                }
            }
        }
        if (commitOffset >= 0L && !consumeRequest.getProcessQueue().isDropped()) {
            this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(consumeRequest.getMessageQueue(),
                    commitOffset, false);
        }
        return continueConsume;
    }

    public ConsumerStatsManager getConsumerStatsManager() {
        return this.defaultMQPushConsumerImpl.getConsumerStatsManager();
    }

    private int getMaxReconsumeTimes() {
        return this.defaultMQPushConsumer.getMaxReconsumeTimes() == -1
                ? 2147483647 : this.defaultMQPushConsumer.getMaxReconsumeTimes();
    }

    private boolean checkReconsumeTimes(List<MessageExt> msgs) {
        boolean suspend = false;
        if (msgs != null && !msgs.isEmpty()) {
            Iterator var3 = msgs.iterator();
            while(var3.hasNext()) {
                MessageExt msg = (MessageExt)var3.next();
                if (msg.getReconsumeTimes() >= this.getMaxReconsumeTimes()) {
                    MessageAccessor.setReconsumeTime(msg, String.valueOf(msg.getReconsumeTimes()));
                    if (!this.sendMessageBack(msg)) {
                        suspend = true;
                        msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
                    }
                } else {
                    suspend = true;
                    msg.setReconsumeTimes(msg.getReconsumeTimes() + 1);
                }
            }
        }
        return suspend;
    }

    public boolean sendMessageBack(MessageExt msg) {
        try {
            Message newMsg = new Message(MixAll.getRetryTopic(this.defaultMQPushConsumer.getConsumerGroup()),
                    msg.getBody());
            String originMsgId = MessageAccessor.getOriginMessageId(msg);
            MessageAccessor.setOriginMessageId(newMsg, UtilAll.isBlank(originMsgId) ? msg.getMsgId() : originMsgId);
            newMsg.setFlag(msg.getFlag());
            MessageAccessor.setProperties(newMsg, msg.getProperties());
            MessageAccessor.putProperty(newMsg, "RETRY_TOPIC", msg.getTopic());
            MessageAccessor.setReconsumeTime(newMsg, String.valueOf(msg.getReconsumeTimes()));
            MessageAccessor.setMaxReconsumeTimes(newMsg, String.valueOf(this.getMaxReconsumeTimes()));
            newMsg.setDelayTimeLevel(3 + msg.getReconsumeTimes());
            this.defaultMQPushConsumer.getDefaultMQPushConsumerImpl().getmQClientFactory()
                    .getDefaultMQProducer().send(newMsg);
            return true;
        } catch (Exception var4) {
            log.error("sendMessageBack exception, group: " + this.consumerGroup + " msg: " + msg.toString(), var4);
            return false;
        }
    }

    // Inner class implementing the per-queue locking logic of the consume thread
    class ConsumeRequest implements Runnable {
        private final ProcessQueue processQueue;
        private final MessageQueue messageQueue;

        public ConsumeRequest(ProcessQueue processQueue, MessageQueue messageQueue) {
            this.processQueue = processQueue;
            this.messageQueue = messageQueue;
        }

        public ProcessQueue getProcessQueue() {
            return this.processQueue;
        }

        public MessageQueue getMessageQueue() {
            return this.messageQueue;
        }

        public void run() {
            if (this.processQueue.isDropped()) {
                ConsumeMessageOrderlyService.log.warn("run, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
            } else {
                // Acquire the lock object for this message queue
                Object objLock = ConsumeMessageOrderlyService.this.messageQueueLock.fetchLockObject(this.messageQueue);
                synchronized(objLock) {
                    if (!MessageModel.BROADCASTING.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.messageModel())
                            && (!this.processQueue.isLocked() || this.processQueue.isLockExpired())) {
                        if (this.processQueue.isDropped()) {
                            ConsumeMessageOrderlyService.log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
                        } else {
                            ConsumeMessageOrderlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 100L);
                        }
                    } else {
                        long beginTime = System.currentTimeMillis();
                        boolean continueConsume = true;

                        while(true) {
                            while(continueConsume) {
                                if (this.processQueue.isDropped()) {
                                    ConsumeMessageOrderlyService.log.warn("the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
                                    return;
                                }
                                if (MessageModel.CLUSTERING.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.messageModel())
                                        && !this.processQueue.isLocked()) {
                                    ConsumeMessageOrderlyService.log.warn("the message queue not locked, so consume later, {}", this.messageQueue);
                                    ConsumeMessageOrderlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10L);
                                    return;
                                }
                                if (MessageModel.CLUSTERING.equals(ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.messageModel())
                                        && this.processQueue.isLockExpired()) {
                                    ConsumeMessageOrderlyService.log.warn("the message queue lock expired, so consume later, {}", this.messageQueue);
                                    ConsumeMessageOrderlyService.this.tryLockLaterAndReconsume(this.messageQueue, this.processQueue, 10L);
                                    return;
                                }
                                long interval = System.currentTimeMillis() - beginTime;
                                if (interval > ConsumeMessageOrderlyService.MAX_TIME_CONSUME_CONTINUOUSLY) {
                                    ConsumeMessageOrderlyService.this.submitConsumeRequestLater(this.processQueue, this.messageQueue, 10L);
                                    return;
                                }
                                int consumeBatchSize = ConsumeMessageOrderlyService.this.defaultMQPushConsumer.getConsumeMessageBatchMaxSize();
                                List<MessageExt> msgs = this.processQueue.takeMessags(consumeBatchSize);
                                if (!msgs.isEmpty()) {
                                    ConsumeOrderlyContext context = new ConsumeOrderlyContext(this.messageQueue);
                                    ConsumeOrderlyStatus status = null;
                                    ConsumeMessageContext consumeMessageContext = null;
                                    if (ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                                        consumeMessageContext = new ConsumeMessageContext();
                                        consumeMessageContext.setConsumerGroup(ConsumeMessageOrderlyService.this.defaultMQPushConsumer.getConsumerGroup());
                                        consumeMessageContext.setMq(this.messageQueue);
                                        consumeMessageContext.setMsgList(msgs);
                                        consumeMessageContext.setSuccess(false);
                                        consumeMessageContext.setProps(new HashMap());
                                        ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.executeHookBefore(consumeMessageContext);
                                    }
                                    long beginTimestamp = System.currentTimeMillis();
                                    ConsumeReturnType returnType = ConsumeReturnType.SUCCESS;
                                    boolean hasException = false;
                                    try {
                                        this.processQueue.getLockConsume().lock();
                                        if (this.processQueue.isDropped()) {
                                            ConsumeMessageOrderlyService.log.warn("consumeMessage, the message queue not be able to consume, because it's dropped. {}", this.messageQueue);
                                            return;
                                        }
                                        status = ConsumeMessageOrderlyService.this.messageListener.consumeMessage(Collections.unmodifiableList(msgs), context);
                                    } catch (Throwable var23) {
                                        ConsumeMessageOrderlyService.log.warn("consumeMessage exception: {} Group: {} Msgs: {} MQ: {}",
                                                new Object[]{RemotingHelper.exceptionSimpleDesc(var23), ConsumeMessageOrderlyService.this.consumerGroup, msgs, this.messageQueue});
                                        hasException = true;
                                    } finally {
                                        this.processQueue.getLockConsume().unlock();
                                    }
                                    if (null == status || ConsumeOrderlyStatus.ROLLBACK == status
                                            || ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT == status) {
                                        ConsumeMessageOrderlyService.log.warn("consumeMessage Orderly return not OK, Group: {} Msgs: {} MQ: {}",
                                                new Object[]{ConsumeMessageOrderlyService.this.consumerGroup, msgs, this.messageQueue});
                                    }
                                    long consumeRT = System.currentTimeMillis() - beginTimestamp;
                                    if (null == status) {
                                        if (hasException) {
                                            returnType = ConsumeReturnType.EXCEPTION;
                                        } else {
                                            returnType = ConsumeReturnType.RETURNNULL;
                                        }
                                    } else if (consumeRT >= ConsumeMessageOrderlyService.this.defaultMQPushConsumer.getConsumeTimeout() * 60L * 1000L) {
                                        returnType = ConsumeReturnType.TIME_OUT;
                                    } else if (ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT == status) {
                                        returnType = ConsumeReturnType.FAILED;
                                    } else if (ConsumeOrderlyStatus.SUCCESS == status) {
                                        returnType = ConsumeReturnType.SUCCESS;
                                    }
                                    if (ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                                        consumeMessageContext.getProps().put("ConsumeContextType", returnType.name());
                                    }
                                    if (null == status) {
                                        status = ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT;
                                    }
                                    if (ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.hasHook()) {
                                        consumeMessageContext.setStatus(status.toString());
                                        consumeMessageContext.setSuccess(ConsumeOrderlyStatus.SUCCESS == status || ConsumeOrderlyStatus.COMMIT == status);
                                        ConsumeMessageOrderlyService.this.defaultMQPushConsumerImpl.executeHookAfter(consumeMessageContext);
                                    }
                                    ConsumeMessageOrderlyService.this.getConsumerStatsManager().incConsumeRT(ConsumeMessageOrderlyService.this.consumerGroup, this.messageQueue.getTopic(), consumeRT);
                                    continueConsume = ConsumeMessageOrderlyService.this.processConsumeResult(msgs, status, context, this);
                                } else {
                                    continueConsume = false;
                                }
                            }
                            return;
                        }
                    }
                }
            }
        }
    }
}
```
Reading the source, you can see the principle of orderly consumption: a given message queue is only ever pulled and consumed by one consumer thread at a time. The Consumer holds a consume thread pool in which multiple threads consume concurrently; in the orderly case, a consume thread must first acquire a lock from the Broker, and only the thread that holds the lock may consume. In the source above, search for "class ConsumeRequest": the commented section is the locking logic.
After a message is consumed successfully, the consumer reports its progress to the Broker and updates the consume offset, so the message is not consumed again. With orderly consumption, if the consume thread throws an exception while handling business logic in the listener, the progress is not committed: the offset stays stuck at the current message and the later messages in that queue are not consumed, which is exactly what preserves the order. In orderly-consumption scenarios you therefore need to pay special attention to exception handling: if retries keep failing, consumption stays blocked on that message until the maximum retry count is exceeded, and for a long stretch the queue cannot make progress and messages pile up.
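The stuck-offset behavior described above can be modeled with a toy single-threaded queue consumer. `OrderlyQueueConsumer` is illustrative only; in RocketMQ the real offset management lives in ProcessQueue and the OffsetStore:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy model of orderly consumption: one thread drains one queue, and a
// failed message is retried in place - the offset never jumps past it,
// so the messages behind it wait (the "blocking" behavior described above).
class OrderlyQueueConsumer {
    static List<String> consume(List<String> queue, Predicate<String> handler, int maxReconsumeTimes) {
        List<String> consumed = new ArrayList<>();
        int offset = 0; // committed consume offset
        while (offset < queue.size()) {
            String msg = queue.get(offset);
            int attempts = 0;
            boolean ok = false;
            while (!ok && attempts <= maxReconsumeTimes) {
                ok = handler.test(msg); // the business listener
                attempts++;
            }
            if (!ok) {
                break; // give up: offset stays here, later messages are stuck
            }
            offset++; // commit progress only after a successful consume
            consumed.add(msg);
        }
        return consumed;
    }

    public static void main(String[] args) {
        // "pay" always fails, so "refund" behind it is never consumed.
        List<String> result = consume(List.of("create", "pay", "refund"),
                msg -> !msg.equals("pay"), 3);
        System.out.println(result); // [create]
    }
}
```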
The Underlying Principle of Concurrent Consumption
There are two consumption modes: orderly, covered above, and concurrent. Concurrent is the default, and the one used most in everyday development. Its principle runs more or less opposite to orderly consumption, so how does it work?
A single message queue is served by multiple consumer threads at once. The Consumer maintains a consume thread pool, and several of its threads can pull from the same queue concurrently. If one thread hits an exception, the messages it pulled may be retried, but that does not affect the other threads or the queue's consumption progress; threads that succeed commit their progress normally.
Note that concurrent consumption has no locking step, which is why its throughput is considerably higher than orderly consumption.
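For contrast, here is a toy model of concurrent consumption. `ConcurrentQueueConsumer` is a made-up sketch; the real consumer pulls batches via ProcessQueue rather than a plain in-memory queue. Several threads drain the same queue with no per-queue lock, so all messages get processed but in no guaranteed order:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.*;

// Toy model of concurrent consumption: several threads pull from the same
// queue without any queue-level lock, so throughput is high but cross-thread
// ordering is not guaranteed.
class ConcurrentQueueConsumer {
    static Set<String> consumeAll(List<String> messages, int threads) throws Exception {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>(messages);
        Set<String> consumed = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Runnable worker = () -> {
            String msg;
            while ((msg = queue.poll()) != null) {
                consumed.add(msg); // business handling; interleaving across threads is arbitrary
            }
        };
        for (int i = 0; i < threads; i++) {
            pool.submit(worker);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return consumed;
    }

    public static void main(String[] args) throws Exception {
        Set<String> got = consumeAll(List.of("m1", "m2", "m3", "m4"), 3);
        System.out.println(got.size()); // 4 - all consumed, in no particular order
    }
}
```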
Message Idempotency
As noted for orderly consumption, progress is committed and the offset updated after a successful consume to keep messages from being consumed repeatedly. As also noted above, business code usually assumes that one message triggers exactly one execution of the business logic, and running it more than once can corrupt data. RocketMQ itself does not guarantee that a message will never be consumed more than once, so if your business is sensitive to duplicates, you must make the handling idempotent in the business logic, for example with a distributed lock.
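A minimal idempotency sketch, assuming each message carries a usable business key such as a msgId. The class and method names below are hypothetical, and in production the dedup table would be Redis or a database unique index rather than local memory:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal idempotent consumer sketch: a dedup table keyed by a business
// ID makes redelivered messages a no-op, so the business logic runs at
// most once per ID even under at-least-once delivery.
class IdempotentConsumer {
    private final Map<String, Boolean> processed = new ConcurrentHashMap<>();
    private final AtomicInteger businessExecutions = new AtomicInteger(0);

    boolean consume(String msgId) {
        if (processed.putIfAbsent(msgId, Boolean.TRUE) != null) {
            return false; // duplicate delivery: skip the business logic
        }
        businessExecutions.incrementAndGet(); // business logic runs once per ID
        return true;
    }

    int executions() {
        return businessExecutions.get();
    }

    public static void main(String[] args) {
        IdempotentConsumer c = new IdempotentConsumer();
        c.consume("ORDER-1");
        c.consume("ORDER-1"); // redelivered by the broker
        System.out.println(c.executions()); // 1
    }
}
```

`putIfAbsent` is atomic, so two threads receiving the same redelivered message cannot both run the business logic.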
Across messaging systems, there are three delivery semantics for consumption:
- at-most-once: a message is delivered once, whether successfully or not, and is never redelivered. This can lose messages or leave them unconsumed; RocketMQ does not adopt this model.
- at-least-once: after a message is delivered and fully consumed, the consumer returns an ACK to the server (the consumption-acknowledgement mechanism); if it was not consumed, no ACK is ever returned. Network problems, client failures, and the like can keep the server from receiving the ACK in time (or at all), in which case it delivers the message again, which can lead to duplicate consumption. RocketMQ uses ACKs to ensure that every message is consumed at least once.
- exactly-once: this requires that (1) in the sending phase no duplicate message is sent, and (2) in the consumption phase no message is consumed more than once; only when both conditions hold does the system satisfy exactly-once. Achieving this in a distributed environment carries a substantial cost, and RocketMQ, in pursuit of high performance, does not guarantee it. Duplicate consumption therefore cannot be ruled out, and idempotency has to be handled in the business logic.