How the Jeesuite Framework Consumes Kafka Messages

Each release of the jeesuite-libs library includes jeesuite-kafka at the same version:

<parent>
   <groupId>com.jeesuite</groupId>
   <artifactId>jeesuite-libs</artifactId>
   <version>1.1.9</version>
</parent>

Directory layout of the jeesuite-kafka project (original figure not reproduced here).

Starting the consumer threads

Using a consumer with Jeesuite-Kafka is very simple: (1) configure the consumer and producer parameters; (2) annotate the consumer class with @ConsumerHandler and specify the topic. For example:

@Component
@ConsumerHandler(topic = AppConstants.TOPIC_WORKFLOW_TASK_OPERATE_PROCESS)
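Filling that in, a minimal handler sketch might look like the following. It assumes the class implements jeesuite's MessageHandler interface with the two phase methods (p1Process/p2Process) that appear later in this article; the class name and method bodies are illustrative only.

@Component
@ConsumerHandler(topic = AppConstants.TOPIC_WORKFLOW_TASK_OPERATE_PROCESS)
public class WorkflowTaskMessageHandler implements MessageHandler {

	@Override
	public void p1Process(DefaultMessage message) {
		// phase 1: runs synchronously on the fetch thread, so keep it lightweight
	}

	@Override
	public void p2Process(DefaultMessage message) {
		// phase 2: runs on the processing thread pool, do the real work here
	}
}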

That is all it takes to consume messages. Kafka is split into clients and brokers; the client keeps a long-lived connection to the broker and consumes messages in a loop.

TopicConsumerSpringProvider is the class that does this work. It implements four interfaces: InitializingBean, DisposableBean, ApplicationContextAware, and PriorityOrdered. Its afterPropertiesSet() method does three things:

1. Read the packages to scan from the jeesuite.kafka.consumer.scanPackages property, and process every class annotated with @ConsumerHandler.

2. Set the consumer parameters. When enable.auto.commit is true, offsets are committed automatically; I set it to false here (note that the old-API branch below forces it back to true).

3. Start the consumer threads.

@Override
public void afterPropertiesSet() throws Exception {
		//1. resolve classes annotated with @ConsumerHandler
		if(StringUtils.isNotBlank(scanPackages)){
			String[] packages = org.springframework.util.StringUtils.tokenizeToStringArray(this.scanPackages, ConfigurableApplicationContext.CONFIG_LOCATION_DELIMITERS);
			scanAndRegisterAnnotationTopics(packages);
		}
		
		Validate.isTrue(topicHandlers != null && topicHandlers.size() > 0, "at least one topic is required");
        //2. set parameters
 		//current status
		if(status.get() > 0)return;
		
		routeEnv = StringUtils.trimToNull(ResourceUtils.getProperty(KafkaConst.PROP_ENV_ROUTE));
		
		if(routeEnv != null){
			logger.info("current route Env value is:",routeEnv);
			Map<String, MessageHandler> newTopicHandlers = new HashMap<>();
			for (String origTopicName : topicHandlers.keySet()) {
				newTopicHandlers.put(routeEnv + "." + origTopicName, topicHandlers.get(origTopicName));
			}
			topicHandlers = newTopicHandlers;
		}
		
		//make sure that rebalance.max.retries * rebalance.backoff.ms > zookeeper.session.timeout.ms.
		configs.put("rebalance.max.retries", "5");  
		configs.put("rebalance.backoff.ms", "1205"); 
		configs.put("zookeeper.session.timeout.ms", "6000"); 
		
		configs.put("key.deserializer",StringDeserializer.class.getName());  
		
		if(!configs.containsKey("value.deserializer")){
        	configs.put("value.deserializer", KyroMessageDeserializer.class.getName());
        }
		//configured via the jeesuite.kafka.consumer.useNewAPI property
		if(useNewAPI){
			if("smallest".equals(configs.getProperty("auto.offset.reset"))){
				configs.put("auto.offset.reset", "earliest");
			}else if("largest".equals(configs.getProperty("auto.offset.reset"))){
				configs.put("auto.offset.reset", "latest");
			}
		}else{			
			//force auto-commit when using the old consumer API
			configs.put("enable.auto.commit", "true");
		}

		//sync node info
		groupId = configs.get(org.apache.kafka.clients.consumer.ConsumerConfig.GROUP_ID_CONFIG).toString();
		
		logger.info("\n===============KAFKA Consumer group[{}] begin start=================\n",groupId);
		
		consumerId = NodeNameHolder.getNodeId();
		//
		configs.put("consumer.id", consumerId);
		
		//kafka internally rewrites consumerId = groupId + "_" + consumerId
		consumerId = groupId + "_" + consumerId;
		//
		if(!configs.containsKey("client.id")){
			configs.put("client.id", consumerId);
		}
		//3. start the consumer threads
    	start();
    	
    	logger.info("\n===============KAFKA Consumer group[{}],consumerId[{}] start finished!!=================\n",groupId,consumerId);
    }
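The scanAndRegisterAnnotationTopics() call in step 1 is not quoted above. As a rough sketch of what such annotation scanning can look like with Spring's classpath scanner (the real jeesuite implementation may differ; topicHandlers is assumed to map topic name to handler instance):

	private void scanAndRegisterAnnotationTopics(String[] packages) throws Exception {
		// scan the configured packages for classes annotated with @ConsumerHandler
		ClassPathScanningCandidateComponentProvider scanner =
				new ClassPathScanningCandidateComponentProvider(false);
		scanner.addIncludeFilter(new AnnotationTypeFilter(ConsumerHandler.class));
		for (String basePackage : packages) {
			for (BeanDefinition bd : scanner.findCandidateComponents(basePackage)) {
				Class<?> clazz = Class.forName(bd.getBeanClassName());
				ConsumerHandler annotation = clazz.getAnnotation(ConsumerHandler.class);
				// register topic -> handler, resolving the handler bean from the context
				topicHandlers.put(annotation.topic(),
						(MessageHandler) applicationContext.getBean(clazz));
			}
		}
	}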

Let's look at how the consumer is started:

/**
	 * Start
	 */
	private void start() {
		if (independent) {
			logger.info("KAFKA 启动模式[independent]");
			new Thread(new Runnable() {
				@Override
				public void run() {
					registerKafkaSubscriber();
				}
			}).start();
		} else {
			registerKafkaSubscriber();
		}
	}

Depending on the independent flag, this runs either on a dedicated thread or on the calling thread; both paths invoke the same registerKafkaSubscriber() method, which assigns one fetch thread per topic:

@Override
public void start() {
		//reset offsets
		if(consumerContext.getOffsetLogHanlder() != null){	
			resetCorrectOffsets();
		}
		Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
		for (String topicName : consumerContext.getMessageHandlers().keySet()) {
			int nThreads = 1;
			topicCountMap.put(topicName, nThreads);
			logger.info("topic[{}] assign fetch Threads {}",topicName,nThreads);
		}
		
		StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
		MessageDecoder valueDecoder = new MessageDecoder(deserializer);

		Map<String, List<KafkaStream<String, Object>>> consumerMap = this.connector.createMessageStreams(topicCountMap,
				keyDecoder, valueDecoder);

		for (String topicName : consumerContext.getMessageHandlers().keySet()) {
			final List<KafkaStream<String, Object>> streams = consumerMap.get(topicName);

			for (final KafkaStream<String, Object> stream : streams) {
				MessageProcessor processer = new MessageProcessor(topicName, stream);
				this.fetchExecutor.execute(processer);
			}
		}
		//
		runing.set(true);
	}

As you can see, one MessageProcessor task is created per handler/stream and submitted to fetchExecutor. Once the consumer threads are up, they simply wait for messages to arrive.

Consuming messages

Message consumption happens inside the MessageProcessor thread, which processes messages in a while loop over the stream iterator. Note that this is the old high-level consumer's kafka.consumer.KafkaStream, not the Kafka Streams processing library.

@Override
		public void run() {
 
			logger.info("MessageProcessor [{}] start, topic:{}",Thread.currentThread().getName(),topicName);

			ConsumerIterator<String, Object> it = stream.iterator();
			// blocks here when there is no message
			while (it.hasNext()) {
				try {					
					MessageAndMetadata<String, Object> messageAndMeta = it.next();
					Object _message = messageAndMeta.message();
					DefaultMessage message = null;
					try {
						message = (DefaultMessage) _message;
					} catch (ClassCastException e) {
						message = new DefaultMessage(messageAndMeta.key(),(Serializable) _message);
					}
					message.setTopicMetadata(messageAndMeta.topic(), messageAndMeta.partition(), messageAndMeta.offset());
					consumerContext.updateConsumerStats(messageAndMeta.topic(),1);
					//record the offset to redis before processing
					consumerContext.saveOffsetsBeforeProcessed(messageAndMeta.topic(), messageAndMeta.partition(), messageAndMeta.offset());
					//phase-1 processing
					messageHandler.p1Process(message);
					//phase-2 processing
					submitMessageToProcess(topicName,messageAndMeta,message);
				} catch (Exception e) {
					logger.error("received_topic_error,topic:"+topicName,e);
				}
				
				//wait while message fetching is paused
				while(!consumerContext.fetchEnabled()){
					try {Thread.sleep(1000);} catch (Exception e) {}
				}
				
				//when the processing pool is saturated, block this fetch loop
				while(true){
					if(defaultProcessExecutor.getMaximumPoolSize() > defaultProcessExecutor.getSubmittedTasksCount()){
						break;
					}
					try {Thread.sleep(100);} catch (Exception e) {}
				}
				
			}
		
		}

So the offset is first recorded to Redis, then p1Process() runs synchronously on the fetch thread, and submitMessageToProcess() hands the message to the second processing phase.

The second phase runs p2Process() on a separate thread and then advances the Redis offset by one. If manual commit is enabled, it also commits the offset manually.
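submitMessageToProcess() itself is not quoted in this article. A hedged sketch of what it plausibly does, based on the description above (the structure and the offset-saving helper name are assumptions, not verified jeesuite source):

	private void submitMessageToProcess(final String topicName,
			final MessageAndMetadata<String, Object> messageAndMeta, final DefaultMessage message) {
		defaultProcessExecutor.submit(new Runnable() {
			@Override
			public void run() {
				try {
					// phase 2: the actual business processing on a worker thread
					MessageHandler handler = consumerContext.getMessageHandlers().get(topicName);
					handler.p2Process(message);
					// advance the offset recorded in redis by one (helper name assumed)
					consumerContext.saveOffsetsAfterProcessed(messageAndMeta.topic(),
							messageAndMeta.partition(), messageAndMeta.offset() + 1);
				} catch (Exception e) {
					logger.error("process_topic_error,topic:" + topicName, e);
				}
			}
		});
	}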


Appendix: configuration parameters

#kafka producer
jeesuite.kafka.producer.defaultAsynSend=true
jeesuite.kafka.producer.producerGroup=taxplan-workflow
jeesuite.kafka.producer.delayRetries=0
kafka.producer.acks=1
kafka.producer.retries=1
kafka.producer.value.serializer=org.apache.kafka.common.serialization.StringSerializer

#kafka consumer
jeesuite.kafka.consumer.useNewAPI=false
jeesuite.kafka.consumer.processThreads=1
jeesuite.kafka.consumer.scanPackages=com.workflow.mq
kafka.consumer.group.id=taxplan-workflow
kafka.consumer.enable.auto.commit=true
kafka.consumer.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
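For reference, wiring the provider in Java config might look roughly like this. The setter names follow JavaBean conventions for the fields seen above (scanPackages, useNewAPI, configs) and are assumptions rather than verified API:

@Bean
public TopicConsumerSpringProvider topicConsumerProvider() {
	TopicConsumerSpringProvider provider = new TopicConsumerSpringProvider();
	// framework-level settings (setter names assumed from the fields above)
	provider.setScanPackages("com.workflow.mq");
	provider.setUseNewAPI(false);
	// raw kafka consumer configs passed through to the client
	Properties configs = new Properties();
	configs.put("group.id", "taxplan-workflow");
	configs.put("enable.auto.commit", "true");
	configs.put("value.deserializer", StringDeserializer.class.getName());
	provider.setConfigs(configs);
	return provider;
}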

 
