Spring Boot Kafka integration with dynamic multi-datasource support

This approach does not break Spring Boot's original Kafka integration, and it does not require writing a new set of configuration code for every additional datasource — you only add the corresponding configuration entries. The configuration items are exactly the same as Spring Boot's native Kafka configuration items, which saves you the trouble of digging through someone else's code.

1. Multi-datasource configuration

kafka:
  multiple:
    demo1:   # name of datasource 1
      topic: 
      bootstrap-servers: 
      properties:
        sasl.mechanism: PLAIN
        sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username=  password=;
        security.protocol: SASL_PLAINTEXT
      consumer:
        group-id: 
        max-poll-records: 
        enable-auto-commit: 
      listener:
        ack-mode: manual
        type: batch
        concurrency:    
    demo2:   # name of datasource 2
      topic: 
      bootstrap-servers: 
      consumer:
        group-id: 
        max-poll-records: 10
        enable-auto-commit: false
      listener:
        ack-mode: manual
        type: batch
        concurrency: 2

The per-datasource configuration items follow the same format that Spring Boot supports by default.

2. Reading the configuration

@Data
@Configuration
@ConfigurationProperties("kafka")
public class KafkaMultipleProperties {

    // every child of kafka.multiple is bound here, keyed by datasource name (demo1, demo2, ...)
    private Map<String, KafkaCustomProperties> multiple;

}




// empty subclass: each datasource inherits the full set of Spring Boot's native Kafka properties
@EqualsAndHashCode(callSuper = true)
@Data
public class KafkaCustomProperties extends KafkaProperties {

}
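Conceptually, Spring Boot's relaxed binder turns each child of `kafka.multiple` into one entry of the `multiple` map, keyed by the datasource name. A minimal plain-Java sketch of the resulting structure — no Spring involved, and `DatasourceProps` with its sample values is only a stand-in for `KafkaCustomProperties`:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BindingSketch {
    // stand-in for KafkaCustomProperties: just two of the bound fields
    static class DatasourceProps {
        final String topic;
        final String bootstrapServers;
        DatasourceProps(String topic, String bootstrapServers) {
            this.topic = topic;
            this.bootstrapServers = bootstrapServers;
        }
    }

    public static void main(String[] args) {
        // what the binder produces for the YAML above: one map entry per datasource name
        Map<String, DatasourceProps> multiple = new LinkedHashMap<>();
        multiple.put("demo1", new DatasourceProps("orders", "host1:9092"));
        multiple.put("demo2", new DatasourceProps("audit", "host2:9092"));

        System.out.println(multiple.keySet());            // the keys later drive the bean names
        System.out.println(multiple.get("demo1").topic);
    }
}
```

Because the map key is the datasource name, adding a third datasource is purely a YAML change — no new Java configuration is needed.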

3. Assembling the datasources

@Slf4j
@Configuration
public class KafkaCustomConfiguration implements InstantiationAwareBeanPostProcessor, BeanFactoryAware {
    @Resource
    private KafkaMultipleProperties kafkaMultipleProperties;
    @Resource
    private DefaultListableBeanFactory beanFactory;

    @Override
    public void setBeanFactory(BeanFactory beanFactory) throws BeansException {
        if (!(beanFactory instanceof ConfigurableListableBeanFactory)) {
            return;
        }
        ConfigurableListableBeanFactory configurableListableBeanFactory = (ConfigurableListableBeanFactory) beanFactory;
        KafkaCustomConfiguration propertyLoader = configurableListableBeanFactory.getBean(KafkaCustomConfiguration.class);
        propertyLoader.initCustomBeans();
    }
	
    private void initCustomBeans() {
        if (ObjectUtil.isEmpty(kafkaMultipleProperties.getMultiple())) {
            return;
        }
        kafkaMultipleProperties.getMultiple().forEach((datasourceName, datasource) -> {
            // register one listener container factory per datasource, e.g. "demo1KafkaListenerContainerFactory"
            ConcurrentKafkaListenerContainerFactory<String, String> factory = buildContainerFactory(datasource);
            String containerFactoryName = datasourceName + "KafkaListenerContainerFactory";
            if (ObjectUtil.isEmpty(beanFactory.getSingleton(containerFactoryName))) {
                beanFactory.registerSingleton(containerFactoryName, factory);
            }

            // register one KafkaTemplate per datasource, e.g. "demo1KafkaTemplate"
            KafkaTemplate<String, Object> kafkaTemplate = buildKafkaTemplate(datasource);
            String kafkaTemplateName = datasourceName + "KafkaTemplate";
            if (ObjectUtil.isEmpty(beanFactory.getSingleton(kafkaTemplateName))) {
                beanFactory.registerSingleton(kafkaTemplateName, kafkaTemplate);
            }
        });
    }

    private KafkaTemplate<String, Object> buildKafkaTemplate(KafkaProperties properties) {
        DefaultKafkaProducerFactory<String, Object> factory = new DefaultKafkaProducerFactory<>(properties.buildProducerProperties());
        return new KafkaTemplate<>(factory);
    }
	
    private ConcurrentKafkaListenerContainerFactory<String, String> buildContainerFactory(KafkaProperties properties) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(buildConsumerFactory(properties));
        KafkaProperties.Listener listener = properties.getListener();
        if (listener != null) {
            // honor the configured listener type instead of hard-coding batch mode
            factory.setBatchListener(KafkaProperties.Listener.Type.BATCH.equals(listener.getType()));
            if (listener.getAckMode() != null) {
                factory.getContainerProperties().setAckMode(listener.getAckMode());
            }
            if (listener.getConcurrency() != null) {
                factory.setConcurrency(listener.getConcurrency());
            }
        }
        return factory;
    }

    private ConsumerFactory<String, String> buildConsumerFactory(KafkaProperties properties) {
        return new DefaultKafkaConsumerFactory<>(properties.buildConsumerProperties());
    }
	
}
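The bean names registered above are derived purely from the map key, which is exactly what a listener's `containerFactory` attribute or a `@Qualifier` must reference. A plain-Java sketch of the convention — no Spring here; a `HashMap` merely stands in for the singleton registry:

```java
import java.util.HashMap;
import java.util.Map;

public class BeanNameSketch {
    // same concatenation used in initCustomBeans()
    static String containerFactoryName(String datasourceName) {
        return datasourceName + "KafkaListenerContainerFactory";
    }

    static String templateName(String datasourceName) {
        return datasourceName + "KafkaTemplate";
    }

    public static void main(String[] args) {
        // mimic registerSingleton/getSingleton with a plain map
        Map<String, String> singletons = new HashMap<>();
        singletons.put(templateName("demo1"), "template-for-demo1");

        System.out.println(containerFactoryName("demo1")); // demo1KafkaListenerContainerFactory
        // @Qualifier("demo1KafkaTemplate") resolves against the registry by this same name
        System.out.println(singletons.get("demo1KafkaTemplate")); // template-for-demo1
    }
}
```

If a datasource is renamed in the YAML, every `containerFactory` and `@Qualifier` string that references it must be renamed to match.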

4. Consumer example

@Component
public class DataConsumer {


    @KafkaListener(topics = "${kafka.multiple.demo1.topic}", groupId = "${kafka.multiple.demo1.consumer.group-id}",
            containerFactory = "demo1KafkaListenerContainerFactory")
    public void demo1RecordConsumer(@Payload List<String> messages, Acknowledgment ack) {
        // process the batch, then commit the offsets manually (ack-mode: manual)
        ack.acknowledge();
    }


}
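With `type: batch` and `ack-mode: manual`, the listener receives a whole batch and commits offsets only after processing succeeds. A plain-Java sketch of that flow, with a hypothetical `Ack` interface standing in for Spring Kafka's `Acknowledgment`:

```java
import java.util.List;

public class BatchAckSketch {
    // hypothetical stand-in for org.springframework.kafka.support.Acknowledgment
    interface Ack {
        void acknowledge();
    }

    static int processed = 0;

    static void demo1RecordConsumer(List<String> messages, Ack ack) {
        for (String message : messages) {
            processed++;           // business processing of one record would go here
        }
        ack.acknowledge();         // commit only after the whole batch succeeded
    }

    public static void main(String[] args) {
        final boolean[] acked = {false};
        demo1RecordConsumer(List.of("m1", "m2", "m3"), () -> acked[0] = true);
        System.out.println(processed + " " + acked[0]); // 3 true
    }
}
```

If processing throws before `acknowledge()`, the offsets are not committed and the batch is redelivered, which is the point of manual acknowledgment.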

5. Producer example

@Component
public class KafkaMsgSendDemo {

    @Autowired
    @Qualifier("demo1KafkaTemplate")
    private KafkaTemplate<String, Object> demo1KafkaTemplate;

    public void messageSendToKafka() {
        demo1KafkaTemplate.send("your-topic", "your-message");
    }

}
