Q1: The producer connects through the factory but fails to send messages
Cause: a problem with the pom dependencies.
When only a producer is needed, use:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.9.0.0</version>
</dependency>
When both a producer and a consumer are needed, use:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
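One likely pitfall with the dependencies above: spring-kafka pulls in a matching kafka-clients transitively, so also pinning a very old kafka-clients (0.9.0.0) by hand can put an incompatible client on the classpath. A sketch of the safer setup, letting spring-kafka manage the client version (the version shown is only an illustrative example, not a recommendation from this document):

```xml
<!-- Sketch: declare spring-kafka with an explicit version and let it pull in
     its own compatible kafka-clients; do not also pin kafka-clients 0.9.0.0.
     2.2.8.RELEASE is an illustrative version, pick one matching your broker. -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.8.RELEASE</version>
</dependency>
```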
Troubleshooting steps:
1. Inspecting with Kafka Tool showed that the producer's topic had never been created.
So the producer was pointed at an existing topic instead, but Kafka Tool still showed no message arriving.
2. Change the producer to call
kafkaProducer.send(new ProducerRecord<String, String>("test.topic", message)).get()
Note the get()! It blocks and returns the send result synchronously. This revealed that the send had actually failed with a connection timeout, which confirmed the producer's connection was broken!
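Why get() surfaces the failure can be sketched with a plain java.util.concurrent Future, independent of Kafka (the class and error message below are made up for illustration): a fire-and-forget send leaves the exception buried inside the Future, while a blocking get() rethrows it to the caller.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncSendCheck {
    // Stand-in for KafkaProducer.send(): returns a Future whose failure
    // only becomes visible when someone calls get() on it.
    static Future<String> send(ExecutorService pool) {
        Callable<String> task = () -> {
            throw new IllegalStateException("connection timed out");
        };
        return pool.submit(task);
    }

    public static String probe() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            send(pool);          // fire-and-forget: the failure is silently swallowed
            send(pool).get();    // blocking get(): the failure surfaces here
            return "sent";
        } catch (ExecutionException e) {
            return e.getCause().getMessage();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(probe());
    }
}
```

This is the same reason the .get() call above exposed the connection timeout that the original fire-and-forget send() hid.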
Q2: Producer example
// Create the producer (kafkaProducer is a field on the enclosing class;
// the kafkaProducer* values are injected configuration properties)
public Producer<String, String> producerConfigs() {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "1.1.1.1:9029");
    props.put(ProducerConfig.ACKS_CONFIG, kafkaProducerAcks);
    props.put(ProducerConfig.RETRIES_CONFIG, kafkaProducerRetries);
    props.put(ProducerConfig.LINGER_MS_CONFIG, kafkaProducerLingerMs);
    props.put(ProducerConfig.BATCH_SIZE_CONFIG, kafkaProducerBatchSize);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    this.kafkaProducer = producer;
    return producer;
}

// Send a message, lazily creating the producer on first use
public void sendKafkaMessage(String message) {
    if (kafkaProducer == null) {
        producerConfigs();
    }
    kafkaProducer.send(new ProducerRecord<String, String>("test.topic", message));
}
Q3: Consumer example
@Bean("defaultConsumerFactory")
public DefaultKafkaConsumerFactory<String, String> defaultConsumerFactory() {
    // consumerProperties() declares no checked exception, so no try/catch is
    // needed here (a catch block without a return value would not compile)
    return new DefaultKafkaConsumerFactory<>(consumerProperties());
}

@Bean("batchListenerContainerFactory")
public ConcurrentKafkaListenerContainerFactory<String, String> batchListenerContainerFactory(
        DefaultKafkaConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    // Use the DefaultKafkaConsumerFactory defined above
    factory.setConsumerFactory(consumerFactory);
    // Number of concurrent consumers in the container
    factory.setConcurrency(evplanKafkaListenerConcurrency);
    // Enable batch listening: the listener receives a List of records per poll
    factory.setBatchListener(Boolean.TRUE);
    HIK_LOG.info("Kafka batchListenerContainerFactory build successfully!");
    return factory;
}
/**
 * Build the consumer property map.
 *
 * @return consumer configuration properties
 */
private Map<String, Object> consumerProperties() {
    Map<String, Object> props = new HashMap<>();
    // group.id names the consumer group: consumers sharing a group.id belong to
    // one group and commit offsets against that same group
    props.put(ConsumerConfig.GROUP_ID_CONFIG, KafkaConsumerGroupId);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, KafkaConsumerEnableAutoCommit);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, KafkaConsumerAutoOffsetReset);
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, KafkaConsumerAutoCommitInterval);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, KafkaConsumerSessionTimeoutMsConfig);
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, KafkaConsumerHeartbeatIntervalMsConfig);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // Partition assignment strategy
    props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, RoundRobinAssignor.class.getName());
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, KafkaConsumerMaxPollRecordsConfig);
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "1.1.1.1:9029");
    return props;
}
// Consumer listener
@KafkaListener(topics = "test.topic", containerFactory = "batchListenerContainerFactory")
public void receiveCCTVSubSysQueue(List<ConsumerRecord<?, String>> records) {
    records.forEach(message -> {
        JSONObject jsonObject = JSONObject.parseObject(message.value());
        // CCTV subsystem event messages need a dedicated conversion step
        CCTVAlarmInfoDTO cctvAlarmInfoDTO = JSONObject.toJavaObject(jsonObject, CCTVAlarmInfoDTO.class);
        SubEventInfoDTO subEventInfoDTO = cctvAlarmInfoDTO2subEventInfoDTO(cctvAlarmInfoDTO);
        if (null != subEventInfoDTO) {
            kafkaMessageProcess(subEventInfoDTO);
        }
    });
}
Note:
Because the container factory enables batch listening (setBatchListener(true)), the listener method must accept a List<ConsumerRecord<?, String>> records.
If batch listening is not enabled, the method takes a single ConsumerRecord<?, String> record instead.
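The two listener signatures side by side, as a non-runnable sketch (the factory name singleListenerContainerFactory is hypothetical, standing for any factory that does not call setBatchListener(true)):

```java
// Batch listening enabled: one call per poll, carrying many records
@KafkaListener(topics = "test.topic", containerFactory = "batchListenerContainerFactory")
public void onBatch(List<ConsumerRecord<?, String>> records) { /* ... */ }

// Batch listening disabled: one call per record
@KafkaListener(topics = "test.topic", containerFactory = "singleListenerContainerFactory")
public void onRecord(ConsumerRecord<?, String> record) { /* ... */ }
```

Mismatching the signature and the factory setting is a common cause of deserialization/conversion errors at startup or on the first received message.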