Example Directory Structure and Flow
gw is the control-layer entry point, data is the producer, info is the consumer, and common holds shared services.
A request at the gw layer calls a method in data, which triggers the producer to publish a message; the info module acts as the consumer. If info can receive the message, the integration is confirmed to work.
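For orientation, a plausible Maven multi-module layout (the module names come from the description above; the parent name is an assumption):

kafka-demo            -- hypothetical parent module
├── gw                -- control-layer entry point (calls data over Dubbo)
├── data              -- producer module (Dubbo provider, publishes to Kafka)
├── info              -- consumer module (Kafka listener)
└── common            -- shared services and domain objects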
Producer
Dependencies
<dependencies>
    <dependency>
        <groupId>com.zlz</groupId>
        <artifactId>data_service</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.dubbo</groupId>
        <artifactId>dubbo-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>com.github.mxsm</groupId>
        <artifactId>zkClient</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-framework</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-recipes</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
</dependencies>
Configuration
First, because there are Dubbo calls between this service and gw, some Dubbo configuration is required.
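A minimal sketch of those Dubbo settings (the application name, ZooKeeper address, and port below are assumptions, not taken from the original):

# application.properties -- hypothetical Dubbo settings for the data module
dubbo.application.name=data-service
dubbo.registry.address=zookeeper://192.168.200.130:2181
dubbo.protocol.name=dubbo
dubbo.protocol.port=20880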
Next comes the Kafka configuration:
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@EnableKafka
@Configuration
@PropertySource("/kafka.properties")
public class KafkaConfig {

    @Value("${kafka.boots.server}")
    private String bootsServer;

    // Producer settings: broker list from kafka.properties plus basic batching options.
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootsServer);
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 4096);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Other common settings could also be kept in kafka.properties; for demonstration purposes only the broker address is externalized to the properties file.
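For reference, a minimal kafka.properties matching the @Value placeholder above (the broker addresses mirror the consumer configuration shown later):

# kafka.properties
kafka.boots.server=192.168.200.130:9091,192.168.200.131:9091,192.168.200.132:9091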
Example
Define a method so that any call to it makes the producer send a message:
import javax.annotation.Resource;
import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.apache.dubbo.config.annotation.DubboService;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;

@DubboService
@Slf4j
public class KafkaTestServiceImpl implements IKafkaTestService {

    @Resource
    private KafkaTemplate<String, String> kafkaTemplate;

    // Topic name, overridable via configuration; defaults to "user.info".
    @Value("${user.info.topic:user.info}")
    private String userInfoTopic;

    @Override
    public void sendUserInfo() {
        User user = new User();
        user.setAge(18);
        user.setName("翠花");
        user.setCity("成都");
        String jsonUser = JSON.toJSONString(user);
        // Note: send() is asynchronous; this log only confirms the record was queued.
        kafkaTemplate.send(userInfoTopic, jsonUser);
        log.info("send msg success! msg is {}", jsonUser);
    }
}
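Because send() returns a future, delivery can be confirmed with a callback instead of logging unconditionally. A hedged sketch, assuming spring-kafka 2.x where send() returns a ListenableFuture:

kafkaTemplate.send(userInfoTopic, jsonUser).addCallback(
        // Runs once the broker acknowledges the record.
        result -> log.info("send msg success! msg is {}", jsonUser),
        // Runs if the send ultimately fails.
        ex -> log.error("send msg failed! msg is {}", jsonUser, ex));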
Consumer
Dependencies
The dependencies are identical to the producer's.
Configuration
spring:
  kafka:
    # Kafka broker addresses
    bootstrap-servers: 192.168.200.130:9091,192.168.200.131:9091,192.168.200.132:9091
    consumer:
      group-id: user-info
      auto-offset-reset: earliest
      enable-auto-commit: true
      auto-commit-interval: 100
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
Unlike the producer's Java config class, the consumer relies on Spring Boot's YAML-based auto-configuration.
Example
Listen on the specified topic; if the message was sent successfully and is received by the consumer, it is printed to the log:
import java.util.Optional;
import com.alibaba.fastjson.JSONObject;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class UserInfoHandler {

    @KafkaListener(topics = {"user.info"})
    public void consumer(ConsumerRecord<String, String> consumerRecord) {
        // Guard against records with a null payload (e.g. tombstones).
        Optional<String> kafkaMsg = Optional.ofNullable(consumerRecord.value());
        if (kafkaMsg.isPresent()) {
            User user = JSONObject.parseObject(kafkaMsg.get(), User.class);
            log.info("get user info: {}", user);
        }
    }
}
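Both sides round-trip a User object that the original does not show; a minimal hypothetical POJO consistent with the setters used in the producer:

import lombok.Data;

// Hypothetical User domain object; fields match the setters used above.
@Data
public class User {
    private String name;
    private Integer age;
    private String city;
}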
Test
Trigger consumption by calling the producer's method through an interface (a hypothetical gateway controller is sketched after this list):
- Call the interface.
- Check that the producer sent the message.
- Check that the consumer received the message.
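For completeness, a sketch of a gw-side controller wiring the call over Dubbo (the class name, request path, and return value are assumptions; the original only states that an interface triggers the producer):

import org.apache.dubbo.config.annotation.DubboReference;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class KafkaTestController {

    // Remote Dubbo reference to the producer-side service.
    @DubboReference
    private IKafkaTestService kafkaTestService;

    @GetMapping("/kafka/send")
    public String send() {
        kafkaTestService.sendUserInfo();
        return "ok";
    }
}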