1. Introduction
This note records my own learning and basic use of Kafka. It covers integrating Spring Cloud Bus with Kafka, registering the Kafka service with Eureka, and sending messages through Kafka.
A brief note on my understanding of Spring Cloud Bus:
It is a message bus: all messages between services are routed through Spring Cloud Bus, so this single bus is enough to connect them.
2. POM dependencies
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.2.RELEASE</version>
    <relativePath />
</parent>
<groupId>SpringCloud-Bus-Demo</groupId>
<artifactId>SpringCloud-Bus-Demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>SpringCloud-Bus-Demo</name>
<url>http://maven.apache.org</url>
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
    <!-- Spring Cloud release train version -->
    <spring-cloud.version>Dalston.RELEASE</spring-cloud.version>
    <!-- <springboot.version>1.3.7.RELEASE</springboot.version> -->
</properties>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <!-- <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-dependencies</artifactId>
            <version>${springboot.version}</version> <type>pom</type> <scope>import</scope>
        </dependency> -->
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.retry</groupId>
        <artifactId>spring-retry</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-eureka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-bus-kafka</artifactId>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
3. The application.properties configuration file
spring.application.name=kafka-service-provider
server.port=8080
# Register this Kafka service with Eureka
eureka.client.serviceUrl.defaultZone=http://localhost:1111/eureka/
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.serviceId=kafka-service-provider
#============== kafka ===================
spring.kafka.bootstrap-servers=192.168.126.130:9092
#=============== provider =======================
spring.kafka.producer.retries=0
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
#=============== consumer =======================
spring.kafka.consumer.group-id=test-consumer-group
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Kafka broker addresses for the Spring Cloud Stream Kafka binder
spring.cloud.stream.kafka.binder.brokers=192.168.126.130:9092
spring.cloud.stream.kafka.binder.defaultBrokerPort=9092
# ZooKeeper connection settings for the binder
spring.cloud.stream.kafka.binder.zkNodes=192.168.126.130:2181
spring.cloud.stream.kafka.binder.defaultZkPort=2181
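Outside Spring Boot's auto-configuration, the spring.kafka.producer.* properties above map onto the raw Kafka client configuration keys. A minimal sketch of that mapping (the broker address and values are taken from the configuration above; the class name ProducerProps is illustrative):

```java
import java.util.Properties;

public class ProducerProps {
    // Builds the raw Kafka producer configuration that mirrors the
    // spring.kafka.* properties in application.properties.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.126.130:9092"); // spring.kafka.bootstrap-servers
        props.put("retries", "0");                              // spring.kafka.producer.retries
        props.put("batch.size", "16384");                       // spring.kafka.producer.batch-size
        props.put("buffer.memory", "33554432");                 // spring.kafka.producer.buffer-memory
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("batch.size")); // 16384
    }
}
```

Such a Properties object is what you would pass to a plain KafkaProducer; with Spring Boot, the starter builds it for you from the properties file.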
4. Creating a simple Kafka message consumer and producer
Basic topic configuration:
/**
 * @description Basic Kafka configuration class
 * @author hy
 * @date 2020-05-03
 */
public class KafkaConfig {
    public static final String DEFAULT_TOPIC = "testTopic";
}
The Kafka message consumer:
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import lombok.extern.slf4j.Slf4j;

/**
 * @description Kafka message consumer
 * @author hy
 * @date 2020-05-03
 */
@Component
@Slf4j
public class KafkaConsumer {
    private AtomicInteger count = new AtomicInteger(0);

    @KafkaListener(topics = { KafkaConfig.DEFAULT_TOPIC })
    public void consumer(ConsumerRecord<?, ?> consumerRecord) {
        // Wrap the value so a null payload can be handled safely
        Optional<?> kafkaMessage = Optional.ofNullable(consumerRecord.value());
        log.info("Consumer received record: {}", kafkaMessage);
        if (kafkaMessage.isPresent()) {
            // Unwrap the value from the Optional
            Object msg = kafkaMessage.get();
            log.info("Consumer received message #{}: {}", count.getAndIncrement(), msg);
        }
    }
}
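The Optional.ofNullable check in the listener exists because a record's value can be null (for example, a tombstone record on a compacted topic). The same null-handling pattern, sketched outside Spring (the class name NullSafeValue is illustrative):

```java
import java.util.Optional;

public class NullSafeValue {
    // Mirrors the listener's null check: produces a printable description
    // only when the record value is non-null.
    public static String describe(Object value) {
        Optional<?> maybe = Optional.ofNullable(value);
        return maybe.isPresent() ? "message: " + maybe.get() : "tombstone (null value)";
    }

    public static void main(String[] args) {
        System.out.println(describe("hello")); // message: hello
        System.out.println(describe(null));    // tombstone (null value)
    }
}
```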
The Kafka message producer:
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

import lombok.extern.slf4j.Slf4j;

/**
 * @description Kafka message producer
 * @author hy
 * @date 2020-05-03
 */
@Component
@Slf4j
public class KafkaProvider {
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private Gson gson = new GsonBuilder().create();
    private AtomicInteger count = new AtomicInteger(0);

    public void send(String msg) {
        // Note: gson.toJson on a String wraps it in JSON quotes
        log.info("Producer sending message #{}: {}", count.incrementAndGet(), gson.toJson(msg));
        kafkaTemplate.send(KafkaConfig.DEFAULT_TOPIC, gson.toJson(msg));
    }
}
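One subtlety worth noting: the producer numbers its messages with incrementAndGet() (first log line shows 1) while the consumer uses getAndIncrement() (first log line shows 0), so the two counters appear offset by one. The difference in a nutshell:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    public static void main(String[] args) {
        AtomicInteger producerCount = new AtomicInteger(0);
        AtomicInteger consumerCount = new AtomicInteger(0);

        // incrementAndGet: increments first, then returns the new value
        System.out.println(producerCount.incrementAndGet()); // 1
        // getAndIncrement: returns the old value, then increments
        System.out.println(consumerCount.getAndIncrement()); // 0
    }
}
```

Using the same method on both sides would make the producer and consumer log numbers line up.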
5. Creating a simple entry-point class and testing
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@EnableDiscoveryClient
@SpringBootApplication
public class KafkaApplication {
    @Autowired
    private KafkaProvider kafkaProvider;

    @RequestMapping("/send")
    public void sendMsg(String msg) {
        kafkaProvider.send(msg);
    }

    public static void main(String[] args) {
        SpringApplication.run(KafkaApplication.class, args);
    }
}
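Once everything is running, a message can be sent by calling the /send endpoint, e.g. http://localhost:8080/send?msg=hello. If the message contains spaces or non-ASCII characters, the msg query parameter must be URL-encoded; a small JDK-only sketch (the class name SendUrlDemo is illustrative, host and port come from the configuration above):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class SendUrlDemo {
    // Builds a properly encoded URL for the /send endpoint.
    public static String sendUrl(String msg) {
        try {
            return "http://localhost:8080/send?msg=" + URLEncoder.encode(msg, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sendUrl("hello kafka")); // http://localhost:8080/send?msg=hello+kafka
    }
}
```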
Testing
- First start ZooKeeper on the Linux machine, then start Kafka
- Start the project's Eureka service registry, then start this KafkaApplication
Result
The producer starts sending messages
6. Summary
1. Note that to register this Kafka service with the Eureka service registry, the following properties must be enabled:
eureka.client.serviceUrl.defaultZone=http://localhost:1111/eureka/
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.serviceId=kafka-service-provider
2. Using Kafka with Spring Cloud Bus is straightforward: annotate a method with @KafkaListener(topics = { KafkaConfig.DEFAULT_TOPIC }) to consume messages, and use kafkaTemplate to produce them.
3. Remember to configure the serializer and deserializer classes for the producer and consumer.
The above is purely my personal understanding; please contact me if you find any problems!