(I) Add the dependency
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
(II) Configuration file
spring:
  kafka:
    bootstrap-servers: 192.168.10.130:9092,192.168.10.130:9093,192.168.10.130:9094
    producer:
      # Producer "value" serializer (the custom ObjectSerializer class below)
      value-serializer: org.pc.serializer.ObjectSerializer
    consumer:
      group-id: cluster-group
      # Consumer "value" deserializer (the custom ObjectDeserializer class below)
      value-deserializer: org.pc.deserializer.ObjectDeserializer
(III) Registering a KafkaTemplate
Spring Boot auto-configures a KafkaTemplate bean from the spring.kafka.* properties, so there is no need to register one manually.
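For reference, a manual registration would look roughly like the sketch below. This is only an approximation of what the auto-configuration already does for you (the broker addresses are taken from the configuration above, and the value serializer is the custom ObjectSerializer from this article); you would only need something like it when customizing beyond what the spring.kafka.* properties allow.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        // Same cluster addresses as in the YAML configuration above
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.10.130:9092,192.168.10.130:9093,192.168.10.130:9094");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // The custom value serializer described later in this article
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.pc.serializer.ObjectSerializer");
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```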
(IV) Producing messages with the Kafka producer client
@RestController
public class KafkaController {

    private final KafkaTemplate<String, Object> kafkaTemplate;

    @Autowired
    public KafkaController(KafkaTemplate<String, Object> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @GetMapping("/message/send")
    public String sendMessage(@RequestParam String message) {
        // Send the message to the "cluster-topic" topic on the Kafka cluster
        // (partition 0, message key "message")
        kafkaTemplate.send("cluster-topic", 0, "message", message);
        return message;
    }
}
(V) Consuming messages with the Kafka consumer client
/**
 * Consumer listener.
 * Annotating a method with @KafkaListener and setting its topics attribute
 * subscribes it to those topics, so it receives every message produced to
 * them (i.e. it acts as a consumer client).
 */
@Component
public class ConsumerListener {

    @KafkaListener(topics = "cluster-topic")
    public void consume(ConsumerRecord<String, Object> consumerRecord) {
        System.out.println(consumerRecord.value());
    }
}
(VI) Problems encountered and their solutions
1. When sending a message whose value is an arbitrary object, this error is thrown: "Can't convert value of class org.pc.entity.User to class org.apache.kafka.common.serialization.StringSerializer specified in value.serializer".
Cause: when producing messages, the Kafka Java client serializes values with StringSerializer by default, which can only handle String objects; any other type fails to serialize. The same problem appears on the consumer side when deserializing.
Solution: write a custom serializer for the producer client and a matching deserializer for the consumer client, so that arbitrary (Serializable) objects can be sent and received.
(1) Value serializer for the producer client:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.Map;

import org.apache.kafka.common.serialization.Serializer;

/**
 * Serializes any Serializable object using standard Java serialization.
 * @author 咸鱼
 * @date 2018/10/14 8:43
 */
public class ObjectSerializer implements Serializer<Object> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public byte[] serialize(String topic, Object data) {
        // Write the object through an ObjectOutputStream into an in-memory
        // byte buffer, then return the buffer's contents.
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        try (ObjectOutputStream objectOutputStream = new ObjectOutputStream(outputStream)) {
            objectOutputStream.writeObject(data);
        } catch (IOException e) {
            throw new RuntimeException("Failed to serialize record for topic " + topic, e);
        }
        return outputStream.toByteArray();
    }

    @Override
    public void close() {
    }
}
(2) Value deserializer for the consumer client:
import java.io.ByteArrayInputStream;
import java.io.ObjectInputStream;
import java.util.Map;

import org.apache.kafka.common.serialization.Deserializer;

/**
 * Deserializes bytes written by ObjectSerializer back into an Object.
 * @author 咸鱼
 * @date 2018/10/14 9:13
 */
public class ObjectDeserializer implements Deserializer<Object> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public Object deserialize(String topic, byte[] data) {
        // Reverse of ObjectSerializer: read the object back from the byte array.
        try (ObjectInputStream objectInputStream =
                new ObjectInputStream(new ByteArrayInputStream(data))) {
            return objectInputStream.readObject();
        } catch (Exception e) {
            throw new RuntimeException("Failed to deserialize record from topic " + topic, e);
        }
    }

    @Override
    public void close() {
    }
}
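Stripped of the Kafka Serializer/Deserializer interfaces, the pair above is plain Java object serialization, and the round trip can be checked with the standard library alone. A minimal sketch (the User class here is a stand-in for org.pc.entity.User, which is not shown in the article):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RoundTripDemo {

    // Stand-in for org.pc.entity.User; any Serializable type works.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        User(String name) { this.name = name; }
    }

    // Same logic as ObjectSerializer.serialize
    static byte[] serialize(Object data) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(out)) {
            oos.writeObject(data);
        }
        return out.toByteArray();
    }

    // Same logic as ObjectDeserializer.deserialize
    static Object deserialize(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User original = new User("alice");
        User copy = (User) deserialize(serialize(original));
        System.out.println(copy.name);  // prints "alice"
    }
}
```

Note that both sides must have the serialized class (with a compatible serialVersionUID) on their classpath, which is the main drawback of this approach.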
(3) Update the configuration file
spring:
  kafka:
    bootstrap-servers: 192.168.10.130:9092,192.168.10.130:9093,192.168.10.130:9094
    producer:
      # Producer "value" serializer
      value-serializer: org.pc.serializer.ObjectSerializer
    consumer:
      group-id: cluster-group
      # Consumer "value" deserializer
      value-deserializer: org.pc.deserializer.ObjectDeserializer
(4) Test
Producing a message:
@PostMapping("/user")
public User saveUser(@RequestBody User user) {
    kafkaTemplate.send("object-topic", 0, "message", user);
    return user;
}
Consuming the message:
@KafkaListener(topics = "object-topic")
public void consume(ConsumerRecord<String, Object> consumerRecord) {
    System.out.println(consumerRecord.value());
}
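As an alternative to Java serialization, spring-kafka ships JSON (de)serializers (org.springframework.kafka.support.serializer.JsonSerializer and JsonDeserializer), which avoid coupling producer and consumer to the exact serialized class. A configuration sketch (the trusted-packages entry assumes the entity classes live under org.pc.entity):

```yaml
spring:
  kafka:
    producer:
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    consumer:
      group-id: cluster-group
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        # Whitelist of packages the JsonDeserializer may instantiate
        spring.json.trusted.packages: org.pc.entity
```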
2. Error: javax.management.InstanceAlreadyExistsException: kafka.consumer:type=app-info,id=gx-test-20170629
Cause: when using a ConcurrentMessageListenerContainer with a concurrency greater than 1 and an explicitly configured Kafka client.id, every consumer thread tries to register a JMX MBean under that same client.id, which raises this exception; with a concurrency of 1 the log line does not appear.
Solution: do not configure client.id; Kafka will then generate a distinct id for each consumer thread.
3. Error: The message is 1330537 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
Cause: the producer's max.request.size defaults to 1048576 bytes (1 MB). Raise the producer configuration:
spring.kafka.producer.properties.max.request.size=2097152
4. Error: org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.
Cause: this error comes from the broker, which rejects records larger than its message.max.bytes (or the topic-level max.message.bytes), so that limit must be raised on the broker side; the consumer's fetch limit should then be at least as large:
spring.kafka.consumer.properties.max.partition.fetch.bytes=2097152
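Taken together, sending records of up to about 2 MB means aligning three limits. A sketch (the 2097152 value matches the properties above; the broker line belongs in the broker's server.properties, not in the Spring application configuration):

```properties
# application.properties — producer-side cap on a single request
spring.kafka.producer.properties.max.request.size=2097152
# application.properties — consumer-side per-partition fetch cap
spring.kafka.consumer.properties.max.partition.fetch.bytes=2097152

# broker server.properties — largest record the broker will accept
message.max.bytes=2097152
```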