Sending Custom Message Bodies (Objects and Collections) with Kafka

After setting up a Kafka environment on Windows and verifying it from the command line (for the environment setup, see: http://blog.csdn.net/u014104286/article/details/75040932), the next question is how to use Kafka from code. We start by sending a simple message; once that works, how do we send a custom message class? And what if we want to send a collection? Let's work through these questions one by one.


Preparation:

1. A working, tested Kafka environment, with the ZooKeeper and Kafka services started.

2. A usable Maven project.

3. The Maven dependency for Kafka development:


 
 
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.11 -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.2.0</version>
</dependency>


That completes the preparation.

1. First, let's send a message whose key and value are plain Strings:

The Producer class that sends messages:


 
 
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) throws Exception {
        // Assign topicName to string variable
        String topicName = "newtest001";
        // create instance for properties to access producer configs
        Properties props = new Properties();
        // Assign localhost id
        props.put("bootstrap.servers", "localhost:9092");
        // Set acknowledgements for producer requests.
        props.put("acks", "all");
        // If the request fails, the producer can automatically retry
        props.put("retries", 0);
        // Specify buffer size in config
        props.put("batch.size", 16384);
        // Reduce the number of requests by waiting up to 1 ms before sending
        props.put("linger.ms", 1);
        // buffer.memory controls the total memory available to the producer for buffering.
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        for (int i = 0; i < 10; i++)
            producer.send(new ProducerRecord<String, String>(topicName, Integer.toString(i), Integer.toString(i)));
        System.out.println("Message sent successfully");
        producer.close();
    }
}
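Note that send() is asynchronous: it returns a Future<RecordMetadata>, and the messages above are only guaranteed to be flushed by close(). If you want to block until the broker acknowledges each record, a minimal variant of the send line inside the loop (with an extra import for org.apache.kafka.clients.producer.RecordMetadata) would be:

// Blocking send: get() waits for the broker acknowledgement and
// throws an exception if the send ultimately failed.
RecordMetadata meta = producer.send(
        new ProducerRecord<String, String>(topicName, Integer.toString(i), Integer.toString(i))).get();
System.out.println("acked: partition=" + meta.partition() + ", offset=" + meta.offset());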

The Consumer class that receives messages:


 
 
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) throws Exception {
        // Kafka consumer configuration settings
        String topicName = "newtest001";
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        @SuppressWarnings("resource")
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        // The consumer subscribes to a list of topics here.
        consumer.subscribe(Arrays.asList(topicName));
        // print the topic name
        System.out.println("Subscribed to topic " + topicName);
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                // print the offset, key and value for the consumer records.
                System.out.printf("offset = %d, key = %s, value = %s\n",
                        record.offset(), record.key(), record.value());
        }
    }
}
The code above is adapted from: https://www.w3cschool.cn/apache_kafka/apache_kafka_simple_producer_example.html

Start the Consumer class so it waits for messages, then run the Producer. The Consumer receives the messages:
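On a fresh topic the consumer console prints output along these lines (actual offsets depend on what is already in the topic):

offset = 0, key = 0, value = 0
offset = 1, key = 1, value = 1
...
offset = 9, key = 9, value = 9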


This shows that messages can be sent and received successfully. The messages above are plain strings; what if we need to send a JavaBean such as PerSon?

Looking again at the Producer and Consumer above, among the parameters used to construct them are:

props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

These specify the classes used to serialize and deserialize the message key and value. Opening org.apache.kafka.common.serialization.StringSerializer and StringDeserializer, we can see that they implement org.apache.kafka.common.serialization.Serializer<T> and org.apache.kafka.common.serialization.Deserializer<T> respectively.


So we simply implement the serialization and deserialization interfaces ourselves:

The DecodeingKafka class:


 
 
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import com.ys.test.SpringBoot.zktest.util.BeanUtils;

public class DecodeingKafka implements Deserializer<Object> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public Object deserialize(String topic, byte[] data) {
        return BeanUtils.byte2Obj(data);
    }

    @Override
    public void close() {
    }
}

The EncodeingKafka class:


 
 
import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;
import com.ys.test.SpringBoot.zktest.util.BeanUtils;

public class EncodeingKafka implements Serializer<Object> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public byte[] serialize(String topic, Object data) {
        return BeanUtils.bean2Byte(data);
    }

    /*
     * Invoked when the producer calls close().
     */
    @Override
    public void close() {
        System.out.println("EncodeingKafka is close");
    }
}

Next we need to define how the JavaBean itself is serialized and deserialized; here we use ObjectOutputStream and ObjectInputStream. (More efficient serialization schemes are also worth considering.)

The BeanUtils class:


 
 
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class BeanUtils {

    private BeanUtils() {}

    /**
     * Serialize an object to a byte array.
     *
     * @param obj the object to serialize
     * @return the serialized bytes, or null if serialization failed
     */
    public static byte[] bean2Byte(Object obj) {
        byte[] bb = null;
        try (ByteArrayOutputStream byteArray = new ByteArrayOutputStream();
             ObjectOutputStream outputStream = new ObjectOutputStream(byteArray)) {
            outputStream.writeObject(obj);
            outputStream.flush();
            bb = byteArray.toByteArray();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return bb;
    }

    /**
     * Deserialize a byte array back into an Object.
     *
     * @param bytes the serialized bytes
     * @return the deserialized object, or null if deserialization failed
     */
    public static Object byte2Obj(byte[] bytes) {
        Object readObject = null;
        try (ByteArrayInputStream in = new ByteArrayInputStream(bytes);
             ObjectInputStream inputStream = new ObjectInputStream(in)) {
            readObject = inputStream.readObject();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return readObject;
    }
}
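As mentioned above, Java's built-in object streams are simple but neither compact nor fast. As one alternative, here is a minimal sketch of a JSON-based value serializer using Jackson; the class name JsonKafkaSerializer is hypothetical, and this assumes a jackson-databind dependency, which is not part of the original project:

import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonKafkaSerializer implements Serializer<Object> {

    // ObjectMapper is thread-safe and expensive to create, so reuse one instance.
    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public byte[] serialize(String topic, Object data) {
        try {
            // Jackson only needs getters (no Serializable marker), and the
            // resulting bytes are human-readable JSON.
            return data == null ? null : mapper.writeValueAsBytes(data);
        } catch (Exception e) {
            throw new RuntimeException("JSON serialization failed", e);
        }
    }

    @Override
    public void close() {
    }
}

The trade-off is on the consumer side: JSON deserialization needs to know the target type (e.g. mapper.readValue(data, PerSon.class)), whereas the ObjectInputStream approach used in this post recovers the concrete class from the stream itself.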

PerSon.java:


 
 
import java.io.Serializable;

public class PerSon implements Serializable {

    private static final long serialVersionUID = 1L;
    private long userid;
    private String name;
    private int age;
    private String addr;
    private String eMail;
    private String userRole;
    private IDCard card;

    // set... get...
}
 
 
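PerSon references an IDCard field whose source is not shown in the original; a minimal sketch consistent with the calls made on it later (setCardid, setCardName) and with the printed output might be:

import java.io.Serializable;

// Hypothetical reconstruction: the original post does not show IDCard's source.
// Like PerSon, it must implement Serializable, or ObjectOutputStream will fail.
public class IDCard implements Serializable {

    private static final long serialVersionUID = 1L;
    private long cardid;
    private String cardName;

    // set... get... plus a toString() matching the output shown below
}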

SimpleProducerPerson.java, the message producer:


 
 
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import com.ys.test.SpringBoot.model.IDCard;
import com.ys.test.SpringBoot.model.PerSon;

public class SimpleProducerPerson {
    public static void main(String[] args) throws Exception {
        // Assign topicName to string variable
        String topicName = "mypartition001";
        // create instance for properties to access producer configs
        Properties props = new Properties();
        // Assign localhost id
        props.put("bootstrap.servers", "localhost:9092");
        // Set acknowledgements for producer requests.
        props.put("acks", "all");
        // If the request fails, the producer can automatically retry
        props.put("retries", 0);
        props.put("metadata.fetch.timeout.ms", 30000);
        // Specify buffer size in config
        props.put("batch.size", 16384);
        // Reduce the number of requests by waiting up to 1 ms before sending
        props.put("linger.ms", 1);
        // buffer.memory controls the total memory available to the producer for buffering.
        props.put("buffer.memory", 33554432);
        // The key is still a String; the value uses our custom serializer.
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "com.ys.test.SpringBoot.zktest.encode.EncodeingKafka");
        // A class implementing Partitioner that routes messages to specific partitions by a custom algorithm:
        // props.put("partitioner.class", "com.ys.test.SpringBoot.zktest.util.MyPartition");

        Producer<String, Object> producer = new KafkaProducer<String, Object>(props);
        long startTimes = System.currentTimeMillis();
        System.out.println();

        for (int i = 0; i < 2; i++) {
            final int index = i;
            PerSon perSon = new PerSon();
            perSon.setAge(i);
            perSon.setAddr("My Producer TEST001" + i);
            perSon.setName("MyTest " + i);
            IDCard card = new IDCard();
            card.setCardName("MyTest" + i + "'s idCard");
            card.setCardid(10000000000L);
            perSon.setCard(card);
            List<PerSon> asList = Arrays.asList(perSon, perSon);
            // producer.send(new ProducerRecord<String, Object>(topicName, Integer.toString(i), asList));
            // producer.send(new ProducerRecord<String, Object>(topicName, Integer.toString(i), perSon));
            producer.send(new ProducerRecord<String, Object>(topicName, Integer.toString(i), asList), new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (metadata != null) {
                        System.out.println(index + " sent successfully: " + "checksum: " + metadata.checksum()
                                + " offset: " + metadata.offset() + " partition: " + metadata.partition()
                                + " topic: " + metadata.topic());
                    }
                    if (exception != null) {
                        System.out.println(index + " exception: " + exception.getMessage());
                    }
                }
            });
        }
        producer.close();
    }
}
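The commented-out partitioner.class property above refers to a custom partitioner class. Its source is not shown in the original; a minimal sketch against the Kafka 0.10 Partitioner interface (the hash-by-key routing is illustrative, not taken from the original) might look like:

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class MyPartition implements Partitioner {

    @Override
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Route by key hash (masking the sign bit); null keys go to partition 0.
        return key == null ? 0 : (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    @Override
    public void close() {
    }
}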

SimpleConsumerPerSon.java, the message consumer:


 
 
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumerPerSon {
    public static void main(String[] args) throws Exception {
        String topicName = "mypartition001";
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "partitiontest05");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        // To receive custom objects, specify our own deserializer for the value.
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "com.ys.test.SpringBoot.zktest.encode.DecodeingKafka");
        // For plain String messages the built-in deserializers are enough:
        // props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        @SuppressWarnings("resource")
        KafkaConsumer<String, Object> consumer = new KafkaConsumer<String, Object>(props);
        // The consumer subscribes to a list of topics here.
        consumer.subscribe(Arrays.asList(topicName));
        // print the topic name
        System.out.println("Subscribed to topic " + topicName);
        while (true) {
            ConsumerRecords<String, Object> records = consumer.poll(100);
            for (ConsumerRecord<String, Object> record : records)
                // print the offset, key and value for the consumer records.
                // System.out.printf("offset = %d, key = %s, value = %s\n",
                //         record.offset(), record.key(), record.value().toString());
                System.out.println(record.toString());
        }
    }
}

Both the producer and the consumer above still use StringSerializer/StringDeserializer for the key; only the value serializer and deserializer need to be our own classes.
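Because DecodeingKafka is declared as Deserializer<Object>, record.value() comes back as a plain Object, so the consumer has to cast it before use. A minimal fragment for the consumer loop above (assuming the producer sent List<PerSon> values, that PerSon exposes a getName() getter, and with java.util.List and PerSon imported):

// Cast the deserialized value back to its concrete type before using it.
@SuppressWarnings("unchecked")
List<PerSon> people = (List<PerSon>) record.value();
for (PerSon p : people) {
    System.out.println("received: " + p.getName());
}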

Run the consumer and then the producer and observe the results (we send both a collection and a single object):

Result received when sending a single PerSon object:

ConsumerRecord(topic = mypartition001, partition = 0, offset = 29, CreateTime = 1502457680160, checksum = 3691507046, serialized key size = 1, serialized value size = 391, key = 0, value = PerSon [userid=0, name=MyTest 0, age=0, addr=My Producer TEST0010, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest0's idCard]])
ConsumerRecord(topic = mypartition001, partition = 0, offset = 30, CreateTime = 1502457680175, checksum = 1443537499, serialized key size = 1, serialized value size = 391, key = 1, value = PerSon [userid=0, name=MyTest 1, age=1, addr=My Producer TEST0011, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest1's idCard]])

Result received when sending the asList collection:
ConsumerRecord(topic = mypartition001, partition = 0, offset = 31, CreateTime = 1502457689533, checksum = 3469353517, serialized key size = 1, serialized value size = 524, key = 0, value = [PerSon [userid=0, name=MyTest 0, age=0, addr=My Producer TEST0010, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest0's idCard]], PerSon [userid=0, name=MyTest 0, age=0, addr=My Producer TEST0010, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest0's idCard]]])
ConsumerRecord(topic = mypartition001, partition = 0, offset = 32, CreateTime = 1502457689552, checksum = 1930168239, serialized key size = 1, serialized value size = 524, key = 1, value = [PerSon [userid=0, name=MyTest 1, age=1, addr=My Producer TEST0011, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest1's idCard]], PerSon [userid=0, name=MyTest 1, age=1, addr=My Producer TEST0011, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest1's idCard]]])

With this in place, whether we send a single object or a collection, it is sent and received correctly.
