After setting up a Kafka environment on Windows and verifying it from the command line (for the environment setup, see: http://blog.csdn.net/u014104286/article/details/75040932), the next question is how to use Kafka from code. We will first send a simple message; once that works, how do we send a custom message class? And what if we need to send a collection? Below we work through these questions one by one.
Preparation:
1. A working, tested Kafka environment, with the ZooKeeper and Kafka services started.
2. A working Maven project.
3. The Kafka development dependency:
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.11 -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.10.2.0</version>
</dependency>
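As a side note, if only the Java producer/consumer API is needed, the smaller kafka-clients artifact is an alternative; the full kafka_2.11 artifact additionally pulls in the broker and the Scala runtime (the version below simply mirrors the one above):

```xml
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>
```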
1. First, we send a message whose key and value are both Strings.
The producer class:
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) throws Exception {
        // Topic to send to
        String topicName = "newtest001";
        // Properties instance holding the producer configs
        Properties props = new Properties();
        // Broker address
        props.put("bootstrap.servers", "localhost:9092");
        // Wait for acknowledgement from all in-sync replicas
        props.put("acks", "all");
        // If a request fails, the producer can retry automatically; 0 disables retries
        props.put("retries", 0);
        // Batch size in bytes
        props.put("batch.size", 16384);
        // Small delay so that several records can be batched into one request
        props.put("linger.ms", 1);
        // buffer.memory controls the total memory available to the producer for buffering
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);

        for (int i = 0; i < 10; i++)
            producer.send(new ProducerRecord<String, String>(topicName, Integer.toString(i), Integer.toString(i)));
        System.out.println("Message sent successfully");
        producer.close();
    }
}
The consumer class:
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) throws Exception {
        // Kafka consumer configuration settings
        String topicName = "newtest001";
        Properties props = new Properties();

        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        @SuppressWarnings("resource")
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        // The consumer subscribes to a list of topics here
        consumer.subscribe(Arrays.asList(topicName));
        // Print the topic name
        System.out.println("Subscribed to topic " + topicName);

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                // Print the offset, key and value of each consumer record
                System.out.printf("offset = %d, key = %s, value = %s\n",
                        record.offset(), record.key(), record.value());
        }
    }
}
The code above is adapted from:
https://www.w3cschool.cn/apache_kafka/apache_kafka_simple_producer_example.html
Start the consumer class so it waits for messages, then run the producer. The consumer receives the messages the producer sent, which shows that sending and receiving work. The messages above are plain strings; what do we do if we need to send a JavaBean such as a PerSon object?
Looking at the producer and consumer above, instantiating them requires, among other settings:

props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

These specify the classes that serialize and deserialize the message key and value. Following org.apache.kafka.common.serialization.StringSerializer, we can see that these built-in classes implement the interfaces
org.apache.kafka.common.serialization.Serializer<T> and org.apache.kafka.common.serialization.Deserializer<T>.
To send our own types, we simply implement those serialization and deserialization interfaces ourselves:
The DecodeingKafka class:
import java.util.Map;

import org.apache.kafka.common.serialization.Deserializer;

import com.ys.test.SpringBoot.zktest.util.BeanUtils;

public class DecodeingKafka implements Deserializer<Object> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public Object deserialize(String topic, byte[] data) {
        return BeanUtils.byte2Obj(data);
    }

    @Override
    public void close() {
    }
}
The EncodeingKafka class:
import java.util.Map;

import org.apache.kafka.common.serialization.Serializer;

import com.ys.test.SpringBoot.zktest.util.BeanUtils;

public class EncodeingKafka implements Serializer<Object> {
    @Override
    public void configure(Map configs, boolean isKey) {
    }

    @Override
    public byte[] serialize(String topic, Object data) {
        return BeanUtils.bean2Byte(data);
    }

    /*
     * Invoked when the producer calls close()
     */
    @Override
    public void close() {
        System.out.println("EncodeingKafka is close");
    }
}
Next we need to define how the JavaBean itself is serialized and deserialized; here we use ObjectOutputStream and ObjectInputStream. (More efficient serialization schemes are also worth considering.)
The BeanUtils class:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class BeanUtils {
    private BeanUtils() {}

    /**
     * Serialize an object to a byte array
     *
     * @param obj
     * @return
     */
    public static byte[] bean2Byte(Object obj) {
        byte[] bb = null;
        try (ByteArrayOutputStream byteArray = new ByteArrayOutputStream();
             ObjectOutputStream outputStream = new ObjectOutputStream(byteArray)) {
            outputStream.writeObject(obj);
            outputStream.flush();
            bb = byteArray.toByteArray();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return bb;
    }

    /**
     * Deserialize a byte array back to an Object
     *
     * @param bytes
     * @return
     */
    public static Object byte2Obj(byte[] bytes) {
        Object readObject = null;
        try (ByteArrayInputStream in = new ByteArrayInputStream(bytes);
             ObjectInputStream inputStream = new ObjectInputStream(in)) {
            readObject = inputStream.readObject();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return readObject;
    }
}
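Before wiring these helpers into Kafka, the round trip can be sanity-checked locally without a broker. The sketch below re-implements the same ObjectOutputStream/ObjectInputStream technique in a standalone class (the class and method names here are illustrative, not part of the project above):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Arrays;
import java.util.List;

public class BeanUtilsDemo {

    // Same technique as BeanUtils.bean2Byte: Object -> byte[]
    public static byte[] toBytes(Object obj) {
        try (ByteArrayOutputStream byteArray = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(byteArray)) {
            out.writeObject(obj);
            out.flush();
            return byteArray.toByteArray();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Same technique as BeanUtils.byte2Obj: byte[] -> Object
    public static Object fromBytes(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A single value round-trips unchanged
        System.out.println(fromBytes(toBytes("hello kafka")));
        // So does a List: the list returned by Arrays.asList (and String) are Serializable
        List<String> list = Arrays.asList("a", "b", "c");
        System.out.println(fromBytes(toBytes(list)));
    }
}
```

This is exactly why, later on, both a single PerSon and a List of them can travel through the same value serializer.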
PerSon.java:
import java.io.Serializable;

public class PerSon implements Serializable {

    private static final long serialVersionUID = 1L;

    private long userid;
    private String name;
    private int age;
    private String addr;
    private String eMail;
    private String userRole;
    private IDCard card;

    // getters and setters omitted
}
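One detail to watch: every field type held by the bean, including the nested IDCard, must itself implement Serializable, otherwise writeObject throws NotSerializableException. A minimal standalone illustration (the Card/Person names below are made up for the demo, not the project's classes):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class NestedSerializableDemo {

    // The nested field type must itself be Serializable
    static class Card implements Serializable {
        private static final long serialVersionUID = 1L;
        String cardName = "demo card";
    }

    static class Person implements Serializable {
        private static final long serialVersionUID = 1L;
        String name = "MyTest";
        Card card = new Card();   // serialized together with its owner
    }

    // Write a Person out and read it back, just as the Kafka value would be
    public static Person roundTrip(Person p) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(p);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return (Person) in.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Person back = roundTrip(new Person());
        System.out.println(back.name + " / " + back.card.cardName);
    }
}
```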
SimpleProducerPerson.java, the message producer:
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import com.ys.test.SpringBoot.model.IDCard;
import com.ys.test.SpringBoot.model.PerSon;

public class SimpleProducerPerson {
    public static void main(String[] args) throws Exception {

        // Topic to send to
        String topicName = "mypartition001";
        // Properties instance holding the producer configs
        Properties props = new Properties();
        // Broker address
        props.put("bootstrap.servers", "localhost:9092");
        // Wait for acknowledgement from all in-sync replicas
        props.put("acks", "all");
        // If a request fails, the producer can retry automatically; 0 disables retries
        props.put("retries", 0);
        props.put("metadata.fetch.timeout.ms", 30000);
        // Batch size in bytes
        props.put("batch.size", 16384);
        // Small delay so that several records can be batched into one request
        props.put("linger.ms", 1);
        // buffer.memory controls the total memory available to the producer for buffering
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Use our custom serializer for the value
        props.put("value.serializer", "com.ys.test.SpringBoot.zktest.encode.EncodeingKafka");
        // A class extending Partitioner can route messages to chosen partitions:
        // props.put("partitioner.class", "com.ys.test.SpringBoot.zktest.util.MyPartition");

        Producer<String, Object> producer = new KafkaProducer<String, Object>(props);
        long startTimes = System.currentTimeMillis();
        System.out.println();

        for (int i = 0; i < 2; i++) {
            final int index = i;
            PerSon perSon = new PerSon();
            perSon.setAge(i);
            perSon.setAddr("My Producer TEST001" + i);
            perSon.setName("MyTest " + i);
            IDCard card = new IDCard();
            card.setCardName("MyTest" + i + "'s idCard");
            card.setCardid(10000000000L);
            perSon.setCard(card);

            List<PerSon> asList = Arrays.asList(perSon, perSon);
            // producer.send(new ProducerRecord<String, Object>(topicName, Integer.toString(i), asList));
            // producer.send(new ProducerRecord<String, Object>(topicName, Integer.toString(i), perSon));
            producer.send(new ProducerRecord<String, Object>(topicName, Integer.toString(i), asList),
                    new Callback() {

                        @Override
                        public void onCompletion(RecordMetadata metadata, Exception exception) {
                            if (metadata != null) {
                                System.out.println(index + " sent successfully: "
                                        + "checksum: " + metadata.checksum()
                                        + " offset: " + metadata.offset()
                                        + " partition: " + metadata.partition()
                                        + " topic: " + metadata.topic());
                            }
                            if (exception != null) {
                                System.out.println(index + " error: " + exception.getMessage());
                            }
                        }
                    });
        }
        producer.close();
    }
}
SimpleConsumerPerSon.java, the message consumer:
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumerPerSon {
    public static void main(String[] args) throws Exception {

        String topicName = "mypartition001";
        Properties props = new Properties();

        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "partitiontest05");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");

        // To receive a custom object, specify our own deserializer for the value
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "com.ys.test.SpringBoot.zktest.encode.DecodeingKafka");

        // For plain String messages the built-in deserializers suffice:
        // props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        @SuppressWarnings("resource")
        KafkaConsumer<String, Object> consumer = new KafkaConsumer<String, Object>(props);
        // The consumer subscribes to a list of topics here
        consumer.subscribe(Arrays.asList(topicName));
        // Print the topic name
        System.out.println("Subscribed to topic " + topicName);

        while (true) {
            ConsumerRecords<String, Object> records = consumer.poll(100);
            for (ConsumerRecord<String, Object> record : records)
                // Print the whole record (offset, key and value)
                // System.out.printf("offset = %d, key = %s, value = %s\n",
                //         record.offset(), record.key(), record.value().toString());
                System.out.println(record.toString());
        }
    }
}
In this producer/consumer pair the key is still handled by the String serializer and deserializer, but the value must use our own classes.
Run the consumer and then the producer, and observe the results of sending a single object and of sending a collection.
Result received when sending a single PerSon object:
ConsumerRecord(topic = mypartition001, partition = 0, offset = 29, CreateTime = 1502457680160, checksum = 3691507046, serialized key size = 1, serialized value size = 391, key = 0, value = PerSon [userid=0, name=MyTest 0, age=0, addr=My Producer TEST0010, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest0's idCard]])
ConsumerRecord(topic = mypartition001, partition = 0, offset = 30, CreateTime = 1502457680175, checksum = 1443537499, serialized key size = 1, serialized value size = 391, key = 1, value = PerSon [userid=0, name=MyTest 1, age=1, addr=My Producer TEST0011, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest1's idCard]])
Result received when sending the asList collection:
ConsumerRecord(topic = mypartition001, partition = 0, offset = 31, CreateTime = 1502457689533, checksum = 3469353517, serialized key size = 1, serialized value size = 524, key = 0, value = [PerSon [userid=0, name=MyTest 0, age=0, addr=My Producer TEST0010, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest0's idCard]], PerSon [userid=0, name=MyTest 0, age=0, addr=My Producer TEST0010, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest0's idCard]]])
ConsumerRecord(topic = mypartition001, partition = 0, offset = 32, CreateTime = 1502457689552, checksum = 1930168239, serialized key size = 1, serialized value size = 524, key = 1, value = [PerSon [userid=0, name=MyTest 1, age=1, addr=My Producer TEST0011, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest1's idCard]], PerSon [userid=0, name=MyTest 1, age=1, addr=My Producer TEST0011, eMail=null, userRole=null, card=IDCard [cardid=10000000000, cardName=MyTest1's idCard]]])
So whether we send a single object or a collection, the message is now sent and received correctly.