I. The previous post covered sending and receiving simple String data from a Java program. What about sending and receiving objects? The details follow.
II. To send an object, we first need one, so let's create a class, called SQLData for now. An object to be sent must be serializable:
import java.io.Serializable;

public class SQLData implements Serializable {
    private static final long serialVersionUID = 1L;
    private String time = "";
    private String data = "";

    public String getTime() {
        return time;
    }

    public void setTime(String time) {
        this.time = time;
    }

    public String getData() {
        return data;
    }

    public void setData(String data) {
        this.data = data;
    }

    @Override
    public String toString() {
        return "SQLData [time=" + time + ", data=" + data + "]";
    }
}
III. The Kafka producer side
1. First, a look at the configuration file. The key and value serializers generally default to Kafka's built-in ones:
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
This serialization only handles simple String data; it cannot send objects, so we define a custom serializer instead:
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=com.kafka.EncodeingKafka
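For reference, a complete producer_config.properties along these lines might look as follows; the bootstrap.servers address and the acks setting are assumptions for illustration, not values from this project:

```properties
# broker address (assumed value for illustration)
bootstrap.servers=localhost:9092
# the key stays a plain String; the value uses the custom serializer
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=com.kafka.EncodeingKafka
# wait for the partition leader to acknowledge each record
acks=1
```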
Where does this com.kafka.EncodeingKafka class come from? Kafka provides the Serializer&lt;T&gt; interface; we only need to implement it and override its methods:
import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;

public class EncodeingKafka implements Serializer<Object> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // no configuration needed
    }

    @Override
    public byte[] serialize(String topic, Object data) {
        // the only method that needs a real implementation
        return BeanUtils.beanToByte(data);
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}
The BeanUtils.beanToByte(data) method:
/**
 * Serialize an object to a byte array.
 *
 * @param obj the object to serialize
 * @return the serialized bytes, or null on failure
 */
public static byte[] beanToByte(Object obj) {
    byte[] bb = null;
    try (ByteArrayOutputStream byteArray = new ByteArrayOutputStream();
            ObjectOutputStream outputStream = new ObjectOutputStream(byteArray)) {
        outputStream.writeObject(obj);
        outputStream.flush();
        bb = byteArray.toByteArray();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return bb;
}
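Before wiring beanToByte into Kafka, it is worth checking the serialize/deserialize round trip in isolation. A minimal, self-contained sketch with the bean and both helpers inlined (the class names here are illustrative, not from the project):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class RoundTripDemo {
    // minimal stand-in for the SQLData bean above
    static class SQLData implements Serializable {
        private static final long serialVersionUID = 1L;
        String time;
        String data;
    }

    // same logic as beanToByte, but propagating IOException for brevity
    static byte[] beanToByte(Object obj) throws IOException {
        try (ByteArrayOutputStream byteArray = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(byteArray)) {
            out.writeObject(obj);
            out.flush();
            return byteArray.toByteArray();
        }
    }

    // same logic as byteToObj shown later in this post
    static Object byteToObj(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SQLData original = new SQLData();
        original.time = "time0";
        original.data = "sqldata0";
        // serialize, then deserialize and cast back
        SQLData restored = (SQLData) byteToObj(beanToByte(original));
        System.out.println(restored.time + "\t" + restored.data);
    }
}
```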
2. The producer sends messages:
private final static String TOPIC1 = "appreportdata_700091";

public static void send(String content) {
    Producer<String, Object> producer = null;
    Properties props = null;
    SQLData sqlData = null;
    try {
        props = PropertyUtils.load("producer_config.properties");
        producer = new KafkaProducer<>(props);
        // send one hundred records
        for (int i = 0; i < 100; i++) {
            sqlData = new SQLData();
            sqlData.setTime("time" + i);
            sqlData.setData("sqldata" + i);
            producer.send(new ProducerRecord<String, Object>(TOPIC1, sqlData));
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (producer != null) {
            producer.close();
        }
    }
}
IV. With the producer done, on to the consumer code. Note: the consumer side must define the same SQLData class with an identical fully qualified class name; otherwise deserialization fails and no data is received.
1. The producer serializes, so the consumer must deserialize what it receives. The configuration:
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=com.utils.DecodeingKafka
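A fuller consumer configuration file in the same style might look like this; the broker address and group id are assumed values for illustration:

```properties
# broker address and consumer group (assumed values)
bootstrap.servers=localhost:9092
group.id=sqldata-consumer
# mirror of the producer side: String key, custom object value
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=com.utils.DecodeingKafka
```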
The com.utils.DecodeingKafka class likewise implements Kafka's Deserializer&lt;T&gt; interface:
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

public class DecodeingKafka implements Deserializer<Object> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // no configuration needed
    }

    @Override
    public Object deserialize(String topic, byte[] data) {
        // the only method that needs a real implementation
        return BeanUtils.byteToObj(data);
    }

    @Override
    public void close() {
        // nothing to clean up
    }
}
The BeanUtils.byteToObj(data) method:
/**
 * Deserialize a byte array back into an Object.
 *
 * @param bytes the serialized bytes
 * @return the deserialized object
 */
public static Object byteToObj(byte[] bytes) {
    Object readObject = null;
    try (ByteArrayInputStream in = new ByteArrayInputStream(bytes);
            ObjectInputStream inputStream = new ObjectInputStream(in)) {
        readObject = inputStream.readObject();
    } catch (Exception e) {
        e.printStackTrace();
        // on deserialization failure, terminate so no corrupt data is processed
        System.err.println("Failed to deserialize object!");
        System.exit(0);
    }
    return readObject;
}
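Calling System.exit inside a utility method is a heavy-handed choice; an alternative is to let the exception propagate so the caller decides what to do. The sketch below (class and method names are illustrative) shows how a corrupted payload then surfaces as an exception rather than a silent null:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

public class DeserializeFailureDemo {
    // variant of byteToObj that propagates failures instead of exiting
    static Object byteToObj(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) {
        byte[] corrupt = {1, 2, 3, 4}; // not a valid Java serialization stream
        try {
            byteToObj(corrupt);
            System.out.println("unexpectedly succeeded");
        } catch (Exception e) {
            // ObjectInputStream rejects the bad stream header during construction
            System.out.println("deserialization failed: " + e.getClass().getSimpleName());
        }
    }
}
```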
2. The consumer code follows. The getProperties() method was shown in the previous post and is not repeated here.
private final static String TOPIC = "appreportdata_700091";

public void get() {
    Properties props = null;
    KafkaConsumer<String, Object> consumer = null;
    try {
        props = getProperties();
        consumer = new KafkaConsumer<>(props);
        // subscribe to the topic
        consumer.subscribe(Arrays.asList(TOPIC));
    } catch (Exception e) {
        e.printStackTrace();
    }
    long offset = 0L;
    SQLData sqlData = null;
    String content = null;
    int count = 0;
    while (true) {
        try {
            // poll the Kafka broker, waiting up to 500 ms for new records
            ConsumerRecords<String, Object> records = consumer.poll(500);
            // number of records fetched by this poll
            count = records.count();
            System.err.println(count);
            for (ConsumerRecord<String, Object> record : records) {
                // cast the value back to an SQLData object
                sqlData = (SQLData) record.value();
                content = sqlData.getTime() + "\t" + sqlData.getData();
                // current record offset
                offset = record.offset();
                // business logic on the received data goes here
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
V. Unlike the producer, the consumer stays in a listening loop once started.
That is all for this post; the next one covers Kafka's delivery guarantee mechanisms. I hope it helps.