For an introduction to how Kafka works internally, plenty of material is available online.
Kafka's configuration parameters are documented on the official site: http://kafka.apache.org/0110/documentation/#configuration
Add the client dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.11.0.1</version>
</dependency>
Producer configuration (partial):
private static Properties initProp() {
    Properties prop = new Properties();
    prop.put("bootstrap.servers", "47.98.37.251:9092");
    // key/value serializers
    prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    // acks: 0 (don't wait), 1 (leader only), all (all in-sync replicas)
    prop.put("acks", "all");
    // total bytes of memory the producer may use to buffer records waiting to be sent
    prop.put("buffer.memory", "33554432");
    // compression codec: none, gzip, snappy, lz4
    prop.put("compression.type", "snappy");
    // retries apply only to transient (retriable) errors
    prop.put("retries", "5");
    // max bytes batched per partition before a send
    prop.put("batch.size", "16384");
    // milliseconds to wait for more records before sending a batch
    prop.put("linger.ms", "0");
    return prop;
}
The three ways of sending data with the Kafka producer are described below.
FireAndForget
We send a message to the server and do not care whether it arrives. Most of the time it will, because Kafka is highly available and the producer retries automatically; but with this approach some messages can be lost.
@Slf4j
public class FireAndForgetSender {
    public static void main(String[] args) {
        Properties properties = initProp();
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        IntStream.range(0, 10).forEach(i -> {
            ProducerRecord<String, String> producerRecord =
                    new ProducerRecord<>("FireAndForgetSender", "" + i, "hello" + i);
            // neither the returned Future nor a callback is checked: failures go unnoticed
            producer.send(producerRecord);
            log.info("=========sent key:{}================", i);
        });
        producer.flush();
        producer.close();
    }
}
Synchronous
We send a message; send() returns a Future, and we call get() on it to wait and see whether the send succeeded.
@Slf4j
public class SyncSender {
    public static void main(String[] args) {
        Properties properties = initProp();
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        IntStream.range(0, 10).forEach(i -> {
            ProducerRecord<String, String> producerRecord =
                    new ProducerRecord<>("test", "" + i, "hello" + i);
            Future<RecordMetadata> send = producer.send(producerRecord);
            try {
                // get() blocks until the broker acknowledges (or the send fails)
                RecordMetadata recordMetadata = send.get();
                log.info("=========sent key:{},partition:{},offset:{}================",
                        i, recordMetadata.partition(), recordMetadata.offset());
            } catch (InterruptedException | ExecutionException e) {
                // log inside the catch: on failure recordMetadata would be null
                log.error("send failed, key:{}", i, e);
            }
        });
        producer.flush();
        producer.close();
    }
}
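The blocking behavior above is just the standard java.util.concurrent.Future contract: get() parks the calling thread until a result (or failure) is available, which is why synchronous sends cap throughput at one in-flight acknowledgement per thread. A broker-free sketch of the same pattern, with the delay and metadata string invented purely for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureGetDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Simulates the broker acknowledgement arriving after some delay
        Future<String> ack = pool.submit(() -> {
            Thread.sleep(100);
            return "partition=0,offset=42";
        });
        // get() blocks the calling thread until the "acknowledgement" arrives
        System.out.println(ack.get());
        pool.shutdown();
    }
}
```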
Asynchronous
We call send() with a callback, which is invoked when the producer receives the response from the Kafka broker.
@Slf4j
public class AsyncSender {
    public static void main(String[] args) {
        Properties properties = initProp();
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        IntStream.range(0, 10).forEach(i -> {
            ProducerRecord<String, String> producerRecord =
                    new ProducerRecord<>("test", "hh" + i, "hello" + i);
            producer.send(producerRecord, new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    if (exception == null) {
                        log.info("=========sent key:hh{},partition:{},offset:{}================",
                                i, metadata.partition(), metadata.offset());
                    } else {
                        // don't swallow failures: on error metadata is null and exception is set
                        log.error("send failed, key:hh{}", i, exception);
                    }
                }
            });
        });
        producer.flush();
        producer.close();
    }
}
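Kafka's Callback follows the same completion-hook pattern as the JDK's CompletableFuture: register a handler, and it fires with either a result or an exception once the response is in. A broker-free sketch of that pattern (the metadata string is invented for illustration), which also shows why the exception branch should never be left empty:

```java
import java.util.concurrent.CompletableFuture;

public class CallbackDemo {
    public static void main(String[] args) {
        // Simulates an async send completing on another thread
        CompletableFuture<String> send =
                CompletableFuture.supplyAsync(() -> "partition=0,offset=7");
        // The hook runs once the "broker response" arrives, success or failure
        send.whenComplete((metadata, exception) -> {
            if (exception == null) {
                System.out.println("sent ok, " + metadata);
            } else {
                System.out.println("send failed: " + exception.getMessage());
            }
        }).join(); // wait for the hook itself, not just the send
    }
}
```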
package com.testRpc;

import com.alibaba.fastjson.JSON;
import com.google.common.io.Files;

import java.io.File;

public class FileTxt {
    /**
     * Create a file and write the given contents.
     *
     * @param fileName name of the file to create
     * @param contents content to write
     * @return path of the created file
     * @throws Exception on I/O failure
     */
    public static String createFile(final String fileName, final String contents) throws Exception {
        String filePath = "D:\\ee\\" + fileName;
        File file = new File(filePath);
        File fileParent = file.getParentFile();
        // create the parent directories if they do not exist
        if (!fileParent.exists()) {
            fileParent.mkdirs();
        }
        // Guava's Files.write opens and closes the stream for us
        Files.write(contents.getBytes(), file);
        return filePath;
    }

    /**
     * Create a file and write an object serialized as JSON.
     *
     * @param fileName name of the file to create
     * @param contents object to serialize and write
     * @return path of the created file
     * @throws Exception on I/O failure
     */
    public static String createFile(final String fileName, final Object contents) throws Exception {
        return createFile(fileName, JSON.toJSONString(contents));
    }
}
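For reference, the same create-directories-and-write behavior is available in the JDK itself via java.nio.file, with no Guava dependency. A minimal sketch; the temp-dir base path and the file name in main are illustrative, not from the original class:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FileTxtNio {
    public static String createFile(String fileName, String contents) throws Exception {
        Path path = Paths.get(System.getProperty("java.io.tmpdir"), "ee", fileName);
        // create the parent directories if they do not exist
        Files.createDirectories(path.getParent());
        // Files.write opens and closes the stream for us
        Files.write(path, contents.getBytes());
        return path.toString();
    }

    public static void main(String[] args) throws Exception {
        String p = createFile("demo.txt", "hello kafka");
        System.out.println(Files.readAllLines(Paths.get(p)).get(0));
    }
}
```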