You can set up Kafka by following the official quickstart: http://kafka.apache.org/quickstart
Note: the version used here is kafka_2.12-2.2.0.
The quickstart also ships shell-based producer and consumer examples, and these normally run without any trouble:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
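Before running the console clients, the quickstart also creates the topic. A sketch of that step for this setup (the partition and replication counts are assumptions suitable for a single-broker test; in 2.2.0 kafka-topics.sh accepts --bootstrap-server):

```shell
# Create the "test" topic on the local single-node broker
# (1 partition, replication factor 1 -- minimal single-broker settings)
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
    --replication-factor 1 --partitions 1 --topic test

# Verify the topic exists
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```

These commands require the broker from the quickstart to already be running.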
Next we use Java to simulate the producer and consumer; the code is below. To be clear up front: the Java code itself is fine, yet running it throws an error, and the fix turned out to be painful to track down.
package com.alioo;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Date;
import java.util.Properties;

public class KafkaProducerNew {

    private final KafkaProducer<String, String> producer;

    public final static String TOPIC = "test";

    private KafkaProducerNew() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "mytest1:9092");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producer = new KafkaProducer<String, String>(props);
    }

    public void produce() {
        int messageNo = 1;
        final int COUNT = 10;
        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = String.format("hello KafkaProducer message(%s) %s", new Date(), key);
            try {
                // ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC, 1, null, data);
                ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC, null, null, data);
                producer.send(record);
            } catch (Exception e) {
                e.printStackTrace();
            } catch (Throwable e) {
                System.out.println("Throwable...");
                e.printStackTrace();
            }
            messageNo++;
        }
        producer.close();
    }

    public static void main(String[] args) {
        new KafkaProducerNew().produce();
    }
}
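One thing worth noting about the producer above: send() is asynchronous, so exceptions raised on the producer's I/O thread never reach the try/catch wrapped around the send() call. A sketch of surfacing them with the Callback overload of send() from the same kafka-clients API (the handler body here is illustrative, not part of the original code):

```java
import org.apache.kafka.clients.producer.RecordMetadata;

// Replacement for the fire-and-forget producer.send(record) inside produce():
// the callback runs once the send completes or fails, so async failures
// surface here instead of being silently lost.
producer.send(record, (RecordMetadata metadata, Exception exception) -> {
    if (exception != null) {
        exception.printStackTrace();
    } else {
        System.out.printf("sent to %s-%d @ offset %d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});
```

This fragment requires a reachable broker to actually complete sends; it compiles against kafka-clients 2.2.0, where Callback is a functional interface.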
package com.alioo;

import org.apache.kafka.clients.consumer.*;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class KafkaConsumerNew {

    private Consumer<String, String> consumer;

    private static String group = "group-1";
    private static String TOPIC = "test";

    private KafkaConsumerNew() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mytest1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, group);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // earliest or latest
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"); // auto-commit offsets
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000"); // auto-commit interval
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        consumer = new KafkaConsumer<String, String>(props);
    }

    private void consume() {
        consumer.subscribe(Arrays.asList(TOPIC)); // can subscribe to several topics as a list
        while (true) {
            // poll(long) is deprecated as of 2.0; use the Duration overload
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s \n", record.offset(), record.key(), record.value());
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        new KafkaConsumerNew().consume();
    }
}
Starting the producer KafkaProducerNew fails with the following error:
[ERROR 2019-05-17 16:42:13.269] [kafka-producer-network-thread | producer-1] [org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:237)] [Producer clientId=producer-1] Uncaught error in kafka producer I/O thread:
java.lang.IllegalStateException: No entry found for connection 0
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:339) ~[kafka-clients-2.2.0.jar:?]
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:143) ~[kafka-clients-2.2.0.jar:?]
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:926) ~[kafka-clients-2.2.0.jar:?]
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287) ~[kafka-clients-2.2.0.jar:?]
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:337) ~[kafka-clients-2.2.0.jar:?]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:310) ~[kafka-clients-2.2.0.jar:?]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.2.0.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192]
[DEBUG 2019-05-17 16:42:13.269] [kafka-producer-network-thread | producer-1] [org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:241)] [Producer clientId=producer-1] Beginning shutdown of Kafka producer I/O thread, sending remaining records.
The fix is simple: change one line of configuration in config/server.properties.
Before:
listeners=PLAINTEXT://:9092
After:
listeners=PLAINTEXT://10.3.114.70:9092
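A sketch of an alternative configuration, if the broker needs to stay reachable on all interfaces: keep the wildcard bind in listeners and instead set advertised.listeners to the address clients should use. (This variant is an assumption, not the change applied in this post.)

```properties
# config/server.properties
# Bind on all interfaces, but advertise a resolvable address to clients
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://10.3.114.70:9092
```

Clients bootstrap against the address in advertised.listeners, so it must be resolvable and reachable from every client machine.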
Restart Kafka for the change to take effect (once it is correct you can confirm with netstat):
Before the change (bound to all interfaces):
# netstat -tulnp|grep 9092
tcp6 0 0 :::9092 :::* LISTEN 210349/java
After the change (bound to 10.3.114.70):
# netstat -tulnp|grep 9092
tcp6 0 0 10.3.114.70:9092 :::* LISTEN 210927/java
At this point the Java producer example KafkaProducerNew runs normally.
- The Java consumer example KafkaConsumerNew consumes messages normally.
- The shell consumer bin/kafka-console-consumer.sh also consumes normally.
Note: the key to debugging a problem like this in a Java program is configuring logging properly, so the error messages actually get printed.
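As a sketch of that logging setup, assuming the application routes kafka-clients logs through log4j 1.x (which matches the [ERROR ...] / [DEBUG ...] format shown above; adapt for logback or another slf4j binding as needed):

```properties
# log4j.properties -- hypothetical minimal setup for seeing kafka-clients errors
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%p %d] [%t] [%l] %m%n
# Turn up the Kafka client packages so I/O-thread failures are visible
log4j.logger.org.apache.kafka=DEBUG
```

Without some binding configured, slf4j drops the producer I/O thread's error silently and the program just appears to hang or exit.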