Official documentation: http://kafka.apache.org/documentation/
- Build the Kafka cluster on top of an existing ZooKeeper cluster; for the ZooKeeper side, see: "ZooKeeper: cluster setup steps".
- Create a kafka directory, download kafka_2.12-2.2.1.tgz, upload it there, and extract it, then enter the extracted directory (I have 3 Linux machines and did this on each of them).
- Create a kafkaLogs directory somewhere to hold the log data, e.g. /opt/kafka/kafka_2.12-2.2.1/kafkaLogs.
- Edit the kafka/kafka_2.12-2.2.1/config/server.properties file. Only the annotated lines below need changing; the other tuning parameters are removed here and stay at their defaults for now.
# some of the original comments are trimmed for brevity
############################# Server Basics #############################
# Change 1: the unique broker id, one per machine; my 3 machines use 0, 1, 2
broker.id=0
############################# Socket Server Settings #############################
# listeners = PLAINTEXT://your.host.name:9092
# Change 2: set this to the current machine's IP address
listeners=PLAINTEXT://192.168.1.103:19092
############################# Log Basics #############################
# Change 3: the log directory created earlier
#log.dirs=/tmp/kafka-logs
log.dirs=/opt/kafka/kafka_2.12-2.2.1/kafkaLogs
############################# Log Flush Policy #############################
############################# Log Retention Policy #############################
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# Change 4: the ip:port of the ZooKeeper instance on each of the 3 machines
zookeeper.connect=192.168.1.103:12181,192.168.1.113:12181,192.168.1.114:12181
#zookeeper.connect=localhost:2181
############################# Group Coordinator Settings #############################
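For reference, only the two machine-specific lines differ on the other brokers. A sketch of the overrides for the second machine, assuming broker.id 1 runs on 192.168.1.113 (which id lives on which host is your choice):
# server.properties on the second machine (assumed to be 192.168.1.113)
broker.id=1
listeners=PLAINTEXT://192.168.1.113:19092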
- Start Kafka on each of the 3 machines:
cd /opt/kafka/kafka_2.12-2.2.1 && bin/kafka-server-start.sh -daemon config/server.properties
The -daemon flag runs it as a background daemon; the process shows up in jps, and telnet 192.168.1.103 19092 confirms that the configured Kafka port is listening.
- Inspection: the config file already points at the ZooKeeper ensemble on all 3 machines, so you can list existing topics with
bin/kafka-topics.sh --list --zookeeper 192.168.1.103:12181
(--zookeeper specifies the ZooKeeper address), or with
bin/kafka-topics.sh --list --bootstrap-server 192.168.1.103:19092
(--bootstrap-server specifies the Kafka address). If no topic has been created yet, nothing is listed.
- Inspection: go to ZooKeeper's bin directory /opt/zookeeper/zookeeper-3.4.14/bin and open a client with
./zkCli.sh -server 127.0.0.1:12181
At the [zk: 127.0.0.1:12181(CONNECTED)] prompt, ls /brokers/topics lists all topics, and get /brokers/ids/0 shows the details of the broker with id 0 (the one configured with broker.id=0 above). The same checks can also be done from Java, as in the sketch right below. If all of the above works, the Kafka cluster is running; what follows are the everyday Kafka operations: creating a topic, producing messages, and consuming messages.
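A minimal Java sketch of those checks, using the kafka-clients dependency shown further below (the class name ClusterCheck is just illustrative, and the broker address is assumed from this setup):
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // assumed broker address from this setup; adjust for your cluster
        props.put("bootstrap.servers", "192.168.1.103:19092");
        try (AdminClient admin = AdminClient.create(props)) {
            // same information as kafka-topics.sh --list --bootstrap-server ...
            Set<String> topics = admin.listTopics().names().get();
            System.out.println("topics: " + topics);
            // same information as ls /brokers/ids in zkCli.sh
            admin.describeCluster().nodes().get().forEach(node ->
                System.out.println("broker " + node.id() + " at " + node.host() + ":" + node.port()));
        }
    }
}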
- Create a topic:
bin/kafka-topics.sh --create --zookeeper localhost:12181 --replication-factor 2 --partitions 1 --topic test
(--zookeeper localhost:12181 can be replaced with --bootstrap-server 192.168.1.103:19092; --replication-factor 2 means two replicas, --partitions 1 means one partition)
- Start a console producer on a server to produce messages to the topic (needs its own terminal window):
bin/kafka-console-producer.sh --broker-list 192.168.1.113:19092 --topic test
After starting it, type a message after the > prompt; each line is appended to the topic and can be picked up by consumers.
- Start a console consumer directly on a server to listen for messages (needs its own terminal window):
bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.113:19092 --topic test --from-beginning
(--from-beginning makes it read the topic from the earliest offset instead of only new messages)
- To shut down the Kafka cluster, run on each machine:
cd /opt/kafka/kafka_2.12-2.2.1 && bin/kafka-server-stop.sh
- Creating topics, producing messages, and consuming messages are client-side operations and can also be done from clients in other languages, e.g. Java:
Maven dependency:
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.1</version>
</dependency>
Test class:
import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
public class ProducerTest {
    public static void main(String[] args) {
        // createTopic();
        // produce first: consumer() blocks in an infinite poll loop,
        // so anything placed after it would never run
        produce();
        consumer();
    }
    private static void createTopic() {
        // create a topic through the AdminClient API
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.1.113:19092");
        AdminClient adminClient = AdminClient.create(props);
        ArrayList<NewTopic> topics = new ArrayList<NewTopic>();
        // topic "topic-test" with 1 partition and a replication factor of 2
        NewTopic newTopic = new NewTopic("topic-test", 1, (short) 2);
        topics.add(newTopic);
        CreateTopicsResult result = adminClient.createTopics(topics);
        try {
            // block until the creation either succeeds or fails
            result.all().get();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
    private static void produce() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.1.113:19092");
        // wait for acknowledgement from all in-sync replicas
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<>(props);
        // send 10 messages to the "test" topic, key and value both "0".."9"
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>("test", Integer.toString(i), Integer.toString(i)));
        }
        producer.close();
    }
    public static void consumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.1.113:19092");
        props.put("group.id", "test");
        // commit offsets automatically every second
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        final KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("test"), new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> collection) {
            }
            public void onPartitionsAssigned(Collection<TopicPartition> collection) {
                // reset the offset to the beginning of the assigned partitions
                consumer.seekToBeginning(collection);
            }
        });
        while (true) {
            // poll(long) is deprecated in 2.2.1; pass a Duration instead
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
    }
}
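A note on the consumer: because onPartitionsAssigned calls seekToBeginning, it re-reads the topic from the start on every partition assignment, matching the console consumer's --from-beginning flag; drop the rebalance listener if you only want to resume from the group's last committed offset.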