1. Official Resources
Official documentation
Download link
2. Standalone Mode
2.1. Deployment
2.1.1. Extract the archive
2.1.2. Edit the configuration (config/server.properties)
# Listener port
listeners=PLAINTEXT://localhost:9092
# Log directory
log.dirs=D:\\work\\kafka_2.12-3.8.0\\logs
# Address of the ZooKeeper coordination service
zookeeper.connect=localhost:2181
2.1.3. Start the services
2.1.3.1. Start ZooKeeper
Startup script + configuration file, for example:
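Assuming the Windows scripts bundled with kafka_2.12-3.8.0 and a shell opened in the Kafka root directory, the bundled ZooKeeper (default port 2181) starts roughly like:
bin\windows\zookeeper-server-start.bat config\zookeeper.properties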
2.1.3.2. Start Kafka
Startup script + configuration file, for example:
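Again assuming the bundled Windows scripts, the broker is started with the server.properties edited above:
bin\windows\kafka-server-start.bat config\server.properties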
2.2. Basic Commands
2.2.1. Topic
2.2.1.1. List topics
kafka-topics --bootstrap-server localhost:9092 --list
2.2.1.2. Create a topic
kafka-topics --bootstrap-server localhost:9092 --create --partitions 1 --replication-factor 1 --topic topic11
Note: the replication factor cannot exceed the number of brokers; the partition count is not limited by the broker count.
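To verify the partition and replica assignment of the new topic, the same tool can describe it:
kafka-topics --bootstrap-server localhost:9092 --describe --topic topic11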
2.2.2. Console producer and consumer
Produce: kafka-console-producer --bootstrap-server localhost:9092 --topic topic11
Consume: kafka-console-consumer --bootstrap-server localhost:9092 --topic topic11 --from-beginning
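The console consumer can also join a named consumer group (the group name below is only an example), which makes it easier to compare its offsets with those of the Java consumer further down:
kafka-console-consumer --bootstrap-server localhost:9092 --topic topic11 --group console_group_1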
2.3. Java Kafka
2.3.1. pom
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.8.0</version>
</dependency>
2.3.2. SimpleProducer
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class SimpleProducer {
    public static void main(String[] args) throws InterruptedException {
        Properties prop = new Properties();
        // Kafka broker address
        prop.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Key/value serializers
        prop.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        prop.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // Topic to produce to
        String topic = "topic11";
        // Create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<>(prop);
        // Send messages
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<>(topic, "hello kafka" + i));
            System.out.println("Produced message: " + i);
            Thread.sleep(200);
        }
        producer.close();
    }
}
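producer.send() is asynchronous and returns before the broker acknowledges the record. A minimal sketch of checking the delivery result with a callback (reusing the producer and topic above; the message text is arbitrary):
producer.send(new ProducerRecord<>(topic, "hello kafka with callback"), (metadata, exception) -> {
    if (exception != null) {
        // Delivery failed, e.g. broker unreachable
        exception.printStackTrace();
    } else {
        // Acknowledged: metadata carries the partition and offset assigned to the record
        System.out.println("sent to partition " + metadata.partition() + ", offset " + metadata.offset());
    }
});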
2.3.3. SimpleConsumer
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties prop = new Properties();
        // Kafka broker address
        prop.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Key/value deserializers
        prop.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        prop.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Consumer group
        prop.put(ConsumerConfig.GROUP_ID_CONFIG, "topic11_group_1");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(prop);
        consumer.subscribe(Arrays.asList("topic11"));
        // Poll in a loop and print each record
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n", record.partition(), record.offset(), record.key(), record.value());
            }
        }
    }
}
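Since topic11 was created above with a single partition, every record printed by SimpleConsumer will show partition = 0, with offsets increasing monotonically.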
3. Pseudo-Cluster (multiple brokers on one host)
3.1. Deployment
3.1.1. Planning
zookeeper:localhost:4181
kafka:localhost:9092,localhost:9093,localhost:9094
3.1.2. Instance configuration
3.1.2.1. ZooKeeper
## Data directory
dataDir=D:\\work\\zookeeper\\data
## Client port
clientPort=4181
3.1.2.2. Kafka
Extract the archive once, make two more copies, rename them (kafka_1, kafka_2, kafka_3), then edit server.properties for each instance:
kafka_1
## Broker id
broker.id=1
## Listener address
listeners=PLAINTEXT://localhost:9092
## Log directory
log.dirs=D:\\work\\kafka\\logs\\1
## ZooKeeper connection address
zookeeper.connect=localhost:4181
kafka_2
## Broker id
broker.id=2
## Listener address
listeners=PLAINTEXT://localhost:9093
## Log directory
log.dirs=D:\\work\\kafka\\logs\\2
## ZooKeeper connection address
zookeeper.connect=localhost:4181
kafka_3
## Broker id
broker.id=3
## Listener address
listeners=PLAINTEXT://localhost:9094
## Log directory
log.dirs=D:\\work\\kafka\\logs\\3
## ZooKeeper connection address
zookeeper.connect=localhost:4181
3.1.3. Startup
3.1.3.1. Start ZooKeeper
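The exact command is not recorded here; assuming the configuration above is saved as D:\work\zookeeper\zoo.cfg (a path chosen only for illustration), the ZooKeeper script bundled with Kafka can start it:
bin\windows\zookeeper-server-start.bat D:\work\zookeeper\zoo.cfg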
3.1.3.2. Start Kafka_1, Kafka_2, Kafka_3
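Assuming each renamed copy keeps the standard directory layout, each broker is started from its own directory with its own edited server.properties, for example:
kafka_1> bin\windows\kafka-server-start.bat config\server.properties
kafka_2> bin\windows\kafka-server-start.bat config\server.properties
kafka_3> bin\windows\kafka-server-start.bat config\server.properties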
Signs of a successful startup:
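One way to check (assuming the ZooKeeper shell bundled with Kafka) is to list the broker ids registered in ZooKeeper, which should show 1, 2 and 3:
bin\windows\zookeeper-shell.bat localhost:4181 ls /brokers/ids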
3.2. Java Kafka
Same as standalone mode; typically the only change is bootstrap.servers, which should list all three brokers.
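For example, in SimpleProducer (and with ConsumerConfig in SimpleConsumer) the bootstrap line would become:
prop.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094");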