1. What is Kafka
From the official site: Kafka® is used for building real-time data pipelines and streaming applications. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.
2. Cluster setup prerequisites
JDK
A ZooKeeper cluster (https://blog.csdn.net/qq_32695789/article/details/86435349)
Firewall disabled (important: otherwise the brokers keep throwing connection errors at startup)
Passwordless SSH trust between the servers (for the .ssh setup, see https://blog.csdn.net/qq_32695789/article/details/83477825)
Download the release: http://kafka.apache.org/downloads
3. Installation
1. Unpack the tarball: tar -zxvf xxx.tgz
2. Rename the directory: mv xxx kafka (optional; just a habit of mine)
3. Create a logs folder inside the install directory: mkdir logs
4. Edit the config file kafka/config/server.properties:
broker.id=0 (must be unique; a different value on every broker)
delete.topic.enable=true (allow topics to be deleted)
num.network.threads=3 (number of threads handling network requests)
num.io.threads=8 (number of threads handling disk I/O)
log.dirs=/home/soft/kafka/logs (where Kafka stores its log segments, i.e. the message data; defaults to /tmp, which is risky since /tmp may be wiped)
zookeeper.connect=hadoop1:2181,hadoop2:2181,hadoop3:2181 (ZooKeeper connection string)
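Pulled together, a minimal server.properties for the first broker might look like the sketch below. The hostnames and path are the ones assumed in this article; num.partitions and log.retention.hours are stock defaults from the file shipped with Kafka, shown here for context:

```properties
# Unique per broker: 0 on hadoop1, 1 on hadoop2, 2 on hadoop3
broker.id=0
# Allow kafka-topics.sh --delete to actually delete topics
delete.topic.enable=true
# Thread pools for network requests and disk I/O
num.network.threads=3
num.io.threads=8
# Where Kafka keeps its log segments (the message data); keep it out of /tmp
log.dirs=/home/soft/kafka/logs
# Stock defaults from the shipped server.properties
num.partitions=1
log.retention.hours=168
# ZooKeeper ensemble
zookeeper.connect=hadoop1:2181,hadoop2:2181,hadoop3:2181
```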
5. Copy the install directory to every server.
6. After copying, change broker.id in each server's config file so every broker has a unique value.
7. Start Kafka (remember to start ZooKeeper first):
bin/kafka-server-start.sh config/server.properties
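Steps 5 and 6 above can be scripted. A hedged sketch follows: the hostnames hadoop2/hadoop3 and the /home/soft/kafka path are assumptions carried over from this article, and the scp/ssh loop is commented out so the sed edit can be tried locally on a sample file first:

```shell
#!/bin/sh
# Sketch: distribute the kafka directory and give each node a unique broker.id.
# In a real run you would uncomment this loop (assumes SSH trust is set up):
#
# id=1
# for host in hadoop2 hadoop3; do
#   scp -r /home/soft/kafka "$host":/home/soft/
#   ssh "$host" "sed -i 's/^broker.id=.*/broker.id=$id/' /home/soft/kafka/config/server.properties"
#   id=$((id + 1))
# done

# Local demo of the sed edit on a sample config:
cat > server.properties <<'EOF'
broker.id=0
delete.topic.enable=true
EOF
sed -i 's/^broker.id=.*/broker.id=2/' server.properties
grep '^broker.id' server.properties
```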
4. Basic commands
Start (for a background start, use nohup, or the start script's -daemon flag):
bin/kafka-server-start.sh config/server.properties
Stop:
bin/kafka-server-stop.sh
Create a topic:
bin/kafka-topics.sh --zookeeper hadoop1:2181 --create --replication-factor 3 --partitions 1 --topic first
List topics:
bin/kafka-topics.sh --zookeeper hadoop1:2181 --list
Delete a topic:
bin/kafka-topics.sh --zookeeper hadoop1:2181 --delete --topic second
Produce messages (9092 is Kafka's default port):
bin/kafka-console-producer.sh --broker-list hadoop1:9092 --topic first
Consume messages:
bin/kafka-console-consumer.sh --bootstrap-server hadoop1:9092 --from-beginning --topic first
When testing produce and consume from the console, remember to open two terminal windows.
5. Testing from Java code
1. Add the Maven dependencies:
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.11.0.2</version>
</dependency>
2. Producer code:
import java.util.Properties;
import java.util.Random;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.Test;

@Test
public void producer() {
    Properties p = new Properties();
    // Kafka broker addresses; separate multiple addresses with commas
    p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop1:9092,hadoop2:9092,hadoop3:9092");
    p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    KafkaProducer<String, String> kafkaProducer = new KafkaProducer<String, String>(p);
    String topic = "first";
    try {
        while (true) {
            String msg = "Hello," + new Random().nextInt(100);
            ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, msg);
            kafkaProducer.send(record); // send() is asynchronous; it enqueues the record
            System.out.println("Message sent: " + msg);
            Thread.sleep(10000);
        }
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        kafkaProducer.close();
    }
}
Then watch the messages arrive in the console consumer.
3. Consumer code:
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.junit.Test;

@Test
public void consumer() {
    Properties p = new Properties();
    p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop1:9092,hadoop2:9092,hadoop3:9092");
    p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    p.put(ConsumerConfig.GROUP_ID_CONFIG, "test");
    KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<String, String>(p);
    String topic = "first";
    kafkaConsumer.subscribe(Collections.singletonList(topic)); // subscribe to the topic
    while (true) {
        // poll blocks for up to 100 ms waiting for records
        ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(String.format("topic:%s, offset:%d, message:%s",
                    record.topic(), record.offset(), record.value()));
        }
    }
}
Once started, the consumer prints the incoming records to the console.
Other
My QQ: 806751350
GitHub: https://github.com/linminlm