I. Setting up a ZooKeeper cluster
1. Download ZooKeeper: Apache Download Mirrors
2. Configure under the conf directory: copy zoo_sample.cfg to zoo.cfg, then edit zoo.cfg:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/mnt/tools/zookeeper/apache-zookeeper-3.5.5/dataDir/
clientPort=2181
# without this setting, the admin server takes port 8080 on startup
admin.serverPort=8888
server.1=0.0.0.0:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
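Each node must also write its own id into a myid file under dataDir, matching its server.N entry in zoo.cfg; this step is assumed here, with the path following the dataDir set above:

```shell
# on node1; write 2 on node2 and 3 on node3
echo 1 > /mnt/tools/zookeeper/apache-zookeeper-3.5.5/dataDir/myid
```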
3. Start each node:
bin/zkServer.sh start
4. Check each node's status (one node should report Mode: leader, the other two Mode: follower):
bin/zkServer.sh status
II. Setting up a Kafka cluster
1. Download: Apache Kafka
2. Edit config/server.properties and add the following, replacing the IP with each node's own address:
listeners=SASL_PLAINTEXT://10.7.2.201:9092
advertised.listeners=SASL_PLAINTEXT://10.7.2.201:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
#authorizer.class.name=kafka.security.authorizer.AclAuthorizer (use this class instead on Kafka 3.x)
allow.everyone.if.no.acl.found=true
zookeeper.connect=node1:2181,node2:2181,node3:2181
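server.properties must also give each broker a unique broker.id, and a log.dirs location is usually set explicitly as well. These per-node lines are assumed here, with illustrative values for node1:

```
broker.id=1
log.dirs=/data/test_kafka/kafka-logs
```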
3. Under config, create the server-side and client-side JAAS credential files. In the server file, each user_<name>="<password>" entry defines an account that clients may authenticate as:
(1)kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-2019"
user_admin="admin-2019"
user_producer="prod-2019"
user_consumer="cons-2019";
};
(2) kafka_client_jaas.conf — used by the command-line tools
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-2019";
};
4. Point kafka-server-start.sh at the server JAAS file: in that script, extend export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G by appending (the trailing quote closes the string):
-Djava.security.auth.login.config=/data/test_kafka/kafka_2.13-2.7.0/config/kafka_server_jaas.conf"
5. Point kafka-console-producer.sh and kafka-console-consumer.sh at the client JAAS file: in those scripts, extend export KAFKA_HEAP_OPTS="-Xmx512M by appending:
-Djava.security.auth.login.config=/data/test_kafka/kafka_2.13-2.7.0/config/kafka_client_jaas.conf"
6. Start each node:
bin/kafka-server-start.sh -daemon config/server.properties
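To verify that all brokers registered, their ids can be listed from ZooKeeper with zkCli.sh (a quick check, assuming the ZooKeeper install from part I):

```shell
bin/zkCli.sh -server node1:2181 ls /brokers/ids
```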
7. Connect a console producer:
./kafka-console-producer.sh --broker-list 192.168.50.208:9092 --topic test --producer-property security.protocol=SASL_PLAINTEXT --producer-property sasl.mechanism=PLAIN
8. Connect a console consumer:
./kafka-console-consumer.sh --bootstrap-server 192.168.50.208:9092 --topic test --from-beginning --consumer-property security.protocol=SASL_PLAINTEXT --consumer-property sasl.mechanism=PLAIN
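Alternatively, instead of editing the console scripts, the credentials can be passed inline through the sasl.jaas.config client property (available since Kafka 0.10.2); a sketch for the producer:

```shell
./kafka-console-producer.sh --broker-list 192.168.50.208:9092 --topic test \
  --producer-property security.protocol=SASL_PLAINTEXT \
  --producer-property sasl.mechanism=PLAIN \
  --producer-property 'sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-2019";'
```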
III. Using Kafka from Spring Boot
1. Add the dependency:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
2. Configure application.yml:
spring:
  kafka:
    bootstrap-servers: 192.168.50.208:9092
    listener:
      # to consume records in batches, set: type: batch (the listener method must then take a List)
      concurrency: 10 # number of listener container threads
      poll-timeout: 1500 # ms that poll() blocks when no records are available; records per poll are capped by max.poll.records
    template:
      default-topic: konne_test
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      properties:
        sasl.mechanism: PLAIN
        security.protocol: SASL_PLAINTEXT
    consumer:
      group-id: group-2
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      auto-offset-reset: latest
      enable-auto-commit: true
      properties:
        sasl.mechanism: PLAIN
        security.protocol: SASL_PLAINTEXT
3. Create kafka_client_jaas.conf under src/main/resources:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-2019";
};
4. In the startup class, point the JVM at the JAAS file before Spring initializes:
@SpringBootApplication
public class ProbeApplication {

    // register the JAAS file as a system property before any Kafka client is created
    static {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        System.setProperty("java.security.auth.login.config",
                loader.getResource("").getPath() + File.separator + "kafka_client_jaas.conf");
    }

    public static void main(String[] args) {
        SpringApplication.run(ProbeApplication.class, args);
    }
}
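Note that loader.getResource("").getPath() only yields a usable filesystem path when the application runs from exploded classes; packaged as a fat jar, the resource is not a plain file. An alternative that avoids the static block entirely is to pass the JAAS settings through spring.kafka.properties in application.yml (same credentials assumed):

```yaml
spring:
  kafka:
    properties:
      sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-2019";
```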
5. Producer:
@Autowired
private KafkaTemplate<String, String> template;

// send a message to the default topic
@GetMapping("/kafka/normal/{message}")
public void sendMessage1(@PathVariable("message") String message) {
    ListenableFuture<SendResult<String, String>> future = this.template.sendDefault(message);
    future.addCallback(
            success -> LOG.info("KafkaMessageProducer sent message successfully: " + message),
            fail -> LOG.error("KafkaMessageProducer failed to send message: " + message));
}
6. Consumer:
@Component
public class Receiver {

    @KafkaListener(topics = "probe2")
    public void receiveMessage(ConsumerRecord<String, String> record) {
        System.out.println("consumer received message: key = " + record.key() + ", value = " + record.value());
        // TODO: business logic goes here, e.g. persisting to the database
    }
}
References:
SASL authentication for ZooKeeper and Kafka in production - Alibaba Cloud Developer Community
Configuring username/password access for Kafka - CSDN blog
Kafka password configuration:
Configuring SSL authentication for a kafka+zookeeper cluster - 俗话曰's blog