8. Standalone Kafka deployment with SASL/PLAINTEXT authentication, run as a systemd service

8.1 Standalone ZooKeeper deployment

8.1.1 Deploy the installation package

Upload apache-zookeeper-3.8.0-bin.tar.gz to /opt/

tar -xzf apache-zookeeper-3.8.0-bin.tar.gz
8.1.2 Configuration changes

Upload zk-server.jaas.conf and zoo.cfg to /opt/apache-zookeeper-3.8.0-bin/conf/

[root@Centos7-Mode-V7 conf]# cat zk-server.jaas.conf
Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="admin"
        password="zookeeper-admin-pwd"
        user_kafka="kafka-zookeeper-pwd";
};
[root@Centos7-Mode-V7 conf]# cat zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
# Enable the SASL authentication provider (note: the suffix is the digit 1, not the letter l)
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
# Require clients to authenticate via SASL
requireClientAuthScheme=sasl
# Enable SASL on the client side
zookeeper.sasl.client=true
jaasLoginRenew=3600000
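If you prefer to script this change, here is a minimal sketch that appends the four SASL settings to zoo.cfg only when they are missing. The function name and the path in the usage line are illustrative assumptions, not part of the original steps:

```shell
# Sketch: idempotently append the SASL settings above to a zoo.cfg file.
# add_zk_sasl_settings is a hypothetical helper; pass your zoo.cfg path as $1.
add_zk_sasl_settings() {
  conf="$1"
  for line in \
    'authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider' \
    'requireClientAuthScheme=sasl' \
    'zookeeper.sasl.client=true' \
    'jaasLoginRenew=3600000'
  do
    # -x matches the whole line, -F disables regex interpretation,
    # so a setting is only appended if it is not already present verbatim.
    grep -qxF "$line" "$conf" || printf '%s\n' "$line" >> "$conf"
  done
}

# Usage: add_zk_sasl_settings /opt/apache-zookeeper-3.8.0-bin/conf/zoo.cfg
```

Running it twice leaves the file unchanged, so it is safe to re-run during provisioning.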

Upload zookeeper.service to /usr/lib/systemd/system/

[root@Centos7-Mode-V7 conf]# cat /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=zookeeper
After=network.target

[Service]
User=root
Group=root
ExecStart=/opt/apache-zookeeper-3.8.0-bin/bin/zkServer.sh start
ExecStop=/opt/apache-zookeeper-3.8.0-bin/bin/zkServer.sh stop
PermissionsStartOnly=true
Type=forking

[Install]
WantedBy=multi-user.target

Modify zkEnv.sh under /opt/apache-zookeeper-3.8.0-bin/bin and add:

export SERVER_JVMFLAGS=" -Djava.security.auth.login.config=/opt/apache-zookeeper-3.8.0-bin/conf/zk-server.jaas.conf"
8.1.3 Start ZooKeeper
systemctl daemon-reload
systemctl enable zookeeper.service
systemctl start zookeeper.service

8.2 Standalone Kafka deployment

8.2.1 Deploy the installation package

8.2.1.1 Upload kafka_2.13-3.4.0.tgz to /opt/

tar -xzf kafka_2.13-3.4.0.tgz
8.2.2 Configuration changes

8.2.2.1 Upload kafka-server.jaas.conf to /opt/kafka_2.13-3.4.0/config

[root@Centos7-Mode-V7 config]# cat kafka-server.jaas.conf
KafkaServer {
       org.apache.kafka.common.security.plain.PlainLoginModule required
       username="admin"
       password="kafka-admin-pwd"
       user_admin="kafka-admin-pwd"
       user_kafka_client="kafka-server-pwd";
};
Client {
       org.apache.kafka.common.security.plain.PlainLoginModule required
       username="kafka"
       password="kafka-zookeeper-pwd";
};
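A client connecting to this broker needs matching SASL settings. Below is a minimal sketch of a client properties file for the kafka_client user defined in the KafkaServer section above; the file name client.properties and its location are your choice, not mandated by anything above:

```properties
# Hypothetical client.properties for the kafka_client user defined in kafka-server.jaas.conf
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="kafka_client" \
    password="kafka-server-pwd";
```

The stock CLI tools accept it via --command-config, e.g. kafka-topics.sh --bootstrap-server <local-ip>:9092 --list --command-config client.properties.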

8.2.2.2 To change the password of the Kafka admin user, edit both of the following values in the file (they must stay identical):

       password="kafka-admin-pwd"
       user_admin="kafka-admin-pwd"

8.2.2.3 Upload kafka.service to /usr/lib/systemd/system/

[root@Centos7-Mode-V7 config]# cat /usr/lib/systemd/system/kafka.service
[Unit]
Description=kafka
After=network.target
Documentation=http://kafka.apache.org/

[Service]
User=root
Group=root
ExecStart=/opt/kafka_2.13-3.4.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.13-3.4.0/config/server.properties
ExecStop=/opt/kafka_2.13-3.4.0/bin/kafka-server-stop.sh
PermissionsStartOnly=true
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false

[Install]
WantedBy=multi-user.target

Add or modify the following in /opt/kafka_2.13-3.4.0/config/server.properties

# Modify the following
log.dirs=/data/kafka-logs
listeners=SASL_PLAINTEXT://<local-ip>:9092
advertised.listeners=SASL_PLAINTEXT://<local-ip>:9092
# Add the following
# Security protocol used for inter-broker communication. Valid values: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=true
# The user in super.users must match the username in the KafkaServer section of kafka-server.jaas.conf
super.users=User:admin
# Have Kafka set ZooKeeper ACLs on the znodes it creates
zookeeper.set.acl=true
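As a sanity check before restarting the broker, a small sketch that reports any of the keys above missing from server.properties. The helper name check_sasl_keys is hypothetical; the key list simply mirrors the settings listed above:

```shell
# Sketch: report any SASL-related key missing from a server.properties file.
# check_sasl_keys is a hypothetical helper; returns non-zero if any key is absent.
check_sasl_keys() {
  conf="$1"; missing=0
  for key in listeners advertised.listeners security.inter.broker.protocol \
             sasl.enabled.mechanisms sasl.mechanism.inter.broker.protocol \
             authorizer.class.name allow.everyone.if.no.acl.found \
             super.users zookeeper.set.acl
  do
    # anchored match: the key must start the line and be followed by '='
    grep -q "^$key=" "$conf" || { echo "missing: $key"; missing=1; }
  done
  return "$missing"
}

# Usage: check_sasl_keys /opt/kafka_2.13-3.4.0/config/server.properties
```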

Add the following two lines to /opt/kafka_2.13-3.4.0/bin/kafka-server-start.sh, immediately above the line exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"

dir="$(dirname $0)"/../config
export KAFKA_OPTS="-Djava.security.auth.login.config=$dir/kafka-server.jaas.conf"
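If you automate that edit, a minimal sketch (GNU sed assumed; the helper name and the absolute paths in the usage comment are illustrative) that inserts the export just above the exec line and skips files already patched:

```shell
# Sketch: insert the KAFKA_OPTS export above the exec line of a start script.
# patch_start_script is a hypothetical helper; assumes GNU sed's one-line 'i' form.
patch_start_script() {
  script="$1"; jaas="$2"
  # do nothing if the script already sets the JAAS login config
  if grep -q 'java.security.auth.login.config' "$script"; then
    return 0
  fi
  sed -i "/^exec .*kafka-run-class.sh/i export KAFKA_OPTS=\"-Djava.security.auth.login.config=$jaas\"" "$script"
}

# Usage:
# patch_start_script /opt/kafka_2.13-3.4.0/bin/kafka-server-start.sh \
#   /opt/kafka_2.13-3.4.0/config/kafka-server.jaas.conf
```

The same approach applies to zookeeper-server-start.sh in section 8.3, since its exec line also invokes kafka-run-class.sh.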
8.2.3 Start Kafka
systemctl daemon-reload
systemctl enable kafka.service
systemctl start kafka.service

8.3 Using the ZooKeeper bundled with Kafka

8.3.1 Deploy the installation package

Upload kafka_2.13-3.4.0.tgz to /opt/

tar -xzf kafka_2.13-3.4.0.tgz
8.3.2 Configuration changes

Upload zk-server.jaas.conf to /opt/kafka_2.13-3.4.0/config

[root@Centos7-Mode-V7 config]# cat zk-server.jaas.conf
Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="admin"
        password="zookeeper-admin-pwd"
        user_kafka="kafka-zookeeper-pwd";
};

Upload zookeeper.service to /usr/lib/systemd/system/

[root@Centos7-Mode-V7 config]# cat /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=zookeeper
After=network.target

[Service]
User=root
Group=root
ExecStart=/opt/kafka_2.13-3.4.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.13-3.4.0/config/zookeeper.properties
ExecStop=/opt/kafka_2.13-3.4.0/bin/zookeeper-server-stop.sh
PermissionsStartOnly=true
Type=forking

[Install]
WantedBy=multi-user.target

Add the following to /opt/kafka_2.13-3.4.0/config/zookeeper.properties


# Require SASL authentication for client sessions
sessionRequireClientSASLAuth=true
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
zookeeper.sasl.client=true

Add the following two lines to /opt/kafka_2.13-3.4.0/bin/zookeeper-server-start.sh, immediately above the line exec $base_dir/kafka-run-class.sh $EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain "$@"

dir="$(dirname $0)"/../config
export KAFKA_OPTS="-Djava.security.auth.login.config=$dir/zk-server.jaas.conf"
8.3.3 Start ZooKeeper
systemctl daemon-reload
systemctl enable zookeeper.service
systemctl start zookeeper.service
9. Further reading:

Kafka with SASL/PLAINTEXT authentication and ACLs
https://blog.csdn.net/qq_41581031/article/details/125648498
Kafka fails to start with a Cluster ID mismatch
https://blog.csdn.net/Xu_XiaoXiao_Ji/article/details/129694477
https://blog.csdn.net/z1094219402/article/details/107968102
Security-authentication configuration for ZooKeeper & Kafka
https://blog.csdn.net/lhjlhj123123/article/details/126257810
