Kafka Cluster Security Authentication with SASL/SCRAM

Kafka's SASL authentication supports several mechanisms, mainly PLAIN, GSSAPI (Kerberos), and SCRAM. This post documents how to set up SASL/SCRAM authentication for a Kafka cluster over the SASL_PLAINTEXT security protocol.

# Edit config/server-sasl.properties

# node01
# Security-authenticated listeners
zookeeper.connect=node01:2181,node02:2181
listeners = SASL_PLAINTEXT://node01:9092
advertised.listeners=SASL_PLAINTEXT://node01:9092
security.inter.broker.protocol=SASL_PLAINTEXT  
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Resources without ACLs are accessible to all users
allow.everyone.if.no.acl.found=true
# Super users
super.users=User:admin
#listener.name.sasl_plaintext.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
#    username="admin" \
#    password="admin";

# node02
# Security-authenticated listeners
zookeeper.connect=node01:2181,node02:2181
listeners = SASL_PLAINTEXT://node02:9092
advertised.listeners=SASL_PLAINTEXT://node02:9092
security.inter.broker.protocol=SASL_PLAINTEXT  
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Resources without ACLs are accessible to all users
allow.everyone.if.no.acl.found=true
# Super users
super.users=User:admin
#listener.name.sasl_plaintext.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
#    username="admin" \
#    password="admin";
# Create config/kafka_server_jaas.conf on node01 and node02
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin";
};

Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafka"
    password="kafka";
};
# Edit bin/kafka-server-start.sh on node01 and node02 so the broker JVM loads the JAAS file
export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G -Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
# Create config/admin.conf on node01 and node02
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin";

# Create config/testuc.conf on node01 and node02
# Client authentication config
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="testuc" password="123456";

# Create config/testup.conf on node01 and node02
# Client authentication config
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="testup" password="123456";
# List all topics
bin/kafka-topics.sh --list  --bootstrap-server  node01:9092  --command-config config/admin.conf

# Create a topic
bin/kafka-topics.sh --create  --bootstrap-server  node01:9092 --replication-factor 1 --partitions 1 --topic testtopic --command-config config/admin.conf
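
For application code, the same operations can be done with Kafka's Java AdminClient authenticated as the admin user. A minimal sketch (class name is mine; the broker address, credentials, and topic follow the examples above):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092");
        // SCRAM authentication, mirroring config/admin.conf
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"admin\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Equivalent to kafka-topics.sh --list
            System.out.println(admin.listTopics().names().get());
            // Equivalent to kafka-topics.sh --create: 1 partition, replication factor 1
            admin.createTopics(Collections.singleton(new NewTopic("testtopic", 1, (short) 1)))
                 .all().get();
        }
    }
}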


# Add a user: create user testup with password 123456
bin/kafka-configs.sh  --zookeeper  node01:2181 --alter --add-config 'SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name testup

# Update a user: change testu's password to 123123
bin/kafka-configs.sh --zookeeper  node01:2181 --alter --add-config 'SCRAM-SHA-512=[password=123123]' --entity-type users --entity-name testu

# View credentials
bin/kafka-configs.sh --zookeeper node01:2181 --describe --entity-type users --entity-name testu

# Delete credentials
bin/kafka-configs.sh --zookeeper node01:2181 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name testu
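
These kafka-configs.sh calls write the SCRAM credentials into ZooKeeper. If the cluster runs Kafka 2.7 or newer, the same credentials can also be managed through the broker with the Java AdminClient; a sketch, assuming the admin connection settings shown earlier:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

public class CreateScramUserExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"admin\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Upsert SCRAM-SHA-512 credentials for user testup (4096 iterations)
            ScramCredentialInfo info = new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_512, 4096);
            admin.alterUserScramCredentials(Collections.singletonList(
                    new UserScramCredentialUpsertion("testup", info, "123456")))
                 .all().get();
            // Describe the stored credentials (passwords are never returned)
            System.out.println(admin.describeUserScramCredentials().all().get());
        }
    }
}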


# Read permission: grant consumer access on testtopic to user testuc
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=node01:2181 --add --allow-principal User:"testuc" --consumer --topic 'testtopic' --group '*'

# Write permission: grant producer access on testtopic to user testup
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=node01:2181 --add --allow-principal User:"testup" --producer --topic 'testtopic'

# List ACLs
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=node01:2181 --list
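
The equivalent grants can also be made programmatically through the Java AdminClient (a sketch; the principals, topic, and wildcard group mirror the kafka-acls.sh commands above):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantAclExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"admin\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // testup may write to testtopic
            AclBinding write = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "testtopic", PatternType.LITERAL),
                    new AccessControlEntry("User:testup", "*", AclOperation.WRITE, AclPermissionType.ALLOW));
            // testuc may read from testtopic and from any consumer group
            AclBinding read = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "testtopic", PatternType.LITERAL),
                    new AccessControlEntry("User:testuc", "*", AclOperation.READ, AclPermissionType.ALLOW));
            AclBinding group = new AclBinding(
                    new ResourcePattern(ResourceType.GROUP, "*", PatternType.LITERAL),
                    new AccessControlEntry("User:testuc", "*", AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Arrays.asList(write, read, group)).all().get();

            // List all ACLs, equivalent to kafka-acls.sh --list
            System.out.println(admin.describeAcls(AclBindingFilter.ANY).values().get());
        }
    }
}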


# Client authentication config (config/testup.conf, shown again for reference)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="testup" password="123456";


# Producer (authenticated as testup)
bin/kafka-console-producer.sh --broker-list node01:9092 --topic testtopic --producer.config config/testup.conf

# Consumer (authenticated as testuc)
bin/kafka-console-consumer.sh --bootstrap-server node01:9092 --topic testtopic --consumer.config config/testuc.conf
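
The console clients above correspond to the following Java producer and consumer, authenticating as testup and testuc respectively (a minimal sketch using the users, topic, and group from this post):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ScramClientExample {
    // Common SASL/SCRAM settings for a given user, matching the .conf files above
    static Properties scramProps(String user, String password) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"" + user + "\" password=\"" + password + "\";");
        return props;
    }

    public static void main(String[] args) {
        // Producer authenticated as testup (has WRITE on testtopic)
        Properties producerProps = scramProps("testup", "123456");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("testtopic", "key", "hello from testup"));
        }

        // Consumer authenticated as testuc (has READ on testtopic and the group)
        Properties consumerProps = scramProps("testuc", "123456");
        consumerProps.put("group.id", "test-consumer-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singleton("testtopic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }
}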


# List all consumer group ids
bin/kafka-consumer-groups.sh --bootstrap-server node01:9092 --list  --command-config config/admin.conf

# View offsets for a consumer group
bin/kafka-consumer-groups.sh --bootstrap-server node01:9092 --describe --group  test-consumer-group --command-config config/admin.conf
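
Group listings and committed offsets can also be fetched from code; a sketch with the Java AdminClient and the admin credentials used above:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GroupOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "node01:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"admin\";");

        try (AdminClient admin = AdminClient.create(props)) {
            // Equivalent to: kafka-consumer-groups.sh --list
            System.out.println(admin.listConsumerGroups().all().get());

            // Equivalent to: kafka-consumer-groups.sh --describe --group test-consumer-group
            Map<TopicPartition, OffsetAndMetadata> offsets =
                    admin.listConsumerGroupOffsets("test-consumer-group")
                         .partitionsToOffsetAndMetadata().get();
            offsets.forEach((tp, om) ->
                    System.out.println(tp + " committed offset = " + om.offset()));
        }
    }
}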


# Inspect message contents (message tracing)
bin/kafka-run-class.sh kafka.tools.DumpLogSegments  --files /webapps/kafka/logs/mytest-0/00000000000000000000.log --print-data-log
# Note
# Logstash/Beats support for SASL/SCRAM is beta only
https://www.elastic.co/guide/en/beats/journalbeat/master/kafka-output.html#_sasl_mechanism
sasl.mechanism
This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.

The SASL mechanism to use when connecting to Kafka. It can be one of:

PLAIN for SASL/PLAIN.
SCRAM-SHA-256 for SCRAM-SHA-256.
SCRAM-SHA-512 for SCRAM-SHA-512.
If sasl.mechanism is not set, PLAIN is used if username and password are provided. Otherwise, SASL authentication is disabled.

To use GSSAPI mechanism to authenticate with Kerberos, you must leave this field empty, and use the kerberos options.

# Filebeat supports SCRAM-SHA-512 only on the master branch (at the time of writing)
https://github.com/elastic/beats/pull/12867
https://github.com/elastic/beats/issues/8387
https://github.com/elastic/beats/issues/16723

# SCRAM specification (RFC 5802)
https://tools.ietf.org/html/rfc5802#section-9