SASL_SSL authentication with JKS keystores

Configure server.properties

Add the following to config/server.properties:

delete.topic.enable=true
auto.create.topics.enable=true

listeners=SASL_SSL://172.19.3.48:9093
advertised.listeners=SASL_SSL://172.19.3.48:9093
# inter.broker.listener.name=SASL_SSL

sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
#security.inter.broker.protocol=SASL_PLAINTEXT
security.inter.broker.protocol=SASL_SSL
#ssl.endpoint.identification.algorithm=HTTPS  (overridden by the empty setting below)

ssl.keystore.location=/opt/tool/server.keystore.jks
ssl.keystore.password=itc123
ssl.key.password=itc123
ssl.truststore.location=/opt/tool/server.truststore.jks
ssl.truststore.password=itc123

ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.secure.random.implementation=SHA1PRNG
# Server hostname verification can be disabled. Since Kafka 2.0.x the default for
# ssl.endpoint.identification.algorithm is HTTPS; to keep verification enabled, set:
# ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=

# ACL
# SimpleAclAuthorizer was removed in Kafka 3.0; on 3.x brokers use AclAuthorizer instead:
#authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin
# super.users=User:admin enables the super user admin. Note: this username must not be
# changed, otherwise errors occur when running in production mode!
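With the authorizer enabled and allow.everyone.if.no.acl.found=true, access is open until the first ACL is added for a resource. As a sketch, granting a hypothetical user write access to a topic could look like the following (the user name test-user, the topic test1, and the client.properties file carrying the admin SASL_SSL settings are all assumptions, not part of this walkthrough):

```
/opt/kafka_2.12-3.1.0/bin/kafka-acls.sh --bootstrap-server localhost:9093 \
  --command-config client.properties \
  --add --allow-principal User:test-user --operation Write --topic test1
```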

Configure zookeeper.properties (ZooKeeper)

For now, use the ZooKeeper bundled with Kafka rather than a separately installed one; otherwise problems can arise. Add the following to config/zookeeper.properties:

vim config/zookeeper.properties

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

Create SCRAM credentials

Start ZooKeeper first, then run:

/opt/kafka_2.12-3.1.0/bin/kafka-configs.sh --zookeeper localhost --alter --add-config 'SCRAM-SHA-256=[password=itc123],SCRAM-SHA-512=[password=itc123]' --entity-type users --entity-name admin
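To confirm the credential was written, it can be described with the same tool (a sketch; on Kafka 3.x the --zookeeper flag of kafka-configs.sh is retained specifically for managing SCRAM credentials before the brokers are up):

```
/opt/kafka_2.12-3.1.0/bin/kafka-configs.sh --zookeeper localhost \
  --describe --entity-type users --entity-name admin
```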

Generate SSL certificates

Generate an SSL key and certificate for each Kafka broker.

The prompts appear in this order: password, password confirmation, first and last name, organizational unit, organization, city, state/province, two-letter country code, key password, password confirmation, yes. The warning afterwards can be ignored. The important point in this step is that the "first and last name" field must be the hostname, e.g. "localhost"; do not enter an arbitrary string. I tried other strings and kept running into problems later when generating the client certificates.

keytool -keystore server.keystore.jks -alias localhost -validity 3650 -genkey
## The following two variants add a SAN entry for verification; do not use them here:
##keytool -keystore  server.keystore.jks -validity 3650 -alias localhost -genkey -ext SAN=IP:{IP_ADDRESS} 
##keytool -keystore  server.keystore.jks -validity 3650 -alias localhost -genkey -ext SAN=IP:172.19.3.48
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]:  localhost
What is the name of your organizational unit?
[Unknown]:  wisentsoft
What is the name of your organization?
[Unknown]:  wisentsoft
What is the name of your City or Locality?
[Unknown]:  beijing
What is the name of your State or Province?
[Unknown]:  beijing
What is the two-letter country code for this unit?
[Unknown]:  CN
Is CN=localhost, OU=wisentsoft, O=wisentsoft, L=beijing, ST=beijing, C=CN correct?
[no]:  yes
Enter key password for <localhost>
	(RETURN if same as keystore password):  
Re-enter new password: 
Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".

Explanation:

keystore: the key store file holds the certificates, including the certificate's private key (keep the private key secure).
validity: how long the certificate is valid, in days

(Step 1 ends with the warning shown above.) Run the following to migrate the keystore to PKCS12:

keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12

List the certificates in the keystore:

keytool -list -v -keystore server.keystore.jks

Create your own CA certificate

openssl req -new -x509 -keyout ca-key -out ca-cert -days 3650

For a Kafka cluster, this step only needs to be executed on one node; distribute the resulting files to the other nodes, then continue with the commands below (apart from this step, every command must be run on each node).

The PEM pass phrase is the keystore password entered earlier.

The fields to fill in are as follows:

openssl req -new -x509 -keyout ca-key -out ca-cert -days 3650
Generating a 2048 bit RSA private key
..............................................................................+++
......................+++
writing new private key to 'ca-key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:beijing
Locality Name (eg, city) [Default City]:beijing
Organization Name (eg, company) [Default Company Ltd]:wisentsoft
Organizational Unit Name (eg, section) []:wisentsoft
Common Name (eg, your name or your server's hostname) []:localhost
Email Address []:suohechuan@wisentsoft.com

Create the truststores

Add the generated CA to the clients' truststore so that clients can trust this CA:

keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

At this point the working directory contains the files generated so far.

Sign the certificate

Use the CA generated in step 2.2 to sign the certificate generated in step 2.1. First export the certificate signing request:

keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file

cert-file: the exported, unsigned server certificate
Then sign it with the CA:

# {validity} is the number of days the signed certificate stays valid (3650 recommended); {ca-password} is the same password as before, itc123
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
#openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 3650 -CAcreateserial -passin pass:

Finally, import the CA certificate and the signed certificate into the keystore:

keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
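The result can be sanity-checked with two read-only commands (a sketch; both operate only on the files generated above): the signed certificate should verify against the CA, and the keystore should now contain both the CARoot and localhost entries.

```
openssl verify -CAfile ca-cert cert-signed
keytool -list -keystore server.keystore.jks
```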

zookeeper_jaas.conf (ZooKeeper)

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/zookeeper_jaas.conf
Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        user_super="admin-secret"
        user_kafka="kafka-secret";
 };

kafka_server_jaas.conf

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/kafka_server_jaas.conf
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_<site-username>="<password>";
 };

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
 };

Client configuration

vim config/kafka_client_jaas.conf
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="<site-username>"
  password="<password>";
};
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="kafka"
  password="kafka-secret";
};

Restart ZooKeeper

bin/zookeeper-server-stop.sh
# replace the path if your installation differs
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-3.1.0/config/zookeeper_jaas.conf"
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

Start Kafka

# replace /KAFKA_HOME with your installation path
export KAFKA_OPTS="-Djava.security.auth.login.config=/KAFKA_HOME/config/kafka_server_jaas.conf"
#export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-3.1.0/config/kafka_server_jaas.conf"
# start in the foreground (for debugging), or with -daemon in the background:
bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh -daemon config/server.properties

Producer configuration

vim producer.properties
bootstrap.servers=localhost:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
# adjust ssl.truststore.location to your path
ssl.truststore.location=/opt/tool/server.truststore.jks
ssl.truststore.password=itc123
ssl.keystore.password=itc123
# adjust ssl.keystore.location to your path
ssl.keystore.location=/opt/tool/server.keystore.jks
# Server hostname verification can be disabled. Since Kafka 2.0.x the default for
# ssl.endpoint.identification.algorithm is HTTPS; to keep verification enabled, set:
# ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=
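For a non-JVM client, the same producer settings map onto equivalent constructor arguments. A hedged sketch of that mapping as a kafka-python-style keyword dict (kafka-python is an assumption, not used anywhere in this guide, and a non-JVM client would first need the JKS truststore exported to PEM; the PEM path below is hypothetical):

```python
# Hypothetical mapping of producer.properties above onto kafka-python
# KafkaProducer keyword arguments. Only the dict is shown; no broker is
# contacted here.
producer_config = {
    "bootstrap_servers": "localhost:9093",           # bootstrap.servers
    "security_protocol": "SASL_SSL",                 # security.protocol
    "sasl_mechanism": "PLAIN",                       # sasl.mechanism
    "sasl_plain_username": "admin",                  # from sasl.jaas.config
    "sasl_plain_password": "admin-secret",           # from sasl.jaas.config
    "ssl_cafile": "/opt/tool/ca-cert.pem",           # hypothetical PEM export of the truststore
    "ssl_check_hostname": False,                     # ssl.endpoint.identification.algorithm=
}
```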

Consumer configuration

vim consumer.properties
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
# adjust ssl.truststore.location to your path
ssl.truststore.location=/opt/tool/server.truststore.jks
ssl.truststore.password=itc123
# Server hostname verification can be disabled. Since Kafka 2.0.x the default for
# ssl.endpoint.identification.algorithm is HTTPS; to keep verification enabled, set:
# ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=
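What the empty ssl.endpoint.identification.algorithm means on the client side can be illustrated with Python's standard ssl module (a sketch only; the PEM path in the comment is hypothetical, since a non-JVM client would first need the truststore exported to PEM): the broker's certificate chain is still validated against the CA, but the hostname/IP inside the certificate is not compared with the address the client connected to.

```python
import ssl

# Equivalent of ssl.endpoint.identification.algorithm= (empty) on a client:
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.check_hostname = False            # disable endpoint identification
# certificate chain validation stays on:
assert ctx.verify_mode == ssl.CERT_REQUIRED
# ctx.load_verify_locations("/opt/tool/ca-cert.pem")  # hypothetical PEM export of ca-cert
```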

Test

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test1 --producer.config config/producer.properties
#bin/kafka-console-producer.sh --broker-list 172.19.3.48:9093 --topic test1 --producer.config config/producer.properties
>sd
>sd
>sd
>sd
>sfd
>sfdf

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test1 --from-beginning --consumer.config config/consumer.properties
#bin/kafka-console-consumer.sh --bootstrap-server 172.19.3.48:9093 --topic test1 --from-beginning --consumer.config config/consumer.properties
sdf
sadf

If there are no problems, the environment setup is complete.
