Kerberos + ZooKeeper + Kafka single-node setup on CentOS 7
Important notes
**The Kafka distribution used in this article is 2.12-2.1.1 (Scala 2.12, Kafka 2.1.1).**
**Do not use uppercase letters in the Kafka principal names you configure, or they will not be recognized.**
The hosts file used in this article:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 ecs-18c7 ecs-18c7
192.168.1.212 kdc-server kdc-server
I. Installing Kerberos
1. Install on the master node:
yum install krb5-server -y
2. Install krb5-devel and krb5-workstation on the other (client) nodes:
yum install krb5-devel krb5-workstation -y
3. Replace the contents of /etc/krb5.conf with the following:
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = EXAMPLE.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
EXAMPLE.COM = {
kdc = kdc-server
admin_server = kdc-server
}
[domain_realm]
kafka = EXAMPLE.COM
zookeeper = EXAMPLE.COM
clients = EXAMPLE.COM
192.168.1.212 = EXAMPLE.COM
4. Edit /var/kerberos/krb5kdc/kdc.conf:
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
EXAMPLE.COM = {
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
max_renewable_life = 7d
max_life = 1d
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
default_principal_flags = +renewable, +forwardable
}
5. Create the Kerberos database:
kdb5_util create -r EXAMPLE.COM -s
6. Enable the services at boot and start them, on the master node:
chkconfig --level 35 krb5kdc on
chkconfig --level 35 kadmin on
service krb5kdc start
service kadmin start
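Before moving on, it is worth confirming that the KDC is actually up and listening. A quick sanity check (port numbers follow the kdc.conf above; `ss` ships with CentOS 7):

```shell
# confirm both daemons report as running
service krb5kdc status
service kadmin status

# the KDC should listen on port 88, kadmind on port 749
ss -lntu | grep -E ':88 |:749 '
```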
7. Basic administration commands:
# list all principals
kadmin.local -q "list_principals"
# create a principal
kadmin.local -q "addprinc zookeeper/kdc-server"
# export the principal's keys to a keytab file
kadmin.local -q "xst -k keytab.keytab zookeeper/kdc-server@EXAMPLE.COM"
# authenticate non-interactively with the keytab (obtain a TGT)
kinit -k -t keytab.keytab zookeeper/kdc-server@EXAMPLE.COM
# list the keytab entries with encryption types and timestamps
klist -e -k -t keytab.keytab
II. Installing Kafka + ZooKeeper
1. Download the release from the official site. This article uses the ZooKeeper bundled inside the Kafka package; the Kafka version is 2.12-2.1.1.
2. Extract the archive.
3. Generate the Kafka and ZooKeeper principals and keytab file in Kerberos.
1. Create the principals:
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey zookeeper/kdc-server@EXAMPLE.COM'
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/kdc-server@EXAMPLE.COM'
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey clients/kdc-server@EXAMPLE.COM'
2. Generate the keytab file (for convenience, all principals are exported into the same keytab file here):
sudo /usr/sbin/kadmin.local -q "ktadd -k /var/kerberos/krb5kdc/keytab.keytab kafka/kdc-server@EXAMPLE.COM"
sudo /usr/sbin/kadmin.local -q "ktadd -k /var/kerberos/krb5kdc/keytab.keytab zookeeper/kdc-server@EXAMPLE.COM"
sudo /usr/sbin/kadmin.local -q "ktadd -k /var/kerberos/krb5kdc/keytab.keytab clients/kdc-server@EXAMPLE.COM"
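Before wiring this keytab into the JAAS files, it helps to confirm that all three principals actually landed in it and that a login works. A quick check using the paths above:

```shell
# list every entry in the shared keytab, with encryption types;
# all three principals (kafka, zookeeper, clients) should appear
klist -e -k -t /var/kerberos/krb5kdc/keytab.keytab

# try a non-interactive login with one of the service principals
kinit -k -t /var/kerberos/krb5kdc/keytab.keytab kafka/kdc-server@EXAMPLE.COM
klist

# the user that runs Kafka/ZooKeeper must be able to read the keytab
ls -l /var/kerberos/krb5kdc/keytab.keytab
```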
4. Create the config/zookeeper_jaas.conf file used by ZooKeeper:
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/var/kerberos/krb5kdc/keytab.keytab"
principal="zookeeper/kdc-server@EXAMPLE.COM";
};
5. Create the config/server_jaas.conf file used by Kafka:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/var/kerberos/krb5kdc/keytab.keytab"
principal="kafka/kdc-server@EXAMPLE.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/var/kerberos/krb5kdc/keytab.keytab"
principal="kafka/kdc-server@EXAMPLE.COM";
};
6. Create the config/client_jaas.conf file used by Kafka clients:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/var/kerberos/krb5kdc/keytab.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="clients/kdc-server@EXAMPLE.COM";
};
7. Add the following to config/server.properties:
listeners=SASL_PLAINTEXT://kdc-server:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
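One gotcha worth noting (an observation about GSSAPI, not from the original steps): the broker's ZooKeeper client requests a ticket for zookeeper/&lt;hostname-taken-from-zookeeper.connect&gt;, so the host name in zookeeper.connect must match the host part of the ZooKeeper principal created above. With the principals above that means:

```properties
zookeeper.connect=kdc-server:2181
```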
8. Add the following to config/zookeeper.properties (create the file if it does not exist):
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
9. Add the following to config/producer.properties and config/consumer.properties:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
10. Create a zookeeper_start.sh startup script in the Kafka root directory (change the paths below to your own JAAS config location):
export KAFKA_HEAP_OPTS='-Xmx1024M'
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/data/kafka_2.12-2.1.1/config/zookeeper_jaas.conf'
bin/zookeeper-server-start.sh config/zookeeper.properties > zookeeper.log 2>&1 &
11. Create a kafka_start.sh startup script in the Kafka root directory (change the paths below to your own JAAS config location):
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/data/kafka_2.12-2.1.1/config/server_jaas.conf'
bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &
12. Start the ZooKeeper service:
sh zookeeper_start.sh
13. Start the Kafka service:
sh kafka_start.sh
14. Check zookeeper.log and kafka.log in the Kafka root directory to verify that both services started successfully.
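Once both services are up, an end-to-end smoke test with the console producer and consumer confirms that GSSAPI authentication actually works. A sketch, run from the Kafka root directory, assuming the paths used above (the client takes its principal from config/client_jaas.conf; the topic name `test` is arbitrary):

```shell
# point the Kafka tools at the client JAAS config
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/data/kafka_2.12-2.1.1/config/client_jaas.conf'

# create a single-partition test topic (Kafka 2.1 still addresses ZooKeeper here)
bin/kafka-topics.sh --create --zookeeper kdc-server:2181 \
  --replication-factor 1 --partitions 1 --topic test

# produce a few messages over SASL_PLAINTEXT
bin/kafka-console-producer.sh --broker-list kdc-server:9092 \
  --topic test --producer.config config/producer.properties

# consume them back (in another shell, with the same KAFKA_OPTS exported)
bin/kafka-console-consumer.sh --bootstrap-server kdc-server:9092 \
  --topic test --from-beginning --consumer.config config/consumer.properties
```

If authentication fails, kafka.log and the consumer/producer output will show GSSAPI errors such as "Server not found in Kerberos database", which usually points at a hostname/principal mismatch.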