The project requires adding an authentication mechanism to the HBase cluster; after some investigation we decided to deploy the Kerberos authentication service. This article uses MIT Kerberos as the KDC.
1. Kerberos installation:
The deployment layout is as follows:
| Role | Hostname | IP address |
| --- | --- | --- |
| Kerberos server / client | cluster-01 | 172.168.1.80 |
| Kerberos client | cluster-02 | 172.168.1.81 |
| Kerberos client | cluster-03 | 172.168.1.82 |
1.1. Server installation:
The following packages only need to be installed on the node planned as the krb5 server:
yum install -y krb5-server krb5-libs krb5-workstation krb5-devel krb5-auth-dialog
1.1.1. Configuration file:
vim /etc/krb5.conf; once editing is done, the file must be synced to the other two nodes:
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = HADOOP.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
# default_ccache_name = KEYRING:persistent:%{uid}
[realms]
HADOOP.COM = { # the custom realm name
kdc = cluster-01:20788 # hostname and port of the KDC service; the port is customizable
admin_server = cluster-01:20749 # hostname and port of the kadmin service; the port is customizable
}
[domain_realm]
.hadoop.com = HADOOP.COM # maps the custom domain to the realm
hadoop.com = HADOOP.COM # maps the custom domain to the realm
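As noted above, the edited /etc/krb5.conf must be identical on all three nodes. A minimal sketch of the sync step, using the hostnames from the table (the echo makes this a dry run; remove it to actually copy, assuming root SSH access between the nodes):

```shell
# Dry-run sync of /etc/krb5.conf to the other two nodes.
# Remove 'echo' to perform the copy for real.
for host in cluster-02 cluster-03; do
  echo scp /etc/krb5.conf "root@${host}:/etc/krb5.conf"
done
```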
1.1.2. ACL and KDC configuration:
cd /var/kerberos/krb5kdc/
vim kadm5.acl
*/admin@HADOOP.COM *
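That single ACL line grants every privilege (the trailing `*`) to any principal of the form <anything>/admin@HADOOP.COM. Once the database exists (section 1.1.3), a matching principal can be created with `kadmin.local -q "addprinc admin/admin@HADOOP.COM"`. The sketch below mirrors the wildcard match with plain shell globbing (only an approximation: kadm5.acl's `*` matches a single principal component, shell `*` matches anything):

```shell
# Approximate the kadm5.acl pattern '*/admin@HADOOP.COM *' with shell case-globbing.
check() {
  case "$1" in
    */admin@HADOOP.COM) echo "granted" ;;
    *)                  echo "denied"  ;;
  esac
}
check "root/admin@HADOOP.COM"   # -> granted (second component is 'admin')
check "hbase@HADOOP.COM"        # -> denied (no '/admin' instance)
```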
vim kdc.conf
[kdcdefaults]
kdc_ports = 20788
kdc_tcp_ports = 20788
[realms]
HADOOP.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
max_life = 25h
max_renewable_life = 8d
}
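A note on the lifetimes above: the ticket lifetime actually granted is the minimum of what the client requests (ticket_lifetime = 24h in krb5.conf), the realm's max_life (25h here), and the per-principal maximum, so with these values the 24h client setting wins. As arithmetic:

```shell
# Effective lifetime = min(client ticket_lifetime, KDC max_life), in seconds.
client=$((24 * 3600))   # ticket_lifetime = 24h (krb5.conf)
kdc=$((25 * 3600))      # max_life = 25h (kdc.conf)
if [ "$client" -lt "$kdc" ]; then effective=$client; else effective=$kdc; fi
echo "effective ticket lifetime: ${effective}s"   # -> 86400s, i.e. 24h
```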
1.1.3. Create the database:
kdb5_util create -r HADOOP.COM -s
# -s writes the master key to a stash file so krb5kdc can start without prompting for it
1.1.4. Start the services:
systemctl start krb5kdc && systemctl start kadmin
# enable both services at boot
systemctl enable krb5kdc && systemctl enable kadmin
1.2. Client installation:
Install on all three nodes (krb5-workstation provides the client tools such as kinit and klist):
yum -y install krb5-libs krb5-workstation openldap-clients
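After installing the clients, it is worth confirming that each node can reach the KDC on the custom ports defined in krb5.conf (20788 for krb5kdc, 20749 for kadmin). A dry-run sketch; remove the echo to actually probe with nc (assuming it is installed and the services are running):

```shell
# Dry-run connectivity probe of the custom KDC/kadmin ports on cluster-01.
for port in 20788 20749; do
  echo nc -z cluster-01 "$port"
done
```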