1. Download the JCE unlimited-strength policy archive and extract it into JAVA_HOME/jre/lib/security. Required on every Ambari node.
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
unzip -o -j -q jce_policy-8.zip -d $JAVA_HOME/jre/lib/security   (point at the real directory, not a symlink)
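To confirm the policy jars actually landed where the JVM reads them, a quick check can help. This is only a sketch: check_jce is a hypothetical helper, and the jar names are the ones shipped inside jce_policy-8.zip.

```shell
# Sketch: verify the two policy jars from jce_policy-8.zip are in place.
# check_jce is a hypothetical helper; pass it the security directory to inspect.
check_jce() {
  dir="$1"
  for jar in local_policy.jar US_export_policy.jar; do
    if [ ! -f "$dir/$jar" ]; then
      echo "MISSING: $dir/$jar"
      return 1
    fi
  done
  echo "JCE policy jars present in $dir"
}

# On a real node (JAVA_HOME must be set):
check_jce "${JAVA_HOME:-/usr/java/default}/jre/lib/security" || true
```

Run this on each Ambari node after unzipping; a MISSING line means the unzip went to the wrong directory (often a symlinked JAVA_HOME).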
2. Install Kerberos on the KDC server; the other nodes do not need these packages
yum install krb5-server krb5-libs krb5-workstation
3. Configuration files on the KDC server node
(1) Kerberos library configuration and service logging
vim /etc/krb5.conf
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = GAI.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
GAI.COM = {
admin_server = server.gai.test.com
kdc = server.gai.test.com
}
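In the config above, %{uid} in default_ccache_name expands to the caller's numeric uid, so each user gets a separate credential cache under /tmp. A throwaway sketch (ccache_path is a made-up helper) that shows which file the current user's tickets will land in:

```shell
# %{uid} in default_ccache_name expands to the numeric uid of the caller,
# so the credential cache for the current user resolves to:
ccache_path() {
  echo "/tmp/krb5cc_$(id -u)"
}
ccache_path
```

After kinit, `klist` should report this same path as the ticket cache.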
(2) KDC key configuration file
vim /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
GAI.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
(3) Kerberos access control list
vim /var/kerberos/krb5kdc/kadm5.acl
*/admin@GAI.COM *
*/dp@GAI.COM * #needed here for application access; when actually configuring, remove this trailing comment, otherwise the config file fails to load
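As the note above warns, the trailing comment must not ship in the deployed file, since it breaks loading; the kadm5.acl actually written to disk should contain only:

```
*/admin@GAI.COM *
*/dp@GAI.COM *
```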
(4) Relax the permissions on the log files referenced above to avoid permission errors:
chmod 666 /var/log/kadmind.log
chmod 666 /var/log/krb5kdc.log
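The two chmod calls above assume the files already exist; on a fresh node they may not. A small sketch that creates them first (prep_kdc_logs is a hypothetical helper, parameterized on the directory for illustration; the real path is /var/log and needs root):

```shell
# Sketch: create the KDC log files named in krb5.conf [logging] and open up
# their permissions so kadmind/krb5kdc can write to them.
prep_kdc_logs() {
  dir="$1"
  for f in kadmind.log krb5kdc.log; do
    touch "$dir/$f"        # create the file if it does not exist
    chmod 666 "$dir/$f"    # world-writable, as the guide suggests
  done
}

# On a real KDC node (as root):
# prep_kdc_logs /var/log
```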
4. Non-KDC nodes: create the same /etc/krb5.conf (or scp it from the KDC server node into /etc). If startup fails, re-check the configuration carefully (and confirm the firewall and SELinux are disabled).
# GAI.COM must match the [realms] section below
vim /etc/krb5.conf
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = GAI.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
GAI.COM = {
admin_server = server.gai.test.com
kdc = server.gai.test.com
}
5. Create the database. (If /var/kerberos/krb5kdc contains files whose names start with principal*, delete them all first. If creation fails, delete and recreate; if it still fails, disable Kerberos in Ambari and delete every file under /etc/security/keytabs/ on each node.)
cd /etc/security/keytabs/
rm -rf *
kdb5_util create -s -r GAI.COM
6. Create the admin principal
Enter the admin shell: kadmin.local
Run: addprinc admin/admin@GAI.COM
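On the KDC host this can also be done non-interactively via kadmin.local's -q batch flag, which helps when scripting the setup. A sketch: make_addprinc is a made-up helper that only prints the command instead of running it.

```shell
# Sketch: build the non-interactive form of the principal creation.
# make_addprinc is a hypothetical helper that only prints the command;
# kadmin.local -q "<command>" is MIT krb5's batch-mode syntax.
make_addprinc() {
  printf 'kadmin.local -q "addprinc %s"\n' "$1"
}
make_addprinc admin/admin@GAI.COM
# prints: kadmin.local -q "addprinc admin/admin@GAI.COM"
```

Without a -pw option, addprinc will still prompt for the new principal's password, so run the printed command in a terminal.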
7. Restart kadmin and krb5kdc on the KDC server, then restart Ambari (ambari-server and ambari-agent) across the cluster
On the KDC server node:
service kadmin restart
service krb5kdc restart
Stop:
on every ambari-agent node:
ambari-agent stop
on the ambari-server node:
ambari-server stop
Start:
on the ambari-server node:
ambari-server start
on every ambari-agent node:
ambari-agent start
## Optionally enable the KDC services at boot (production clusters are rarely shut down):
chkconfig krb5kdc on
chkconfig kadmin on
8. Enable Kerberos in Ambari, and test:
kadmin -p admin/admin
9. Install the Kerberos client software on every client node (every node that is not the KDC server)
yum install krb5-workstation
10. Kerberos setup in Ambari
a. Open the admin menu in the top-right corner
b. Choose the existing MIT KDC option (the KDC is already installed)
c. Enter the KDC hosts (the hostname of the KDC server) and the realm name
d. Click Next
11. Testing
1. Log in with kadmin.local (on non-KDC hosts use kadmin -p admin/admin; the password is the one set when running kdb5_util)
Create principals:
addprinc dp/admin@GAI.COM (key for local access)
addprinc HTTP/admin@GAI.COM (for HTTP proxy access)
2. Generate the keytabs (they carry an expiry time):
xst -k dp.admin.keytab dp/admin
xst -k http.admin.keytab HTTP/admin
(You can also name the encryption types explicitly, useful when a keytab does not come out with the right enctypes; match the enctypes used by the other services' keytabs.
To pin the encryption types:
xst -e aes128-cts-hmac-sha1-96,arcfour-hmac,des-cbc-md5,des3-cbc-sha1,aes256-cts-hmac-sha1-96 -k smokeuser.headless.keytab -q ambari-qa-geotmt@EXAMPLE.COM)
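The keytab file names above follow a simple convention: drop the realm, turn '/' into '.', lowercase, and append .keytab. A throwaway helper (hypothetical, just to make the mapping explicit):

```shell
# Hypothetical helper: derive the keytab file name used in this guide
# from a principal, e.g. HTTP/admin@GAI.COM -> http.admin.keytab
keytab_name() {
  p=$(printf '%s' "$1" | sed -e 's/@.*$//' -e 's#/#.#g' | tr '[:upper:]' '[:lower:]')
  printf '%s.keytab\n' "$p"
}
keytab_name dp/admin@GAI.COM     # prints dp.admin.keytab
keytab_name HTTP/admin@GAI.COM   # prints http.admin.keytab
```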
3. Initialize from the keytab locally to obtain a ticket cache:
kinit -kt dp.admin.keytab dp/admin
4. Test HDFS; it should no longer report errors:
hdfs dfs -ls /
12. Problems encountered
Log file permission error when Ambari tests the Kerberos client:
1. Couldn't open log file /var/log/kadmind.log: Permission denied
Fix the permissions:
chmod 777 /var/log/kadmind.log
If you accidentally delete the keytabs (the files under /etc/security/keytabs), disabling Kerberos fails with: Key table file '/etc/security/keytabs/smokeuser.headless.keytab' not found while getting initial credentials
Client 'ambari-qa-geotmt@GAI.COM' not found in Kerberos database while getting initial credentials
Fix:
enter kadmin.local and recreate the keytab:
xst -k smokeuser.headless.keytab ambari-qa-geotmt@GAI.COM
Locating Hadoop errors (easy to misremember after a while): /var/log/hadoop/hdfs/hadoop-hdfs-datanode-snamenode.gai.test.com.out is not the real log; the actual log is /var/log/hadoop/hdfs/hadoop-hdfs-datanode-snamenode.gai.test.com.log
13. In Ambari, use the hdfs superuser to change the owner and group of files created by any user (hdfs is to HDFS what root is to Linux):
su - hdfs
hdfs dfs -chown -R gai:gai hdfs://192.168.0.1:8020/usr/gai/