With a cluster of hadoop 2.2.0 (no HA) + zookeeper-3.3.5 + hbase-0.96.0-hadoop2 already up and running, add Kerberos authentication.
I. Deployment
The directory /opt will hold the *.keytab files:
mkdir -p /opt
Node layout
| Host    | Kerberos | HBase         | HDFS/YARN          | ZooKeeper |
| Master1 | client   |               | namenode           |           |
| Master2 | client   |               | Secondary namenode |           |
| Master3 | client   | HMaster       | RM                 | zookeeper |
| Master4 | client   | HMaster       | JHS                | zookeeper |
| Master5 | KDC      |               |                    | zookeeper |
| Slave1  | client   | HRegionserver | Datanode           |           |
| …       | …        | …             | …                  | …         |
| Slave15 | client   | …             | Datanode           |           |
Cluster user setup
Make sure the different daemons are started by different Unix users (the datanode is started as root via jsvc), and it is recommended that they all belong to the same group, e.g. hadoop.
| User:Group       | Daemons                                                                      |
| hdfs:hadoop      | NameNode, Secondary NameNode, Checkpoint Node, Backup Node, DataNode (root)  |
| yarn:hadoop      | ResourceManager, NodeManager                                                 |
| mapred:hadoop    | JobHistory Server                                                            |
| zookeeper:hadoop | QuorumPeerMain                                                               |
| hbase:hadoop     | HMaster, HRegionServer                                                       |
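A minimal sketch of creating the group and per-daemon accounts from the table above (run as root on every node). The commands are printed through a dry-run wrapper so they can be reviewed first:

```shell
# Sketch: create the shared hadoop group and one account per daemon user.
# Dry-run wrapper: prints each command instead of executing it; replace
# 'echo "+ $*"' with '"$@"' to actually run them (requires root).
run() { echo "+ $*"; }

run groupadd hadoop
for u in hdfs yarn mapred zookeeper hbase; do
  run useradd -g hadoop -m "$u"
done
```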
II. Configuring the Kerberos service
1. JCE
On CentOS 5.6 and later, AES-256 is used for encryption by default. This requires the JCE on every node: download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 7, unzip the archive, and put the jars into:
$JAVA_HOME/jre/lib/security
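For example, installing the policy jars might look like the following sketch. The archive and directory names are assumptions (they depend on what you downloaded); a dry-run wrapper is used since this overwrites files under the JDK:

```shell
# Sketch: replace the default (strength-limited) policy jars with the
# unlimited-strength ones. Archive/directory names are assumptions.
run() { echo "+ $*"; }   # dry-run; replace body with '"$@"' to execute

run unzip UnlimitedJCEPolicyJDK7.zip
run cp UnlimitedJCEPolicy/local_policy.jar "$JAVA_HOME/jre/lib/security/"
run cp UnlimitedJCEPolicy/US_export_policy.jar "$JAVA_HOME/jre/lib/security/"
```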
2. Installing the KDC
Install kerberos-server on the KDC host. CentOS already ships the Kerberos client and command-line tools by default, so only the KDC host needs the Kerberos server packages:
Server side:
krb5-libs-1.10.3-10.el6_4.6.x86_64
krb5-workstation-1.10.3-10.el6_4.6.x86_64
krb5-server-1.10.3-10.el6_4.6.x86_64
pam_krb5-2.3.11-9.el6.x86_64
krb5-devel-1.10.3-10.el6_4.6.x86_64
Client side:
krb5-libs-1.10.3-10.el6_4.6.x86_64
krb5-workstation-1.10.3-10.el6_4.6.x86_64
pam_krb5-2.3.11-9.el6.x86_64
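The packages above can be installed with yum using their base names (yum resolves the exact versions for the running CentOS release). A dry-run sketch:

```shell
run() { echo "+ $*"; }   # dry-run; replace body with '"$@"' to execute (requires root)

# On the KDC host (server side):
run yum install -y krb5-libs krb5-workstation krb5-server pam_krb5 krb5-devel
# On every other node (client side):
run yum install -y krb5-libs krb5-workstation pam_krb5
```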
3. Configuring the KDC
Three configuration files are involved on the KDC host:
/etc/krb5.conf
/var/kerberos/krb5kdc/kdc.conf
/var/kerberos/krb5kdc/kadm5.acl
On the other servers in the hadoop cluster, the only Kerberos configuration file involved is /etc/krb5.conf.
Simply copy /etc/krb5.conf from the KDC to every other server in the cluster.
If SELinux is enabled on the cluster, you may need to run restorecon -R -v /etc/krb5.conf after copying.
/etc/krb5.conf (distribute to all clients); the realm name and KDC hosts are the site-specific parts (shown in bold in the original):
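Distributing the file can be scripted; a sketch assuming the host names from the deployment table and passwordless ssh for root (both assumptions), again behind a dry-run wrapper:

```shell
run() { echo "+ $*"; }   # dry-run; replace body with '"$@"' to execute

# Host names follow the deployment table (master1-master4, slave1-slave15);
# adjust to your actual naming.
for h in master1 master2 master3 master4 $(seq -f 'slave%g' 1 15); do
  run scp /etc/krb5.conf "root@$h:/etc/krb5.conf"
done
```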
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = FOR_HADOOP
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 2d
 forwardable = true
 renewable = true

[realms]
 FOR_HADOOP = {
  kdc = master5:88
  admin_server = master5:749
 }

[domain_realm]

[kdc]
 profile = /var/kerberos/krb5kdc/kdc.conf
FOR_HADOOP is the database name and also the Kerberos realm name.
default_realm in [libdefaults] is the realm assumed when none is given explicitly.
[logging] specifies the log file locations.
The default installation location is /var/kerberos/krb5kdc.
/var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 FOR_HADOOP = {
  database_name = /var/kerberos/krb5kdc/principal
  master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
Configure admin permissions:
/var/kerberos/krb5kdc/kadm5.acl
*/admin@FOR_HADOOP *
hadoop/admin@FOR_HADOOP *
(On the KDC) enable the Kerberos services at boot and restart them:
# chkconfig --level 35 krb5kdc on
# chkconfig --level 35 kadmin on
# service krb5kdc restart
# service kadmin restart
4. Initialization
(1) (On the KDC) create the KDC database:
# kdb5_util create -r FOR_HADOOP -s
This command creates the principal database files under /var/kerberos/krb5kdc/.
Password: ttteee
Now set up the initial user entries for the KDC. This is done with the kadmin.local command on the KDC itself (the command can only be run on the KDC; to administer Kerberos from another machine, run kadmin instead).
On the server: # kadmin.local
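Once inside kadmin.local (or non-interactively via its -q option), a quick sanity check is to list the principals the new database was created with; a dry-run sketch:

```shell
run() { echo "+ $*"; }   # dry-run; replace body with '"$@"' to execute on the KDC

# List existing principals; a freshly created database already contains the
# built-in K/M, kadmin/* and krbtgt/FOR_HADOOP entries.
run kadmin.local -q listprincs
```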