Kerberos (1.9) + Apache Hadoop (1.0.4) Configuration

Master host: mcw-cc-nachuang

Slave node: mcw-cc-node

Username on both machines: nachuang

Environment: Ubuntu 12.04, JDK 1.7

Note: make sure the Hadoop 1.0.4 cluster works normally first, then stop it with stop-all (see the sketch below).
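A minimal pre-flight sketch, assuming the commands are run from the hadoop-1.0.4 installation directory:

# quick smoke test that HDFS answers, then stop every daemon
bin/hadoop fs -ls
bin/stop-all.sh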

Installing and configuring Kerberos 1.9

Download: http://web.mit.edu/kerberos/

tar -xvf krb5-1.9.signed.tar      (produces krb5-1.9.tar.gz and krb5-1.9.tar.gz.asc)
tar zxvf krb5-1.9.tar.gz          (extract the source tree)

cd krb5-1.9/src   

./configure 

make 

make install
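The source build installs under /usr/local by default; assuming the default --prefix, the daemons and tools can be checked like this:

# the KDC and admin daemons go into sbin, the client tools into bin
ls /usr/local/sbin/krb5kdc /usr/local/sbin/kadmind
/usr/local/bin/krb5-config --version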

Three files need to be created and configured:

/etc/krb5.conf:

[logging]
default = FILE:/usr/local/var/log/krb5libs.log
kdc = FILE:/usr/local/var/log/krb5kdc.log
admin_server = FILE:/usr/local/var/log/kadmind.log
[libdefaults]
default_realm = hdfs.server
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = yes
[realms]
hdfs.server = {
  kdc = mcw-cc-nachuang:88
  admin_server = mcw-cc-nachuang:749
}
[domain_realm]
.hdfs.server = mcw-cc-nachuang
hdfs.server = mcw-cc-nachuang
[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}
[kdc]
profile=/usr/local/var/krb5kdc/kdc.conf

(mcw-cc-nachuang is the hostname of the KDC machine; hdfs.server is the name of the Kerberos database and also the realm name.)
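Both machines must resolve these host names consistently. If DNS is not available, an /etc/hosts entry like the following works (the IP addresses are placeholders for your own):

# /etc/hosts on every machine (example addresses only)
192.168.1.10    mcw-cc-nachuang
192.168.1.11    mcw-cc-node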

/usr/local/var/krb5kdc/kdc.conf:

[kdcdefaults]
  kdc_ports = 749,88
  kdc_tcp_ports = 88

[realms]
  hdfs.server = {
    master_key_type = aes256-cts
    max_life = 25h
    max_renewable_life = 4w
    acl_file = /usr/local/var/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    admin_keytab = /usr/local/var/krb5kdc/kadm5.keytab
    supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  }

(AES encryption is added here; it can also be left out, see the official documentation. The path of kdc.conf must match the profile entry in krb5.conf above.)

/usr/local/var/krb5kdc/kadm5.acl:

*/admin@hdfs.server  *   (grants all administrative permissions to every */admin principal)
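The single wildcard line above gives every */admin principal full rights. If a tighter policy is wanted, kadm5.acl also accepts per-permission flags; this stricter variant is optional and not part of the original setup:

# admins: all operations
*/admin@hdfs.server   *
# everyone else: list and inquire only
*@hdfs.server         li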

Operations on the master (KDC) host:    sudo kadmin.local

After the configuration is done, create the database:    kdb5_util create -r hdfs.server -s

Create the root principal:          kadmin.local: addprinc root/admin@hdfs.server

Create the admin principal:         kadmin.local: addprinc admin/admin@hdfs.server

Generate the admin keytab:          kadmin.local: ktadd -k /usr/local/var/krb5kdc/kadm5.keytab kadmin/admin kadmin/changepw

Add an administrator principal for the OS user:    kadmin.local: addprinc nachuang/admin@hdfs.server   (nachuang is the Ubuntu username)

Start krb5kdc and kadmind:          sudo krb5kdc; sudo kadmind
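Both daemons detach silently on success; with the log locations configured in krb5.conf above, a quick sanity check is:

# the processes should be running and writing to their logs
ps -ef | grep -E 'krb5kdc|kadmind'
tail /usr/local/var/log/krb5kdc.log /usr/local/var/log/kadmind.log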

Operations on the client node:  kadmin

Copy the files just configured on the master to the client, then start kadmin as the nachuang user (a sketch follows).
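A minimal sketch of that copy step, assuming the same paths on both machines (only /etc/krb5.conf is strictly needed on the client):

# run on the client (mcw-cc-node)
scp nachuang@mcw-cc-nachuang:/etc/krb5.conf /tmp/krb5.conf
sudo mv /tmp/krb5.conf /etc/krb5.conf
kadmin -p nachuang/admin@hdfs.server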


On the client, log in to the KDC server and create keytab entries for the local principals:

kadmin: addprinc -randkey host/<hostname of this machine>            // generates a random key for the host principal, used for HTTPS requests

kadmin: addprinc -randkey hadoop/<hostname of this machine>          // generates a random key for the hadoop principal, used to start the datanode

kadmin: ktadd -k <local path>/nachuang.keytab nachuang/mcw-cc-nachuang host/mcw-cc-nachuang nachuang/mcw-cc-node host/mcw-cc-node
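Before wiring the keytab into Hadoop it is worth checking what it contains; the path is a placeholder for wherever the keytab was written:

# list the principals and encryption types stored in the keytab
klist -e -k -t <local path>/nachuang.keytab
# test a non-interactive login with one of the entries
kinit -k -t <local path>/nachuang.keytab nachuang/mcw-cc-nachuang@hdfs.server
klist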

Hadoop configuration

Add the following settings:
core-site.xml
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
hdfs-site.xml
<property>
  <name>dfs.https.address</name>
  <value>mcw-cc-nachuang:50470</value>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/home/nachuang/Workspace/Hadoop/hadoop-1.0.4/conf/nachuang.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nachuang/_HOST@hdfs.server</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>host/_HOST@hdfs.server</value>
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>mcw-cc-nachuang:50090</value>
</property>
<property>
  <name>dfs.secondary.https.address</name>
  <value>0.0.0.0:50495</value>
</property>
<property>
  <name>dfs.secondary.https.port</name>
  <value>50495</value>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/home/nachuang/Workspace/Hadoop/hadoop-1.0.4/conf/nachuang.keytab</value> 
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>nachuang/_HOST@hdfs.server</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.https.principal</name>
  <value>host/_HOST@hdfs.server</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/home/nachuang/Workspace/Hadoop/hadoop-1.0.4/conf/nachuang.keytab</value> 
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>nachuang/_HOST@hdfs.server</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>host/_HOST@hdfs.server</value>
</property>

mapred-site.xml

<property>
  <name>mapreduce.jobtracker.kerberos.principal</name>
  <value>nachuang/_HOST@hdfs.server</value>
</property>
<property>
  <name>mapreduce.jobtracker.kerberos.https.principal</name>
  <value>host/_HOST@hdfs.server</value>
</property>
<property>
  <name>mapreduce.jobtracker.keytab.file</name>
  <value>/home/nachuang/Workspace/Hadoop/hadoop-1.0.4/conf/nachuang.keytab</value> 
</property>
<property>
  <name>mapreduce.tasktracker.kerberos.principal</name>
  <value>nachuang/_HOST@hdfs.server</value>
</property>
<property>
  <name>mapreduce.tasktracker.kerberos.https.principal</name>
  <value>host/_HOST@hdfs.server</value>
</property>
<property>
  <name>mapreduce.tasktracker.keytab.file</name>
  <value>/home/nachuang/Workspace/Hadoop/hadoop-1.0.4/conf/nachuang.keytab</value> 
</property>
<property>
  <name>mapred.task.tracker.task-controller</name>
  <value>org.apache.hadoop.mapred.DefaultTaskController</value>
</property>
<property>
  <name>mapreduce.tasktracker.group</name>
  <value>nachuang</value>
</property>

hadoop-env.sh
export HADOOP_SECURE_DN_USER=nachuang

taskcontroller.cfg
hadoop.log.dir=/home/nachuang/Workspace/Hadoop/hadoop-1.0.4/logs
mapred.tasktracker.tasks.sleeptime-before-sigkill=2
mapreduce.tasktracker.group=nachuang

In addition, jsvc and AES support (the unlimited-strength JCE policy files) must be installed.

sudo apt-get install jsvc
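To confirm the package put the jsvc launcher on the PATH (the secure datanode startup depends on it):

which jsvc
dpkg -l jsvc        # shows the installed jsvc/commons-daemon version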

JCE下载地址: http://www.oracle.com/technetwork/java/javase/downloads/index.html

Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 7

Unzip the downloaded package and put the jar files into this directory:
$JAVA_HOME/jre/lib/security
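A sketch of that step; the exact zip and folder names depend on what Oracle ships for JDK 7, and the copy overwrites the default limited-strength policy jars:

unzip UnlimitedJCEPolicyJDK7.zip
sudo cp UnlimitedJCEPolicy/local_policy.jar UnlimitedJCEPolicy/US_export_policy.jar $JAVA_HOME/jre/lib/security/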


Verification:

Start the KDC first: sudo krb5kdc
If you also want to log in with kadmin, start: sudo kadmind

Start HDFS:
namenode:  bin/hadoop-daemon.sh start namenode
The datanode must be started as root: sudo bin/hadoop-daemons.sh start datanode

Test: bin/hadoop fs -ls
If Kerberos is configured correctly, the following error should appear, indicating that the current user has no TGT:

ERROR security.UserGroupInformation: PriviledgedActionException as:nachuang cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

kinit -k -t nachuang.keytab nachuang/mcw-cc-nachuang@hdfs.server

Running this creates the credential cache /tmp/krb5cc_1000 (holding the TGT and session key). Once cached, the client does not have to prove its identity to Kerberos again; it simply presents the TGT to obtain service tickets.
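To confirm the cache and re-run the earlier test (the klist output shown is indicative, assuming the principal used above):

klist
# Ticket cache: FILE:/tmp/krb5cc_1000
# Default principal: nachuang/mcw-cc-nachuang@hdfs.server
bin/hadoop fs -ls        # should now list HDFS contents instead of the GSS error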

Common errors:

Running kadmin shows:
kadmin: Client not found in Kerberos database while initializing kadmin interface
This means the current user is not in the Kerberos database; in kadmin.local, use the addprinc command to add the principal you want to log in as.

Starting the services shows:
ERROR: java.io.IOException: Login failure for nachuang/mcw-cc-nachuang@hdfs.server from keytab /home/nachuang/Workspace/Hadoop/hadoop-1.0.4/conf/nachuang.keytab
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab
This is almost always a keytab problem; regenerate the keytab with ktadd (a sketch follows).
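A minimal sketch of regenerating and re-checking the keytab, assuming the same principals and paths used earlier (ktadd bumps the key version number, so re-export every principal the daemons use):

kadmin: ktadd -k <local path>/nachuang.keytab nachuang/mcw-cc-nachuang host/mcw-cc-nachuang nachuang/mcw-cc-node host/mcw-cc-node
# copy the new keytab into the Hadoop conf directory and verify it
klist -e -k -t /home/nachuang/Workspace/Hadoop/hadoop-1.0.4/conf/nachuang.keytab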
