Enabling Kerberos authentication on Hadoop 2.7.2

Environment:
Three machines in total:
hadoop11: 192.168.230.11 - NameNode, Kerberos client
hadoop12: 192.168.230.12 - DataNode, Kerberos client
hadoop13: 192.168.230.13 - DataNode, Kerberos server (KDC)
Make sure the Hadoop cluster is already installed and starts up normally before installing Kerberos.

Part 1: Install the Kerberos server

1. On hadoop13, install the Kerberos server packages:
yum install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
2. After installing these packages, the following configuration files are available:
/var/kerberos/krb5kdc/kdc.conf: KDC settings
Example configuration:

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88
[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  max_renewable_life = 7d
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

Note: aes256-cts has been removed from supported_enctypes here; without the unlimited-strength JCE policy (installed later), the JVM cannot use AES-256.
/etc/krb5.conf: general Kerberos configuration, such as the location of the KDC, the admin server, and the realms.
Example configuration:

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log
[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
[realms]
 HADOOP.COM = {
  kdc = hadoop13
  admin_server = hadoop13
 }
[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM

Note: /etc/krb5.conf must be kept identical to the copies on hadoop11 and hadoop12 (see step 7).
3. Create/initialize the Kerberos database

/usr/sbin/kdb5_util create -s -r HADOOP.COM

Note: this step can take quite a while. It prompts you to set the database master password (remember it).
Once the Kerberos database has been created, several files appear under /var/kerberos/krb5kdc:
kadm5.acl
kdc.conf
principal
principal.kadm5
principal.kadm5.lock
principal.ok
4. Add a database administrator

/usr/sbin/kadmin.local -q "addprinc admin/admin"

5. Grant ACL permissions to the database administrator
Edit /var/kerberos/krb5kdc/kadm5.acl so that it contains:

*/admin@HADOOP.COM *

Any principal whose name matches */admin@HADOOP.COM is treated as an administrator, and the trailing * grants all permissions.
6. Start the Kerberos daemons on hadoop13
Start them manually:
service krb5kdc start
service kadmin start
Enable them at boot:
chkconfig krb5kdc on
chkconfig kadmin on
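To confirm both daemons are running, check their status (the KDC listens on port 88, as set in kdc.conf):

service krb5kdc status
service kadmin status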
7. On hadoop11 and hadoop12, install the Kerberos client packages:

yum install krb5-workstation krb5-libs krb5-auth-dialog

Configure /etc/krb5.conf on the clients so that it matches the file on hadoop13 (run from hadoop13):

scp /etc/krb5.conf root@hadoop11:/etc
scp /etc/krb5.conf root@hadoop12:/etc
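With the client packages installed and krb5.conf in place, the client setup can be verified by authenticating as the admin principal created in step 4 (this prompts for the password chosen when that principal was added):

kinit admin/admin@HADOOP.COM
klist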

8. Day-to-day Kerberos operations and common issues
Log in locally on the KDC: kadmin.local
list_principals shows all existing principals.
Commonly used commands:
Log in to Kerberos: kadmin.local
List existing principals: list_principals, listprincs, get_principals, getprincs
Add a principal: add_principal, addprinc, ank
Change a principal's password: change_password, cpw
Delete a principal: delete_principal, delprinc
Authenticate as a user: kinit
Show the current authenticated user / ticket cache: klist
Destroy the current ticket cache: kdestroy
Generate a keytab: use the xst or ktadd command (see the example below)
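For example, creating a test principal and exporting it to a keytab might look like this (the test principal name and output path are only illustrative):

kadmin.local -q "addprinc -randkey test/hadoop11@HADOOP.COM"
kadmin.local -q "xst -k /root/test.keytab test/hadoop11@HADOOP.COM"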
Authenticate with a keytab file:

kinit -k -t /opt/module/hadoop-2.7.2/etc/hadoop/root.keytab root/hadoop12@HADOOP.COM

Additionally, configure JCE: CentOS 6.5 and later default to AES-256 encryption, so the unlimited-strength JCE policy files (jce_policy-8.zip) must be installed and configured on every node. Unzip the archive and copy the jars over the ones in $JAVA_HOME/jre/lib/security.
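A minimal sketch of that copy step, assuming the archive unpacks into an UnlimitedJCEPolicyJDK8 directory (as the Oracle JDK 8 policy zip normally does):

unzip jce_policy-8.zip
cp UnlimitedJCEPolicyJDK8/local_policy.jar UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/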
At this point the Kerberos installation is complete.

Part 2: Enable Kerberos authentication for HDFS

1. Create the principals on hadoop13

kadmin.local -q "addprinc -randkey root/hadoop13@HADOOP.COM"
kadmin.local -q "addprinc -randkey root/hadoop11@HADOOP.COM"
kadmin.local -q "addprinc -randkey root/hadoop12@HADOOP.COM"
kadmin.local -q "addprinc -randkey HTTP/hadoop12@HADOOP.COM"
kadmin.local -q "addprinc -randkey HTTP/hadoop13@HADOOP.COM"
kadmin.local -q "addprinc -randkey HTTP/hadoop11@HADOOP.COM"

2. Create the keytab files (exporting both the root and HTTP principals):

mkdir /home/kerberos
cd /home/kerberos
kadmin.local -q "xst -k root-unmerged.keytab root/hadoop13@HADOOP.COM"
kadmin.local -q "xst -k root-unmerged.keytab root/hadoop11@HADOOP.COM"
kadmin.local -q "xst -k root-unmerged.keytab root/hadoop12@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/hadoop13@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/hadoop11@HADOOP.COM"
kadmin.local -q "xst -k HTTP.keytab HTTP/hadoop12@HADOOP.COM"

Merge the keytab files into a single root.keytab:
[root@hadoop13 kerberos]# ktutil
ktutil: rkt root-unmerged.keytab
ktutil: rkt HTTP.keytab
ktutil: wkt root.keytab
ktutil: q
Verify that the merged keytab works:

kinit -k -t root.keytab root/hadoop11@HADOOP.COM

3. Copy the Kerberos keytab file to each machine

scp root.keytab root@hadoop11:/opt/module/hadoop-2.7.2/etc/hadoop/
scp root.keytab root@hadoop12:/opt/module/hadoop-2.7.2/etc/hadoop/
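On each node it is also worth checking the keytab's contents and tightening its permissions, since it allows passwordless authentication; a sketch, assuming the daemons run as root as in this setup:

klist -kt /opt/module/hadoop-2.7.2/etc/hadoop/root.keytab
chmod 400 /opt/module/hadoop-2.7.2/etc/hadoop/root.keytab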

4. Configure Kerberos in the Hadoop configuration files
Add the following to core-site.xml:

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
 </property>

 <property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
 </property>

Add the following to hdfs-site.xml:

<!--kerberos security-->
 <property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
 </property>
 <property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
 </property>
 <property>
  <name>dfs.namenode.keytab.file</name>
  <value>/opt/module/hadoop-2.7.2/etc/hadoop/root.keytab</value>
 </property>
 <property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
 </property>
 <property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
 </property>
 <property>
  <name>dfs.datanode.keytab.file</name>
  <value>/opt/module/hadoop-2.7.2/etc/hadoop/root.keytab</value>
 </property>
 <property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
 </property>
 <property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
 </property>
 <property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:61004</value>
 </property>
 <property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:61006</value>
 </property>
<!--webHDFS security-->
 <property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
 </property>
 <property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/opt/module/hadoop-2.7.2/etc/hadoop/root.keytab</value>
 </property>
 <property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@HADOOP.COM</value>
 </property>
<!-- datanode SASL-->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>
<property>
  <name>dfs.permissions.supergroup</name>
  <value>supergroup</value>
  <description>The name of the group of super-users.</description>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/opt/module/hadoop-2.7.2/etc/hadoop/root.keytab</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>root/_HOST@HADOOP.COM</value>
</property>

Copy core-site.xml and hdfs-site.xml to the other machines:

scp hdfs-site.xml root@hadoop13:/opt/module/hadoop-2.7.2/etc/hadoop/
scp hdfs-site.xml root@hadoop11:/opt/module/hadoop-2.7.2/etc/hadoop/
scp core-site.xml root@hadoop13:/opt/module/hadoop-2.7.2/etc/hadoop/
scp core-site.xml root@hadoop11:/opt/module/hadoop-2.7.2/etc/hadoop/

Part 3: SSL configuration

If SSL is not configured, starting HDFS fails with an error, because the HTTPS_ONLY policy set above requires a keystore.
Run the following on hadoop13 to generate a CA certificate, then copy ca_cert and ca_key to hadoop11 and hadoop12:

openssl req -new -x509 -keyout ca_key -out ca_cert -days 9999 -subj '/C=CN/ST=hunan/L=changsha/O=dtdream/OU=security/CN=hadoop.com'

Then run the following on each of hadoop11, hadoop12, and hadoop13:

keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=hadoop.com, OU=test, O=test, L=changsha, ST=hunan, C=cn"
keytool -keystore truststore -alias CARoot -import -file ca_cert
keytool -certreq -alias localhost -keystore keystore -file cert
openssl x509 -req -CA ca_cert -CAkey ca_key -in cert -out cert_signed -days 9999 -CAcreateserial -passin pass:changeit
keytool -keystore keystore -alias CARoot -import -file ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed
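Hadoop reads the keystore and truststore locations for HTTPS from ssl-server.xml (and ssl-client.xml) under etc/hadoop. A minimal sketch of ssl-server.xml, assuming the stores generated above are kept in /home/kerberos and were protected with the password changeit (adjust paths and passwords to what was actually entered in keytool); ssl-client.xml needs the matching truststore entries:

<configuration>
 <property>
  <name>ssl.server.keystore.location</name>
  <value>/home/kerberos/keystore</value>
 </property>
 <property>
  <name>ssl.server.keystore.password</name>
  <value>changeit</value>
 </property>
 <property>
  <name>ssl.server.keystore.keypassword</name>
  <value>changeit</value>
 </property>
 <property>
  <name>ssl.server.truststore.location</name>
  <value>/home/kerberos/truststore</value>
 </property>
 <property>
  <name>ssl.server.truststore.password</name>
  <value>changeit</value>
 </property>
</configuration>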

Authenticate with Kerberos on each machine:

kinit -k -t /opt/module/hadoop-2.7.2/etc/hadoop/root.keytab root/hadoop11@HADOOP.COM
kinit -k -t /opt/module/hadoop-2.7.2/etc/hadoop/root.keytab root/hadoop12@HADOOP.COM
kinit -k -t /opt/module/hadoop-2.7.2/etc/hadoop/root.keytab root/hadoop13@HADOOP.COM

Then start HDFS:

cd /opt/module/hadoop-2.7.2/sbin
./start-dfs.sh

Check that the NameNode and DataNode processes come up normally on every node.
Then, on any machine, run: hadoop fs -ls /
Run kdestroy and then run hadoop fs -ls / again; it now fails with an authentication error, which shows that Kerberos authentication is in effect.


Part 4: Enable Kerberos authentication for YARN

Since the HDFS configuration above already set up the keytab, this step reuses the same keytab file; only yarn-site.xml needs to change. Add the following configuration:

<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/opt/module/hadoop-2.7.2/etc/hadoop/root.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name> 
  <value>root/_HOST@HADOOP.COM</value>
</property>
 
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/opt/module/hadoop-2.7.2/etc/hadoop/root.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name> 
  <value>root/_HOST@HADOOP.COM</value>
</property> 
<property>
  <name>yarn.nodemanager.container-executor.class</name>  
 <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property> 
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>root</value>
</property>
<property>
  <name>yarn.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

Configure container-executor.cfg:

#configured value of yarn.nodemanager.linux-container-executor.group
yarn.nodemanager.linux-container-executor.group=root
#comma separated list of users who can not run applications
banned.users=bin
#Prevent other super-users
min.user.id=0
#comma separated list of system users who CAN run applications
allowed.system.users=root,nobody,impala,hive,hdfs,yarn

Copy the updated configuration files (yarn-site.xml and container-executor.cfg) to every node.
In addition, set the following permissions on the container-executor binary; otherwise an error is reported when launching containers:

cd /opt/module/hadoop-2.7.2/bin
chmod 6050 container-executor
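Besides the setuid/setgid bits, the container-executor binary normally must be owned by root with its group matching yarn.nodemanager.linux-container-executor.group (root in this setup), and container-executor.cfg must be owned by root as well; a sketch under those assumptions, with the cfg file in etc/hadoop as in the stock distribution:

chown root:root /opt/module/hadoop-2.7.2/bin/container-executor
chown root:root /opt/module/hadoop-2.7.2/etc/hadoop/container-executor.cfg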

At this point, Kerberos authentication for the entire cluster is complete.
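As a quick end-to-end check, YARN can be started and the example job that ships with Hadoop 2.7.2 submitted after a kinit (a sketch; the jar path matches the stock distribution layout):

cd /opt/module/hadoop-2.7.2
./sbin/start-yarn.sh
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 2 10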
References:
https://blog.csdn.net/lovebomei/article/details/79807484
https://blog.csdn.net/forever19870418/article/details/68945850
https://blog.csdn.net/dxl342/article/details/55510659
https://blog.csdn.net/qq_27499099/article/details/77771253
