Configuring Kerberos Authentication on a CDH 5.8.4 (Hadoop 2.6.0) Cluster with Hive, YARN, and HBase

1. Hadoop's Authentication Mechanism

        In short, a Hadoop cluster without Kerberos authentication can be accessed by anyone who has a client. Worse, on any intranet machine with root privileges, simply creating a Linux user with the right name is enough to obtain that user's permissions on the Hadoop cluster. Once Kerberos is in place, any user on any machine must first be registered with the Kerberos KDC before it is allowed to communicate with the other components of the cluster.
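For example, with simple authentication the identity claimed by the client is trusted as-is, so anyone who can run a Hadoop client can impersonate another account just by setting an environment variable. A short illustration of the problem (assuming hdfs is the superuser of an insecure cluster):

export HADOOP_USER_NAME=hdfs     # claim to be the hdfs superuser; simple auth takes this at face value
hadoop fs -ls /user              # runs with the hdfs user's permissions
# With Kerberos enabled, the same commands are rejected unless a valid ticket
# for a principal known to the KDC has been obtained with kinit.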

2. Configuring Kerberos for HDFS

2.1 Creating Principals

        In the Kerberos security model, a principal is an object in a realm, and every principal is paired with a secret key.
        A principal can represent a service, a host, or a user; to Kerberos there is no difference.
        The KDC (Key Distribution Center) knows the secret keys of all principals, while each principal only knows its own secret key. This is where the term "shared secret" comes from.
       For Hadoop, principals take the form username/fully.qualified.domain.name@YOUR-REALM.COM.
       In a CDH cluster installed from the yum repository, the NameNode and DataNode are started by the hdfs user, so two principals, hdfs and HTTP, are added for every node in the cluster.
       In my case HDFS is not started by the hdfs user but by the hadoop user, so on the KDC server I create a hadoop principal, an HTTP principal, and a host principal for each node instead:

[root@host151 ~]# kadmin.local
kadmin.local:addprinc -randkey hadoop/host151@NBDP.COM
kadmin.local:addprinc -randkey hadoop/host152@NBDP.COM
kadmin.local:addprinc -randkey hadoop/host153@NBDP.COM
kadmin.local:addprinc -randkey HTTP/host151@NBDP.COM
kadmin.local:addprinc -randkey HTTP/host152@NBDP.COM
kadmin.local:addprinc -randkey HTTP/host153@NBDP.COM
kadmin.local:addprinc -randkey host/host151@NBDP.COM
kadmin.local:addprinc -randkey host/host152@NBDP.COM
kadmin.local:addprinc -randkey host/host153@NBDP.COM

Note: the -randkey flag does not set a password for the new principal; instead it tells kadmin to generate a random key. It is used here because these principals never log in interactively; they are service accounts for machines.
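By contrast, a principal intended for a person would normally be created without -randkey, so that kadmin prompts for a password to be used interactively (the user name below is purely hypothetical):

kadmin.local:  addprinc alice@NBDP.COM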

After creating them, list the principals to verify:
kadmin.local:  listprincs
HTTP/host151@NBDP.COM
HTTP/host152@NBDP.COM
HTTP/host153@NBDP.COM
hadoop/host151@NBDP.COM
hadoop/host152@NBDP.COM
hadoop/host153@NBDP.COM
host/host151@NBDP.COM
host/host152@NBDP.COM
host/host153@NBDP.COM

2.2 Creating the Keytab File

        A keytab is a file that contains principals and their encrypted keys.

        A keytab file is unique to each host, because the keys embed the hostname. A keytab lets a host authenticate its principals to Kerberos without human interaction and without storing a plaintext password.

       Because anyone who can read a keytab file on a server can pass Kerberos authentication as the principals it contains, keytab files must be stored carefully and be readable by only a small number of users.

      Create an hdfs keytab containing the hdfs principal and the host principal:

     xst -norandkey -k hdfs.keytab hdfs/fully.qualified.domain.name host/fully.qualified.domain.name

     Create a mapred keytab containing the mapred principal and the host principal:

     xst -norandkey -k mapred.keytab mapred/fully.qualified.domain.name host/fully.qualified.domain.name

On the KDC server node, run the commands below to add the start-up user (hadoop), HTTP, and host principals for every node into a single keytab file:

kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab hadoop/host151@NBDP.COM
kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab hadoop/host152@NBDP.COM
kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab hadoop/host153@NBDP.COM
kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab HTTP/host151@NBDP.COM
kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab HTTP/host152@NBDP.COM
kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab HTTP/host153@NBDP.COM
kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab host/host151@NBDP.COM
kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab host/host152@NBDP.COM
kadmin.local:ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab host/host153@NBDP.COM

       This generates hadoop.keytab under the /home/keydir/hadoop/ directory; by default (without -k) the keytab would be generated under /var/kerberos/krb5kdc/.

Use klist to display the entries in hadoop.keytab:
[root@host150 krb5kdc]# klist -ket hadoop.keytab

2.3 Deploying the Kerberos Keytab File

Copy hadoop.keytab to the /home/keydir/hadoop/ directory on the other nodes:

[root@host150 krb5kdc]# scp /home/keydir/hadoop/hadoop.keytab root@host152:/home/keydir/hadoop/
[root@host150 krb5kdc]# scp /home/keydir/hadoop/hadoop.keytab root@host153:/home/keydir/hadoop/

Set the ownership and permissions:

[root@host151 krb5kdc]# chown -R hadoop:hadoop /home/keydir/hadoop
[root@host151 krb5kdc]# chmod 400 /home/keydir/hadoop/hadoop.keytab
[root@host151 krb5kdc]# ssh host152 "chown -R hadoop:hadoop /home/keydir/hadoop; chmod 400 /home/keydir/hadoop/hadoop.keytab"
[root@host151 krb5kdc]# ssh host153 "chown -R hadoop:hadoop /home/keydir/hadoop; chmod 400 /home/keydir/hadoop/hadoop.keytab"

       Because a keytab amounts to a permanent credential that needs no password (it becomes invalid if the principal's password is changed in the KDC), any user with read access to the file could impersonate the principals it contains when accessing Hadoop. The keytab must therefore be readable only by its owner (mode 0400).

2.4 Obtaining Tickets

Log in to each node as the hadoop user and obtain a ticket from the hadoop.keytab that was distributed to that node.

[hadoop@host151 ~]$ kinit -k -t /home/keydir/hadoop/hadoop.keytab hadoop/host151@NBDP.COM
[hadoop@host152 ~]$ kinit -k -t /home/keydir/hadoop/hadoop.keytab hadoop/host152@NBDP.COM
[hadoop@host153 ~]$ kinit -k -t /home/keydir/hadoop/hadoop.keytab hadoop/host153@NBDP.COM

       If you get the error "kinit: Key table entry not found while getting initial credentials", the keytab merge above went wrong; redo the previous steps.
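To confirm that a ticket was actually obtained on each node, run klist; it should show a ticket cache whose default principal is the one passed to kinit (for example hadoop/host151@NBDP.COM) together with a krbtgt/NBDP.COM@NBDP.COM entry.

[hadoop@host151 ~]$ klist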

3. Modifying the HDFS Configuration Files

3.1 Stopping the Cluster
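A minimal sketch for stopping the cluster, assuming the stock scripts under the Hadoop sbin directory and that YARN is running as well:

[hadoop@host151 sbin]$ cd /home/hadoop/bigdata/hadoop/sbin
[hadoop@host151 sbin]$ sh stop-yarn.sh
[hadoop@host151 sbin]$ sh stop-dfs.sh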

3.2 Modifying the Cluster Configuration

[root@host150 hadoop]# cd /root/bigdata/hadoop/etc/hadoop

Add the following to the core-site.xml file on every node in the cluster:

    <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
    </property>
    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
    </property>

Add the following to the hdfs-site.xml file on every node in the cluster:

    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>dfs.namenode.keytab.file</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.https.principal</name>
        <value>HTTP/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>dfs.secondary.namenode.kerberos.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>dfs.secondary.namenode.keytab.file</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>
    <property>
        <name>dfs.secondary.namenode.kerberos.https.principal</name>
        <value>HTTP/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>dfs.datanode.kerberos.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>dfs.datanode.keytab.file</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>
    <property>
        <name>dfs.datanode.kerberos.https.principal</name>
        <value>HTTP/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>700</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:1004</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:1006</value>
    </property>

If WebHDFS is enabled, also add:

    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.principal</name>
        <value>HTTP/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.keytab</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>
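Once Kerberos is enabled, WebHDFS requests must carry a SPNEGO token, so a user who has run kinit can test it with curl; the NameNode HTTP port 50070 below is the CDH 5 default and is an assumption here:

[hadoop@host151 ~]$ curl --negotiate -u : "http://host151:50070/webhdfs/v1/?op=LISTSTATUS"
# Without a valid Kerberos ticket the same request is rejected with HTTP 401.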

If you want to enable SSL instead, add the HTTPS-related configuration below and change dfs.datanode.address and dfs.datanode.http.address to high ports (greater than 1024). Because the HTTPS setup is fairly involved, it is not used here.

    <property>
        <name>dfs.http.policy</name>
        <value>HTTPS_ONLY</value>
    </property>
    <property>
        <name>dfs.data.transfer.protection</name>
        <value>integrity</value>
    </property>

If HDFS is configured with QJM HA, you also need to add the following (and you must configure Kerberos for ZooKeeper as well):

	<property>
	   <name>dfs.journalnode.keytab.file</name>
	   <value>/home/keydir/hadoop/hadoop.keytab</value>
	</property>
	<property>
	   <name>dfs.journalnode.kerberos.principal</name>
	   <value>hadoop/_HOST@NBDP.COM</value>
	</property>
	<property>
	   <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
	   <value>HTTP/_HOST@NBDP.COM</value>
	</property>

Notes:
1. dfs.datanode.address is the hostname or IP address (and port) that the data transceiver RPC server binds to. With security enabled, the port must be below 1024 (a privileged port); otherwise the DataNode fails to start with the error "Cannot start secure cluster without privileged resources".
2. The instance part of a principal can be written as _HOST; it is automatically replaced with the fully qualified domain name at runtime.
3. With security enabled, Hadoop performs permission checks on the HDFS block data directories (specified by dfs.datanode.data.dir) to prevent user code from bypassing Kerberos and the HDFS permission model by reading block files directly from the local filesystem instead of through the HDFS API. Administrators can adjust the DataNode directory permissions via dfs.datanode.data.dir.perm; here it is set to 700.

Modify the Hadoop configuration file mapred-site.xml and add the following:

    <property>
        <name>mapreduce.jobtracker.kerberos.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.keytab.file</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>
    <property>
        <name>mapreduce.tasktracker.kerberos.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>mapreduce.tasktracker.keytab.file</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>

Modify the Hadoop configuration file yarn-site.xml and add the following:

    <property>
        <name>yarn.resourcemanager.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>yarn.resourcemanager.keytab</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>
    <property>
        <name>yarn.nodemanager.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>yarn.nodemanager.keytab</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>

3.3 Checking HDFS and Local File Permissions on the Cluster

See "Verify User Accounts and Groups in CDH 5 Due to Security":
https://docs.cloudera.com/documentation/enterprise/6/latest/topics/cm_sg_s1_install_cm_cdh.html
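Since every daemon in this cluster runs as the hadoop user, a minimal local sanity check could look like the following; the data directory path is an assumption, so substitute your actual dfs.datanode.data.dir value:

[hadoop@host151 ~]$ id hadoop                                  # the account and group exist as expected
[hadoop@host151 ~]$ ls -ld /home/hadoop/bigdata/hadoop/data    # assumed dfs.datanode.data.dir; should be owned by hadoop, mode 700
[hadoop@host151 ~]$ ls -l /home/keydir/hadoop/hadoop.keytab    # should be owned by hadoop, mode 400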

3.4 Starting the Cluster

[hadoop@host151 sbin]$ cd /home/hadoop/bigdata/hadoop/sbin
[hadoop@host151 sbin]$ sh start-dfs.sh

Checking the logs directory on each node shows that only the NameNode started; the DataNodes did not come up, and the DataNode log reports the following error:

java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP.  Using privileged resources in combination with SASL RPC data transfer protection is not supported.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1285)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1185)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:460)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2497)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2384)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2431)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2613)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2637)

         Cause: starting with Hadoop 2.6.0, an HDFS cluster with Kerberos authentication requires the DataNode to run in a secure environment, which here means launching it through JSVC. First check whether the jsvc command is installed, then configure the corresponding environment variables; this resolves the problem.

4. Installing JSVC

Download jsvc from:
http://commons.apache.org/proper/commons-daemon/download_daemon.cgi

Download both commons-daemon-1.2.2-src.zip and commons-daemon-1.2.2-bin.zip.

Unpack the source archive, change into the unix directory, and build the jsvc sources to produce the jsvc binary:

[hadoop@host151 jsvc]$ unzip commons-daemon-1.2.2-src.zip
[hadoop@host151 jsvc]$ cd commons-daemon-1.2.2-src/src/native/unix
[hadoop@host151 unix]$ ./configure --with-java=/opt/jdk1.8.0_131
[hadoop@host151 unix]$ make
[hadoop@host151 unix]$ ll
total 396
-rw-rw-r-- 1 hadoop hadoop  16345 Feb  1 21:12 config.log
-rwxrwxr-x 1 hadoop hadoop     92 Feb  1 21:12 config.nice
-rwxrwxr-x 1 hadoop hadoop  24805 Feb  1 21:12 config.status
-rwxrwxr-x 1 hadoop hadoop 147290 Sep 30 16:51 configure
-rw-r--r-- 1 hadoop hadoop   5055 Sep 30 16:26 configure.in
-rw-r--r-- 1 hadoop hadoop   2594 Sep 30 16:26 INSTALL.txt
-rwxrwxr-x 1 hadoop hadoop 174664 Feb  1 21:12 jsvc
-rw-rw-r-- 1 hadoop hadoop   1175 Feb  1 21:12 Makedefs
-rw-r--r-- 1 hadoop hadoop   1081 Sep 30 16:26 Makedefs.in
-rw-rw-r-- 1 hadoop hadoop   1110 Feb  1 21:12 Makefile
-rw-r--r-- 1 hadoop hadoop   1110 Sep 30 16:26 Makefile.in
drwxr-xr-x 2 hadoop hadoop     58 Sep 30 16:26 man
drwxr-xr-x 2 hadoop hadoop   4096 Feb  1 21:12 native
drwxr-xr-x 2 hadoop hadoop    158 Sep 30 16:26 support

If the configure command fails with the error below, a C compiler is missing; install it with yum and rerun configure:
configure: error: no acceptable C compiler found in $PATH

[root@host151 ~]# yum install gcc

Copy the generated jsvc binary into the libexec directory under HADOOP_HOME on every node:
[hadoop@host151 unix]$ cp jsvc /home/hadoop/bigdata/hadoop/libexec
[hadoop@host151 unix]$ scp jsvc hadoop@host152:/home/hadoop/bigdata/hadoop/libexec
[hadoop@host151 unix]$ scp jsvc hadoop@host153:/home/hadoop/bigdata/hadoop/libexec

Edit hadoop-env.sh and set the following entries:
[hadoop@host151 hadoop]$ vim hadoop-env.sh
export HADOOP_SECURE_DN_USER=hadoop
export JSVC_HOME=/home/hadoop/bigdata/jsvc/commons-daemon-1.2.2-src/src/native/unix

Distribute the modified hadoop-env.sh to the other nodes of the cluster:
[hadoop@host151 hadoop]$ scp hadoop-env.sh hadoop@host152:/home/hadoop/bigdata/hadoop/etc/hadoop
[hadoop@host151 hadoop]$ scp hadoop-env.sh hadoop@host153:/home/hadoop/bigdata/hadoop/etc/hadoop

Check the commons-daemon-1.2.2.jar that already sits in the HDFS lib directory on each node. If its version does not match the one just compiled, download commons-daemon-1.2.2-bin.zip, extract commons-daemon-1.2.2.jar from it, delete the differently versioned jar from the lib directory, and put the jar that matches the compiled jsvc in its place.

Check the version of the existing commons-daemon jar; if it differs from the version compiled above, delete it:
[hadoop@host151 hadoop]$ cd /home/hadoop/bigdata/hadoop/share/hadoop/hdfs/lib
[hadoop@host151 lib]$ ll commons-daemon-*.jar
[hadoop@host151 lib]$ rm commons-daemon-*.jar
[hadoop@host151 jsvc]$ unzip commons-daemon-1.2.2-bin.zip
[hadoop@host151 commons-daemon-1.2.2]$ cp commons-daemon-1.2.2.jar /home/hadoop/bigdata/hadoop/share/hadoop/hdfs/lib
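If you are unsure which version an existing jar actually is, its manifest usually says (an optional quick check):

[hadoop@host151 lib]$ unzip -p commons-daemon-*.jar META-INF/MANIFEST.MF | grep -i version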

If the versions do not match on the other nodes either, delete their old commons-daemon-*.jar as well and scp the new jar from this node to each of them:
[hadoop@host151 lib]$ scp commons-daemon-1.2.2.jar hadoop@host152:/home/hadoop/bigdata/hadoop/share/hadoop/hdfs/lib
[hadoop@host151 lib]$ scp commons-daemon-1.2.2.jar hadoop@host153:/home/hadoop/bigdata/hadoop/share/hadoop/hdfs/lib

Add the environment variable and source the profile to make it take effect; do the same on every node:
[hadoop@host151 hadoop]$ vim /home/hadoop/.bash_profile
export JSVC_HOME=/home/hadoop/bigdata/jsvc/commons-daemon-1.2.2-src/src/native/unix
[hadoop@host151 hadoop]$ source /home/hadoop/.bash_profile

Start the NameNode as the hadoop user:
[hadoop@host151 sbin]$ start-dfs.sh
Start the DataNodes as root; this starts all DataNodes at once:
[root@host151 sbin]# start-secure-dns.sh

Alternatively, start the DataNodes one node at a time as root:
[root@host152 sbin]# hadoop-daemon.sh start datanode
[root@host153 sbin]# hadoop-daemon.sh start datanode
      Note: once Kerberos secure mode is enabled, start-dfs.sh only starts the NameNode. The DataNodes must be started separately through the jsvc mechanism configured above, and they can only be started by the root user.
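A quick way to verify that the secure DataNodes are actually up (a sketch; the dfsadmin report needs a valid ticket for the HDFS superuser, which is hadoop in this setup):

[root@host152 ~]# ps -ef | grep jsvc | grep -v grep     # the secure DataNode runs under jsvc
[hadoop@host151 ~]$ hdfs dfsadmin -report               # every DataNode should be listed as live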

5. Integrating Kerberos with Hive

Modify hive-site.xml and add the following configuration:

    <property>
        <name>hive.server2.authentication</name>
        <value>KERBEROS</value>
    </property>
    <property>
        <name>hive.server2.authentication.kerberos.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>
    <property>
        <name>hive.server2.authentication.kerberos.keytab</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>
    <property>
        <name>hive.metastore.sasl.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.metastore.kerberos.keytab.file</name>
        <value>/home/keydir/hadoop/hadoop.keytab</value>
    </property>
    <property>
        <name>hive.metastore.kerberos.principal</name>
        <value>hadoop/_HOST@NBDP.COM</value>
    </property>

After enabling Kerberos authentication, test the connection with beeline:

[hadoop@host151 conf]$ beeline
beeline> !connect jdbc:hive2://localhost:10000/default;principal=hadoop/host151@NBDP.COM hadoop
scan complete in 3ms
Connecting to jdbc:hive2://localhost:10000/default;principal=hadoop/host151@NBDP.COM
Enter password for jdbc:hive2://localhost:10000/default;principal=hadoop/host151@NBDP.COM: *****
Connected to: Apache Hive (version 1.1.0-cdh5.8.4)
Driver: Hive JDBC (version 1.1.0-cdh5.8.4)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/default> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| smart_test     |
+----------------+--+
2 rows selected (0.15 seconds)
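Note that with a Kerberized HiveServer2 the beeline client authenticates with its own Kerberos ticket, so kinit must already have been run in that shell; the user name and password on the !connect line are not what actually authenticates the session. A quick pre-check:

[hadoop@host151 conf]$ klist     # must show a valid ticket (e.g. hadoop/host151@NBDP.COM) before connecting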

6. Integrating Kerberos with HBase

6.1 Kerberos Authentication for ZooKeeper

      Log in to the KDC server as root and create the zookeeper principals; the primary part must be "zookeeper":

[root@host151 ~]# kadmin.local
kadmin.local:  addprinc -randkey zookeeper/host151@NBDP.COM
kadmin.local:  addprinc -randkey zookeeper/host152@NBDP.COM
kadmin.local:  addprinc -randkey zookeeper/host153@NBDP.COM

Append the newly created principals to the hadoop user's keytab; HBase can simply share the hadoop user's keytab.

kadmin.local:  ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab zookeeper/host151@NBDP.COM
kadmin.local:  ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab zookeeper/host152@NBDP.COM
kadmin.local:  ktadd -norandkey -k /home/keydir/hadoop/hadoop.keytab zookeeper/host153@NBDP.COM

Check that the new entries were added to the keytab:

[root@host151 ~]# klist -ket  /home/keydir/hadoop/hadoop.keytab

Distribute the keytab to the other nodes:

[root@host151 ~]# scp /home/keydir/hadoop/hadoop.keytab hadoop@host152:/home/keydir/hadoop/
[root@host151 ~]# scp /home/keydir/hadoop/hadoop.keytab hadoop@host153:/home/keydir/hadoop/

Re-obtain a ticket on each node:

[hadoop@host151 conf]$ kinit -kt /home/keydir/hadoop/hadoop.keytab hadoop/host151@NBDP.COM
[hadoop@host152 sbin]$ kinit -kt /home/keydir/hadoop/hadoop.keytab hadoop/host152@NBDP.COM
[hadoop@host153 sbin]$ kinit -kt /home/keydir/hadoop/hadoop.keytab hadoop/host153@NBDP.COM

If the keytab path is the same on every node, you can simply run on each node: kinit -kt /home/keydir/hadoop/hadoop.keytab $USER/$HOSTNAME
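Equivalently, the tickets can be refreshed from a single node in one loop (a sketch, assuming password-less ssh as the hadoop user and that hostname returns the short names used in the principals):

[hadoop@host151 ~]$ for h in host151 host152 host153; do ssh hadoop@$h 'kinit -kt /home/keydir/hadoop/hadoop.keytab hadoop/$(hostname)@NBDP.COM'; done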

6.2 Modifying the ZooKeeper Configuration

Switch to ZooKeeper's conf directory and create jaas.conf with the contents below; make the same change on every node. Remember to set the hostname in the principal for each node; _HOST cannot be used here.

[hadoop@host151 conf]$ cd /home/hadoop/bigdata/zookeeper/conf
[hadoop@host151 conf]$ touch jaas.conf
[hadoop@host151 conf]$ vim jaas.conf
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/home/keydir/hadoop/hadoop.keytab" 
  storeKey=true
  useTicketCache=false
  principal="zookeeper/host151@NBDP.COM";
};

Note: remember to adjust the hostname in the principal above on each node; it cannot be replaced with _HOST.

In the same directory, create a java.env file whose JVM flags point to the jaas.conf created above; sync it to every node:

[hadoop@host151 conf]$ touch java.env
[hadoop@host151 conf]$ vim java.env
export JVMFLAGS="-Djava.security.auth.login.config=/home/hadoop/bigdata/zookeeper/conf/jaas.conf"
export JAVA_HOME="/opt/jdk1.8.0_131"

Edit zoo.cfg and append the Kerberos-related settings at the end; make the same change on every node:

[hadoop@host151 conf]$ vim zoo.cfg 
# kerberos configuration
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000

6.3 Modifying the HBase Configuration

Add zk-jaas.conf with the contents below, then sync it to every node; remember to set the hostname in the principal for each node, since _HOST cannot be used here.

[hadoop@host151 conf]$ cd /home/hadoop/bigdata/hbase/conf
[hadoop@host151 conf]$ touch zk-jaas.conf
[hadoop@host151 conf]$ vim zk-jaas.conf
Client {
      com.sun.security.auth.module.Krb5LoginModule required
      useKeyTab=true
      useTicketCache=false
      keyTab="/home/keydir/hadoop/hadoop.keytab"
      principal="hadoop/host151@NBDP.COM";
   };

Edit hbase-env.sh as follows so that it points to the zk-jaas.conf just added:

[hadoop@host151 conf]$ vim hbase-env.sh
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -Djava.security.auth.login.config=/home/hadoop/bigdata/hbase/conf/zk-jaas.conf"
export HBASE_MANAGES_ZK=false

Edit hbase-site.xml and add the Kerberos-related settings below (here the hostname can be written as _HOST), then distribute the file to every node:

[hadoop@host151 conf]$ vim hbase-site.xml

	<property>
		<name>hbase.security.authentication</name>
		<value>kerberos</value>
	</property>
	<property>
		<name>hbase.rpc.engine</name>
		<value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
	</property>
	<property>
		<name>hbase.coprocessor.region.classes</name>
		<value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
	</property>
	<property>
		<name>hbase.regionserver.kerberos.principal</name>
		<value>hadoop/_HOST@NBDP.COM</value>
	</property>
	<property>
		<name>hbase.regionserver.keytab.file</name>
		<value>/home/keydir/hadoop/hadoop.keytab</value>
	</property>
	<property>
		<name>hbase.master.kerberos.principal</name>
		<value>hadoop/_HOST@NBDP.COM</value>
	</property>
	<property>
		<name>hbase.master.keytab.file</name>
		<value>/home/keydir/hadoop/hadoop.keytab</value>
	</property>

6.4 Starting HBase

[hadoop@host151 conf]$ start-hbase.sh 

Startup fails: the master keeps timing out, and the region servers log the following error:

2020-02-03 11:58:15,504 WARN  [regionserver/host152/192.168.206.152:60020] security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
2020-02-03 11:58:15,505 WARN  [regionserver/host152/192.168.206.152:60020] ipc.RpcClientImpl: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
2020-02-03 11:58:15,505 FATAL [regionserver/host152/192.168.206.152:60020] ipc.RpcClientImpl: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1783)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2295)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:911)
        at java.lang.Thread.run(Thread.java:748)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
        ... 18 more
2020-02-03 11:58:15,506 WARN  [regionserver/host152/192.168.206.152:60020] regionserver.HRegionServer: error telling master we are up
com.google.protobuf.ServiceException: java.io.IOException: Could not set up IO Streams to host151/192.168.206.151:60000
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2295)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:911)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Could not set up IO Streams to host151/192.168.206.151:60000
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:785)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
        ... 5 more
Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:685)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1783)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:643)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:751)
        ... 9 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1783)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
        ... 9 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
        ... 18 more

Cause: it appears that in secure mode the cluster cannot be started with the start-hbase.sh script; start the daemons on each node individually instead:

[hadoop@host151 bin]$ hbase-daemon.sh start master
[hadoop@host152 logs]$ hbase-daemon.sh start regionserver
[hadoop@host153 logs]$ hbase-daemon.sh start regionserver

Then access the cluster with hbase shell to verify.
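A minimal check from a shell that already holds a ticket: whoami shows the Kerberos identity HBase sees, and list should return without authentication errors.

[hadoop@host151 ~]$ hbase shell
hbase(main):001:0> whoami
hbase(main):002:0> list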

 
