[Security] Configuring Kerberos on Apache HDFS

4.1 Create authentication principals
In the Kerberos security model, a principal is an object in a realm, and every principal is always paired with a secret key.

The entity behind a principal can be a service, a host, or a user; from Kerberos's point of view they are all the same.

The KDC (Key Distribution Center) knows the secret keys of all principals, while each principal's owner knows only its own secret key. This is where the term "shared secret" comes from.

For Hadoop, principals take the form username/fully.qualified.domain.name@YOUR-REALM.COM.

In a CDH cluster installed from the yum repositories, the NameNode and DataNode are started as the hdfs user, so two principals are added for every node in the cluster: hdfs and HTTP.

On the KDC server (here, cdh-server2), create the hdfs principals:

[root@cdh-server2 ~]# kadmin.local -q "addprinc -randkey hdfs/cdh-server1@YONG.COM"
Authenticating as principal root/admin@YONG.COM with password.
WARNING: no policy specified for hdfs/cdh-server1@YONG.COM; defaulting to no policy
Principal "hdfs/cdh-server1@YONG.COM" created.
[root@cdh-server2 ~]# kadmin.local -q "addprinc -randkey hdfs/cdh-server2@YONG.COM"
[root@cdh-server2 ~]# kadmin.local -q "addprinc -randkey hdfs/cdh-server3@YONG.COM"
[root@cdh-server2 ~]#

The -randkey flag does not set a password for the new principal; instead, it tells kadmin to generate a random key. It is used here because this principal never needs an interactive login: it is a service account for a machine.

Create the HTTP principals:

kadmin.local -q "addprinc -randkey HTTP/cdh-server1@YONG.COM"
kadmin.local -q "addprinc -randkey HTTP/cdh-server2@YONG.COM"
kadmin.local -q "addprinc -randkey HTTP/cdh-server3@YONG.COM"

After creation, list the principals:

[root@cdh-server2 ~]# kadmin.local -q "listprincs"
Authenticating as principal root/admin@YONG.COM with password.
HTTP/cdh-server1@YONG.COM
HTTP/cdh-server2@YONG.COM
HTTP/cdh-server3@YONG.COM
K/M@YONG.COM
admin/admin@YONG.COM
hdfs/cdh-server1@YONG.COM
hdfs/cdh-server2@YONG.COM
hdfs/cdh-server3@YONG.COM
kadmin/admin@YONG.COM
kadmin/cdh-server2@YONG.COM
kadmin/changepw@YONG.COM
kiprop/cdh-server2@YONG.COM
krbtgt/YONG.COM@YONG.COM
root/root@YONG.COM
[root@cdh-server2 ~]#

4.2 Create the keytab file
A keytab is a file containing principals and their encrypted keys.

A keytab file is specific to each host, because the keys include the hostname in the principal name. Keytab files allow a host to authenticate a principal to Kerberos without human interaction and without storing a plaintext password.

Because anyone who can read a keytab file on a server can authenticate to Kerberos as the principals it contains, keytab files must be stored carefully and be accessible to as few users as possible.

Create an hdfs keytab containing the hdfs principal and the host principal:

xst -norandkey -k hdfs.keytab hdfs/fully.qualified.domain.name host/fully.qualified.domain.name

Create a mapred keytab containing the mapred principal and the host principal:

xst -norandkey -k mapred.keytab mapred/fully.qualified.domain.name host/fully.qualified.domain.name

Note:
The commands above use xst's -norandkey option, which some Kerberos versions do not support.
When it is unsupported you get a message like: Principal -norandkey does not exist. In that case, generate the keytab files with the method further below instead.

Alternatively, this is what happens if you try to run xst directly from the shell instead of inside kadmin:

[root@cdh-server2 ~]# xst -norandkey -k hdfs.keytab hdfs/fully.qualified.domain.name host/fully.qualified.domain.name
-bash: xst: command not found

On the cdh-server2 node (the KDC server), run the following commands:

[root@cdh-server2 ~]# cd /var/kerberos/krb5kdc
[root@cdh-server2 krb5kdc]# ll
total 40
-rw------- 1 root root    19 Nov  1 19:08 kadm5.acl
-rw------- 1 root root   458 Nov  1 19:03 kdc.conf
-rw------- 1 root root 24576 Nov  2 09:39 principal
-rw------- 1 root root  8192 Nov  1 20:16 principal.kadm5
-rw------- 1 root root     0 Nov  1 20:16 principal.kadm5.lock
-rw------- 1 root root     0 Nov  2 09:39 principal.ok


[root@cdh-server2 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab  hdfs/cdh-server1@YONG.COM"
[root@cdh-server2 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab  hdfs/cdh-server2@YONG.COM"
[root@cdh-server2 krb5kdc]# kadmin.local -q "xst  -k hdfs-unmerged.keytab  hdfs/cdh-server3@YONG.COM"
 
[root@cdh-server2 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/cdh-server1@YONG.COM"
[root@cdh-server2 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/cdh-server2@YONG.COM"
[root@cdh-server2 krb5kdc]# kadmin.local -q "xst  -k HTTP.keytab  HTTP/cdh-server3@YONG.COM"

[root@cdh-server2 krb5kdc]# ll
total 48
-rw------- 1 root root  1454 Nov  2 09:44 HTTP.keytab
-rw------- 1 root root  1454 Nov  2 09:44 hdfs-unmerged.keytab
-rw------- 1 root root    19 Nov  1 19:08 kadm5.acl
-rw------- 1 root root   458 Nov  1 19:03 kdc.conf
-rw------- 1 root root 24576 Nov  2 09:44 principal
-rw------- 1 root root  8192 Nov  1 20:16 principal.kadm5
-rw------- 1 root root     0 Nov  1 20:16 principal.kadm5.lock
-rw------- 1 root root     0 Nov  2 09:44 principal.ok
[root@cdh-server2 krb5kdc]#

This creates two files, hdfs-unmerged.keytab and HTTP.keytab, under /var/kerberos/krb5kdc/. Next, merge the two files into hdfs.keytab with ktutil:

[root@cdh-server2 krb5kdc]# ktutil
ktutil:  rkt hdfs-unmerged.keytab
ktutil:  rkt HTTP.keytab
ktutil:  wkt hdfs.keytab
ktutil:  exit
[root@cdh-server2 krb5kdc]# ll
total 52
-rw------- 1 root root  1454 Nov  2 09:44 HTTP.keytab
-rw------- 1 root root  1454 Nov  2 09:44 hdfs-unmerged.keytab
-rw------- 1 root root  2906 Nov  2 09:49 hdfs.keytab
-rw------- 1 root root    19 Nov  1 19:08 kadm5.acl
-rw------- 1 root root   458 Nov  1 19:03 kdc.conf
-rw------- 1 root root 24576 Nov  2 09:44 principal
-rw------- 1 root root  8192 Nov  1 20:16 principal.kadm5
-rw------- 1 root root     0 Nov  1 20:16 principal.kadm5.lock
-rw------- 1 root root     0 Nov  2 09:44 principal.ok
[root@cdh-server2 krb5kdc]#

Use klist to list the entries in hdfs.keytab:

[root@cdh-server2 krb5kdc]#  klist -ket  hdfs.keytab
Keytab name: FILE:hdfs.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   2 11/02/18 09:49:51 hdfs/cdh-server1@YONG.COM (aes128-cts-hmac-sha1-96)
   2 11/02/18 09:49:51 hdfs/cdh-server1@YONG.COM (des3-cbc-sha1)
   2 11/02/18 09:49:51 hdfs/cdh-server1@YONG.COM (arcfour-hmac)
   2 11/02/18 09:49:51 hdfs/cdh-server1@YONG.COM (camellia256-cts-cmac)
   2 11/02/18 09:49:51 hdfs/cdh-server1@YONG.COM (camellia128-cts-cmac)
   2 11/02/18 09:49:51 hdfs/cdh-server1@YONG.COM (des-hmac-sha1)
   2 11/02/18 09:49:51 hdfs/cdh-server1@YONG.COM (des-cbc-md5)
   2 11/02/18 09:49:51 hdfs/cdh-server2@YONG.COM (aes128-cts-hmac-sha1-96)
   2 11/02/18 09:49:51 hdfs/cdh-server2@YONG.COM (des3-cbc-sha1)
   2 11/02/18 09:49:51 hdfs/cdh-server2@YONG.COM (arcfour-hmac)
   2 11/02/18 09:49:51 hdfs/cdh-server2@YONG.COM (camellia256-cts-cmac)
   2 11/02/18 09:49:51 hdfs/cdh-server2@YONG.COM (camellia128-cts-cmac)
   2 11/02/18 09:49:51 hdfs/cdh-server2@YONG.COM (des-hmac-sha1)
   2 11/02/18 09:49:51 hdfs/cdh-server2@YONG.COM (des-cbc-md5)
   2 11/02/18 09:49:51 hdfs/cdh-server3@YONG.COM (aes128-cts-hmac-sha1-96)
   2 11/02/18 09:49:51 hdfs/cdh-server3@YONG.COM (des3-cbc-sha1)
   2 11/02/18 09:49:51 hdfs/cdh-server3@YONG.COM (arcfour-hmac)
   2 11/02/18 09:49:51 hdfs/cdh-server3@YONG.COM (camellia256-cts-cmac)
   2 11/02/18 09:49:51 hdfs/cdh-server3@YONG.COM (camellia128-cts-cmac)
   2 11/02/18 09:49:51 hdfs/cdh-server3@YONG.COM (des-hmac-sha1)
   2 11/02/18 09:49:51 hdfs/cdh-server3@YONG.COM (des-cbc-md5)
   2 11/02/18 09:49:51 HTTP/cdh-server1@YONG.COM (aes128-cts-hmac-sha1-96)
   2 11/02/18 09:49:51 HTTP/cdh-server1@YONG.COM (des3-cbc-sha1)
   2 11/02/18 09:49:51 HTTP/cdh-server1@YONG.COM (arcfour-hmac)
   2 11/02/18 09:49:51 HTTP/cdh-server1@YONG.COM (camellia256-cts-cmac)
   2 11/02/18 09:49:51 HTTP/cdh-server1@YONG.COM (camellia128-cts-cmac)
   2 11/02/18 09:49:51 HTTP/cdh-server1@YONG.COM (des-hmac-sha1)
   2 11/02/18 09:49:51 HTTP/cdh-server1@YONG.COM (des-cbc-md5)
   2 11/02/18 09:49:51 HTTP/cdh-server2@YONG.COM (aes128-cts-hmac-sha1-96)
   2 11/02/18 09:49:51 HTTP/cdh-server2@YONG.COM (des3-cbc-sha1)
   2 11/02/18 09:49:51 HTTP/cdh-server2@YONG.COM (arcfour-hmac)
   2 11/02/18 09:49:51 HTTP/cdh-server2@YONG.COM (camellia256-cts-cmac)
   2 11/02/18 09:49:51 HTTP/cdh-server2@YONG.COM (camellia128-cts-cmac)
   2 11/02/18 09:49:51 HTTP/cdh-server2@YONG.COM (des-hmac-sha1)
   2 11/02/18 09:49:51 HTTP/cdh-server2@YONG.COM (des-cbc-md5)
   2 11/02/18 09:49:51 HTTP/cdh-server3@YONG.COM (aes128-cts-hmac-sha1-96)
   2 11/02/18 09:49:51 HTTP/cdh-server3@YONG.COM (des3-cbc-sha1)
   2 11/02/18 09:49:51 HTTP/cdh-server3@YONG.COM (arcfour-hmac)
   2 11/02/18 09:49:51 HTTP/cdh-server3@YONG.COM (camellia256-cts-cmac)
   2 11/02/18 09:49:51 HTTP/cdh-server3@YONG.COM (camellia128-cts-cmac)
   2 11/02/18 09:49:51 HTTP/cdh-server3@YONG.COM (des-hmac-sha1)
   2 11/02/18 09:49:51 HTTP/cdh-server3@YONG.COM (des-cbc-md5)
[root@cdh-server2 krb5kdc]#

To verify that the keys were merged correctly, use the merged keytab to obtain credentials as both the hdfs and HTTP principals:

[root@cdh-server2 krb5kdc]# kinit -k -t hdfs.keytab hdfs/cdh-server1@YONG.COM
[root@cdh-server2 krb5kdc]# kinit -k -t hdfs.keytab HTTP/cdh-server1@YONG.COM
[root@cdh-server2 krb5kdc]#

If you get the error kinit: Key table entry not found while getting initial credentials,
the merge above went wrong; repeat the previous steps.

4.3 Deploy the Kerberos keytab file

Copy the hdfs.keytab file to the /etc/hadoop-2.7.4/conf directory on every node:

[root@cdh-server2 krb5kdc]# scp hdfs.keytab root@cdh-server1:/etc/hadoop-2.7.4/conf
[root@cdh-server2 krb5kdc]# scp hdfs.keytab root@cdh-server2:/etc/hadoop-2.7.4/conf
[root@cdh-server2 krb5kdc]# scp hdfs.keytab root@cdh-server3:/etc/hadoop-2.7.4/conf

Then set the permissions; run the following for cdh-server1, cdh-server2, and cdh-server3:

[root@cdh-server2 krb5kdc]# ssh cdh-server1  "chown hdfs:hadoop /etc/hadoop-2.7.4/conf/hdfs.keytab ;chmod 400 /etc/hadoop-2.7.4/conf/hdfs.keytab"
[root@cdh-server2 krb5kdc]# ssh cdh-server2  "chown hdfs:hadoop /etc/hadoop-2.7.4/conf/hdfs.keytab ;chmod 400 /etc/hadoop-2.7.4/conf/hdfs.keytab"
[root@cdh-server2 krb5kdc]# ssh cdh-server3  "chown hdfs:hadoop /etc/hadoop-2.7.4/conf/hdfs.keytab ;chmod 400 /etc/hadoop-2.7.4/conf/hdfs.keytab"
[root@cdh-server2 krb5kdc]#

A keytab is effectively a permanent credential that needs no password (if the principal's password is changed in the KDC, the keytab becomes invalid). Any user with read access to the file could therefore impersonate the principals in it when accessing Hadoop, so the keytab file must be readable only by its owner (mode 0400).

4.4 Command-line test

Test that the keytab works; if the command prints nothing, it worked:

[root@cdh-server1 soft]# kinit -k -t /etc/hadoop-2.7.4/conf/hdfs.keytab hdfs/cdh-server1@YONG.COM
[root@cdh-server1 soft]#
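
To make this check more explicit, you can also inspect the ticket cache afterwards; a small sketch (ticket lifetimes will depend on your KDC settings):

klist     # a successful kinit leaves a TGT (krbtgt/YONG.COM@YONG.COM) for hdfs/cdh-server1@YONG.COM in the default cache
kdestroy  # discard the test ticket when done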

4.5 Test with a small Java program


import org.apache.hadoop.conf.Configuration;

import java.io.IOException;

/**
 * @Author: chuanchuan.lcc
 * @CreateDate: 2018/11/23 AM10:53
 * @Version: 1.0
 * @Description: Simple test class that logs in to Kerberos from a keytab.
 */
public class LoginTest {
    public static void main(String[] args) throws IOException {

        String krb5ConfPath = args[0];
        String principal = args[1];
        String keytab = args[2];

        System.out.println("参数:krb5ConfPath "+krb5ConfPath);
        System.out.println("参数:principal "+principal);
        System.out.println("参数:keytab "+keytab);


        UserLogginUtil.setKrb5Conf(krb5ConfPath);

        KerberosEntity kerberosEntity = new KerberosEntity();
        kerberosEntity.setKrb5Conf(krb5ConfPath);
        kerberosEntity.setPrincipal(principal);
        kerberosEntity.setKeytab(keytab);

        Configuration configuration = new Configuration();

        UserLogginUtil.login(kerberosEntity,configuration);

        System.out.println("登陆成功");
    }
}


import lombok.Data;

/**
 * @Author: chuanchuan.lcc
 * @CreateDate: 2018/11/23 AM10:49
 * @Version: 1.0
 * @Description: Holder for the Kerberos login configuration.
 */
@Data
public class KerberosEntity {
    /**
     * Path to the krb5 configuration file
     */
    private String krb5Conf;
    /**
     * Keytab file of the service
     */
    private String keytab;

    /**
     * Principal of the service
     */
    private String principal;
    /**
     * Whether the ticket needs to be renewed
     */
    private boolean needRenewTicket;

    /**
     * Maximum ticket renew lifetime
     */
    private long maxRenewTicketLife;

}


    public static void login(KerberosEntity kerberosEntity, Configuration conf) throws IOException {
        // Check the required parameters
        Preconditions.checkArgument(StringUtils.isNotBlank(kerberosEntity.getKeytab()), "must specify the keytab");
        Preconditions.checkArgument(StringUtils.isNotBlank(kerberosEntity.getPrincipal()), "must specify the principal");
        Preconditions.checkArgument(StringUtils.isNotBlank(kerberosEntity.getKrb5Conf()), "must specify the krb5Conf");
        // Validate krb5Conf
        setKrb5Conf(kerberosEntity.getKrb5Conf());
        // Validate the keytab
        File keytabConfigFile = new File(kerberosEntity.getKeytab());
        if (!keytabConfigFile.exists() || !keytabConfigFile.isFile()) {
            throw new IOException(String.format("the specified hdfs-keytab file-%s does not exist, or is not a file",
                    kerberosEntity.getKeytab()));
        }
        // Set the security configuration and log in with the given principal and keytab
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("hadoop.security.authorization", "true");
        UserGroupInformation.setConfiguration(conf);

        String keytab = keytabConfigFile.getAbsolutePath();
        try {
            UserGroupInformation.loginUserFromKeytab(kerberosEntity.getPrincipal(), keytab);
        } catch (IOException e) {
            LOGGER.warn("failed to logIn in the principal-{} and keyTab-{},error:{}",
                    kerberosEntity.getPrincipal(), keytab, ExceptionUtils.getStackTrace(e));
            throw e;
        }
    }
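
The UserLogginUtil class that surrounds this method (including the setKrb5Conf helper it calls) is not shown in the original code. A minimal sketch of what it could look like, assuming setKrb5Conf only validates the path and hands it to the JVM via the java.security.krb5.conf system property:

package com.lcc.hadoop.kerberos;

import java.io.File;
import java.io.IOException;

import com.google.common.base.Preconditions;
import org.apache.commons.lang.StringUtils;
import org.apache.commons.lang.exception.ExceptionUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class UserLogginUtil {

    private static final Logger LOGGER = LoggerFactory.getLogger(UserLogginUtil.class);

    /**
     * Hypothetical helper: validate the krb5.conf path and point the JVM's
     * Kerberos implementation at it through the java.security.krb5.conf property.
     */
    public static void setKrb5Conf(String krb5ConfPath) throws IOException {
        File krb5Conf = new File(krb5ConfPath);
        if (!krb5Conf.exists() || !krb5Conf.isFile()) {
            throw new IOException(String.format(
                    "the specified krb5.conf file-%s does not exist, or is not a file", krb5ConfPath));
        }
        System.setProperty("java.security.krb5.conf", krb5Conf.getAbsolutePath());
    }

    // ... the login(KerberosEntity, Configuration) method shown above goes here ...
}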

Package it into a jar with dependencies, upload it to a machine with the Kerberos client installed, and run:

[root@cdh-server1 soft]# java -cp hadoop_kerberos-1.0-SNAPSHOT-jar-with-dependencies.jar com.lcc.hadoop.kerberos.LoginTest  /etc/krb5.conf   hdfs/cdh-server1@YONG.COM   /etc/hadoop-2.7.4/conf/hdfs.keytab
log4j:WARN No appenders could be found for logger (com.lcc.hadoop.kerberos.UserLogginUtil).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Login successful

No exception is thrown, so the Kerberos login now succeeds.
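
For reference, the jar-with-dependencies artifact used above is typically produced with the maven-assembly-plugin; a minimal pom.xml fragment, assuming Maven is the build tool:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptorRefs>
          <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
      </configuration>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>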

4.6 Modify the HDFS configuration files

4.6.1 Basic configuration

Add the following to core-site.xml on every node in the cluster:

<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
 
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

Add the following to hdfs-site.xml on every node in the cluster:


        <property>
          <name>dfs.block.access.token.enable</name>
          <value>true</value>
        </property>
        <property>
          <name>dfs.datanode.data.dir.perm</name>
          <value>700</value>
        </property>
        <property>
          <name>dfs.namenode.keytab.file</name>
          <value>/etc/hadoop-2.7.4/conf/hdfs.keytab</value>
        </property>
        <property>
          <name>dfs.namenode.kerberos.principal</name>
          <value>hdfs/_HOST@YONG.COM</value>
        </property>
        <property>
          <name>dfs.namenode.kerberos.https.principal</name>
          <value>HTTP/_HOST@YONG.COM</value>
        </property>
        <property>
          <name>dfs.datanode.address</name>
          <value>0.0.0.0:1004</value>
        </property>
        <property>
          <name>dfs.datanode.http.address</name>
          <value>0.0.0.0:1006</value>
        </property>
        <property>
          <name>dfs.datanode.keytab.file</name>
          <value>/etc/hadoop-2.7.4/conf/hdfs.keytab</value>
        </property>
        <property>
          <name>dfs.datanode.kerberos.principal</name>
          <value>hdfs/_HOST@YONG.COM</value>
        </property>
        <property>
          <name>dfs.datanode.kerberos.https.principal</name>
          <value>HTTP/_HOST@YONG.COM</value>
        </property>

4.6.2 Optional configuration: SSL

If you want to enable SSL, add the following (not covered in this article):

<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

4.6.3 Optional configuration: QJM HA

If HDFS uses QJM-based HA, you also need to add the following (and you must also configure Kerberos for ZooKeeper):

<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/etc/hadoop-2.7.4/conf/hdfs.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>hdfs/_HOST@YONG.COM</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@YONG.COM</value>
</property>

4.6.4 Optional configuration: WebHDFS

If WebHDFS is enabled, add:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
 
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@YONG.COM</value>
</property>
 
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop-2.7.4/conf/hdfs.keytab</value>
</property>

4.6.5 Notes

A few points to note in the configuration above:

  1. dfs.datanode.address is the hostname or IP address that the data transceiver RPC server binds to. With security enabled, the port must be below 1024 (a privileged port); otherwise the DataNode fails to start with the error Cannot start secure cluster without privileged resources.
  2. The instance part of a principal may be written as _HOST; the system automatically replaces it with the fully qualified domain name.
  3. With security enabled, Hadoop performs permission checks on the HDFS block data directories (specified by dfs.data.dir), to prevent user code from reading block data directly from the local filesystem instead of going through the HDFS API, which would bypass Kerberos and file permission checks. Administrators can adjust the DataNode directory permissions through dfs.datanode.data.dir.perm; here it is set to 700.

5. Start-up test

Starting the cluster with start-all.sh then hits error 2: where does the localhost in hdfs/localhost@YONG.COM come from?
See error 2 below for the fix.

Restarting raises error 4; add the configuration described in its fix.

After restarting again, the NameNode starts successfully, but the DataNode does not.

Troubleshooting

Error 1

kinit: Client 'hdfs@YONG.COM' not found in Kerberos database while getting initial credentials

Error 2

java.io.IOException: Login failure for hdfs/localhost@YONG.COM from keytab /etc/hadoop-2.7.4/conf/hdfs.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user

Cause:
My configuration file contained:

[root@cdh-server1 soft]# vi  hadoop-2.7.4/etc/hadoop/core-site.xml
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://localhost:9000</value>
        </property>

Fix:
Change it to the hostname:

        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://cdh-server1:9000</value>
        </property>

Problem 3

[root@cdh-server3 ~]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
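
This simply means that no ticket has been obtained into that node's default credential cache yet; it goes away once a ticket is acquired. A sketch, assuming the merged keytab has already been copied to cdh-server3 as in section 4.3:

kinit -k -t /etc/hadoop-2.7.4/conf/hdfs.keytab hdfs/cdh-server3@YONG.COM
klist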

Problem 4:

2018-11-23 17:56:39,871 ERROR org.apache.hadoop.http.HttpServer2: WebHDFS and security are enabled, but configuration property 'dfs.web.authentication.kerberos.principal' is not set.
2018-11-23 17:56:40,013 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /etc/hadoop-2.7.4/conf/hdfs.keytab, for principal ${dfs.web.authentication.kerberos.principal}
2018-11-23 17:56:40,021 WARN org.mortbay.log: failed org.apache.hadoop.hdfs.web.AuthFilter: javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
2018-11-23 17:56:40,021 WARN org.mortbay.log: Failed startup of context org.mortbay.jetty.webapp.WebAppContext@1e0f9063{/,file:/root/soft/hadoop-2.7.4/share/hadoop/hdfs/webapps/hdfs}
javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration

Cause:
Unknown.
Hint:
The log contains the line below. WebHDFS had not been enabled here, so it is unclear why the error appears, but after testing, adding the configuration below makes it go away.

WebHDFS and security are enabled, but configuration property 'dfs.web.authentication.kerberos.principal' is not set.

Fix:
Add the following configuration:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
 
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@YONG.COM</value>
</property>
 
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>

Problem 5:

2018-11-23 19:11:04,759 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is cdh-server1
2018-11-23 19:11:04,763 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP.  Using privileged resources in combination with SASL RPC data transfer protection is not supported.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1201)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1101)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:429)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2406)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2293)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2340)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2522)

Cause:
With Kerberos enabled, HDFS fails to start because SASL must be configured for the DataNode. The official documentation explains this clearly:

As of version 2.6.0, SASL can be used to authenticate the data transfer protocol. In this configuration, it is no longer required for secured clusters to start the DataNode as root using jsvc and bind to privileged ports. To enable SASL on data transfer protocol, set dfs.data.transfer.protection in hdfs-site.xml, set a non-privileged port for dfs.datanode.address, set dfs.http.policy to HTTPS_ONLY and make sure the HADOOP_SECURE_DN_USER environment variable is not defined. Note that it is not possible to use SASL on data transfer protocol if dfs.datanode.address is set to a privileged port. This is required for backwards-compatibility reasons.

Reference: http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SecureMode.html (see the Secure DataNode section)

Fix:

My configuration (expressed as hdfs-site.xml properties below):

dfs.data.transfer.protection=integrity
dfs.http.policy=HTTPS_ONLY (previous value: HTTP_ONLY)
dfs.datanode.address (previous value: 0.0.0.0:1019) must be changed to a non-privileged port, i.e. above 1024; 61004 is used here
Make sure the HADOOP_SECURE_DN_USER variable is not set
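
Expressed as hdfs-site.xml properties, the first three changes would look roughly like this (61004 is simply the unprivileged port chosen here); HADOOP_SECURE_DN_USER must additionally not be exported in hadoop-env.sh or the shell environment:

<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>

<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:61004</value>
</property>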

Reference: https://blog.csdn.net/bugzeroman/article/details/84312302
Reference: https://ieevee.com/tech/2016/06/07/kerberos-1.html

Reference: https://blog.csdn.net/zhouyuanlinli/article/details/78087395

Problem 6

java.io.FileNotFoundException: /root/.keystore (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at org.mortbay.resource.FileResource.getInputStream(FileResource.java:275)