[Kerberos] Accessing a Kerberos-secured cluster from a Java client

To access a Kerberos-secured cluster from a Java client, the most important step is to obtain a usable keytab file from the cluster admin; this is what the client authenticates with. After that, the remaining work is adjusting the connection configuration. The following walks through connecting to HDFS as an example.

Requesting a usable keytab file

A keytab file stores the keys of Kerberos principals. The principals themselves are created on the KDC, and their keys can then be exported into a keytab file that the client uses for authentication.
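For reference, on the KDC side the admin typically exports a principal's key into a keytab with kadmin's ktadd command; the principal name and path below are placeholders, not values from this post:

kadmin.local -q "ktadd -k /path/to/eagle.keytab eagle@EXAMPLE.COM"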

Connection test

You can run a Hadoop command to test whether the connection works.

1. Authenticate with kinit

kinit -kt path-to-keytab principalName

This first checks whether principalName is valid. If it is, the KDC returns an initial TGT, which is typically valid for several hours.

2. Run a Hadoop command

hadoop fs -ls hdfs://namenode1:8020

Running this command will usually throw various exceptions at first. Follow the hints in each exception and add configuration step by step; the likely settings are:

1) Enable Kerberos authentication

hadoop.security.authentication: kerberos

2) Configure the server principal

dfs.namenode.kerberos.principal

3) Run the Hadoop command again

Normally, once the first two settings are in place there should be no major problems and the Hadoop command returns results. If it keeps failing with "Server has invalid Kerberos principal", consider the following three points:

Check that the server principal is configured correctly; normally setting it to the same value used in the namenode configuration is enough.

Check that DNS resolution is consistent. The HDFS client makes an RPC call to the namenode to obtain the HDFS service principal, then compares the hostname in that service principal with the canonical name of the namenode hostname. In the failing case, the namenode's canonical name on the client machine resolved to a different hostname than the one registered in DNS. A small Java sketch of this check appears after this list.

Data encryption over IPC/data transfer. Try adding dfs.encrypt.data.transfer, dfs.encrypt.data.transfer.algorithm, dfs.trustedchannel.resolver.class, and dfs.datatransfer.client.encrypt.
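The DNS check mentioned above can be reproduced with a few lines of plain Java (the HadoopDNSVerifier tool in the references does essentially the same thing). This is only a sketch; the hostname and principal host below are placeholders and must be replaced with your own values:

import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        // Hostname from dfs.namenode.rpc-address (placeholder)
        String namenodeHost = "hadoopnamenode01";
        // Hostname part of dfs.namenode.kerberos.principal, e.g. hdfs/<host>@REALM (placeholder)
        String principalHost = "hadoopnamenode01.example.com";

        // What this client machine resolves the namenode name to
        String canonical = InetAddress.getByName(namenodeHost).getCanonicalHostName();
        System.out.println("canonical name = " + canonical);

        // If these differ, the client reports "Server has invalid Kerberos principal"
        System.out.println("matches principal host: " + canonical.equalsIgnoreCase(principalHost));
    }
}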

Java Kerberos authentication code

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class HadoopSecurityUtil {

    public static final String EAGLE_KEYTAB_FILE_KEY = "eagle.keytab.file";
    public static final String EAGLE_USER_NAME_KEY = "eagle.kerberos.principal";

    public static void login(Configuration kConfig) throws IOException {
        // Skip the Kerberos login when no keytab or principal is configured
        if (kConfig.get(EAGLE_KEYTAB_FILE_KEY) == null || kConfig.get(EAGLE_USER_NAME_KEY) == null) {
            return;
        }

        kConfig.setBoolean("hadoop.security.authorization", true);
        kConfig.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(kConfig);
        UserGroupInformation.loginUserFromKeytab(kConfig.get(EAGLE_USER_NAME_KEY), kConfig.get(EAGLE_KEYTAB_FILE_KEY));
    }
}
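Usage is straightforward: log in first, then use the FileSystem API as usual. Below is a minimal sketch; the keytab path, principal, and addresses are placeholder values, not the ones from the original post:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsKerberosExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder values; use the keytab and principal issued by your admin
        conf.set("eagle.keytab.file", "/EAGLE-HOME/.keytab/eagle.keytab");
        conf.set("eagle.kerberos.principal", "eagle@EXAMPLE.COM");
        conf.set("fs.defaultFS", "hdfs://namenode1:8020");
        conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@EXAMPLE.COM");

        // Log in from the keytab, then use the FileSystem API as usual
        HadoopSecurityUtil.login(conf);
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}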

HDFS & HBase configuration

HDFS

{"fs.defaultFS":"hdfs://nameservice1","dfs.nameservices": "nameservice1","dfs.ha.namenodes.nameservice1":"namenode1,namenode2","dfs.namenode.rpc-address.nameservice1.namenode1": "hadoopnamenode01:8020","dfs.namenode.rpc-address.nameservice1.namenode2": "hadoopnamenode02:8020","dfs.client.failover.proxy.provider.apollo-phx-nn-ha": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider","eagle.keytab.file":"/EAGLE-HOME/.keytab/b_eagle.keytab_apd","eagle.kerberos.principal":""}

HBase

{"hbase.zookeeper.property.clientPort":"2181","hbase.zookeeper.quorum":"localhost","hbase.security.authentication":"kerberos","hbase.master.kerberos.principal":"","zookeeper.znode.parent":"/hbase","eagle.keytab.file":"/EAGLE-HOME/.keytab/eagle.keytab","eagle.kerberos.principal":""}

References

https://github.com/randomtask1155/HadoopDNSVerifier

https://support.pivotal.io/hc/en-us/articles/204391288-hdfs-ls-command-fails-with-Server-has-invalid-Kerberos-principal

Original post: http://www.cnblogs.com/qingwen/p/5087196.html
