Authentication
Authentication verifies that an identity is legitimate. In Hadoop this means proving that nodes joining the cluster are legitimate, and that connecting users are who they claim to be.
MIT Kerberos
MIT Kerberos is a centralized authentication tool; Hadoop's authentication framework uses Kerberos (krb5 for short) by default.
krb5 has a server side and a client side, organized into networks called realms.
Think of krb5 as a database whose entries are principals.
A principal can correspond to a service, a host, or a user, and each principal must be assigned a password (this password is also called a key).
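For illustration, once the KDC described below is running, principals can be created with kadmin.local. The service name nn is a hypothetical example; dev06.youedata is the KDC host used later in this setup:

kadmin.local -q "addprinc -randkey nn/dev06.youedata@HADOOP.COM"  # service principal with a random key
kadmin.local -q "addprinc testuser@HADOOP.COM"  # user principal; prompts for a password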
krb5 installation and configuration
Preparation:
==Install the JCE unlimited-strength policy files into the JDK on every machine in the cluster:== unzip -o -j -q jce_policy-8.zip -d /usr/jdk64/jdk1.8.0_40/jre/lib/security/
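One way to verify that the unlimited-strength policy took effect (a quick sketch; jrunscript ships with JDK 8, and without JCE the maximum AES key length is capped at 128 bits):

/usr/jdk64/jdk1.8.0_40/bin/jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES") >= 256);'  # should print true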
Server:
yum install krb5-server krb5-libs krb5-workstation

==vi /etc/krb5.conf==

[libdefaults]
 renew_lifetime = 7d
 forwardable = true
 default_realm = HADOOP.COM
 ticket_lifetime = 24h
 dns_lookup_realm = false
 dns_lookup_kdc = false
 default_ccache_name = /tmp/krb5cc_%{uid}
 #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
 #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[domain_realm]
 hadoop.com = HADOOP.COM

[logging]
 default = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log
 kdc = FILE:/var/log/krb5kdc.log

[realms]
 HADOOP.COM = {
  admin_server = dev06.youedata
  kdc = dev06.youedata
 }

==vi /var/kerberos/krb5kdc/kdc.conf==

default_realm = HADOOP.COM

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

==Edit the access control list so that the admin account has full control over the realm:==

vi /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOP.COM *
Client:
yum install krb5-workstation
Copy the server's configuration file to the clients:
scp /etc/krb5.conf ambari01:/etc/krb5.conf
Create the database (run on the server):
kdb5_util create -r HADOOP.COM -s
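As an optional sanity check, the new database can be queried on the KDC itself; listprincs should already show a few principals that kdb5_util created for the realm:

kadmin.local -q "listprincs"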
Start the services (run on the server):
chkconfig --level 35 krb5kdc on
chkconfig --level 35 kadmin on
service krb5kdc start
service kadmin start

Create the Kerberos administrator (run on the server):
kadmin.local -q "addprinc root/admin"
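To confirm the administrator works, obtain a ticket for it and inspect the ticket cache (the password is the one entered during addprinc):

kinit root/admin
klist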
Enabling krb5 in Ambari
1. Log in to Ambari Web and browse to Admin > Kerberos.
2. Click "Enable Kerberos" to launch the wizard.
3. Select the type of KDC you are using and confirm you have met the prerequisites.
==Use the Automated Setup option==
4. Provide information about the KDC and admin account.
a. In the KDC section, enter the following information:
• In the KDC Host field, the IP address or FQDN for the KDC host. Optionally a port
number may be included.
• In the Realm name field, the default realm to use when creating service principals.
• (Optional) In the Domains field, provide a list of patterns to use to map hosts in the
cluster to the appropriate realm.
==Fill in the contents of the domain_realm section of /etc/krb5.conf here (above, hadoop.com). After filling in the form, click the test button; if the test does not pass, check the error messages in the Ambari server log.==
b. In the Kadmin section, enter the following information:
• In the Kadmin Host field, the IP address or FQDN for the KDC administrative host.
Optionally a port number may be included.
• The Admin principal and password that will be used to create principals and
keytabs.
• (Optional) If you have configured Ambari for encrypted passwords, the Save
Admin Credentials option will be enabled. With this option, you can have Ambari
store the KDC Admin credentials to use when making cluster changes. Refer to
Managing Admin Credentials for more information on this option.
==After the steps above, Ambari configures and restarts the affected components.==
Accessing HDFS remotely from Java
==Ambari generates keytab credential files (.keytab) for the HDFS and other service accounts; they are stored under /etc/security/keytabs.==
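The principal bound to a keytab can be inspected on the server with standard MIT Kerberos tooling, which is a quick way to find the exact principal name to use in the code below:

klist -kt /etc/security/keytabs/hdfs.headless.keytab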
1. Download /etc/krb5.conf and /etc/security/keytabs/hdfs.headless.keytab to a local directory.
2. Download the configuration files of all components from the Ambari console.
3. Put core-site.xml and hdfs-site.xml on the program's build path.
4. The code is as follows:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Location of the krb5.conf downloaded from the server
System.setProperty("java.security.krb5.conf", "C:/Program Files/MIT/Kerberos/krb5.conf");
// core-site.xml and hdfs-site.xml are read from the build path
Configuration conf = new Configuration();
UserGroupInformation.setConfiguration(conf);
try {
    // hdfs-ue_ambari@YOUEDATA.COM is the principal created by Ambari.
    // The Ambari console lists the default user of each component; on the server,
    // switch to that user (e.g. su - hdfs) and run "klist -e" to see its principal.
    UserGroupInformation.loginUserFromKeytab("hdfs-ue_ambari@YOUEDATA.COM",
            "C:/Program Files/MIT/Kerberos/keytabs/hdfs.headless.keytab");
} catch (IOException e) {
    e.printStackTrace();
}
try {
    FileSystem fs = FileSystem.get(conf);
    fs.create(new Path("/kertest")).close(); // create an empty file and close the stream
    for (FileStatus status : fs.listStatus(new Path("/"))) {
        System.out.println(status.getPath().toString());
    }
} catch (IOException e) {
    e.printStackTrace();
}
Output:
hdfs://xx:8020/app-logs
hdfs://xx:8020/apps
hdfs://xx:8020/ats
hdfs://xx:8020/hdp
hdfs://xx:8020/kertest
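The same listing can be reproduced from a shell, assuming a local MIT Kerberos client and a Hadoop client configured with the downloaded core-site.xml and hdfs-site.xml:

kinit -kt hdfs.headless.keytab hdfs-ue_ambari@YOUEDATA.COM
hdfs dfs -ls /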