Adding an authentication mechanism to the Hadoop web monitoring consoles

1. Configure core-site.xml and scp it to the other nodes
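Putting the properties described below together, a minimal core-site.xml fragment might look like the following sketch. The `simple` auth type and the secret-file path are assumptions that just restate the defaults from the parameter descriptions; adjust them for your cluster.

```xml
<!-- core-site.xml sketch: HTTP web-console authentication.
     Values below are the defaults discussed in this article, not a
     definitive production configuration. -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
<property>
  <name>hadoop.http.authentication.type</name>
  <value>simple</value>
</property>
<property>
  <name>hadoop.http.authentication.token.validity</name>
  <value>36000</value>
</property>
<property>
  <name>hadoop.http.authentication.signature.secret.file</name>
  <!-- assumed home directory of the user running the daemons -->
  <value>/home/hadoop/hadoop-http-auth-signature-secret</value>
</property>
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>false</value>
</property>
```

With `anonymous.allowed` set to false, the consoles require a `user.name` query parameter (e.g. `http://master:50070/?user.name=hadoop`) even under `simple` authentication.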

 

 

Parameter descriptions:

The following properties should be in the core-site.xml of all the nodes in the cluster.

hadoop.http.filter.initializers: add to this property the org.apache.hadoop.security.AuthenticationFilterInitializer initializer class.

hadoop.http.authentication.type: Defines the authentication used for the HTTP web-consoles. The supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#. The default value is simple.

hadoop.http.authentication.token.validity: Indicates how long (in seconds) an authentication token is valid before it has to be renewed. The default value is 36000.

hadoop.http.authentication.signature.secret.file: The signature secret file for signing the authentication tokens. The same secret should be used for all nodes in the cluster: JobTracker, NameNode, DataNode and TaskTracker. The default value is $user.home/hadoop-http-auth-signature-secret. IMPORTANT: This file should be readable only by the Unix user running the daemons.

hadoop.http.authentication.cookie.domain: The domain to use for the HTTP cookie that stores the authentication token. In order for authentication to work correctly across all nodes in the cluster, the domain must be set correctly. There is no default value; without one, the HTTP cookie will have no domain and will work only with the hostname that issued it.

IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings. For this setting to work properly all nodes in the cluster must be configured to generate URLs with hostname.domain names on it.

hadoop.http.authentication.simple.anonymous.allowed: Indicates if anonymous requests are allowed when using 'simple' authentication. The default value is true.

hadoop.http.authentication.kerberos.principal: Indicates the Kerberos principal to be used for HTTP endpoint when using 'kerberos' authentication. The principal short name must be HTTP per Kerberos HTTP SPNEGO specification. The default value is HTTP/_HOST@$LOCALHOST, where _HOST -if present- is replaced with bind address of the HTTP server.

hadoop.http.authentication.kerberos.keytab: Location of the keytab file with the credentials for the Kerberos principal used for the HTTP endpoint. The default value is $user.home/hadoop.keytab.


 

 

 

 

 

2. Manually create the ${user.home}/hadoop-http-auth-signature-secret file and scp it to the other nodes
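A minimal sketch of this step, assuming the daemons run as the current user; the slave hostnames in the commented-out scp loop are placeholders:

```shell
# Create the signature secret file with random content. The file name is the
# default from hadoop.http.authentication.signature.secret.file.
SECRET_FILE="$HOME/hadoop-http-auth-signature-secret"
head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n' > "$SECRET_FILE"

# Per the docs, only the Unix user running the daemons may read it.
chmod 600 "$SECRET_FILE"

# Distribute the same secret to every node (hostnames are placeholders):
# for h in slave1 slave2; do scp "$SECRET_FILE" "$h:$SECRET_FILE"; done
```

Any random content works; the only hard requirements are that every node has the identical file and that it is not world-readable.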

 

3. After restarting, the following error appeared; it was confirmed that this Hadoop version does not support the feature:

 

4. Upgrade Hadoop; the new version is Hadoop 1.2.1. The upgrade procedure is as follows:

 

1: Run dfsadmin -upgradeProgress status to check whether a previous upgrade backup exists; if this is the first upgrade, there is none.
2: Back up the files under dfs.name.dir.
3: Stop all nodes: bin/stop-all.sh
4: Redeploy Hadoop on all nodes and carry the conf directory over (that is, rename the existing installation directory with an -oldversion suffix, unpack hadoop-1.2.1.tar.gz, and replace the new installation's conf directory with the old one). Watch out for file permissions after copying.

[hadoop@master hadoop]$ vi /etc/profile.d/java.sh

export JAVA_HOME=/usr/java/jdk1.6.0_22/
export HADOOP_HOME=/opt/modules/hadoop/hadoop-1.2.1/
export HIVE_HOME=/opt/modules/hive-0.11.0/
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$PATH
[root@master profile.d]# source /etc/profile
[root@master profile.d]# cd $HADOOP_HOME


5: Run bin/start-dfs.sh -upgrade to perform the upgrade.
6: After it has run for a while with no problems, finalize the upgrade: bin/hadoop dfsadmin -finalizeUpgrade

7: Upgrade complete.
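Step 2 above (backing up the files under dfs.name.dir) can be sketched as follows. The directory path is a placeholder, not the author's actual layout; substitute the dfs.name.dir value from your hdfs-site.xml.

```shell
# Sketch: archive the NameNode metadata directory before upgrading.
# DFS_NAME_DIR is a placeholder -- use your real dfs.name.dir value.
DFS_NAME_DIR="${DFS_NAME_DIR:-/tmp/dfs-name-demo}"
mkdir -p "$DFS_NAME_DIR/current"   # demo data so this sketch runs standalone

BACKUP="/tmp/dfs-name-backup.tar.gz"
tar -czf "$BACKUP" -C "$(dirname "$DFS_NAME_DIR")" "$(basename "$DFS_NAME_DIR")"
echo "backed up $DFS_NAME_DIR to $BACKUP"
```

Take the backup while the NameNode is stopped (step 3), so the fsimage and edits files in the archive are consistent.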

 

Note:

 


When HDFS is upgraded from one version to another, the file formats used by the NameNode and DataNodes may change. The first time you start the new version, you must run

bin/start-dfs.sh -upgrade

to tell Hadoop to convert HDFS to the new format (otherwise, the new version will not take effect).

The upgrade then begins; you can check its progress with

bin/hadoop dfsadmin -upgradeProgress status

For more detail, use

bin/hadoop dfsadmin -upgradeProgress details

If the upgrade stalls, you can run

bin/hadoop dfsadmin -upgradeProgress force

to force it to continue (think carefully before using this command).



After the HDFS upgrade completes, Hadoop still keeps the old version's metadata,
so that you can conveniently roll HDFS back.
Use bin/start-dfs.sh -rollback to perform the rollback.

 

 

5. Verify again:

 

 

 

 

6. Finally: bin/hadoop dfsadmin -finalizeUpgrade

Started: Sun Oct 20 07:54:08 EDT 2013
Version: 1.2.1, r1503152
Compiled: Mon Jul 22 15:23:09 PDT 2013 by mattf
Upgrades: There are no upgrades in progress.

 

7. The web console on port 50030 showed the 2 nodes while HDFS was still in safe mode; it eventually left safe mode:

 

 

 

Check the result again:

 

 

