-
1. Modify the core-site.xml configuration on all Hadoop nodes
cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/
vim core-site.xml
Add the following configuration:
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
Copy the file to node02 and node03:
scp core-site.xml node02:$PWD
scp core-site.xml node03:$PWD
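If you want to confirm that all three nodes picked up the new proxy-user values, a minimal check is sketched below. It assumes passwordless ssh between the nodes and that the hdfs command is on the non-interactive PATH of each node; getconf reads the local configuration files, so this works even before the restart in step 3.
# Print the effective proxy-user settings on every node
for node in node01 node02 node03; do
  echo "== $node =="
  ssh $node "hdfs getconf -confKey hadoop.proxyuser.root.hosts"
  ssh $node "hdfs getconf -confKey hadoop.proxyuser.root.groups"
done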
-
2. Modify hdfs-site.xml on all Hadoop nodes
vim hdfs-site.xml
Add the following configuration:
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
Copy the file to node02 and node03:
scp hdfs-site.xml node02:$PWD
scp hdfs-site.xml node03:$PWD
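After the cluster restart in step 3, you can check that WebHDFS is actually serving requests on the NameNode. A hedged one-liner, assuming the NameNode HTTP port is the default 50070 (the same port used in the hue.ini configuration below):
# List the HDFS root directory through the WebHDFS REST API;
# a JSON FileStatuses response means dfs.webhdfs.enabled took effect
curl "http://node01.hadoop.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=root"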
-
3. Restart the Hadoop cluster
Run on node01:
cd /export/servers/hadoop-2.6.0-cdh5.14.0
sbin/stop-dfs.sh
sbin/start-dfs.sh
sbin/stop-yarn.sh
sbin/start-yarn.sh
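Before moving on to Hue, it helps to confirm that HDFS and YARN came back up cleanly. A minimal sketch, run on node01 as the user that started the daemons:
# The process list should include NameNode, DataNode, ResourceManager and NodeManager
jps
# A non-zero "Live datanodes" count means HDFS is healthy
hdfs dfsadmin -report
# Lists the NodeManagers registered with the ResourceManager
yarn node -list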
-
4. Stop the Hue service and continue configuring hue.ini
cd /export/servers/hue-3.9.0-cdh5.14.0/desktop/conf
vim hue.ini
(it is better not to edit this file with vim)
Configure the Hue integration with HDFS:
[[hdfs_clusters]]
    [[[default]]]
    fs_defaultfs=hdfs://node01.hadoop.com:8020
    webhdfs_url=http://node01.hadoop.com:50070/webhdfs/v1
    hadoop_hdfs_home=/export/servers/hadoop-2.6.0-cdh5.14.0
    hadoop_bin=/export/servers/hadoop-2.6.0-cdh5.14.0/bin
    hadoop_conf_dir=/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
(the first two entries are existing lines to modify; the last three are newly added)
Configure the Hue integration with YARN:
[[yarn_clusters]]
    [[[default]]]
    resourcemanager_host=node01
    resourcemanager_port=8032
    submit_to=True
    resourcemanager_api_url=http://node01:8088
    history_server_api_url=http://node01:19888
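Before restarting Hue it is worth checking that the YARN endpoints referenced above actually respond, otherwise the Job Browser will show errors. A quick sketch, assuming the ResourceManager and JobHistory Server REST APIs are on the ports configured above:
# ResourceManager REST API - should return cluster info as JSON
curl http://node01:8088/ws/v1/cluster/info
# JobHistory Server REST API - should return history server info as JSON
curl http://node01:19888/ws/v1/history/info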
-
Start the Hue process and check whether Hadoop has been successfully integrated with Hue.
cd /export/servers/hue-3.9.0-cdh5.14.0/
build/env/bin/supervisor
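If no browser is at hand, a quick probe can confirm that supervisor brought the Hue web server up. This assumes Hue is listening on its default port 8888; adjust if http_port is set differently in hue.ini.
# An HTTP 200 (or a redirect to the login page) means the Hue web server is up
curl -I http://node01:8888/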
Connect to the web page; if it displays as in the screenshot below, the configuration succeeded.
You can now operate on HDFS files directly from the web page.
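Hue's File Browser talks to HDFS through the webhdfs_url configured above, so the same file operations can also be exercised from the command line as a final sanity check. A hedged example using a hypothetical test path /tmp/hue_webhdfs_test:
# Create a directory through WebHDFS (what "New folder" in the File Browser does)
curl -X PUT "http://node01.hadoop.com:50070/webhdfs/v1/tmp/hue_webhdfs_test?op=MKDIRS&user.name=root"
# Remove the test directory again
curl -X DELETE "http://node01.hadoop.com:50070/webhdfs/v1/tmp/hue_webhdfs_test?op=DELETE&user.name=root"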