Integrating Apache Hue with Other Software
Integrating Hue with HDFS
Note: after changing any HDFS-related configuration, scp the modified files to every machine in the cluster and restart the HDFS cluster.
- Modify core-site.xml
cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim core-site.xml
<!-- Hosts allowed to access HDFS through httpfs -->
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<!-- User groups allowed to access HDFS through httpfs -->
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
Copy core-site.xml to the other nodes:
scp core-site.xml <node_ip>:$PWD
- Modify hdfs-site.xml
cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim hdfs-site.xml
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
Copy hdfs-site.xml to the other nodes:
scp hdfs-site.xml <node_ip>:$PWD
- Modify hue.ini
Configure the Hue and HDFS integration (around line 885 of hue.ini):
cd /export/servers/hue-3.9.0-cdh5.14.0/desktop/conf
vim hue.ini
[[hdfs_clusters]]
[[[default]]]
fs_defaultfs=hdfs://node01:8020
webhdfs_url=http://node01:50070/webhdfs/v1
hadoop_hdfs_home=/export/servers/hadoop-2.6.0-cdh5.14.0
hadoop_bin=/export/servers/hadoop-2.6.0-cdh5.14.0/bin
hadoop_conf_dir=/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
- Restart HDFS and Hue
start-dfs.sh
cd /export/servers/hue-3.9.0-cdh5.14.0/
build/env/bin/supervisor
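To verify that WebHDFS and the proxy-user settings took effect, the WebHDFS REST API can be queried directly; this is an optional sanity check, and the path below is just an example.
curl "http://node01:50070/webhdfs/v1/?op=LISTSTATUS&user.name=root"
A JSON FileStatuses listing of the HDFS root means Hue will be able to browse the filesystem.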
- Create a directory in HDFS
- Enter a directory
- Create a file
- Upload a file
- Delete a file
- Change file permissions (equivalent shell commands are sketched below)
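The operations above are done through the Hue file browser UI. For reference, a sketch of the equivalent HDFS shell commands (directory and file names are hypothetical):
hdfs dfs -mkdir -p /user/root/demo        # create a directory
hdfs dfs -put a.txt /user/root/demo       # upload a local file
hdfs dfs -rm /user/root/demo/a.txt        # delete a file
hdfs dfs -chmod 755 /user/root/demo       # change permissions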
Integrating Hue with YARN
- Modify hue.ini
Around line 913:
cd /export/servers/hue-3.9.0-cdh5.14.0/desktop/conf
vim hue.ini
[[yarn_clusters]]
[[[default]]]
resourcemanager_host=node01
resourcemanager_port=8032
submit_to=True
resourcemanager_api_url=http://node01:8088
history_server_api_url=http://node01:19888
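As an optional check that the resourcemanager_api_url is reachable, query the standard YARN REST API:
curl http://node01:8088/ws/v1/cluster/info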
- Enable the YARN log aggregation service
MapReduce tasks run on the individual machines of the cluster, so the logs produced during a job are scattered across those machines. To view all of them in one place, the logs are collected and stored centrally on HDFS; this process is called log aggregation.
Add the following configuration to yarn-site.xml (on all three nodes):
cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/
vim yarn-site.xml
<!-- Whether to enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- How long to retain aggregated logs, in seconds (604800 seconds = 7 days) -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
Copy yarn-site.xml to the other nodes:
scp yarn-site.xml <node_ip>:$PWD
- Restart YARN and Hue
cd /export/servers/hadoop-2.6.0-cdh5.14.0/sbin/
stop-yarn.sh
start-yarn.sh
cd /export/servers/hue-3.9.0-cdh5.14.0/
build/env/bin/supervisor
- Submit a job
hadoop jar /export/servers/hadoop-2.6.0-cdh5.14.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.14.0.jar pi 3 5
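Once the job finishes and log aggregation is enabled, the aggregated logs can be fetched with the yarn CLI; the application id below is a placeholder, substitute the one printed when the job was submitted.
yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX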
Integrating Hue with Hive
To integrate Hue with Hive, Hive's metastore and hiveserver2 services must both be running (Impala needs Hive's metastore service; Hue needs Hive's hiveserver2 service).
- Modify hue.ini
Around line 998:
cd /export/servers/hue-3.9.0-cdh5.14.0/desktop/conf
vim hue.ini
[beeswax]
hive_server_host=node01
hive_server_port=10000
hive_conf_dir=/export/servers/hive-1.1.0-cdh5.14.0/conf
server_conn_timeout=120
auth_username=root
auth_password=123456
[metastore]
# Allow creating databases, tables, and similar operations through Hive
enable_new_create_table=true
- Start the Hive services and restart Hue
On node01, start Hive's metastore and hiveserver2 services:
cd /export/servers/hive-1.1.0-cdh5.14.0
nohup bin/hive --service metastore &
nohup bin/hive --service hiveserver2 &
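Optionally, before restarting Hue, verify that hiveserver2 accepts connections using beeline (which ships with Hive); the credentials match the auth_username/auth_password configured above.
bin/beeline -u jdbc:hive2://node01:10000 -n root -p 123456 -e "show databases;"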
- Restart Hue
cd /export/servers/hue-3.9.0-cdh5.14.0/
build/env/bin/supervisor
- Create a database
CREATE DATABASE IF NOT EXISTS `test`;
- Create a table
create table test.student(
id int comment 'ID',
name string comment 'Name'
) comment 'Test table' row format delimited fields terminated by '\t';
- Insert data
insert into test.student values (6, '刘德华');
insert into test.student values (5, '黄晓明');
insert into test.student values (4, '张译');
insert into test.student values (3, '胡歌');
insert into test.student values (2, '佟丽娅');
insert into test.student values (1, '杨幂');
- Query the data
select * from test.student;
- Export the results
- Choose a chart type
- Visualize the data
Integrating Hue with MySQL
- Modify hue.ini
Uncomment the mysql block (under [librdbms] -> [[databases]], around line 1544):
cd /export/servers/hue-3.9.0-cdh5.14.0/desktop/conf
vim hue.ini
[[[mysql]]]
nice_name="My SQL DB"
engine=mysql
host=node01
port=3306
user=root
password=123456
- Restart Hue
cd /export/servers/hue-3.9.0-cdh5.14.0/
build/env/bin/supervisor
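As an optional check that the connection details in hue.ini are valid, test the same credentials with the mysql client from the Hue host:
mysql -h node01 -P 3306 -u root -p123456 -e "show databases;"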
Integrating Hue with Oozie
Tip: to search within hue.ini in vim, type /pattern and press Enter.
- Modify the Hue configuration file hue.ini
[liboozie]
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
oozie_url=http://node01:11000/oozie
# Requires FQDN in oozie_url if enabled
## security_enabled=false
# Location on HDFS where the workflows/coordinator are deployed when submitted.
remote_deployement_dir=/user/root/oozie_works
[oozie]
# Location on local FS where the examples are stored.
# local_data_dir=/export/servers/oozie-4.1.0-cdh5.14.0/examples/apps
# Location on local FS where the data for the examples is stored.
# sample_data_dir=/export/servers/oozie-4.1.0-cdh5.14.0/examples/input-data
# Location on HDFS where the oozie examples and workflows are stored.
# Parameters are $TIME and $USER, e.g. /user/$USER/hue/workspaces/workflow-$TIME
# remote_data_dir=/user/root/oozie_works/examples/apps
# Maximum number of Oozie workflows or coordinators to retrieve in one API call.
oozie_jobs_count=100
# Use Cron format for defining the frequency of a Coordinator instead of the old frequency number/unit.
enable_cron_scheduling=true
# Flag to enable the saved Editor queries to be dragged and dropped into a workflow.
enable_document_action=true
# Flag to enable Oozie backend filtering instead of doing it at the page level in Javascript. Requires Oozie 4.3+.
enable_oozie_backend_filtering=true
# Flag to enable the Impala action.
enable_impala_action=true
[filebrowser]
# Location on local filesystem where the uploaded archives are temporary stored.
archive_upload_tempdir=/tmp
# Show Download Button for HDFS file browser.
show_download_button=true
# Show Upload Button for HDFS file browser.
show_upload_button=true
# Flag to enable the extraction of an uploaded archive in HDFS.
enable_extract_uploaded_archive=true
- Start Hue and Oozie
Start the Hue process:
cd /export/servers/hue-3.9.0-cdh5.14.0
build/env/bin/supervisor
Start the Oozie process:
cd /export/servers/oozie-4.1.0-cdh5.14.0
bin/oozied.sh start
Access Hue in the browser:
http://<node_ip>:8888/
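To confirm the Oozie server itself is healthy, query its status with the Oozie CLI (run from the Oozie install directory); "System mode: NORMAL" means it is ready.
bin/oozie admin -oozie http://node01:11000/oozie -status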
- Configuring Oozie scheduling through Hue
Hue provides a drag-and-drop web UI for building Oozie schedules.
- Scheduling a shell script with Hue
Create a shell script file on HDFS (a minimal example is sketched below).
Open the workflow editor page.
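A minimal sketch of such a script; the file name, log path, and HDFS directory are hypothetical.
#!/bin/bash
# hello.sh - appends a timestamp to a log file so each run is visible
date >> /tmp/hue_oozie_demo.log
Upload it to HDFS so the workflow can reference it:
hdfs dfs -mkdir -p /user/root/oozie_works/shell
hdfs dfs -put hello.sh /user/root/oozie_works/shell/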
- Scheduling a Hive script with Hue
Create a Hive SQL script file on HDFS (see the sketch below).
Open the workflow page and drag the hive2 icon into place.
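A minimal sketch of such a script, reusing the test.student table created earlier; the file name and HDFS path are hypothetical.
echo "select * from test.student;" > hive.sql
hdfs dfs -mkdir -p /user/root/oozie_works/hive
hdfs dfs -put hive.sql /user/root/oozie_works/hive/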
- Scheduling a MapReduce program with Hue
Submit a MapReduce program through Hue.
- Configuring a timed schedule with Hue
In Hue, a workflow can also be given a timed (coordinator) schedule; the steps are as follows.
Pay close attention to the time zone setting, otherwise the schedule will fire at the wrong time. After saving, the scheduled task can be submitted.
Click into the task to see the schedule's details.
Integrating Hue with HBase
- Modify the HBase configuration
Add the following to the hbase-site.xml configuration file to enable the HBase Thrift service.
After modifying, scp the file to the HBase installation on the other machines.
<property>
<name>hbase.thrift.support.proxyuser</name>
<value>true</value>
</property>
<property>
<name>hbase.regionserver.thrift.http</name>
<value>true</value>
</property>
- Modify the Hadoop configuration
In core-site.xml, make sure HBase is authorized as a proxy user by adding the following.
scp the modified file to the other machines, and also into the conf directory of each HBase installation.
<property>
<name>hadoop.proxyuser.hbase.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hbase.groups</name>
<value>*</value>
</property>
- Modify the Hue configuration
[hbase]
# Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
# Use full hostname with security.
# If using Kerberos we assume GSSAPI SASL, not PLAIN.
hbase_clusters=(Cluster|node01:9090)
# HBase configuration directory, where hbase-site.xml is located.
hbase_conf_dir=/export/servers/hbase-1.2.1/conf
# Hard limit of rows or columns per row fetched before truncating.
## truncate_limit = 500
# 'buffered' is the default of the HBase Thrift Server and supports security.
# 'framed' can be used to chunk up responses,
# which is useful when used in conjunction with the nonblocking server in Thrift.
thrift_transport=buffered
- Start HBase (including the Thrift service) and Hue
Start HDFS and HBase first, then start the Thrift server.
start-dfs.sh
start-hbase.sh
hbase-daemon.sh start thrift
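Optionally, confirm the Thrift server is listening; 9090 is its default port and matches the hbase_clusters setting above.
netstat -nltp | grep 9090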
Restart Hue:
cd /export/servers/hue-3.9.0-cdh5.14.0/
build/env/bin/supervisor
Integrating Hue with Impala
- Modify hue.ini
[impala]
server_host=node03
server_port=21050
impala_conf_dir=/etc/impala/conf
- Restart Hue
cd /export/servers/hue-3.9.0-cdh5.14.0/
build/env/bin/supervisor
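As an optional sanity check, impala-shell can confirm the impalad on node03 is up. Note that impala-shell connects to the shell port 21000 by default, while Hue talks to the HiveServer2-compatible port 21050 configured above.
impala-shell -i node03 -q "show databases;"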