1.7.1 Big Data - Installing the HUE Visualization Tool

This document walks through installing HUE 3.9.0 in a CDH 5.5.0 environment, covering download and extraction, building, configuration, starting the service, and integration with HDFS, YARN, Hive, and MySQL. Along the way it resolves permission problems, errors, and issues caused by service restarts. It also mentions features of the later HUE 4.2 release, such as autocomplete and a progress bar for Hive queries.

Version

hue-3.9.0-cdh5.5.0

Download and Extract

http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.0.tar.gz
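
If wget is available on the node, the download is simply:

wget http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.0.tar.gz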

tar -zxf hue-3.9.0-cdh5.5.0.tar.gz -C /opt/modules

Build

  1. Make sure the VM's network connection is set to connect automatically
  2. Switch to the root user
  3. Install the required dependency packages:
yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel sqlite-devel openssl-devel mysql-devel gmp-devel

Build from the root of the source tree:

make apps

Switch back to the kfk user and open up permissions on the install directory:

sudo chmod -R 777 hue-3.9.0-cdh5.5.0/

Configuration

Reference: http://archive.cloudera.com/cdh5/cdh/5/hue-3.9.0-cdh5.5.0/manual.html

/opt/modules/hue-3.9.0-cdh5.5.0/desktop/conf/hue.ini

  secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o

  # Webserver listens on this address and port
  http_host=bigdata-pro03.kfk.com
  http_port=8888
 
  # Time zone name
  time_zone=Asia/Shanghai
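
The secret_key above should be a long random string unique to your deployment. One hedged way to generate a 50-character key with the system Python:

python -c 'import random, string; print("".join(random.choice(string.ascii_letters + string.digits) for _ in range(50)))'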

Start the Service

[kfk@bigdata-pro03 hue-3.9.0-cdh5.5.0]$ ./build/env/bin/supervisor 

Log In

http://bigdata-pro03.kfk.com:8888/

The credentials entered on the first login create the HUE superuser account; here we use kfk / kfk.

Integrating HDFS

/opt/modules/hue-3.9.0-cdh5.5.0/desktop/conf/hue.ini

fs_defaultfs=hdfs://ns
webhdfs_url=http://bigdata-pro01.kfk.com:50070/webhdfs/v1
hadoop_conf_dir=/opt/modules/hadoop-2.5.0/etc/hadoop
hadoop_bin=/opt/modules/hadoop-2.5.0/bin
hadoop_hdfs_home=/opt/modules/hadoop-2.5.0

Configure hadoop-2.5.0/etc/hadoop/core-site.xml on all three nodes; without the proxy-user entries below, HUE reports a permission error:

Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".

In hue.ini, also set the HDFS superuser:

default_hdfs_superuser=kfk

<!--hue-->
<property>
	<name>hadoop.proxyuser.hue.hosts</name>
	<value>*</value>
</property>
<property>
	<name>hadoop.proxyuser.hue.groups</name>
	<value>*</value>
</property>

Restart the Services

[kfk@bigdata-pro01 hadoop-2.5.0]$ sbin/stop-all.sh
[kfk@bigdata-pro01 hadoop-2.5.0]$ sbin/start-all.sh
 
[kfk@bigdata-pro03 hue-3.9.0-cdh5.5.0]$ ./build/env/bin/supervisor
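
With HDFS back up, a quick sanity check that WebHDFS answers (the /user path and the kfk user here are just examples):

curl "http://bigdata-pro01.kfk.com:50070/webhdfs/v1/user?op=LISTSTATUS&user.name=kfk"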

Fixing the "Address already in use" Error

[kfk@bigdata-pro03 lib]$ ps -a
  PID TTY          TIME CMD
12991 pts/0    00:00:00 vim
18707 pts/0    00:03:00 java
18851 pts/0    00:00:00 bash
18864 pts/0    00:00:04 java
22839 pts/2    00:00:00 su
22844 pts/2    00:00:00 bash
27001 pts/0    00:00:00 supervisor
27007 pts/0    00:00:10 hue
27864 pts/1    00:00:00 vim
27964 pts/3    00:00:05 java
28058 pts/1    00:00:00 ps
Kill the stale supervisor process:

kill -9 27001

Alternative: if repeated restarts have left half-killed processes behind, use lsof to find the HUE supervisor:

[kfk@bigdata-pro03 hue-3.9.0-cdh5.5.0]$ lsof -i
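
Passing a port filter narrows the output to whatever is still bound to HUE's port:

lsof -i :8888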

Problem: StandbyException: Operation category READ is not supported in state standby

The restart caused a NameNode failover (active and standby swapped), so point the WebHDFS URL at the now-active NameNode:

/opt/modules/hue-3.9.0-cdh5.5.0/desktop/conf/hue.ini
webhdfs_url=http://bigdata-pro02.kfk.com:50070/webhdfs/v1
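
To confirm which NameNode is active before editing hue.ini, query the HA state from the Hadoop home directory; the IDs nn1 and nn2 below are assumptions and must match dfs.ha.namenodes.ns in your hdfs-site.xml:

bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2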

Integrating YARN

      resourcemanager_host=rs
 
      # The port where the ResourceManager IPC listens on
      resourcemanager_port=8032
 
      # Whether to submit jobs to this cluster
      submit_to=True
 
      # Resource Manager logical name (required for HA)
      ## logical_name=
 
      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false
 
      # URL of the ResourceManager API
      resourcemanager_api_url=http://bigdata-pro02.kfk.com:8088
 
      # URL of the ProxyServer API
      proxy_api_url=http://bigdata-pro02.kfk.com:8088
 
      # URL of the HistoryServer API
      history_server_api_url=http://bigdata-pro02.kfk.com:19888
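
Before restarting HUE, the ResourceManager API URL can be verified with a plain HTTP call against YARN's standard REST endpoint:

curl http://bigdata-pro02.kfk.com:8088/ws/v1/cluster/info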

Integrating Hive

[beeswax]
 
  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  ## hive_server_host=localhost
  hive_server_host=bigdata-pro03.kfk.com
 
  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000
 
  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/opt/modules/hive-0.13.1-bin/conf

Start HiveServer2:

nohup bin/hiveserver2 &

HiveServer2 (HS2) is a server-side interface that lets remote clients execute queries against Hive and retrieve the results. The current Thrift-RPC-based implementation is an improved version of the original HiveServer, adding support for concurrent multi-client access and authentication.
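
HiveServer2 can be verified independently of HUE with Beeline, which ships with Hive (run from the Hive install directory; kfk is this cluster's login user):

bin/beeline -u jdbc:hive2://bigdata-pro03.kfk.com:10000 -n kfk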

In hive-site.xml:

<property>
	<name>hive.server2.thrift.port</name>
	<value>10000</value>
</property>

<property>
	<name>hive.server2.thrift.bind.host</name>
	<value>bigdata-pro03.kfk.com</value>
</property>

In Hadoop's core-site.xml, also allow the kfk user (which runs HiveServer2) to impersonate other users:

<property>     
	<name>hadoop.proxyuser.kfk.hosts</name>     
	<value>*</value>
</property> 
<property>     
	<name>hadoop.proxyuser.kfk.groups</name>    
	<value>*</value> 
</property>

Integrating MySQL

  [[[mysql]]]
      # Name to show in the UI.
      nice_name="MySQL-Sky"
 
      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      name=metastore
 
      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      engine=mysql
 
      # IP or hostname of the database to connect to.
      host=bigdata-pro01.kfk.com
 
      # Port the database server is listening to. Defaults are:
      # 1. MySQL: 3306
      # 2. PostgreSQL: 5432
      # 3. Oracle Express Edition: 1521
      ## port=3306
 
      # Username to authenticate with when connecting to the database.
      user=root
 
      # Password matching the username to authenticate with when
      # connecting to the database.
      password=123456
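
Because HUE runs on bigdata-pro03 while MySQL runs on bigdata-pro01, the root account must accept remote connections. A minimal sketch in MySQL 5.x syntax (tighten the host pattern to suit your environment):

mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456'; FLUSH PRIVILEGES;"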

Integrating HBase

Start the HBase Thrift server:

bin/hbase-daemon.sh start thrift

Then point HUE at it in hue.ini:

[hbase]
  # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
  # Use full hostname with security.
  # If using Kerberos we assume GSSAPI SASL, not PLAIN.
  hbase_clusters=(Cluster|bigdata-pro01.kfk.com:9090)
 
  # HBase configuration directory, where hbase-site.xml is located.
  hbase_conf_dir=/opt/modules/hbase-0.98.6-cdh5.3.0/conf
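
Before reloading HUE, confirm the Thrift server is actually listening on port 9090:

netstat -tln | grep 9090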

Other Notes

The package below corresponds to HUE 4.2, which adds autocomplete for Hive queries and a progress bar (practically a different tool):

tar -zxf hue-3.9.0-cdh5.12.1.tar.gz
