Building Hue from Source and Enabling Hive Support: A Complete Walkthrough



Installing Hue

Test cluster: hadoop101, hadoop102, hadoop103
Cluster hardware: three Alibaba Cloud ECS instances, CentOS 7.5, 2 cores, 8 GB memory each
Component layout:
hue: hadoop102
hadoop: hadoop101 hadoop102 hadoop103
hive: hadoop101
mysql: hadoop101
zookeeper: hadoop101 hadoop102 hadoop103
kafka: hadoop101 hadoop102 hadoop103
spark: hadoop101
You can also start from a pre-built Hue and simply adjust the configuration to match your own cluster.
This article assumes every component above except Hue is already installed and configured.
Note: substitute your own hostnames wherever node names appear below.


(1) Install Hue on hadoop102: create a software directory, upload the source archive to it, and unpack it

[root@hadoop102 software]# unzip hue-master.zip -d /opt/module/

(2) Install the build dependencies

[root@hadoop102 software]# cd /opt/module/hue-master/
[root@hadoop102 hue-master]# sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel
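
Note: the build also assumes a JDK on the PATH, and recent Hue trees expect Maven for the Java pieces; neither is pulled in by the yum line above. A quick pre-flight check:

[root@hadoop102 hue-master]# java -version
[root@hadoop102 hue-master]# mvn -version
[root@hadoop102 hue-master]# python -V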

(3) Compile and install; a build directory is generated when the compilation finishes

[hue@hadoop102 hue-master]# make apps
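
A minimal check that the compilation actually produced the runtime (assuming the default build layout) is to look for the generated launchers:

[hue@hadoop102 hue-master]# ls build/env/bin/hue build/env/bin/supervisor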

(4) Modify the Hadoop configuration files. Because the cluster runs in HA mode, Hue accesses HDFS through HttpFS

[hue@hadoop102 hue-master]# cd /opt/module/hadoop-3.1.3/etc/hadoop/
[hue@hadoop102 hadoop]# vim hdfs-site.xml 
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
[hue@hadoop102 hadoop]# vim core-site.xml
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
[hue@hadoop102 hadoop]# vim httpfs-site.xml
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

(5) Distribute the configuration files

[hue@hadoop102 etc]# pwd
/opt/module/hadoop-3.1.3/etc
[hue@hadoop102 etc]# scp -r hadoop/ hadoop101:/opt/module/hadoop-3.1.3/etc/
[hue@hadoop102 etc]# scp -r hadoop/ hadoop103:/opt/module/hadoop-3.1.3/etc/

(6) Edit the Hue configuration file to integrate HDFS

[hue@hadoop102 hadoop]# cd /opt/module/hue-master/desktop/conf
[hue@hadoop102 conf]# vim pseudo-distributed.ini
[desktop]
  http_host=hadoop102
  http_port=8000

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://mycluster:8020
      logical_name=mycluster
      webhdfs_url=http://hadoop102:14000/webhdfs/v1
      # the directory edited in step (4); a stock tarball has no extra conf subdirectory
      hadoop_conf_dir=/opt/module/hadoop-3.1.3/etc/hadoop
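
The webhdfs_url above points at HttpFS on port 14000, so that gateway must be running; the steps below never start it explicitly. A minimal sketch, assuming the HttpFS bundled with Hadoop 3.x, to run once HDFS is up (step 10):

[root@hadoop102 hadoop-3.1.3]# hdfs --daemon start httpfs
[root@hadoop102 hadoop-3.1.3]# curl "http://hadoop102:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hue"

The curl call should return a JSON listing of the HDFS root; the user.name value here is illustrative.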

(7) Integrate YARN. The logical_name values below must match the ResourceManager IDs configured in yarn-site.xml. Note: be sure to uncomment the [[[ha]]] section, otherwise Hue will report an error

  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      ## resourcemanager_host=mycluster

      # The port where the ResourceManager IPC listens on
      ## resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      logical_name=rm1

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://hadoop101:8088

      # URL of the ProxyServer API
      ## proxy_api_url=http://hadoop101:8088

      # URL of the HistoryServer API
      ## history_server_api_url=http://localhost:19888

    [[[ha]]]
      # Resource Manager logical name (required for HA)
      logical_name=rm2

      # Un-comment to enable
      submit_to=True

      # URL of the ResourceManager API
      resourcemanager_api_url=http://hadoop103:8088

      # ...
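
For reference, rm1 and rm2 above must match the ResourceManager IDs declared in yarn-site.xml. A sketch of the corresponding HA properties, with the hostname assignments assumed from the cluster layout at the top:

<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>hadoop101</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>hadoop103</value>
</property>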

(8) Integrate MySQL

  [[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name.
    # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "host=" and "port=" and then "name=<host>:<port>/<service_name>".
    # Note for MariaDB use the 'mysql' engine.
    engine=mysql
    host=hadoop101
    port=3306
    user=root
    password=123456
    # conn_max_age option to make database connection persistent value in seconds
    # https://docs.djangoproject.com/en/1.9/ref/databases/#persistent-connections
    ## conn_max_age=0
    # Execute this script to produce the database password. This will be used when 'password' is not set.
    ## password_script=/path/script
    name=hue
Create the hue database in MySQL:
[hue@hadoop101 apache-hive-3.1.2-bin]# mysql -uroot -p123456
mysql> create database hue;
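
Since Hue runs on hadoop102 while MySQL lives on hadoop101, it is also worth making sure the account can connect remotely, and a UTF-8 character set avoids hiccups during the migrations. A sketch using MySQL 5.x syntax:

mysql> alter database hue character set utf8 collate utf8_general_ci;
mysql> grant all privileges on hue.* to 'root'@'hadoop102' identified by '123456';
mysql> flush privileges;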

Also configure the MySQL connector under the [librdbms] [[databases]] section so MySQL databases can be queried from Hue; note that the [[[mysql]]] header itself must be uncommented. MySQL runs on hadoop101, so host is set explicitly here:

    [[[mysql]]]
      # Name to show in the UI.
      ## nice_name="My SQL DB"

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      name=huemetastore

      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      engine=mysql
      host=hadoop101
      port=3306
      user=root
      password=123456

(9) Integrate Hive

[beeswax]

  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=hadoop101

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  hive_conf_dir=/opt/module/apache-hive-3.1.2-bin/conf/
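
Hue submits queries to HiveServer2 as the logged-in Hue user, which depends on the proxyuser entries added to core-site.xml in step (4) and on HiveServer2 impersonation. A sketch of the relevant hive-site.xml property (true is the default, so it usually needs no change):

<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>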

(10) Stop the cluster, then restart it so the new Hadoop configuration takes effect

[root@hadoop101 software]# /opt/module/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start
[root@hadoop102 software]# /opt/module/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start
[root@hadoop103 software]# /opt/module/apache-zookeeper-3.5.7-bin/bin/zkServer.sh start
[root@hadoop101 software]# start-all.sh
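
After the restart, running jps on each node is a cheap way to confirm the expected daemons came back (NameNode/DataNode/JournalNode/DFSZKFailoverController on the HDFS nodes, ResourceManager/NodeManager for YARN, QuorumPeerMain for ZooKeeper; the exact set depends on your HA layout):

[root@hadoop101 software]# jps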

(11) Start the Hive services

[root@hadoop101 apache-hive-3.1.2-bin]# nohup hive --service metastore > metastore.log 2>&1 &
[root@hadoop101 apache-hive-3.1.2-bin]# nohup hive --service hiveserver2 > hiveserver2.log 2>&1 &
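
HiveServer2 can take a little while to open its Thrift port, so before wiring up Hue it may be worth confirming that port 10000 answers (the connecting user here is illustrative):

[root@hadoop101 apache-hive-3.1.2-bin]# beeline -u jdbc:hive2://hadoop101:10000 -n root -e "show databases;"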

(12) Create a hue user and change ownership of the Hue directory

[root@hadoop102 hue-master]# useradd hue
[root@hadoop102 hue-master]# passwd hue
[root@hadoop102 hue-master]# chown -R hue:hue /opt/module/hue-master/

(13) Initialize the database

[root@hadoop102 hue-master]# build/env/bin/hue syncdb
[root@hadoop102 hue-master]# build/env/bin/hue migrate
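
On some newer Hue builds syncdb has been removed and migrate alone is sufficient. Either way, if the [[database]] section from step (8) is correct, the migrations create their tables in the hue database on hadoop101, which can be verified with:

[root@hadoop102 hue-master]# mysql -h hadoop101 -uroot -p123456 -e "use hue; show tables;"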

(14) Start Hue. Both Hive services from step (11), the metastore and hiveserver2, must already be running, with hiveserver2 listening on port 10000, before Hue starts

[root@hadoop102 hue-master]# build/env/bin/supervisor
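
supervisor stays in the foreground; to keep Hue alive after the shell exits, one option (a sketch, running as the hue user created in step 12) is:

[root@hadoop102 hue-master]# su - hue
[hue@hadoop102 ~]$ cd /opt/module/hue-master
[hue@hadoop102 hue-master]$ nohup build/env/bin/supervisor > hue.log 2>&1 &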

(15) Log in to Hue at http://hadoop102:8000 (the http_host/http_port from step 6) and run Hive SQL from the query editor.
