Installing Hue 4.2.0 (using MySQL as the metadata database)

1. Install dependencies

yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel openssl-devel
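Hue 4.2 still targets Python 2.7, so it is worth confirming the interpreter version before building (a quick check, assuming the system python is the one make will pick up):

python --version   # should report Python 2.7.x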

2. Install Maven
a. Download the Maven archive
apache-maven-3.5.4-bin.zip
b. Unpack the archive
unzip apache-maven-3.5.4-bin.zip
c. Rename the extracted directory
mv apache-maven-3.5.4 maven
d. Edit the configuration file
cd maven/conf
vi settings.xml

<mirrors>
     <mirror>
         <id>alimaven</id>
         <mirrorOf>central</mirrorOf>
         <name>aliyun maven</name>
         <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
     </mirror>

     <mirror>
         <id>ui</id>
         <mirrorOf>central</mirrorOf>
         <name>Human Readable Name for this Mirror.</name>
         <url>http://uk.maven.org/maven2/</url>
     </mirror>

     <mirror>
         <id>jboss-public-repository-group</id>
         <mirrorOf>central</mirrorOf>
         <name>JBoss Public Repository Group</name>
         <url>http://repository.jboss.org/nexus/content/groups/public</url>
     </mirror>
 </mirrors>
e. Configure environment variables (this assumes the maven directory from step c was placed under /usr/local)
vi /etc/profile
export MAVEN_HOME=/usr/local/maven
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$MAVEN_HOME/bin:$PATH
f. Apply the environment variables
source /etc/profile
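To confirm that Maven is picked up from the new PATH:

mvn -version   # should report Apache Maven 3.5.4 and the JDK found via JAVA_HOME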

3. Install Hue
a. Download the archive hue-4.2.0.tgz
Official site: http://gethue.com/
b. Unpack the archive
tar -zxvf hue-4.2.0.tgz
c. cd hue-4.2.0
d. Build
make apps
e. If the build fails, clean up and rebuild
make clean
make apps
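A successful build leaves a Python virtualenv under build/env containing the launcher scripts used later in this guide; a quick sanity check:

ls build/env/bin/   # should list the hue and supervisor scripts, among others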
f. Configure Hue

vi desktop/conf/hue.ini   # run from inside the hue-4.2.0 directory

HUE (the [desktop] section)

secret_key=
http_host=192.168.168.128
http_port=8888
time_zone=Asia/Shanghai
server_user=root
server_group=root
default_user=root
default_hdfs_superuser=root
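secret_key is left empty above, but Hue logs a warning at startup when it is unset; it should be a long random string used to sign session cookies. Any random-string generator works, for example:

openssl rand -base64 36   # paste the output after secret_key=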

database (the [[database]] subsection of [desktop]; this is where Hue stores its own metadata in MySQL)

engine=mysql
host=10.0.30.8
port=3306
user=root
password=111111
name=hive1    # name of the metadata database
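The database named here must already exist in MySQL before Hue starts. A minimal sketch, assuming the host and credentials above:

mysql -h 10.0.30.8 -P 3306 -u root -p -e "CREATE DATABASE IF NOT EXISTS hive1 DEFAULT CHARACTER SET utf8;"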

Hadoop/HDFS (the [[[default]]] entry under [hadoop] -> [[hdfs_clusters]])

fs_defaultfs=hdfs://192.168.168.128:9000
webhdfs_url=http://192.168.168.128:50070/webhdfs/v1
hadoop_conf_dir=/usr/local/hadoop/etc/hadoop
hadoop_hdfs_home=/usr/local/hadoop
hadoop_bin=/usr/local/hadoop/bin
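WebHDFS can be verified from the Hue host before going further; LISTSTATUS is part of the standard WebHDFS REST API:

curl "http://192.168.168.128:50070/webhdfs/v1/?op=LISTSTATUS&user.name=root"   # should return a JSON listing of /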

YARN (the [[[default]]] entry under [hadoop] -> [[yarn_clusters]])

resourcemanager_host=192.168.168.128
resourcemanager_port=8032
hadoop_conf_dir=/usr/local/hadoop/etc/hadoop
hadoop_mapred_home=/usr/local/hadoop
hadoop_bin=/usr/local/hadoop/bin
resourcemanager_api_url=http://192.168.168.128:8088
proxy_api_url=http://192.168.168.128:8088
history_server_api_url=http://192.168.168.128:19888
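The ResourceManager REST endpoint configured above can be checked the same way:

curl http://192.168.168.128:8088/ws/v1/cluster/info   # standard YARN REST API, returns cluster state as JSON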

Hive (the [beeswax] section)

hive_server_host=192.168.168.128
hive_server_port=10000
hive_conf_dir=/usr/local/hive/conf
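HiveServer2 must be listening on this host and port; a quick connectivity test with beeline:

beeline -u jdbc:hive2://192.168.168.128:10000 -n root   # should open a JDBC session without errors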

mysql (the [[[mysql]]] entry under [librdbms] -> [[databases]]; this registers MySQL as a query source in Hue's editor)

nice_name="My SQL DB"
name=hive1
engine=mysql
host=10.0.30.8
port=3306
user=root
password=111111

HBase (the [hbase] section)

hbase_clusters=(Cluster|master:9090)
hbase_conf_dir=/home/hadoop/hbase-1.0.1.1/conf
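Port 9090 in hbase_clusters is the HBase Thrift server, not the HBase master UI; Hue talks to HBase over Thrift, so that server must be started separately:

hbase-daemon.sh start thrift   # run on the node named in hbase_clusters (master here)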

Pig (the [pig] section)

# Location of piggybank.jar on local filesystem.
## local_sample_dir=/usr/share/hue/apps/pig/examples
# Location piggybank.jar will be copied to in HDFS.
## remote_data_dir=/user/hue/pig/examples

Sqoop2 (the [sqoop] section)

server_url=http://master:12000/sqoop
sqoop_conf_dir=/home/sqoop-1.99.6/conf
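Whether the Sqoop2 server is reachable can be probed over its REST interface; the command below assumes the version endpoint exposed by Sqoop 1.99.x:

curl http://master:12000/sqoop/version   # should return the server version as JSON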

g. Hadoop configuration
vi hdfs-site.xml
<!-- enable WebHDFS in the NameNode and DataNodes -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
vi core-site.xml
<!-- allow the root and httpfs users to proxy requests on behalf of Hue users -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
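After these files are edited (and, per step h below, copied to every node), HDFS must be restarted for the changes to take effect; with the standard Hadoop sbin scripts:

stop-dfs.sh
start-dfs.sh   # restart YARN too (stop-yarn.sh / start-yarn.sh) if its configuration changed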
h. Distribute the updated configuration files to every node in the cluster
i. Initialize the Hue user and database, then start Hue
build/env/bin/hue syncdb
build/env/bin/hue migrate
build/env/bin/supervisor
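supervisor runs in the foreground; a common way to keep Hue running after the shell exits:

nohup build/env/bin/supervisor &

Then open http://192.168.168.128:8888 in a browser; the first account created on the login page becomes the Hue admin.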