Setting up Hive 3.1.1 on CentOS 7

Links to the other articles in this series

Setting up a fully distributed Hadoop 3.2.0 cluster on CentOS 7

Setting up Hive 3.1.1 on CentOS 7

Setting up a fully distributed Spark 2.4.3 cluster on CentOS 7

Setting up a fully distributed HBase 2.1.5 cluster on CentOS 7

Setting up a fully distributed Storm 2.0.0 cluster on CentOS 7

 

The post above, Setting up a fully distributed Hadoop 3.2.0 cluster on CentOS 7, covered building the Hadoop cluster; this post walks through setting up the Hive environment.


1 Host environment

master   10.0.0.48
slave    10.0.0.49
slave    10.0.0.50
mysql    centos49

2 Set up the Hadoop cluster first

     For details, see Setting up a fully distributed Hadoop 3.2.0 cluster on CentOS 7.

3 Install Hive

3.1 Download, extract, and set environment variables

cd /usr/local
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-3.1.1/apache-hive-3.1.1-bin.tar.gz
tar -zxvf apache-hive-3.1.1-bin.tar.gz

vi /etc/profile

export HIVE_HOME=/usr/local/apache-hive-3.1.1-bin
export PATH=$HIVE_HOME/bin:$PATH
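
To make the new variables take effect in the current shell, reload the profile; hive --version can then be used to confirm the client is on the PATH:

source /etc/profile
hive --version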

3.2 Create the HDFS directories and grant permissions

hdfs dfs -mkdir -p /usr/hive/warehouse
hdfs dfs -mkdir -p /usr/hive/tmp
hdfs dfs -mkdir -p /usr/hive/log
hdfs dfs -chmod 777 /usr/hive/warehouse
hdfs dfs -chmod 777 /usr/hive/tmp
hdfs dfs -chmod 777 /usr/hive/log

3.3 Configure hive-env.sh

cd /usr/local/apache-hive-3.1.1-bin/conf
cp hive-env.sh.template hive-env.sh

vi hive-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_131
export HADOOP_HOME=/usr/local/hadoop-3.2.0
export HIVE_HOME=/usr/local/apache-hive-3.1.1-bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_AUX_JARS_PATH=$HIVE_HOME/lib/*

3.4 Configure hive-site.xml

cp hive-default.xml.template hive-site.xml

Quite a few properties need to be modified, so the changed content is given directly below. Note: wherever the MySQL host appears in the connection settings, it should be 10.0.0.49.
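
A minimal sketch of the relevant hive-site.xml properties, assuming the metastore database is named hive with a hive/hive account (the database name and credentials are assumptions, not values from the original post; the HDFS paths match the directories created in step 3.2):

<configuration>
  <!-- HDFS locations created in step 3.2 -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/usr/hive/warehouse</value>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/usr/hive/tmp</value>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/usr/hive/log</value>
  </property>
  <!-- MySQL metastore connection; the host is centos49 (10.0.0.49) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://10.0.0.49:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>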

The complete configuration can be downloaded from my Baidu netdisk: https://pan.baidu.com/s/10GPuELlBQyyIGLmFX1byGw

3.5 Initialize the MySQL metastore with schematool

cd /usr/local/apache-hive-3.1.1-bin/lib
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
schematool -dbType mysql -initSchema
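
If the metastore database and account have not been prepared yet, a minimal setup on the MySQL host centos49 might look like the following (the database name hive and the hive/hive account are assumptions matching the sketch above; adjust to your own values):

mysql -u root -p
CREATE DATABASE hive DEFAULT CHARACTER SET utf8;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;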

4 Start Hive and test

4.1 Start Hive
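
With the schema initialized, the Hive CLI can be started directly from the shell; it drops into the hive> prompt used in the steps below:

hive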

4.2 Create a test database mytest

hive> create database mytest;
OK
Time taken: 0.511 seconds

4.3 Use mytest

hive> use mytest;
OK
Time taken: 0.067 seconds

4.4 Create a table

hive> create table test (mykey string,myval string);
OK
Time taken: 1.042 seconds

4.5 Insert a record

hive> insert into test values("1","www.baidu.com");
Query ID = root_20190821135936_0b9a533d-5479-4c9d-b815-7bf5197a9768
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1566296186593_0004, Tracking URL = http://centos48:3306/proxy/application_1566296186593_0004/
Kill Command = /usr/local/hadoop-3.2.0/bin/mapred job  -kill job_1566296186593_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-08-21 13:59:49,131 Stage-1 map = 0%,  reduce = 0%
2019-08-21 13:59:56,537 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.45 sec
2019-08-21 14:00:02,822 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.71 sec
MapReduce Total cumulative CPU time: 4 seconds 710 msec
Ended Job = job_1566296186593_0004
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://centos48:8020/usr/hive/warehouse/mytest.db/test/.hive-staging_hive_2019-08-21_13-59-36_300_7455504770779585306-1/-ext-10000
Loading data to table mytest.test
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 4.71 sec   HDFS Read: 14714 HDFS Write: 252 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 710 msec
OK
Time taken: 28.799 seconds

4.6 Query the data

hive> select * from test;
OK
1	www.baidu.com
Time taken: 0.593 seconds, Fetched: 1 row(s)

5 Check the data in the web UI
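
The inserted row is stored as a file under the table's warehouse directory, so it can also be inspected from the HDFS side, for example by browsing /usr/hive/warehouse/mytest.db/test in the NameNode web UI (port 9870 by default on Hadoop 3.x) or from the command line:

hdfs dfs -cat /usr/hive/warehouse/mytest.db/test/*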

6 Check the job in the Hadoop YARN UI
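
The INSERT statement above ran as MapReduce job job_1566296186593_0004, so it should also appear in the YARN ResourceManager web UI, which on a default Hadoop 3.x setup listens on port 8088 of the ResourceManager host, e.g. http://centos48:8088/cluster/apps (the host and default port are assumptions based on the cluster layout above).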

If you have any questions, feel free to contact me at jackjobmail@126.com.
