Hive Installation and Usage

Prerequisites:

1. JDK (already installed)
2. Hadoop 2.x (already installed)
3. Hive 2.3.6
4. MySQL
5. mysql-connector-java JAR

1. Download

[linyouyi@hadoop01 software]$ wget https://mirrors.aliyun.com/apache/hive/hive-2.3.6/apache-hive-2.3.6-bin.tar.gz
--2019-09-01 00:08:04--  https://mirrors.aliyun.com/apache/hive/hive-2.3.6/apache-hive-2.3.6-bin.tar.gz
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 119.147.158.241, 119.147.158.240, 183.2.199.237, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|119.147.158.241|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 232225538 (221M) [application/gzip]
Saving to: ‘apache-hive-2.3.6-bin.tar.gz’

100%[=======================================================================>] 232,225,538 5.65MB/s   in 38s    

2019-09-01 00:08:43 (5.80 MB/s) - ‘apache-hive-2.3.6-bin.tar.gz’ saved [232225538/232225538]
[linyouyi@hadoop01 software]$ ll
total 1028752
-rw-rw-r-- 1 linyouyi linyouyi 232225538 Aug 23 02:53 apache-hive-2.3.6-bin.tar.gz
-rw-rw-r-- 1 linyouyi linyouyi 312465430 Apr 30 06:17 apache-storm-2.0.0.tar.gz
-rw-r--r-- 1 linyouyi linyouyi 218720521 Aug  3 17:56 hadoop-2.7.7.tar.gz
-rw-rw-r-- 1 linyouyi linyouyi 132569269 Mar 18 14:28 hbase-2.0.5-bin.tar.gz
-rw-rw-r-- 1 linyouyi linyouyi  63999924 Mar 23 08:57 kafka_2.11-2.2.0.tgz
-rw-r--r-- 1 linyouyi linyouyi  54701720 Aug  3 17:47 server-jre-8u144-linux-x64.tar.gz
-rw-r--r-- 1 linyouyi linyouyi  37676320 Aug  8 09:36 zookeeper-3.4.14.tar.gz

2. Extract and copy

[linyouyi@hadoop01 software]$ tar -zxvf apache-hive-2.3.6-bin.tar.gz -C /hadoop/module/
[linyouyi@hadoop01 software]$ ll /hadoop/module/
total 28
drwxrwxr-x 10 linyouyi linyouyi 4096 Sep  1 00:10 apache-hive-2.3.6-bin
drwxrwxr-x 18 linyouyi linyouyi 4096 Aug 12 21:24 apache-storm-2.0.0
drwxr-xr-x 12 linyouyi linyouyi 4096 Aug  9 22:51 hadoop-2.7.7
drwxrwxr-x  7 linyouyi linyouyi 4096 Aug 11 12:10 hbase-2.0.5
drwxr-xr-x  7 linyouyi linyouyi 4096 Jul 22  2017 jdk1.8.0_144
drwxr-xr-x  7 linyouyi linyouyi 4096 Aug 17 15:19 kafka_2.11-2.2.0
drwxr-xr-x 15 linyouyi linyouyi 4096 Aug  8 11:03 zookeeper-3.4.14

Install MySQL to hold the metastore, copy the MySQL connector JAR into the lib directory, then edit the configuration file.

[linyouyi@hadoop01 apache-hive-2.3.6-bin]# unzip mysql_connector_java8.0.13.zip 
[linyouyi@hadoop01 apache-hive-2.3.6-bin]# cp mysql-connector-java-8.0.13/mysql-connector-java-8.0.13.jar lib
[linyouyi@hadoop01 apache-hive-2.3.6-bin]# ls lib/mysql-*
lib/mysql-connector-java-8.0.13.jar  lib/mysql-metadata-storage-0.9.2.jar

//edit hive-site.xml
[linyouyi@hadoop01 apache-hive-2.3.6-bin]$ cp conf/hive-default.xml.template conf/hive-site.xml
[linyouyi@hadoop01 apache-hive-2.3.6-bin]$ vim conf/hive-site.xml 
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!--<value>jdbc:derby:;databaseName=metastore_db;create=true</value>-->
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>

<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <!--<value>org.apache.derby.jdbc.EmbeddedDriver</value>-->
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>linyouyi</value>
    <description>Username to use against metastore database</description>
  </property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>linyouyi</value>
    <description>password to use against metastore database</description>
  </property>
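The connector copied above is Connector/J 8.0.13, and the schematool output later warns that `com.mysql.jdbc.Driver` is deprecated. An optional tweak (a sketch, not required for the install to work) is to point the driver property at the new class instead:

```xml
<!-- Optional for Connector/J 8.x: silences the deprecated-driver warning -->
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
```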

//fix the tmp dir
Change the value of every property that contains "system:java.io.tmpdir" to the following path:
/tmp/hive
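The substitution above can be scripted with sed instead of editing by hand. A minimal sketch: the snippet below runs the sed expression against a throwaway sample line to show the effect; in practice you would point the same expression at `conf/hive-site.xml` (and also create `/tmp/hive` with `mkdir -p /tmp/hive`).

```shell
# Sample line standing in for conf/hive-site.xml (illustrative only)
printf '<value>${system:java.io.tmpdir}/${system:user.name}</value>\n' > /tmp/hive-site-sample.xml
# Replace the two Hive placeholder variables with fixed paths
sed -i 's#${system:java.io.tmpdir}#/tmp/hive#g; s#${system:user.name}#hive#g' /tmp/hive-site-sample.xml
cat /tmp/hive-site-sample.xml
```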

Create the database, create the user, and grant privileges

MariaDB [(none)]> create database hive;
MariaDB [(none)]> grant all privileges on *.* to linyouyi@'localhost' identified by 'linyouyi';
MariaDB [(none)]> flush privileges;
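The grant above hands out privileges on `*.*`, which is broader than the metastore needs. A narrower alternative (a sketch, assuming the metastore only touches the `hive` database) would be:

```sql
-- Scope the grant to the metastore database only
CREATE DATABASE IF NOT EXISTS hive;
GRANT ALL PRIVILEGES ON hive.* TO 'linyouyi'@'localhost' IDENTIFIED BY 'linyouyi';
FLUSH PRIVILEGES;
```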

Initialize Hive (initialize the metastore schema)

[linyouyi@hadoop01 apache-hive-2.3.6-bin]$ bin/schematool -initSchema -dbType mysql
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/module/apache-hive-2.3.6-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/module/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:     jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver :     com.mysql.jdbc.Driver
Metastore connection User:     linyouyi
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed

3. Start Hive

[linyouyi@hadoop01 apache-hive-2.3.6-bin]$ bin/hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/module/apache-hive-2.3.6-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/module/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/hadoop/module/apache-hive-2.3.6-bin/lib/hive-common-2.3.6.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
OK
default
Time taken: 3.063 seconds, Fetched: 1 row(s)

If you see `which: no hbase in ...`, the HBase environment variables are not configured:

[root@hadoop01 apache-hive-2.3.6-bin]# vim /etc/profile
#HBASE_HOME
export HBASE_HOME=/hadoop/module/hbase-2.0.5
export PATH=$PATH:$HBASE_HOME/bin
[root@hadoop01 apache-hive-2.3.6-bin]# vim /etc/profile
[root@hadoop01 apache-hive-2.3.6-bin]# exit
//log out and log back in so the environment variables take effect
[linyouyi@hadoop01 apache-hive-2.3.6-bin]$ exit


//after logging back in and running hive again, the warning no longer appears
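Logging out is not strictly necessary: running `source /etc/profile` makes the current shell re-read the file. A minimal self-contained demo of why sourcing picks up the new variables (it uses a throwaway file standing in for `/etc/profile`):

```shell
# Write the same two export lines to a throwaway file (stand-in for /etc/profile)
cat > /tmp/profile-demo <<'EOF'
export HBASE_HOME=/hadoop/module/hbase-2.0.5
export PATH=$PATH:$HBASE_HOME/bin
EOF
# Sourcing evaluates the file in the current shell, so the variable is visible immediately
source /tmp/profile-demo
echo "$HBASE_HOME"
```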

 

4. Usage

hive> create database lin;
OK
Time taken: 0.088 seconds
hive> show databases;
OK
default
lin
Time taken: 0.008 seconds, Fetched: 2 row(s)
hive> use lin;
OK
Time taken: 0.017 seconds
hive> create table t_lin(id int,name string,salary string);
OK
Time taken: 0.3 seconds
hive> show tables;
OK
t_lin
Time taken: 0.017 seconds, Fetched: 1 row(s)

Inserting data this way is usually slow:

hive> insert into table t_lin values(1,"youyi","10K");
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = linyouyi_20190901170958_4cd00921-8614-4a8d-8d12-41843d92e21d
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1565368243366_0001, Tracking URL = http://hadoop02:8088/proxy/application_1565368243366_0001/
Kill Command = /hadoop/module/hadoop-2.7.7/bin/hadoop job  -kill job_1565368243366_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-09-01 17:10:06,685 Stage-1 map = 0%,  reduce = 0%
2019-09-01 17:10:11,867 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.19 sec
MapReduce Total cumulative CPU time: 2 seconds 190 msec
Ended Job = job_1565368243366_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://mycluster/user/hive/warehouse/lin.db/t_lin/.hive-staging_hive_2019-09-01_17-09-58_657_115403678722774435-1/-ext-10000
Loading data to table lin.t_lin
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 2.19 sec   HDFS Read: 4317 HDFS Write: 77 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 190 msec
OK
Time taken: 14.633 seconds


hive> select * from t_lin;
OK
1    youyi    10K
Time taken: 0.103 seconds, Fetched: 1 row(s)
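Since each `INSERT ... VALUES` launches a full MapReduce job, bulk-loading a file is the usual way to avoid the slowness noted above. A sketch (the table name `t_lin2` and the file path are illustrative, and the file's field delimiter must match the table definition):

```sql
-- Create a table with an explicit delimiter, then load a local CSV in one step
CREATE TABLE t_lin2(id INT, name STRING, salary STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
LOAD DATA LOCAL INPATH '/tmp/t_lin.csv' INTO TABLE t_lin2;
```

`LOAD DATA` simply moves the file into the table's HDFS directory, so it completes in seconds rather than launching a MapReduce job per row.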

 

Reposted from: https://www.cnblogs.com/linyouyi/p/11441261.html
