Big Data Series 5: Hive

1. Preface

Hive is a data warehouse built on top of Hadoop. Originally developed by Facebook, it is now a top-level Apache open-source project. It relies on HDFS to store data and on MapReduce to process it. Hive does not fully support the SQL standard, and its transaction support, indexes, subqueries, and join operations all come with significant limitations; newer versions do support UPDATE, but it is not efficient.
Hive consists of three main modules:
1. The user interface module, including the CLI, HWI, JDBC, and Thrift Server.
2. The driver module (Driver), including the compiler, optimizer, and executor.
3. The metadata storage module (Metastore), an independent relational database, typically backed by a MySQL instance.
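
Because the Metastore is an ordinary relational database, Hive's metadata can be inspected directly. A minimal sketch, assuming the MySQL host, port, and hive database configured later in section 2.2 (TBLS is part of the standard metastore schema):

# list every table Hive knows about, straight from the metastore
mysql -h 192.168.100.102 -P 3310 -uroot -p hive -e "SELECT TBL_NAME, TBL_TYPE FROM TBLS;"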

With the steady development of distributed databases in recent years, Hive's advantages as a big-data OLAP engine have become less and less obvious. As data volumes grow, Hive queries take longer and longer, and many large companies have already moved to other products. Unless legacy systems tie you to it, Hive is not recommended when choosing an OLAP engine.

2. Installation and Configuration

A note on versions:
The latest Hive release, apache-hive-3.1.2, is built against the following versions: spark-2.3.0 (this Spark release supports hadoop 2.6.5), hadoop 3.1.0, and scala 2.11.8; note the conflict between the Hadoop versions that Spark and Hive expect. As noted earlier in this series, version compatibility is one of the biggest problems of the vanilla Hadoop ecosystem; for production environments, please use the CDH distribution.

2.1 vi /etc/profile
export HIVE_HOME=/usr/local/hive-3.1.2
export PATH=$PATH:$HIVE_HOME/bin
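
Reload the profile so the new variables take effect in the current shell:

source /etc/profile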

2.2 vi hive-site.xml

<property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>root</value>
  </property>
  <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>Msql#1234</value>
  </property>
  <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://192.168.100.102:3310/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  </property>
  <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
      <name>hive.metastore.schema.verification</name>
      <value>false</value>
  </property>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
  </property>
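
Before moving on, it is worth checking that the metastore database in ConnectionURL is actually reachable. A quick sanity check, assuming the mysql client is installed on this node:

# should print the server version if host, port, and credentials are correct
mysql -h 192.168.100.102 -P 3310 -uroot -p -e "SELECT VERSION();"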

2.3 vi hive-env.sh
Edit the following:
export JAVA_HOME=/usr/java/jdk1.8.0_251-amd64
export HADOOP_HOME=/usr/local/hadoop-3.1.3/
export HIVE_CONF_DIR=/usr/local/hive-3.1.2/conf
export HIVE_AUX_JARS_PATH=/usr/local/hive-3.1.2/lib

2.4 vi hive-log4j2.properties
property.hive.log.dir = /usr/local/hive-3.1.2/logs

2.5 mysql driver
cp mysql-connector-java-5.1.36.jar /usr/local/hive-3.1.2/lib
2.6 create mysql hive db
Create the database (and a dedicated user, if you need restricted privileges) manually; a sketch follows.
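
A minimal sketch, assuming you want a dedicated hive user rather than root (the user name and password below are illustrative; latin1 is the character set traditionally recommended for the metastore schema):

mysql -h 192.168.100.102 -P 3310 -uroot -p -e "CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET latin1; CREATE USER 'hive'@'%' IDENTIFIED BY 'Hive#1234'; GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%'; FLUSH PRIVILEGES;"

If you do create a dedicated user, update ConnectionUserName and ConnectionPassword in hive-site.xml to match.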
2.7 create hdfs dir
Specify a custom directory here if you need one:
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod g+w /user/hive/warehouse
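
Verify the directory and its group-write bit, assuming the commands above succeeded:

hdfs dfs -ls -d /user/hive/warehouse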

2.8 init database
[root@ipsnode2 hive-3.1.2]# schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:518)
at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:536)
at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:430)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141)
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5104)
at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)

Solution: the guava versions bundled with Hive and Hadoop conflict; replace Hive's copy with Hadoop's newer one:
rm -rf $HIVE_HOME/lib/guava*.jar
cp $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar $HIVE_HOME/lib/
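
After swapping the jar, rerun the initialization; schematool's -info option can then confirm the metastore schema version:

schematool -dbType mysql -initSchema
schematool -dbType mysql -info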

2.9 start hive
Logging initialized using configuration in file:/usr/local/hive-3.1.2/conf/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: java.net.ConnectException: Call From ipsnode2/192.168.100.102 to ipsnode1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:651)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:591)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.net.ConnectException: Call From ipsnode2/192.168.100.102 to ipsnode1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:755)

Solution:
Copy hdfs-site.xml, core-site.xml, and mountTable.xml into hive/conf so Hive can resolve the correct NameNode address.
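
A sketch of the copy, assuming the standard Hadoop configuration layout (mountTable.xml only exists if you use ViewFS federation):

cp $HADOOP_HOME/etc/hadoop/core-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HIVE_HOME/conf/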


Logging initialized using configuration in file:/usr/local/hive-3.1.2/conf/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: The dir: /tmp/hive on HDFS should be writable. Current permissions are: rwx--x--x
at org.apache.hadoop.hive.ql.exec.Utilities.ensurePathIsWritable(Utilities.java:4501)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:760)

Solution:
hadoop fs -chmod -R 777 /tmp
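
If opening up all of /tmp is too broad for your cluster, a narrower alternative is to target only the scratch directory the error message actually complains about:

hadoop fs -chmod -R 777 /tmp/hive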
Errors like these can almost always be resolved by reading the logs carefully.

3. Running

A production Hive deployment does not need a cluster of its own; starting hiveserver2 is enough to accept concurrent connections.
hive --service hiveserver2
# or start it in the background:
nohup bin/hiveserver2 1>/dev/null 2>&1 &
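
To confirm the server is up, check that the Thrift port (10000 by default) is listening:

ss -lntp | grep 10000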

4. Client Operations

beeline -u jdbc:hive2://192.168.100.102:10000 # or: beeline -u "jdbc:hive2://192.168.100.102:10000" -n livequery -p 123456
2019-12-21 12:07:00,156 ERROR [main] DataNucleus.Datastore: Exception thrown creating StoreManager. See the nested exception
Error creating transactional connection factory
org.datanucleus.exceptions.NucleusException: Error creating transactional connection factory
at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:214)
at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:162)
at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:285)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl

Solution: allow the root user to impersonate other users by adding the proxy-user settings to Hadoop's core-site.xml:
vi core-site.xml

<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
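
The proxy-user settings live on the Hadoop side, so either restart the cluster or refresh them in place:

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration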

[root@ipsnode2 ~]# beeline -u jdbc:hive2://192.168.100.102:10000
Connecting to jdbc:hive2://192.168.100.102:10000
2019-12-21 14:17:02 INFO Utils:310 - Supplied authorities: 192.168.100.102:10000
2019-12-21 14:17:02 INFO Utils:397 - Resolved authority: 192.168.100.102:10000
2019-12-21 14:17:02 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2://192.168.100.102:10000
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1.spark2 by Apache Hive
0: jdbc:hive2://192.168.100.102:10000>
0: jdbc:hive2://192.168.100.102:10000> show databases;
INFO : Compiling command(queryId=root_20191221142037_93e10dbb-a3a1-4416-8804-eea6e84e3eef): show databases
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO : Completed compiling command(queryId=root_20191221142037_93e10dbb-a3a1-4416-8804-eea6e84e3eef); Time taken: 0.016 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Executing command(queryId=root_20191221142037_93e10dbb-a3a1-4416-8804-eea6e84e3eef): show databases
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=root_20191221142037_93e10dbb-a3a1-4416-8804-eea6e84e3eef); Time taken: 0.015 seconds
INFO : OK
INFO : Concurrency mode is disabled, not creating a lock manager
+----------------+
| database_name  |
+----------------+
| default        |
| hr             |
+----------------+
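With the connection working, day-to-day client operations are plain HiveQL. A minimal sketch against the hr database shown above (the employee table and its columns are illustrative; beeline accepts -e multiple times):

beeline -u jdbc:hive2://192.168.100.102:10000 \
  -e "CREATE TABLE IF NOT EXISTS hr.employee (id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','" \
  -e "INSERT INTO hr.employee VALUES (1, 'alice'), (2, 'bob')" \
  -e "SELECT * FROM hr.employee"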

5. Switching the Execution Engine

Hive's default execution engine is MapReduce, which is slow and no longer recommended; the alternatives are Tez and Spark. Here we switch to Spark, which keeps intermediate results in memory rather than writing each stage to disk, so overall it is much faster than MapReduce.

5.1 Add the Spark jars to Hive
[root@ipsnode2 jars]# pwd
/usr/local/spark-2.3.0/jars
[root@ipsnode2 jars]# cp scala-library-2.11.8.jar spark-core_2.11-2.3.0.jar spark-network-common_2.11-2.3.0.jar /usr/local/hive-3.1.2/lib/

5.2 vi hive-site.xml
vi /usr/local/hive-3.1.2/conf/hive-site.xml

<property>
        <name>hive.execution.engine</name>
        <value>spark</value>
</property>

5.3 Optional: configure the dependency jars by uploading everything under $SPARK_HOME/jars/ to HDFS and pointing spark.yarn.jars at it (see the sketch after the config):
vi hive-site.xml

<property>
  <name>spark.yarn.jars</name>
  <value>hdfs://xxxx:8020/spark-jars/*</value>
</property>
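
A sketch of the upload, matching the /spark-jars path in the property above (xxxx stands for your NameNode host, as in the config):

hadoop fs -mkdir -p /spark-jars
hadoop fs -put $SPARK_HOME/jars/*.jar /spark-jars/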

[root@ipsnode2 ~]# beeline -u jdbc:hive2://192.168.100.102:10000
Connecting to jdbc:hive2://192.168.100.102:10000
2019-12-21 16:09:48 INFO Utils:310 - Supplied authorities: 192.168.100.102:10000
2019-12-21 16:09:48 INFO Utils:397 - Resolved authority: 192.168.100.102:10000
2019-12-21 16:09:48 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2://192.168.100.102:10000
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.1.spark2 by Apache Hive
0: jdbc:hive2://192.168.100.102:10000> set hive.execution.engine;
+------------------------------+
|             set              |
+------------------------------+
| hive.execution.engine=spark  |
+------------------------------+
1 row selected (0.206 seconds)
0: jdbc:hive2://192.168.100.102:10000>
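
Note that set hive.execution.engine only confirms the configuration; to verify that Spark actually executes queries, run something that launches a job, e.g. against the illustrative table from section 4 (assuming it exists):

beeline -u jdbc:hive2://192.168.100.102:10000 -e "SELECT COUNT(*) FROM hr.employee"

If the engine is wired up correctly, the query log shows a Spark application being submitted instead of a MapReduce job.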
