Hive Installation and Configuration
1. Upload, extract, and rename
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C ../apps/
cd /home/offcn/apps/
mv apache-hive-3.1.2-bin hive-3.1.2
2. Basic configuration
(1) Configure environment variables
sudo vim /etc/profile
#hive-3.1.2
export HIVE_HOME=/home/offcn/apps/hive-3.1.2
export PATH=$PATH:$HIVE_HOME/bin
source /etc/profile
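To confirm the PATH change took effect, the direct test on the real machine is `hive --version`. The pattern below is a minimal, self-contained sketch of the same check (it uses a stand-in directory created with mktemp, not the real /home/offcn/apps/hive-3.1.2, so it can run anywhere):

```shell
# Stand-in HIVE_HOME purely for illustration; substitute the real path
HIVE_HOME=$(mktemp -d)
mkdir -p "$HIVE_HOME/bin"
export PATH="$PATH:$HIVE_HOME/bin"

# POSIX-safe membership test: wrap PATH in colons and look for the entry
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive bin on PATH" ;;
  *)                    echo "hive bin missing" ;;
esac
```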
(2) Start the Hive client with the hive command
Issue 1:
Conflicting logging jars
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/offcn/apps/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/offcn/apps/hadoop-3.2.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Issue 2:
Hive-on-MR is deprecated; upgrade to the Tez engine
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
show databases;
Issue 3:
Cannot instantiate SessionHiveMetaStoreClient
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
Kill the current client process. A metastore_db directory and a derby.log file will have appeared in the launch directory; delete them.
Run the schematool script to initialize the Derby database:
schematool -dbType derby -initSchema
After it succeeds, start hive again:
hive> show databases;
OK
default
Time taken: 0.778 seconds, Fetched: 1 row(s)
hive> create database test1;
OK
Time taken: 0.154 seconds
hive> show databases;
default
test1
Time taken: 0.037 seconds, Fetched: 2 row(s)
Run hive from a different directory:
hive> show databases;
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
rm -rf metastore_db derby.log
schematool -dbType derby -initSchema
Run hive again:
> show databases;
OK
default
Time taken: 0.838 seconds, Fetched: 1 row(s)
Issue 4:
Running hive from a different directory requires re-initializing the metastore, because the generated Derby database files are tied to the launch directory and cannot be shared.
Solutions:
- Rename the conflicting logging jar
- For the Tez engine, see "3. Installing and configuring Tez"
- For the metastore issue, MySQL is recommended
- The beeline client is recommended
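The behavior behind Issue 4 is that embedded Derby writes metastore_db into whatever directory hive is launched from, so every working directory gets its own unrelated metastore. A self-contained simulation of that side effect (fake_hive is a stand-in for the real binary, which is not invoked here):

```shell
# fake_hive mimics only the side effect: embedded Derby creates its
# database files in the current working directory
fake_hive() { mkdir -p metastore_db; touch derby.log; }

work=$(mktemp -d)
mkdir "$work/a" "$work/b"
(cd "$work/a" && fake_hive)   # "running hive" from directory a
(cd "$work/b" && fake_hive)   # "running hive" from directory b

# Two independent metastores now exist, one per launch directory
ls -d "$work"/*/metastore_db
```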
(3) Rename the jar to resolve the logging conflict
cd /home/offcn/apps/hive-3.1.2/lib
mv log4j-slf4j-impl-2.10.0.jar log4j-slf4j-impl-2.10.0.jar.bak
(4) Switch metadata storage to MySQL via the config file
a. Install MySQL
b. Edit the config file
cd /home/offcn/apps/hive-3.1.2/conf
vim hive-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://bd-offcn-01:3306/metastore?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<!-- Disable schema version verification -->
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
</property>
</configuration>
# Copy in the MySQL JDBC driver jar
cd /home/offcn/softwares/mysql-5.7
cp mysql-connector-java-5.1.48.jar /home/offcn/apps/hive-3.1.2/lib/
# Initialize the metastore in MySQL
schematool -dbType mysql -initSchema
# Start the hive client again to test
(5) Configure the metastore and hiveServer2 services in the config file
cd /home/offcn/apps/hive-3.1.2/conf
vim hive-site.xml, appending the following:
<property>
<name>hive.metastore.uris</name>
<value>thrift://bd-offcn-01:9083</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.bind.host</name>
<value>bd-offcn-01</value>
</property>
<property>
<name>hive.metastore.event.db.notification.api.auth</name>
<value>false</value>
</property>
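One addition that is often needed alongside these settings (an assumption about this cluster, not part of the original steps): HiveServer2 impersonates the connecting user by default, so Hadoop's core-site.xml usually needs proxyuser entries for the account running the Hive services (offcn here), otherwise beeline logins can fail with an impersonation error. A sketch of that fragment:

```xml
<!-- In $HADOOP_HOME/etc/hadoop/core-site.xml; "offcn" is assumed to be
     the user running the metastore and hiveServer2 services -->
<property>
  <name>hadoop.proxyuser.offcn.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.offcn.groups</name>
  <value>*</value>
</property>
```

Restart HDFS/YARN after changing core-site.xml so the setting takes effect.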
Note:
At this point the hive command cannot start a working client directly, because the metastore service has not been started:
hive> show databases;
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
3. Starting and stopping the services:
(1) Start the metastore service
hive --service metastore
(2) Start the hiveServer2 service
hive --service hiveserver2
(3) Start the client
beeline
beeline> !connect jdbc:hive2://bd-offcn-01:10000
Connecting to jdbc:hive2://bd-offcn-01:10000
Enter username for jdbc:hive2://bd-offcn-01:10000: offcn
Enter password for jdbc:hive2://bd-offcn-01:10000: hadoop
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://bd-offcn-01:10000>
(4) One-click start/stop script
# Create the log directory
cd /home/offcn/logs/
mkdir hive-3.1.2
cd hive-3.1.2/
touch metastore.out hiveserver2.out
# Write the script
cd /home/offcn/bin/
vim hive.sh
#!/bin/bash
if [ "$1" = "start" ]; then
  nohup $HIVE_HOME/bin/hive --service metastore >> $HOME/logs/hive-3.1.2/metastore.out 2>&1 &
  nohup $HIVE_HOME/bin/hive --service hiveserver2 >> $HOME/logs/hive-3.1.2/hiveserver2.out 2>&1 &
else
  # Both Hive services show up as RunJar processes; kill them all
  hive_pids=$(ps -ef | grep RunJar | grep -v grep | awk '{print $2}')
  for pid in $hive_pids; do
    kill -9 "$pid"
    echo "killed $pid"
  done
fi
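One detail worth noting in start scripts like this: shell redirections apply left to right, so `>> logfile 2>&1` appends both stdout and stderr to the file, while the reversed `2>&1 >> logfile` duplicates stderr from whatever stdout was (usually the terminal) before stdout is moved. A minimal demonstration with echo in place of the hive services:

```shell
# Correct order: stdout goes to the file first, then stderr duplicates it
log=$(mktemp)
{ echo out; echo err 1>&2; } >> "$log" 2>&1
wc -l < "$log"    # 2: both lines captured

# Reversed order: stderr was duplicated before stdout moved to the file
log2=$(mktemp)
{ echo out; echo err 1>&2; } 2>&1 >> "$log2"
wc -l < "$log2"   # 1: only stdout captured; "err" went to the terminal
```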