Installing and Configuring Hive


Preface

This article installs Apache Hive 3.1.2 (apache-hive-3.1.2). It assumes Hadoop and MySQL are already installed and running.

1. Extract and Install

# extract hive
tar -zxvf /opt/download/apache-hive-3.1.2-bin.tar.gz -C /opt/software
# rename (the tarball unpacks to apache-hive-3.1.2-bin)
mv /opt/software/apache-hive-3.1.2-bin /opt/software/hive312
# enter the conf directory; Hive reads user settings from hive-site.xml,
# so create one there with the content shown in the next section
cd /opt/software/hive312/conf
vim hive-site.xml

2. Configure

Put the following into /opt/software/hive312/conf/hive-site.xml:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- JDBC URL of the MySQL database backing the metastore -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive312?createDatabaseIfNotExist=true</value>
    <description>connect to mysql for hive metastore</description>
  </property>
  <!-- JDBC driver class (MySQL Connector/J 5.x) -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>driver for mysql</description>
  </property>
  <!-- MySQL username -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to mysql</description>
  </property>
  <!-- MySQL password -->
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to mysql</description>
  </property>
  <!-- disable HiveServer2 authentication -->
  <property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
  </property>
</configuration>
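A malformed hive-site.xml surfaces later as an opaque schematool stack trace, so it can be worth validating the file up front. A minimal sketch, assuming `python3` is available on the machine (the `check_xml` helper name is just for illustration):

```shell
# check_xml: report whether a file parses as well-formed XML
# (hypothetical helper; assumes python3 is on the PATH)
check_xml() {
  python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1" 2>/dev/null \
    && echo "$1: well-formed" || echo "$1: NOT well-formed"
}
# usage: check_xml /opt/software/hive312/conf/hive-site.xml
```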

3. Add JAR Dependencies

#1. Copy mysql-connector-java-5.1.47.jar into hive312/lib

#2. Locate Hadoop's guava jar
find /opt/software/hadoop313/ -name 'guava*.jar'
/opt/software/hadoop313/share/hadoop/common/lib/guava-27.0-jre.jar
/opt/software/hadoop313/share/hadoop/hdfs/lib/guava-27.0-jre.jar

#3. In hive312/lib, delete Hive's bundled guava and copy in Hadoop's
#   (the version mismatch otherwise causes a NoSuchMethodError when schematool runs)
cd /opt/software/hive312/lib
rm guava-19.0.jar
cp /opt/software/hadoop313/share/hadoop/common/lib/guava-27.0-jre.jar ./

#4. Verify the copy
[root@singlealvin lib]# ls | grep guava
guava-27.0-jre.jar
jersey-guava-2.25.1.jar
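After the copy, Hadoop's and Hive's lib directories should agree on the guava version, since that mismatch is exactly what the delete-and-copy step fixes. A small sketch to confirm it (the `same_guava` helper is hypothetical; pass the two lib directories as arguments):

```shell
# same_guava: succeed only if both directories carry an identically
# named guava jar (hypothetical helper for this check)
same_guava() {
  a=$(ls "$1"/guava-*.jar 2>/dev/null | head -n 1)
  b=$(ls "$2"/guava-*.jar 2>/dev/null | head -n 1)
  [ -n "$a" ] && [ "$(basename "$a")" = "$(basename "$b")" ]
}
# usage:
# same_guava /opt/software/hadoop313/share/hadoop/common/lib /opt/software/hive312/lib \
#   && echo "guava versions match"
```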

4. Configure Environment Variables

vim /etc/profile.d/myenv.sh  # add the following
# hive
export HIVE_HOME=/opt/software/hive312
export PATH=$HIVE_HOME/bin:$PATH

source /etc/profile  # reload so the new variables take effect
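Because `$HIVE_HOME/bin` is prepended, `hive` resolves ahead of any other copy on the PATH. A quick way to confirm the ordering after sourcing the profile:

```shell
# after sourcing, the first PATH entry should be Hive's bin directory
export HIVE_HOME=/opt/software/hive312
export PATH=$HIVE_HOME/bin:$PATH
first_entry=$(printf '%s' "$PATH" | cut -d: -f1)
echo "$first_entry"   # /opt/software/hive312/bin
```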

5. Initialize and Start

# enter hive's bin directory and initialize the metastore schema
[root@singlealvin bin]# ./schematool -dbType mysql -initSchema
# start the services
nohup hive --service metastore > /dev/null 2>&1 &
nohup hive --service hiveserver2 > /dev/null 2>&1 &
# check that the processes are up
jps -ml
# check that HiveServer2 is listening on its port
netstat -anp | grep 10000
# before starting beeline, take HDFS out of safe mode if needed
hdfs dfsadmin -safemode leave
# start beeline
beeline -u jdbc:hive2://192.168.29.144:10000 -n root
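HiveServer2 often needs some time after launch before port 10000 opens, so an immediate beeline connection can fail. A polling sketch (the `wait_for_port` helper is hypothetical; it relies on `netstat` as used above):

```shell
# wait_for_port: poll until the given TCP port is listening, or give up
# after N tries spaced 2 s apart (hypothetical helper)
wait_for_port() {
  port=$1; tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if netstat -an 2>/dev/null | grep -q "[:.]$port .*LISTEN"; then
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  return 1
}
# usage: wait_for_port 10000 && beeline -u jdbc:hive2://192.168.29.144:10000 -n root
```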

6. Basic Operations

Once inside beeline, you can run Hive SQL statements (sample session below):

[root@singlealvin ~]# beeline -u jdbc:hive2://192.168.29.144:10000 -n root
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/software/hive312/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/software/hadoop313/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://192.168.29.144:10000
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 3.1.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.2 by Apache Hive
0: jdbc:hive2://192.168.29.144:10000> show databases;
INFO  : Compiling command(queryId=root_20210624161247_5f9d9c16-b424-454e-906e-75d1508b544a): show databases
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=root_20210624161247_5f9d9c16-b424-454e-906e-75d1508b544a); Time taken: 0.018 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=root_20210624161247_5f9d9c16-b424-454e-906e-75d1508b544a): show databases
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=root_20210624161247_5f9d9c16-b424-454e-906e-75d1508b544a); Time taken: 0.02 seconds
INFO  : OK
INFO  : Concurrency mode is disabled, not creating a lock manager
+----------------+
| database_name  |
+----------------+
| default        |
| demo           |
| payanalysis    |
| test           |
+----------------+
4 rows selected (0.169 seconds)
0: jdbc:hive2://192.168.29.144:10000> 

Summary

This article covered installing and starting Hive 3.1.2 and running Hive SQL through beeline. Follow-up articles will cover common Hive functions and Hive tuning (data cleansing, data skew, ...).
