Apache Hive 2.1.1 Installation and Deployment

1. Install MySQL and Hadoop (prerequisites)

2. Extract the archive on the Hadoop master node

  Note: make sure this is the Hadoop master node.

Extraction command: tar -zxvf apache-hive-2.1.1-bin.tar.gz
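
If the goal is to end up with the /scsi/soft/hive/apache-hive-2.1.1-bin path used for HIVE_HOME in step 4, tar can extract straight into that directory (target path assumed from step 4):

tar -zxvf apache-hive-2.1.1-bin.tar.gz -C /scsi/soft/hive/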

3. Hive installation directory layout and purpose

bin       command scripts for Linux/Windows
conf      configuration files
examples  Hive examples
hcatalog  the HCatalog service
jdbc      the Hive JDBC driver jar
lib       dependency jars
scripts   SQL scripts for initializing the Hive metastore database

 

4. Configure environment variables


export HIVE_HOME=/scsi/soft/hive/apache-hive-2.1.1-bin
PATH=$HIVE_HOME/bin:$PATH
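
These exports are typically appended to /etc/profile or ~/.bashrc (exact file assumed); apply them to the current shell with:

source /etc/profile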

 

5. Run hive --version to view the version information (the "which: no hbase" line in the output simply means HBase is not on the PATH and can be ignored)

[root@coordinate-presto conf]# hive --version
which: no hbase in (/scsi/soft/apache-hive-2.1.1-bin/bin:/scsi/soft/apache-maven-3.6.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/scsi/soft/jdk1.8.0_231/bin:/scsi/soft/hadoop-2.7.3/bin:/scsi/soft/hadoop-2.7.3/sbin:/root/bin)
Hive 2.1.1
Subversion git://jcamachorodriguez-rMBP.local/Users/jcamachorodriguez/src/workspaces/hive/HIVE-release2/hive -r 1af77bbf8356e86cabbed92cfa8cc2e1470a1d5c
Compiled by jcamachorodriguez on Tue Nov 29 19:46:12 GMT 2016
From source with checksum 569ad6d6e5b71df3cb04303183948d90

 

6. Hive configuration

6.1 Configure hive-site.xml

Create a new hive-site.xml file under the conf directory of the Hive installation (vi hive-site.xml) and paste the following content into it:
# cat hive-site.xml 

<?xml version="1.0"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<configuration>
   <!-- use an in-process ("local") metastore; note this property has been deprecated since Hive 0.10, and the javax.jdo.option.* settings below are what actually take effect -->
   <property>
      <name>hive.metastore.local</name>
      <value>true</value>
   </property>

   <!-- do not enforce strict metastore schema version verification -->
   <property>
      <name>hive.metastore.schema.verification</name>
      <value>false</value>
   </property>

   <!-- JDBC connection URL of the metastore database -->
   <property> 
      <name>javax.jdo.option.ConnectionURL</name> 
      <value>jdbc:mysql://192.168.133.128:33306/hive-test?characterEncoding=UTF-8</value> 
   </property> 
   
   <!-- JDBC driver class -->
   <property> 
      <name>javax.jdo.option.ConnectionDriverName</name> 
      <value>com.mysql.jdbc.Driver</value> 
   </property> 
 
   <!-- MySQL connection user -->
   <property> 
     <name>javax.jdo.option.ConnectionUserName</name> 
     <value>root</value> 
   </property> 

   <!-- MySQL connection password -->
   <property> 
     <name>javax.jdo.option.ConnectionPassword</name> 
     <value>123456</value> 
   </property>
 
   <!-- allow connections from Hive JDBC/Thrift clients (HiveServer2) -->
   <property>
      <name>hive.server2.thrift.port</name>
      <value>10000</value>
      <description>Port number of HiveServer2 Thrift interface. Can be overridden by setting $HIVE_SERVER2_THRIFT_PORT</description>
    </property>

   <property>
      <name>hive.server2.thrift.bind.host</name>
      <value>0.0.0.0</value>
      <description>Bind host on which to run the HiveServer2 Thrift interface.
      Can be overridden by setting $HIVE_SERVER2_THRIFT_BIND_HOST</description>
   </property>

</configuration>
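
Note: with the settings above, the Hive CLI talks to MySQL directly through an embedded metastore. If clients should instead go through the standalone metastore service started on port 9083 in step 10, their hive-site.xml would also need the hive.metastore.uris property; a sketch, with the hostname assumed from this setup:

   <property>
      <name>hive.metastore.uris</name>
      <value>thrift://coordinate-presto:9083</value>
   </property>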

 

6.2 Configure hive-env.sh

Set HIVE_CONF_DIR and HADOOP_HOME:

export HIVE_CONF_DIR=/scsi/soft/hive/apache-hive-2.1.1-bin/conf
export HADOOP_HOME=/scsi/soft/hadoop/hadoop-2.7.3/
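
If conf/hive-env.sh does not exist yet, create it from the template shipped with the distribution:

cd /scsi/soft/hive/apache-hive-2.1.1-bin/conf
cp hive-env.sh.template hive-env.sh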

7. Copy the MySQL JDBC driver jar into the lib directory of the Hive installation

Change into the lib directory: cd ~/apache-hive-2.1.1-bin/lib
Upload mysql-connector-java-5.1.27-bin.jar with the rz command from the lrzsz package
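
Alternatively, if the node has internet access, the driver can be downloaded from Maven Central (coordinates assumed; the plain jar published there is equivalent to the -bin.jar inside the MySQL tarball):

wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.27/mysql-connector-java-5.1.27.jar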

8. Initialize the metastore
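
Before running schematool, the hive-test database named in the JDBC URL must already exist, since the URL above carries no createDatabaseIfNotExist flag. A minimal sketch, reusing the host, port and credentials from hive-site.xml:

mysql -h 192.168.133.128 -P 33306 -uroot -p -e "CREATE DATABASE \`hive-test\` DEFAULT CHARACTER SET utf8;"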

From the bin directory of the Hive installation, run: ./schematool -initSchema -dbType mysql
[root@coordinate-presto bin]# ./schematool -initSchema -dbType mysql
which: no hbase in (/scsi/soft/apache-hive-2.1.1-bin/bin:/scsi/soft/apache-maven-3.6.2/bin:/scsi/soft/hive-1.1.0-cdh5.7.0/bin:/scsi/soft/apache-maven-3.6.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/scsi/soft/jdk1.8.0_231/bin:/scsi/soft/hadoop-2.7.3/bin:/scsi/soft/hadoop-2.7.3/sbin:/root/bin:/scsi/soft/jdk1.8.0_231/bin:/scsi/soft/hadoop-2.7.3/bin:/scsi/soft/hadoop-2.7.3/sbin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/scsi/soft/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/scsi/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://192.168.133.128:33306/hive-test?characterEncoding=UTF-8
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       root
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Initialization script completed
schemaTool completed

9. Inspect the metadata tables created in the MySQL database
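
One way to check from the MySQL side (database name taken from the JDBC URL; a freshly initialized 2.1 metastore contains tables such as DBS, TBLS, COLUMNS_V2, SDS and VERSION):

mysql -h 192.168.133.128 -P 33306 -uroot -p -e "USE \`hive-test\`; SHOW TABLES;"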

 

10. Start Hive

Start the Hive CLI:

[root@coordinate-presto bin]# hive
which: no hbase in (/scsi/soft/apache-hive-2.1.1-bin/bin:/scsi/soft/apache-maven-3.6.2/bin:/scsi/soft/hive-1.1.0-cdh5.7.0/bin:/scsi/soft/apache-maven-3.6.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/scsi/soft/jdk1.8.0_231/bin:/scsi/soft/hadoop-2.7.3/bin:/scsi/soft/hadoop-2.7.3/sbin:/root/bin:/scsi/soft/jdk1.8.0_231/bin:/scsi/soft/hadoop-2.7.3/bin:/scsi/soft/hadoop-2.7.3/sbin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/scsi/soft/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/scsi/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/scsi/soft/apache-hive-2.1.1-bin/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> 

Start the metastore service (it listens on port 9083 by default):

hive --service metastore &

Start HiveServer2 so that clients can connect over JDBC/Thrift:

nohup hiveserver2 &
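
A quick connectivity check with beeline (user name assumed; add -p if a password is required):

beeline -u jdbc:hive2://localhost:10000 -n root -e "show databases;"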

Check the running processes:

[root@hadoop-namenode bin]# ps -ef | grep hive
root     62242 62122 27 11:37 pts/1    00:00:11 /opt/jdk1.8.0_231/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/hadoop/hadoop-2.6.5/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/hadoop/hadoop-2.6.5 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/hadoop/hadoop-2.6.5/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/opt/apache-hive-2.1.1-bin/conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /opt/apache-hive-2.1.1-bin/lib/hive-metastore-2.1.1.jar org.apache.hadoop.hive.metastore.HiveMetaStore
root     62325 62122 39 11:37 pts/1    00:00:13 /opt/jdk1.8.0_231/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/hadoop/hadoop-2.6.5/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/hadoop/hadoop-2.6.5 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/hadoop/hadoop-2.6.5/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/opt/apache-hive-2.1.1-bin/conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /opt/apache-hive-2.1.1-bin/lib/hive-cli-2.1.1.jar org.apache.hadoop.hive.cli.CliDriver
root     62418 62122 99 11:37 pts/1    00:00:13 /opt/jdk1.8.0_231/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/hadoop/hadoop-2.6.5/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/hadoop/hadoop-2.6.5 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/hadoop/hadoop-2.6.5/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dlog4j.configurationFile=hive-log4j2.properties -Djava.util.logging.config.file=/opt/apache-hive-2.1.1-bin/conf/parquet-logging.properties -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /opt/apache-hive-2.1.1-bin/lib/hive-service-2.1.1.jar org.apache.hive.service.server.HiveServer2
root     62508 62122  0 11:38 pts/1    00:00:00 grep --color=auto hive

11. Run SQL statements

Create a database:
hive> create database mydb1;
OK
Time taken: 0.439 seconds

List databases:
hive> show databases;
OK
default
mydb1
Time taken: 0.628 seconds, Fetched: 2 row(s)

Switch to the database:
hive> use mydb1;
OK
Time taken: 0.032 seconds

List tables (empty for a new database):
hive> show tables;
OK
Time taken: 0.03 seconds

Create a table:
hive> create table userinfo(name string, age int);
OK
Time taken: 0.315 seconds
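
With no ROW FORMAT clause the table uses Hive's default text format, whose field delimiter is the invisible \001 (Ctrl-A) character; this explains the raw HDFS output in step 13. A sketch of an explicit alternative (table name hypothetical):

hive> create table userinfo_tsv(name string, age int) row format delimited fields terminated by '\t';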

List the tables again:
hive> show tables;
OK
userinfo
Time taken: 0.026 seconds, Fetched: 1 row(s)

Insert a row:
hive> insert into userinfo (name, age) values('huangliao',32);
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20191018171210_75af5c01-7850-47f8-bfbd-e32ad87f1601
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1571313686451_0002, Tracking URL = http://coordinate-presto:8088/proxy/application_1571313686451_0002/
Kill Command = /scsi/soft/hadoop-2.7.3//bin/hadoop job  -kill job_1571313686451_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2019-10-18 17:12:21,755 Stage-1 map = 0%,  reduce = 0%
2019-10-18 17:12:27,197 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.77 sec
MapReduce Total cumulative CPU time: 770 msec
Ended Job = job_1571313686451_0002
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://coordinate-presto:9000/user/hive/warehouse/mydb1.db/userinfo/.hive-staging_hive_2019-10-18_17-12-10_458_8495777321783536331-1/-ext-10000
Loading data to table mydb1.userinfo
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 0.77 sec   HDFS Read: 4207 HDFS Write: 83 SUCCESS
Total MapReduce CPU Time Spent: 770 msec
OK
Time taken: 18.252 seconds

Query the data:
hive> select * from userinfo;
OK
huangliao       32
Time taken: 0.093 seconds, Fetched: 1 row(s)
hive>

 

12. The data as seen in HDFS

Each Hive database maps to a <name>.db directory under the warehouse path (/user/hive/warehouse by default), each table to a subdirectory, and inserted rows to data files such as 000000_0:

[root@coordinate-presto bin]# hadoop fs -lsr /
lsr: DEPRECATED: Please use 'ls -R' instead.
drwx------   - root supergroup          0 2019-10-18 16:42 /tmp
drwx------   - root supergroup          0 2019-10-17 20:08 /tmp/hadoop-yarn
drwx------   - root supergroup          0 2019-10-17 20:08 /tmp/hadoop-yarn/staging
drwxr-xr-x   - root supergroup          0 2019-10-17 20:08 /tmp/hadoop-yarn/staging/history
drwxrwxrwt   - root supergroup          0 2019-10-17 20:08 /tmp/hadoop-yarn/staging/history/done_intermediate
drwxrwx---   - root supergroup          0 2019-10-18 17:12 /tmp/hadoop-yarn/staging/history/done_intermediate/root
-rwxrwx---   1 root supergroup      33569 2019-10-17 20:08 /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1571313686451_0001-1571314082289-root-word+count-1571314101917-1-1-SUCCEEDED-default-1571314088829.jhist
-rwxrwx---   1 root supergroup        347 2019-10-17 20:08 /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1571313686451_0001.summary
-rwxrwx---   1 root supergroup     117316 2019-10-17 20:08 /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1571313686451_0001_conf.xml
-rwxrwx---   1 root supergroup      22922 2019-10-18 17:12 /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1571313686451_0002-1571389934103-root-insert+into+userinfo+...lues%28%27huangliao%27%2C32%29%28Stage-1571389946602-1-0-SUCCEEDED-default-1571389940875.jhist
-rwxrwx---   1 root supergroup        376 2019-10-18 17:12 /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1571313686451_0002.summary
-rwxrwx---   1 root supergroup     281465 2019-10-18 17:12 /tmp/hadoop-yarn/staging/history/done_intermediate/root/job_1571313686451_0002_conf.xml
drwx------   - root supergroup          0 2019-10-17 20:08 /tmp/hadoop-yarn/staging/root
drwx------   - root supergroup          0 2019-10-18 17:12 /tmp/hadoop-yarn/staging/root/.staging
drwx-wx-wx   - root supergroup          0 2019-10-18 16:42 /tmp/hive
drwx------   - root supergroup          0 2019-10-18 17:12 /tmp/hive/root
drwxr-xr-x   - root supergroup          0 2019-10-18 17:06 /user
drwxr-xr-x   - root supergroup          0 2019-10-18 17:06 /user/hive
drwxr-xr-x   - root supergroup          0 2019-10-18 17:06 /user/hive/warehouse
drwxr-xr-x   - root supergroup          0 2019-10-18 17:10 /user/hive/warehouse/mydb1.db
drwxr-xr-x   - root supergroup          0 2019-10-18 17:12 /user/hive/warehouse/mydb1.db/userinfo
-rwxr-xr-x   1 root supergroup         13 2019-10-18 17:12 /user/hive/warehouse/mydb1.db/userinfo/000000_0

13. Read the data directly from HDFS

[root@coordinate-presto bin]# hadoop fs  -cat /user/hive/warehouse/mydb1.db/userinfo/000000_0
huangliao32
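
The name and age appear run together because the default \001 field delimiter is invisible. Piping through cat -A, which renders Ctrl-A as ^A, should make the separator visible:

hadoop fs -cat /user/hive/warehouse/mydb1.db/userinfo/000000_0 | cat -A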

This matches the result of running the SQL query in Hive:

hive> select * from userinfo;
OK
huangliao       32
Time taken: 0.093 seconds, Fetched: 1 row(s)

14. Reference: https://blog.csdn.net/huaidandaidai1/article/details/81056651

 
