Installing Hive 2.3.7 on Linux, including MySQL and Hadoop configuration


Prerequisites: MySQL and Hadoop are already installed and running normally.

After starting Hadoop, jps output like the following indicates it is running normally:

[root@localhost conf]# jps

1952 SecondaryNameNode

1527 NameNode

2104 ResourceManager

2393 NodeManager

18505 Jps

17039 RunJar


1. Download the installation package

Download URL: http://mirrors.hust.edu.cn/apache/hive/hive-2.3.7/
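For example, assuming the standard Apache tarball name for this release, the package can be fetched directly with wget:

[root@localhost hive]# wget http://mirrors.hust.edu.cn/apache/hive/hive-2.3.7/apache-hive-2.3.7-bin.tar.gz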

 

2. Extract the package

[root@localhost hive]# tar -zxf apache-hive-2.3.7-bin.tar.gz

3. Create the configuration file hive-site.xml

Configuration file path:

[root@localhost conf]# pwd

/root/hive/apache-hive-2.3.7-bin/conf

Create the file and add the following content:

[root@localhost conf]# touch hive-site.xml

[root@localhost conf]# vim hive-site.xml

<configuration>
        <property>
                <name>javax.jdo.option.ConnectionURL</name>
                <value>jdbc:mysql://localhost:3306/hive?characterEncoding=UTF-8</value>
                <description>JDBC connect string for a JDBC metastore</description>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionDriverName</name>
                <value>com.mysql.jdbc.Driver</value>
                <description>Driver class name for a JDBC metastore</description>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionUserName</name>
                <value>root</value>
                <description>username to use against metastore database</description>
        </property>
        <property>
                <name>javax.jdo.option.ConnectionPassword</name>
                <value>root</value>
                <description>password to use against metastore database</description>
        </property>
</configuration>

4. Edit hive-env.sh

File path:

[root@localhost conf]# pwd

/root/hive/apache-hive-2.3.7-bin/conf
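Note: hive-env.sh does not exist by default; it can be created from the template shipped in the conf directory (assuming the stock distribution layout):

[root@localhost conf]# cp hive-env.sh.template hive-env.sh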

Append the following at the end of the file:

export HADOOP_HOME=/opt/app/hadoop-2.7.7

5. Put the MySQL JDBC driver jar into Hive's lib directory

[root@localhost lib]# pwd

/root/hive/apache-hive-2.3.7-bin/lib

Note: this jar must be downloaded separately; it does not ship with MySQL.

Download URL:

https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.46.zip
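A minimal sketch of placing the driver, assuming the zip was downloaded to the current directory (the exact jar name inside the archive may differ slightly by packaging):

[root@localhost ~]# unzip mysql-connector-java-5.1.46.zip
[root@localhost ~]# cp mysql-connector-java-5.1.46/mysql-connector-java-5.1.46-bin.jar /root/hive/apache-hive-2.3.7-bin/lib/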

 

 

6. Configure environment variables

Check HIVE_HOME:

[root@localhost conf]# echo $HIVE_HOME

/root/hive/apache-hive-2.3.7-bin

Check HADOOP_HOME:

[root@localhost conf]# echo $HADOOP_HOME

/opt/app/hadoop-2.7.7

Add the following to ~/.bashrc (the file sourced in the next step):

export HADOOP_HOME=/opt/app/hadoop-2.7.7
export PATH=$HADOOP_HOME/bin:$PATH
export HIVE_HOME=/root/hive/apache-hive-2.3.7-bin
export PATH=$PATH:$HIVE_HOME/bin

 

After editing the file, source it so the changes take effect:

[root@localhost lib]# source ~/.bashrc

 

7. Verify the Hive installation

[root@localhost conf]# hive --help

Usage ./hive <parameters> --service serviceName <service parameters>

Service List: beeline cleardanglingscratchdir cli hbaseimport hbaseschematool help hiveburninclient hiveserver2 hplsql jar lineage llapdump llap llapstatus metastore metatool orcfiledump rcfilecat schemaTool version

Parameters parsed:

  --auxpath : Auxiliary jars

  --config : Hive configuration directory

  --service : Starts specific service/component. cli is default

Parameters used:

  HADOOP_HOME or HADOOP_PREFIX : Hadoop install directory

  HIVE_OPT : Hive options

For help on a particular service:

  ./hive --service serviceName --help

Debug help:  ./hive --debug --help

 

8. Configure the MySQL database

mysql> use mysql;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed

mysql> create database hive charset=utf8;

Query OK, 1 row affected (0.02 sec)

 

Grant remote access (replace 'PASSWORD' with your actual MySQL root password):

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'PASSWORD' WITH GRANT OPTION;

Query OK, 0 rows affected (0.01 sec)
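Optionally, the grant can be verified with SHOW GRANTS (an extra check, not part of the original steps):

mysql> SHOW GRANTS FOR 'root'@'%';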

 

mysql> quit

 

9. Initialize the metastore database

Note: with Hive versions before 2.x, skipping initialization is acceptable: Hive initializes the metastore automatically on first startup, though it does not create the full set of metastore tables up front; the rest are generated gradually during use. Even so, it is best to initialize explicitly. With Hive 2.x, the metastore must be initialized manually, using the command:

schematool -dbType mysql -initSchema

[root@localhost conf]# schematool -dbType mysql -initSchema

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/root/hive/apache-hive-2.3.7-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Metastore connection URL:  jdbc:mysql://localhost:3306/hive?characterEncoding=UTF-8

Metastore Connection Driver :  com.mysql.jdbc.Driver

Metastore connection User:  root

Starting metastore schema initialization to 2.3.0

Initialization script hive-schema-2.3.0.mysql.sql

Initialization script completed

schemaTool completed

 

 

10. Check the database

mysql> show databases;

+--------------------+

| Database           |

+--------------------+

| information_schema |

| hive               |

| mysql              |

| performance_schema |

| test               |

+--------------------+

5 rows in set (0.01 sec)

 

mysql> use hive

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

 

Database changed


mysql> show tables;

+---------------------------+

| Tables_in_hive            |

+---------------------------+

| AUX_TABLE                 |

| BUCKETING_COLS            |

| CDS                       |

| COLUMNS_V2                |

| COMPACTION_QUEUE          |

| COMPLETED_COMPACTIONS     |

| COMPLETED_TXN_COMPONENTS  |

| DATABASE_PARAMS           |

| DBS                       |

| DB_PRIVS                  |

| DELEGATION_TOKENS         |

| FUNCS                     |

| FUNC_RU                   |

| GLOBAL_PRIVS              |

| HIVE_LOCKS                |

| IDXS                      |

| INDEX_PARAMS              |

| KEY_CONSTRAINTS           |

| MASTER_KEYS               |

| NEXT_COMPACTION_QUEUE_ID  |

| NEXT_LOCK_ID              |

| NEXT_TXN_ID               |

| NOTIFICATION_LOG          |

| NOTIFICATION_SEQUENCE     |

| NUCLEUS_TABLES            |

| PARTITIONS                |

| PARTITION_EVENTS          |

| PARTITION_KEYS            |

| PARTITION_KEY_VALS        |

| PARTITION_PARAMS          |

| PART_COL_PRIVS            |

| PART_COL_STATS            |

| PART_PRIVS                |

| ROLES                     |

| ROLE_MAP                  |

| SDS                       |

| SD_PARAMS                 |

| SEQUENCE_TABLE            |

| SERDES                    |

| SERDE_PARAMS              |

| SKEWED_COL_NAMES          |

| SKEWED_COL_VALUE_LOC_MAP  |

| SKEWED_STRING_LIST        |

| SKEWED_STRING_LIST_VALUES |

| SKEWED_VALUES             |

| SORT_COLS                 |

| TABLE_PARAMS              |

| TAB_COL_STATS             |

| TBLS                      |

| TBL_COL_PRIVS             |

| TBL_PRIVS                 |

| TXNS                      |

| TXN_COMPONENTS            |

| TYPES                     |

| TYPE_FIELDS               |

| VERSION                   |

| WRITE_SET                 |

+---------------------------+

57 rows in set (0.00 sec)

 

 

11. Create the directories Hive needs on HDFS

Since HADOOP_HOME is set in the local environment, the commands can be invoked as below. Create the directories and grant group write permission:

$HADOOP_HOME/bin/hadoop fs -mkdir -p /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse

 

Creating the directories produced an error, roughly meaning the NameNode is in safe mode; turn safe mode off:

[root@localhost bin]# $HADOOP_HOME/bin/hadoop fs -mkdir  -p     /user/hive/warehouse

mkdir: Cannot create directory /user/hive/warehouse. Name node is in safe mode.

 

[root@localhost bin]# hadoop  dfsadmin -safemode leave

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

Safe mode is OFF
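As the DEPRECATED warning notes, the same operation via the non-deprecated hdfs command would be:

[root@localhost bin]# hdfs dfsadmin -safemode leave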

 

Then re-run the mkdir and chmod commands above.

 

12. Start Hive

[root@localhost bin]# hive

which: no hbase in (/opt/app/hadoop-2.7.7/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/java/jdk1.8.0_131/bin:/usr/java/jdk1.8.0_131/jre/bin:/root/hive/apache-hive-2.3.7-bin/bin:/root/bin:/usr/local/python3/bin:/usr/java/jdk1.8.0_131/bin:/usr/java/jdk1.8.0_131/jre/bin:/usr/java/jdk1.8.0_131/bin:/usr/java/jdk1.8.0_131/jre/bin:/usr/java/jdk1.8.0_131/bin:/usr/java/jdk1.8.0_131/jre/bin:/usr/java/jdk1.8.0_131/bin:/usr/java/jdk1.8.0_131/jre/bin:/usr/java/jdk1.8.0_131/bin:/usr/java/jdk1.8.0_131/jre/bin:/root/hive/apache-hive-2.3.7-bin/bin:/usr/java/jdk1.8.0_131/bin:/usr/java/jdk1.8.0_131/jre/bin:/usr/java/jdk1.8.0_131/bin:/usr/java/jdk1.8.0_131/jre/bin:/root/hive/apache-hive-2.3.7-bin/bin)

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/root/hive/apache-hive-2.3.7-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

 

Logging initialized using configuration in jar:file:/root/hive/apache-hive-2.3.7-bin/lib/hive-common-2.3.7.jar!/hive-log4j2.properties Async: true

Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.

hive>

 

 

13. Create a database named myhive

hive> create database myhive;

OK

Time taken: 7.153 seconds

 

14. Switch to the new database myhive

 

hive> use myhive;

OK

Time taken: 0.065 seconds

 

15. Show the current database

 

hive> select current_database();

OK

myhive

Time taken: 0.631 seconds, Fetched: 1 row(s)
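As a final smoke test (an illustrative example; the table name and columns are hypothetical, not part of the original walkthrough), you can create a table in myhive and list it:

hive> create table test_t1 (id int, name string);
hive> show tables;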
