5. Setting Up Hive

I. Environment Preparation

1. Check that Hadoop is running

  • Example jps output from a working installation:
[root@hadoop ~]# jps
116946 ResourceManager
115910 NameNode
20888 Jps
116104 DataNode
116682 SecondaryNameNode
117070 NodeManager
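Rather than eyeballing the jps listing, a small shell check can flag missing daemons. This is a hypothetical helper, not part of Hadoop; the sample listing is copied from the output above, and on a live node you would feed it `$(jps)` instead.

```shell
# Hypothetical helper: report which required Hadoop daemons are absent
# from a jps listing.
check_daemons() {
  listing="$1"
  missing=""
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    # -w: match whole words, so "NameNode" does not match "SecondaryNameNode"
    printf '%s\n' "$listing" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all daemons running"
  else
    echo "missing:$missing"
  fi
}

# Sample listing from the output above; on a real node use: check_daemons "$(jps)"
sample='116946 ResourceManager
115910 NameNode
116104 DataNode
116682 SecondaryNameNode
117070 NodeManager'
check_daemons "$sample"
```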

2. Check that MySQL is installed

  • systemctl status mysqld.service # check the service status
[root@hadoop ~]# systemctl status mysqld.service 
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2023-11-19 22:46:33 CST; 21h ago
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html
  Process: 1478 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 1037 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 1485 (mysqld)
    Tasks: 27
   CGroup: /system.slice/mysqld.service
           └─1485 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid

Nov 19 22:46:30 hadoop systemd[1]: Starting MySQL Server...
Nov 19 22:46:33 hadoop systemd[1]: Started MySQL Server.

  • Log in to the database:
    mysql -uroot -p123456
[root@hadoop ~]# mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.44 MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit;
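Hive will later connect to this MySQL server over JDBC as root from the host `hadoop` (see the hive-site.xml configuration in Part II). If schema initialization fails with an access-denied error, statements along these lines at the mysql> prompt may help; this is a sketch assuming MySQL 5.7 and the password 123456 used throughout this guide:

```sql
-- Sketch, assuming MySQL 5.7: allow root to connect from the host "hadoop".
GRANT ALL PRIVILEGES ON *.* TO 'root'@'hadoop' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;
```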

  • Installing MySQL itself is covered in a separate tutorial.

II. Hive Installation and Configuration

1. Upload and extract with FinalShell

  • Upload:
[root@hadoop ~]# ls
apache-hive-2.3.7-bin.tar.gz


  • Extract and rename:
    tar -zxvf apache-hive-2.3.7-bin.tar.gz -C /opt/
    mv /opt/apache-hive-2.3.7-bin/ /opt/hive
[root@hadoop ~]# cd /opt/
[root@hadoop opt]# ls
apache-hive-2.3.7-bin  hadoop  hd_space  jdk  rh
[root@hadoop opt]# mv apache-hive-2.3.7-bin/ hive

2. Configure Hive

  • Change into the conf directory: cd /opt/hive/conf/
[root@hadoop opt]# cd /opt/hive/conf/
[root@hadoop conf]# ls
beeline-log4j2.properties.template  hive-exec-log4j2.properties.template  llap-cli-log4j2.properties.template
hive-default.xml.template           hive-log4j2.properties.template       llap-daemon-log4j2.properties.template
hive-env.sh.template                ivysettings.xml                       parquet-logging.properties
[root@hadoop conf]# 
  • Create the configuration file: vim hive-site.xml
<configuration>
    <property>
        <name>system:java.io.tmpdir</name>
        <!-- tmp directory under the Hive installation -->
        <value>/opt/hive/tmp</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <!-- database user -->
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <!-- database password -->
        <value>123456</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <!-- replace the host ("hadoop" here) with the hostname mapped to your server's IP -->
        <value>jdbc:mysql://hadoop:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
</configuration>
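A typo in a property `<name>` is an easy way to get a confusing failure later. As a quick sanity check (a sketch, not an official tool), you can grep the file for the property names used above. Here the config is written to a temp file so the check reads standalone; on a real node you would point CONF at /opt/hive/conf/hive-site.xml instead.

```shell
# Sketch: verify hive-site.xml declares every expected property name.
# On a real node: CONF=/opt/hive/conf/hive-site.xml
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property><name>system:java.io.tmpdir</name><value>/opt/hive/tmp</value></property>
  <property><name>javax.jdo.option.ConnectionUserName</name><value>root</value></property>
  <property><name>javax.jdo.option.ConnectionPassword</name><value>123456</value></property>
  <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://hadoop:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value></property>
  <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
</configuration>
EOF
result=""
for p in system:java.io.tmpdir \
         javax.jdo.option.ConnectionUserName \
         javax.jdo.option.ConnectionPassword \
         javax.jdo.option.ConnectionURL \
         javax.jdo.option.ConnectionDriverName; do
  if grep -q "<name>$p</name>" "$CONF"; then
    result="$result ok:$p"
  else
    result="$result MISSING:$p"
  fi
done
echo "$result"
rm -f "$CONF"
```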

3. Upload the JDBC driver

  • Enter Hive's library directory:
    cd /opt/hive/lib/
  • Upload the MySQL JDBC driver (mysql-connector-java-5.1.47.jar)

Note: be sure to verify that the upload succeeded!

[root@hadoop lib]# ll mysql-connector-java-5.1.47.jar 
-rw-r--r--. 1 root root 1007502 Nov 20 21:00 mysql-connector-java-5.1.47.jar

4. Edit Hadoop's core-site.xml

  • Enter the directory: cd /opt/hadoop/etc/hadoop/
  • Edit the file: vim core-site.xml
  • Add the following (in hadoop.proxyuser.<username>.hosts/groups, the middle segment is the OS user allowed to proxy; restart Hadoop afterwards so the change takes effect):
<property>
	<name>hadoop.proxyuser.hadoop.groups</name>
	<value>*</value>
</property>
<property>
	<name>hadoop.proxyuser.hadoop.hosts</name>
	<value>*</value>
</property>

5. Configure environment variables

  • Edit the file: vim /etc/profile
  • Append the following to the file:
export HIVE_HOME=/opt/hive
export PATH=${HIVE_HOME}/bin:$PATH
  • Reload it: source /etc/profile

III. Starting Hive

1. Initialize the metastore schema in MySQL

Note: MySQL must be running, and the configuration and JDBC driver above must be correct; otherwise this step will fail.

  • Run: schematool -dbType mysql -initSchema
[root@hadoop lib]# schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://hadoop:3306/hive?createDatabaseIfNotExist=true&useSSL=false
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       root
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed  # this line means initialization succeeded
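To confirm the schema really landed in MySQL, you can peek at the hive database from the mysql> prompt; DBS, TBLS, and COLUMNS_V2 are standard metastore tables in Hive 2.x:

```sql
-- At the mysql> prompt: the metastore schema should now exist.
USE hive;
SHOW TABLES;  -- expect metastore tables such as DBS, TBLS, COLUMNS_V2
```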

2. Test the hive command

[root@hadoop hadoop]# hive
which: no hbase in (/opt/hive/bin:.:/opt/hadoop/bin:/opt/hadoop/sbin:/opt/jdk/bin:.:/opt/hadoop/bin:/opt/hadoop/sbin:/opt/jdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-2.3.7.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> 
  • Test with the command show databases;

If it fails, the error looks like this (typically the metastore was not initialized, or the MySQL connection settings are wrong):

hive> show databases;
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

A successful run looks like this:

hive> show databases;
OK
default
Time taken: 3.167 seconds, Fetched: 1 row(s)
hive> 
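Beyond show databases, a short end-to-end smoke test at the hive> prompt confirms that table creation and queries go through the metastore; the table name demo is arbitrary:

```sql
-- At the hive> prompt: create, inspect, and drop a throwaway table.
CREATE TABLE demo (id INT, name STRING);
SHOW TABLES;   -- demo should appear
DROP TABLE demo;
```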

IV. Summary

1. First make sure Hadoop and MySQL are both working.
2. The JDBC driver's location and the database URL, user name, and password in the configuration must all be correct.
3. Initialize the metastore (which connects Hive to MySQL) first; only after that succeeds, enter the hive CLI and test.
