Hive Fully Distributed Deployment

Preface

I have already documented the detailed steps for a fully distributed Hadoop deployment at https://blog.csdn.net/zisefeizhu/article/details/84317520

Continuing from https://blog.csdn.net/zisefeizhu/article/details/84317520, the series then covered a distributed HBase deployment.

For the underlying principles, see this expert's post: https://blog.csdn.net/ForgetThatNight/article/details/79632364

The environment is the same as in the following link; anything already covered there is not repeated here: https://www.cnblogs.com/edisonchou/p/4440107.html

I have also documented the detailed steps for a fully distributed HBase deployment at https://blog.csdn.net/zisefeizhu/article/details/84635440

Software required this time: apache-hive-2.3.4-bin.tar.gz and mysql-connector-java-5.1.46.zip

Netdisk share:

Link: https://pan.baidu.com/s/1aZv8G-YucFfEf0KSN6xXng
Extraction code: h3a0

Environment Check

[hadoop@hadoop01 hadoop]$ sbin/start-all.sh

[hadoop@hadoop01 lib]$ jps

2002 SecondaryNameNode

5726 Jps

1800 NameNode

2157 ResourceManager
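
jps here only shows the daemons on the master node. A quick way to confirm that the DataNode and NodeManager daemons are up on the workers (a sketch, assuming the passwordless SSH set up during the Hadoop deployment and the worker hostnames hadoop02 and hadoop03):

for h in hadoop02 hadoop03; do echo "== $h =="; ssh "$h" jps; done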

Note

When HDFS starts up, it enters safe mode, during which the contents of the file system may not be modified or deleted until safe mode ends. Safe mode mainly exists so that, at startup, the system can check the validity of the data blocks on each DataNode and, according to policy, replicate or delete blocks as necessary. Safe mode can also be entered at runtime via a command. In practice, attempts to modify or delete files while the system is starting up will produce a "safe mode does not allow modification" error; just wait a moment and retry.

For example:

hadoop fs -mkdir -p /user/hive/warehouse

mkdir: Cannot create directory /user/hive/warehouse. Name node is in safe mode.

Solution

You can either wait for the NameNode to exit safe mode on its own, or use a manual command to leave safe mode:

hdfs dfsadmin -safemode leave
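
Before forcing it out, you can also confirm the NameNode's current state; -safemode get is the standard companion subcommand:

hdfs dfsadmin -safemode get   # prints "Safe mode is ON" or "Safe mode is OFF"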

Start the Deployment

The following operations are performed as root.

Install the database [here I install it in the simplest possible way]

Because the database CentOS supports by default is MariaDB, MariaDB needs to be removed first:

rpm -qa | grep mariadb

Remove the database packages:

yum -y remove mari*

Delete the database files:

rm -rf /var/lib/mysql/*

Also, CentOS 7 has no MySQL repository by default, so here is an RPM repository source you can use:

wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum install mysql-server
rpm -qi mysql-community-server
systemctl start mysqld.service                        # start the database
cat /var/log/mysqld.log | grep 'A temporary password'
mysql
mysqladmin -u root password '1'                       # set the database password
mysql -u root -p
systemctl enable mysqld.service                       # start on boot
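
Before continuing, it is worth a quick sanity check that the service is running and the new root password works (the password '1' matches what was set above):

systemctl status mysqld.service             # should report active (running)
mysql -uroot -p1 -e 'select version();'     # should print the server version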

The following operations are performed as the hadoop user.

mysql -uroot -p   # verify that you can also log in as a normal user

rz   # upload the Hive package you downloaded to /home/hadoop

tar xf apache-hive-2.3.4-bin.tar.gz   # extract it

ln -s apache-hive-2.3.4-bin hive   # create a symlink

Set the environment variables

[hadoop@hadoop01 ~]$ vim ~/.bash_profile

# add the following

#hive

export HIVE_HOME=/home/hadoop/hive

export PATH=$PATH:${HIVE_HOME}/bin

Apply the configuration file:

source ~/.bash_profile

Check the version:

hive --version

Hive 2.3.4

Git git://daijymacpro-2.local/Users/daijy/commit/hive -r 56acdd2120b9ce6790185c679223b8b5e884aaf2

Compiled by daijy on Wed Oct 31 14:20:50 PDT 2018

From source with checksum 9f2d17b212f3a05297ac7dd40b65bab0

Configure Hive

Create the hive database and grant the appropriate privileges

mysql -uroot -p

Enter password:

# create the hive database, in preparation for the steps below

mysql> create database hive;

Query OK, 1 row affected (0.00 sec)

mysql> create user 'hive'@'hadoop01' identified by 'hive';

# grant privileges; hadoop01 is the local hostname, already resolved in /etc/hosts

mysql> grant all privileges on *.* to 'hive'@'hadoop01' identified by 'hive';

mysql> flush privileges;

Query OK, 0 rows affected (0.01 sec)

exit
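
It is worth confirming that the new account can actually log in with the grant above; -h hadoop01 matches the 'hive'@'hadoop01' entry just created:

mysql -h hadoop01 -uhive -phive -e 'show databases;'   # the hive database should be listed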

Edit the Hive configuration files

Change into the conf directory and copy hive-default.xml.template to hive-site.xml:

[hadoop@hadoop01 ~]$ cd hive/conf/

[hadoop@hadoop01 conf]$ cp hive-default.xml.template hive-site.xml

Edit hive-site.xml

[hadoop@hadoop01 conf]$ vim hive-site.xml

# replace the contents of hive-site.xml with the following

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<!--url-->

<property>

<name>javax.jdo.option.ConnectionURL</name>

<value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>

<description>JDBC connect string for a JDBC metastore</description>

</property>

<!--MySQL username-->

<property>

<name>javax.jdo.option.ConnectionUserName</name>

<value>hive</value>

<description>username to use against metastore database</description>

</property>

<!--password of the hive user in MySQL-->

<property>

<name>javax.jdo.option.ConnectionPassword</name>

<value>hive</value>

<description>password to use against metastore database</description>

</property>

<!--MySQL driver-->

<property>

<name>javax.jdo.option.ConnectionDriverName</name>

<value>com.mysql.jdbc.Driver</value>

</property>

<property>

<name>hive.downloaded.resources.dir</name>

<value>/home/hadoop/hive/tmp/${hive.session.id}_resources</value>

</property>

</configuration>
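
Note that hive.downloaded.resources.dir points under /home/hadoop/hive/tmp, which does not exist in a fresh unpack; creating it up front avoids a missing-directory error later (a small extra precaution):

mkdir -p /home/hadoop/hive/tmp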

Edit hive-env.sh

[hadoop@hadoop01 conf]$ cp hive-env.sh.template hive-env.sh

[hadoop@hadoop01 conf]$ vim hive-env.sh

# append the Java, Hadoop, and Hive environment variables at the end of the file

export JAVA_HOME=/home/hadoop/java

export HADOOP_HOME=/home/hadoop/hadoop

export HIVE_HOME=/home/hadoop/hive

Upload the MySQL driver package to the lib directory, unzip it, and delete the original archive, keeping only mysql-connector-java-5.1.46-bin.jar

[hadoop@hadoop01 lib]$ ls | grep mysql-

mysql-connector-java-5.1.46-bin.jar

mysql-metadata-storage-0.9.2.jar
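
For reference, the unpacking steps might look like this (a sketch, assuming the zip was uploaded to /home/hadoop and unpacks into a mysql-connector-java-5.1.46/ directory containing the -bin.jar):

cd /home/hadoop
unzip mysql-connector-java-5.1.46.zip
cp mysql-connector-java-5.1.46/mysql-connector-java-5.1.46-bin.jar hive/lib/
rm -rf mysql-connector-java-5.1.46 mysql-connector-java-5.1.46.zip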

Initialize the metastore database

[hadoop@hadoop01 lib]$ schematool -dbType mysql -initSchema

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/home/hadoop/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Metastore connection URL: jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true

Metastore Connection Driver : com.mysql.jdbc.Driver

Metastore connection User: hive

Starting metastore schema initialization to 2.3.0

Initialization script hive-schema-2.3.0.mysql.sql

Initialization script completed

schemaTool completed
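
To confirm that initialization really created the metastore schema, you can list the tables in the hive database (DBS, TBLS, and so on are part of the standard Hive schema):

mysql -h hadoop01 -uhive -phive hive -e 'show tables;'   # should list DBS, TBLS, and the other metastore tables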

Run hive

[hadoop@hadoop01 ~]$ hive   # note: a long wait here is normal (look up the reason if curious)

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/home/hadoop/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.7/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/home/hadoop/hive/lib/hive-common-2.3.4.jar!/hive-log4j2.properties Async: true

Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.

hive> create database hive_2;   # create the hive_2 database

OK   # "OK" indicates that Hive is configured correctly on the master node

View the databases you created:

hive> show databases;

OK

default

hive_2

Time taken: 18.034 seconds, Fetched: 2 row(s)

Of course, you can also view this in the browser (via the HDFS web UI).
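
From the command line, the new database should likewise appear as a directory under the default warehouse path (assuming the default hive.metastore.warehouse.dir of /user/hive/warehouse):

hadoop fs -ls /user/hive/warehouse   # should list hive_2.db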

 

Hive Client Configuration

Copy the hive files to the slave nodes

[hadoop@hadoop01 ~]$ scp -r hive hadoop@10.0.0.92:/home/hadoop

[hadoop@hadoop01 ~]$ scp -r hive hadoop@10.0.0.94:/home/hadoop

Copy the environment variables to the slave nodes

[hadoop@hadoop01 ~]$ scp ~/.bash_profile hadoop@10.0.0.92:~
[hadoop@hadoop01 ~]$ scp ~/.bash_profile hadoop@10.0.0.94:~

Apply the environment variables on the slave nodes

source ~/.bash_profile

Modify hive-site.xml on the slave nodes

# hadoop01 is the Hive server hostname; both hadoop02 and hadoop03 need this change

[hadoop@hadoop02 ~]$ vim hive/conf/hive-site.xml

<configuration>

<property>

<name>hive.metastore.uris</name>

<value>thrift://hadoop01:9083</value>

</property>

</configuration>

[hadoop@hadoop03 ~]$ vim hive/conf/hive-site.xml

<configuration>

<property>

<name>hive.metastore.uris</name>

<value>thrift://hadoop01:9083</value>

</property>

</configuration>

Start the Hive server

# start it in the background, otherwise status messages keep printing to the terminal

[hadoop@hadoop01 ~]$ hive --service metastore &
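
If you want the metastore to survive the terminal being closed, a nohup variant works as well, and you can confirm the Thrift service is listening on port 9083 (a sketch; the log file name is arbitrary):

nohup hive --service metastore > ~/metastore.log 2>&1 &
ss -lnt | grep 9083   # the metastore port should be in LISTEN state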

Check the jps processes

[hadoop@hadoop01 hadoop]$ jps

2002 SecondaryNameNode

6021 Jps

4006 RunJar

1800 NameNode

2157 ResourceManager
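
With the metastore running (the RunJar process above), a client node should now see the same databases; a quick end-to-end check from a slave node:

[hadoop@hadoop02 ~]$ hive -e "show databases;"   # should list default and hive_2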

 

OK, at this point the fully distributed Hive deployment is complete.

 
