一、Installing HBase 2.1.1
On the master node:
Edit the configuration
$ tar -zxvf hbase-2.1.1-bin.tar.gz -C /opt
$ cd /opt/hbase-2.1.1/conf/
$ vi hbase-env.sh
Append the following line to the end of the file:
export JAVA_HOME=/opt/jdk1.8.0_102
$ vi hbase-site.xml
Modify it as follows:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master,slave1,slave2</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/fay/zookeeper</value>
</property>
</configuration>
$ vi regionservers
Modify it as follows:
# remove localhost
slave1
slave2
$ vi backup-masters
This file does not exist by default; create it and add the following line:
slave1
Next, configure the environment variables:
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_251
export HADOOP_HOME=/home/hadoop/hadoop-2.8.5
export HBASE_HOME=/home/hbase/hbase-2.1.1
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin
Then run source /etc/profile so the variables take effect.
Next, copy the whole HBase directory to the other two nodes (and set the same environment variables there; see the sketch after the scp commands):
scp -r hbase-2.1.1 server2:/home/hbase/
scp -r hbase-2.1.1 server3:/home/hbase/
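A minimal sketch for propagating the environment variables, assuming the same directory layout on server2 and server3 and that you simply reuse the master's /etc/profile rather than editing each node by hand:
scp /etc/profile server2:/etc/profile
scp /etc/profile server3:/etc/profile
# then log in to each node and run: source /etc/profile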
With Hadoop already running, start HBase.
(Starting it on the master node is enough; here that is server1.)
cd into the bin directory:
[root@server1 bin]# ./start-hbase.sh
# stop
[root@server1 bin]# ./stop-hbase.sh
Use jps to confirm the processes are running:
[root@server1 bin]# jps
12274 Jps
24821 ResourceManager
24614 SecondaryNameNode
5513 QuorumPeerMain
24667 Kafka
24414 NameNode
[root@server2 ~]# jps
4453 QuorumPeerMain
16406 HRegionServer
17179 Jps
7374 Kafka
[root@server3 ~]# jps
22306 Jps
12692 QuorumPeerMain
15925 Kafka
21814 HRegionServer
hbase shell
Give it a quick test.
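A minimal smoke test inside the shell; the table name test and column family cf are just examples:
hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
hbase(main):004:0> disable 'test'
hbase(main):005:0> drop 'test'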
二、Setting up Hive 2.3.4
Offline deployment of MySQL 8.0
tar -xvf mysql-8.0.15-1.el7.x86_64.rpm-bundle.tar
rpm -qa | grep mariadb
This lists any installed mariadb packages.
rpm -e mariadb-libs-5.5.60-1.el7_5.x86_64 --nodeps
Uninstall mariadb.
rpm -qa | grep mariadb
Check that mariadb has been removed.
rpm -ivh mysql-community-common-8.0.15-1.el7.x86_64.rpm --nodeps --force
Install the common package.
Then install libs, client, and server, in that order (see the sketch below).
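A minimal sketch of the remaining installs, following the same pattern; the rpm file names are assumed to match the package names listed below:
rpm -ivh mysql-community-libs-8.0.15-1.el7.x86_64.rpm --nodeps --force
rpm -ivh mysql-community-client-8.0.15-1.el7.x86_64.rpm --nodeps --force
rpm -ivh mysql-community-server-8.0.15-1.el7.x86_64.rpm --nodeps --force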
rpm -qa | grep mysql
mysql-community-libs-8.0.15-1.el7.x86_64
mysql-community-client-8.0.15-1.el7.x86_64
mysql-community-server-8.0.15-1.el7.x86_64
mysql-community-common-8.0.15-1.el7.x86_64
Initialize: mysqld --initialize
[root@1234 mysql]# mysqld --initialize;
mysqld: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory
Install the missing dependency with yum install -y libaio, then rerun the initialization:
mysqld --initialize;
chown mysql:mysql /var/lib/mysql -R;
systemctl start mysqld;
cat /var/log/mysqld.log | grep password
to view the temporary root password.
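Log in with that temporary password (a sketch; the actual password is whatever mysqld.log printed):
mysql -uroot -p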
After logging in, change the password:
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your-password';
ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
mysql> CREATE USER 'your-user'@'server1' IDENTIFIED BY 'your-password';
mysql> create database hive_metedata;
mysql> grant all privileges on *.* to 'your-user'@'server1';
mysql> flush privileges;
Installing Hive
Unpack and rename:
[root@server3 home]# tar -zxvf apache-hive-2.3.4-bin.tar.gz
[root@server3 home]# mv apache-hive-2.3.4-bin hive
[root@server3 home]# rm -f apache-hive-2.3.4-bin.tar.gz
Edit the configuration files
[root@server3 home]# vim /etc/profile
#HIVE
export HIVE_HOME=/home/hive
export PATH=$PATH:$HIVE_HOME/bin
Finally, remember to run source /etc/profile.
Edit hive-env.sh
Set HADOOP_HOME and the related variables at the end of the file:
[root@server3 conf]# cp hive-env.sh.template hive-env.sh
[root@server3 conf]# vi hive-env.sh
...
# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
export HADOOP_HOME=/home/hadoop/hadoop-2.8.5
# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
export HIVE_CONF_DIR=/home/hive/conf
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
# export HIVE_AUX_JARS_PATH
export HIVE_AUX_JARS_PATH=/home/hive/lib
export JAVA_HOME=/usr/java/jdk1.8.0_251
Edit hive-site.xml
[root@server3 conf]# cp hive-default.xml.template hive-site.xml
[root@server3 conf]# vim hive-site.xml
# The file is very long, so only the key properties that need changing are copied here; find these entries and edit them.
<property>
<name>hive.exec.local.scratchdir</name>
<value>/home/server3/hive/tmp/server3</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/home/server3/hive/tmp/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<!-- password for the MySQL connection -->
<name>javax.jdo.option.ConnectionPassword</name>
<value>your-password</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1:3306/hive_metedata?createDatabaseIfNotExist=true</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<!-- username for the MySQL connection -->
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
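Since the driver class above is com.mysql.cj.jdbc.Driver, the MySQL Connector/J jar also has to be on Hive's classpath; a minimal sketch, assuming the jar has already been downloaded (the file name is illustrative):
cp mysql-connector-java-8.0.15.jar /home/hive/lib/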
Then replace every occurrence of ${system:java.io.tmpdir} in the file with /home/server3/hive/tmp (create the directory if it does not exist and give it read/write permission), and replace every ${system:user.name} with server3, as in the sketch below.
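A minimal sed sketch of the replacement, assuming hive-site.xml lives in /home/hive/conf and the tmp path above:
mkdir -p /home/server3/hive/tmp && chmod -R 777 /home/server3/hive/tmp
sed -i 's#${system:java.io.tmpdir}#/home/server3/hive/tmp#g' /home/hive/conf/hive-site.xml
sed -i 's#${system:user.name}#server3#g' /home/hive/conf/hive-site.xml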
Create the HDFS directories and grant permissions:
$ hadoop fs -mkdir -p /user/hive/
$ hadoop fs -mkdir -p /user/hive/warehouse
$ hadoop fs -chmod 777 /user/hive/
$ hadoop fs -chmod 777 /user/hive/warehouse
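Optionally double-check the directories and their permissions:
$ hadoop fs -ls /user/hive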
Initialize the Hive metastore schema, then start Hive:
[fay@master conf]$ schematool -dbType mysql -initSchema
[fay@master conf]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.8.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/opt/apache-hive-2.3.4-bin/lib/hive-common-2.3.4.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
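A quick sanity check at the prompt; the table name is just an example:
hive> show databases;
hive> create table test_tbl (id int, name string);
hive> show tables;
hive> drop table test_tbl;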