Hadoop 3.3.1 Setup Process (HA)

This article is summarized from: https://blog.csdn.net/Akari0216/article/details/107861974 and https://blog.csdn.net/wangkai_123456/article/details/87185339


CentOS version is 7.9, Java is the system-provided OpenJDK 1.8.0, and the Hadoop version is 3.3.1.
The cluster consists of 6 hosts: two serve the HA roles (NameNode HA and ResourceManager HA), and the rest are worker nodes.

IP              Name      Role
10.110.147.191  master1   NameNode (active) / ResourceManager (active) / DFSZKFailoverController (HA)
10.110.147.192  master2   zk / NameNode (standby) / ResourceManager (standby) / JournalNode / historyserver / DFSZKFailoverController
10.110.150.130  node1     zk / DataNode / NodeManager / JournalNode
10.110.150.131  node2     zk / DataNode / NodeManager / JournalNode
10.110.150.132  node3     zk / DataNode / NodeManager / JournalNode
10.110.150.181  node4     zk / DataNode / NodeManager / JournalNode

Download Hadoop, Spark, Hive, ZooKeeper, etc. The directory layout is as follows:

/home/hadoop/
		├── hadoop-3.3.1
		├── source
		├── spark-3.1.2-bin-hadoop3.2
		└── zookeeper-3.7.0

Create the following directories for storing the various kinds of data:

/data
	└── hadoop
	    ├── dfs
	    │   ├── data
	    │   └── name
	    ├── hdfs
	    ├── history
	    │   ├── done
	    │   └── done_intermediate
	    ├── tmp
	    ├── var
	    ├── yarn
	    │   └── nm
	    └── zk
	        ├── data
	        ├── journaldata
	        └── logs
#run this on every node
mkdir -p /data/hadoop/dfs/data
mkdir -p /data/hadoop/dfs/name
mkdir -p /data/hadoop/history/done
mkdir -p /data/hadoop/history/done_intermediate
mkdir -p /data/hadoop/yarn/nm
mkdir -p /data/hadoop/zk/data
mkdir -p /data/hadoop/zk/journaldata
mkdir -p /data/hadoop/zk/logs
mkdir -p /data/hadoop/yarn/staging
mkdir -p /data/hadoop/tmp
mkdir -p /data/hadoop/var	   

[hadoop@node4 logs]$ mkdir -p /data/spark/worker/data
[hadoop@node4 logs]$ mkdir -p /data/spark/local/data
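These directories must exist on every node. A hedged sketch of creating them in one pass over SSH, assuming the passwordless SSH for the hadoop user configured in a later section is already in place:

for host in master1 master2 node1 node2 node3 node4; do
  ssh "$host" 'mkdir -p /data/hadoop/dfs/{data,name} \
                        /data/hadoop/history/{done,done_intermediate} \
                        /data/hadoop/yarn/{nm,staging} \
                        /data/hadoop/zk/{data,journaldata,logs} \
                        /data/hadoop/tmp /data/hadoop/var'
done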

Check the OS version

  1. Install redhat-lsb
yum install redhat-lsb-core -y
  2. Check the system version

[hadoop@master2 dfs]$ lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.9.2009 (Core)
Release:        7.9.2009
Codename:       Core

Install Java

  1. Check the Java packages bundled with the system
[root@node5 ~]# rpm -qa | grep java
javapackages-tools-3.4.1-11.el7.noarch
tzdata-java-2021a-1.el7.noarch
python-javapackages-3.4.1-11.el7.noarch
java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64
java-1.8.0-openjdk-devel-1.8.0.292.b10-1.el7_9.x86_64
java-1.8.0-openjdk-headless-1.8.0.292.b10-1.el7_9.x86_64
  2. Remove all of the bundled Java packages
 rpm -e --nodeps javapackages-tools-3.4.1-11.el7.noarch
 rpm -e --nodeps tzdata-java-2021a-1.el7.noarch
 rpm -e --nodeps python-javapackages-3.4.1-11.el7.noarch
 rpm -e --nodeps java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.x86_64
 rpm -e --nodeps java-1.8.0-openjdk-devel-1.8.0.292.b10-1.el7_9.x86_64
 rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.292.b10-1.el7_9.x86_64
  3. Remove any Java installed via yum
yum -y remove java
  4. Install Java
 yum -y install java-1.8.0-openjdk-devel.x86_64
  5. Set environment variables:
    Find where java is installed:
[hadoop@master2 dfs]$ which java
/bin/java
[hadoop@master2 dfs]$ ls -al /bin/java
lrwxrwxrwx 1 root root 22 Aug 12 16:07 /bin/java -> /etc/alternatives/java
[hadoop@master2 dfs]$ ls -al /etc/alternatives/java
lrwxrwxrwx 1 root root 73 Aug 12 16:07 /etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64/jre/bin/java


vim /etc/profile

export ZK_HOME=/home/hadoop/zookeeper-3.7.0
export HADOOP_HOME=/home/hadoop/hadoop-3.3.1
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64/jre
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HIVE_HOME=/home/hadoop/hive-3.1.2
export SPARK_HOME=/home/hadoop/spark-3.1.2-bin-hadoop3.2
export CLASSPATH=$JAVA_HOME/lib:$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH
export PATH=$PATH:$ZK_HOME/bin:/usr/local/firefox:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin/:$SPARK_HOME/bin:$SPARK_HOME/sbin:$HIVE_HOME/bin

source /etc/profile
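A quick sanity check that the variables took effect (standard commands, nothing specific to this setup):

[hadoop@master1 ~]$ java -version
[hadoop@master1 ~]$ hadoop version
[hadoop@master1 ~]$ echo $JAVA_HOME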

Create the hadoop user

After the installation succeeds, create a new user named hadoop, set its password, and grant it sudo privileges.

[root@localhost ~]$ useradd hadoop
[root@localhost ~]$ passwd hadoop
[root@localhost ~]$ chmod u+w /etc/sudoers
[root@localhost ~]$ vim /etc/sudoers # below "root ALL=(ALL) ALL" add "hadoop ALL=(ALL) ALL"
[root@localhost ~]$ chmod u-w /etc/sudoers
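A slightly safer alternative (optional, not from the original write-up) is visudo, which validates the syntax before saving:

[root@localhost ~]$ visudo
# below "root    ALL=(ALL)       ALL" add:
# hadoop  ALL=(ALL)       ALL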

Set hostnames

  1. On each of the 6 machines, set its own hostname:
hostnamectl set-hostname master1
hostnamectl set-hostname master2
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
hostnamectl set-hostname node4
  2. On each of the 6 machines, edit /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master1  # use the machine's own hostname on each host
  3. Edit /etc/hosts to map hostnames to IP addresses
# used by hadoop
10.110.147.191 master1
10.110.147.192 master2
10.110.150.130 node1
10.110.150.131 node2
10.110.150.132 node3
10.110.150.118 node4
# used by zookeeper
10.110.147.192 zk01
10.110.150.130 zk02
10.110.150.131 zk03
10.110.150.132 zk04
10.110.150.118 zk05

Set up passwordless SSH

Switch to the hadoop user; all of the following operations are performed as the hadoop user.

  1. On master1, run:
[hadoop@master1 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:yrP0mH2tHClXx7PzgrfJ3/GFD82yTEbyvIBmmqNWgwk hadoop@master1
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|                 |
|    E        .   |
|     . oS   o =  |
|     .o.o  + * * |
|      =...B...@.+|
|     ..B.O..o*oX=|
|     .=.=oo. .B+*|
+----[SHA256]-----+

The public key id_rsa.pub and private key id_rsa are generated under /home/hadoop/.ssh.
Append the generated public key to authorized_keys:

[hadoop@master1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@master1 ~]$ chmod 0600 ~/.ssh/authorized_keys
  2. On master2/node1/node2/node3/node4, run:
[hadoop@node1 .ssh]$ ssh-keygen -t rsa
[hadoop@node1 .ssh]$ ssh-copy-id -i master1 # you can watch authorized_keys on master1 being updated
  3. On master1, distribute authorized_keys to the other machines:
[hadoop@master1 ~]$ scp ~/.ssh/authorized_keys master2:/home/hadoop/.ssh/
[hadoop@master1 ~]$ scp ~/.ssh/authorized_keys node1:/home/hadoop/.ssh/
[hadoop@master1 ~]$ scp ~/.ssh/authorized_keys node2:/home/hadoop/.ssh/
[hadoop@master1 ~]$ scp ~/.ssh/authorized_keys node3:/home/hadoop/.ssh/
[hadoop@master1 ~]$ scp ~/.ssh/authorized_keys node4:/home/hadoop/.ssh/
  4. Verify that master1/master2/node1/node2/node3/node4 can all ssh to each other without a password; a quick check loop is sketched below.
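A minimal check, assuming the hostnames defined in /etc/hosts above; BatchMode makes ssh fail instead of prompting if a key is missing:

for host in master1 master2 node1 node2 node3 node4; do
  ssh -o BatchMode=yes "$host" "echo ok from \$(hostname)"
done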

Install and start ZooKeeper

Reference: https://blog.csdn.net/llwy1428/article/details/111601567
On master2:

  1. Download zookeeper-3.7.0 from the official site and extract it locally.
  2. Place the extracted package under /home/hadoop:
/home/hadoop
		├── hadoop-3.3.1
		├── source
		├── spark-3.1.2-bin-hadoop3.2
		└── zookeeper-3.7.0
  3. Create the ZooKeeper data and log directories.
    The directory layout is the one shown earlier:
/data
	└── hadoop
	    ├── dfs
	    │   ├── data
	    │   └── name
	    ├── hdfs
	    ├── history
	    │   ├── done
	    │   └── done_intermediate
	    ├── tmp
	    ├── var
	    ├── yarn
	    │   └── nm
	    └── zk
	        ├── data
	        ├── journaldata
	        └── logs

  4. Go into the configuration directory

[hadoop@node1 data]$ cd /home/hadoop/zookeeper-3.7.0/conf/
[hadoop@node1 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@node1 conf]$ ls
configuration.xsl  log4j.properties  zoo.cfg  zoo_sample.cfg
[hadoop@node1 conf]$ vim zoo.cfg
# edit zoo.cfg as follows
dataDir=/data/hadoop/zk/data/
dataLogDir=/data/hadoop/zk/logs
server.1=zk01:2888:3888
server.2=zk02:2888:3888
server.3=zk03:2888:3888
server.4=zk04:2888:3888
server.5=zk05:2888:3888
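# port 2888 is used by followers to connect to the leader; 3888 is used for leader election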
# the port at which the clients will connect
clientPort=2181
quorumListenOnAllIPs=true
  5. Copy the configured zookeeper directory to node1/node2/node3/node4
scp -r /home/hadoop/zookeeper-3.7.0 node1:/home/hadoop/
scp -r /home/hadoop/zookeeper-3.7.0 node2:/home/hadoop/
scp -r /home/hadoop/zookeeper-3.7.0 node3:/home/hadoop/
scp -r /home/hadoop/zookeeper-3.7.0 node4:/home/hadoop/
#On master2/node1/node2/node3/node4, set up the myid file
[hadoop@master2 hadoop]$ echo 1 > /data/hadoop/zk/data/myid
[hadoop@node1 hadoop]$ echo 2 > /data/hadoop/zk/data/myid
[hadoop@node2 hadoop]$ echo 3 > /data/hadoop/zk/data/myid
[hadoop@node3 hadoop]$ echo 4 > /data/hadoop/zk/data/myid
[hadoop@node4 hadoop]$ echo 5 > /data/hadoop/zk/data/myid
  6. Start ZooKeeper on master2/node1/node2/node3/node4
[hadoop@master2 bin]$ /home/hadoop/zookeeper-3.7.0/bin/zkServer.sh start
[hadoop@node1 bin]$ /home/hadoop/zookeeper-3.7.0/bin/zkServer.sh start
[hadoop@node2 bin]$ /home/hadoop/zookeeper-3.7.0/bin/zkServer.sh start
[hadoop@node3 bin]$ /home/hadoop/zookeeper-3.7.0/bin/zkServer.sh start
[hadoop@node4 bin]$ /home/hadoop/zookeeper-3.7.0/bin/zkServer.sh start
  7. Troubleshooting:
    The following error was encountered:
2017-07-05 23:40:14,794 [myid:0] - ERROR [/47.94.204.115:3888:QuorumCnxManager$Listener@763] - Exception while listening
java.net.BindException: Cannot assign requested address (Bind failed)
        at java.net.PlainSocketImpl.socketBind(Native Method)
        at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
        at java.net.ServerSocket.bind(ServerSocket.java:375)
        at java.net.ServerSocket.bind(ServerSocket.java:329)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager$Listener.run(QuorumCnxManager.java:742)
2017-07-05 23:40:14,807 [myid:0] - INFO  [QuorumPeer[myid=0]/0.0.0.0:2181:QuorumPeer@865] - LOOKING

Fix: add quorumListenOnAllIPs=true to zoo.cfg.
The root cause is that node1-node4 are virtual machines built on OpenStack and have no NIC bound to their external IPs, which triggers this error.
Reference: https://blog.csdn.net/u014284000/article/details/74508963
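Once all five ZooKeeper instances are up, confirm that the ensemble has elected a leader (standard zkServer.sh usage; the host list matches this deployment):

for host in master2 node1 node2 node3 node4; do
  ssh "$host" '/home/hadoop/zookeeper-3.7.0/bin/zkServer.sh status'
done
# expected: one node reports "Mode: leader", the others "Mode: follower"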

Configure Hadoop

With all of the preparation above done, we finally get to the main task: configuring Hadoop.

  1. Configure hadoop-env.sh and add the JAVA_HOME variable:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64/jre
  2. Configure hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--  Licensed under the Apache License, Version 2.0 (the "License");  you may not use this file except in compliance with the License.  You may obtain a copy of the License at    http://www.apache.org/licenses/LICENSE-2.0  Unless required by applicable law or agreed to in writing, software  distributed under the License is distributed on an "AS IS" BASIS,  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  See the License for the specific language governing permissions and  limitations under the License. See accompanying LICENSE file.--><!-- Put site-specific property overrides in this file. -->
<configuration>
     <!-- Set the HDFS nameservice to ns1; it must match the value in core-site.xml -->
    <property>
            <name>dfs.nameservices</name>
            <value>ns1</value>
    </property>
    <!-- ns1 has two NameNodes: nn1 and nn2 -->
    <property>
            <name>dfs.ha.namenodes.ns1</name>
            <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
            <name>dfs.namenode.rpc-address.ns1.nn1</name>
            <value>master1:9000</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
            <name>dfs.namenode.http-address.ns1.nn1</name>
            <value>master1:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
            <name>dfs.namenode.rpc-address.ns1.nn2</name>
            <value>master2:9000</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
            <name>dfs.namenode.http-address.ns1.nn2</name>
            <value>master2:50070</value>
    </property>


 <!-- Where the NameNodes' shared edit log is stored on the JournalNodes -->
    <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://zk01:8485;zk02:8485;zk03:8485;zk04:8485;zk05:8485/ns1</value>
    </property>
    <!-- Where each JournalNode stores its data on local disk -->
    <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/data/hadoop/zk/journaldata</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
    </property>
    <!-- Failover proxy provider used by clients to find the active NameNode -->
    <property>
            <name>dfs.client.failover.proxy.provider.ns1</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; list multiple methods on separate lines, one per line -->
    <property>
            <name>dfs.ha.fencing.methods</name>
            <value>
                    sshfence
                    shell(/bin/true)
            </value>
    </property>
    <!-- The sshfence method requires passwordless ssh -->
    <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- Connection timeout for the sshfence method -->
    <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
    </property>
    <!-- When a DataNode process dies or a network fault keeps it from reaching the NameNode, the NameNode does not
             mark the node dead immediately; it waits for a timeout. The HDFS default is 10 minutes + 30 seconds.
    Writing the timeout as timeout, the formula is:
    timeout = 2 * dfs.namenode.heartbeat.recheck-interval + 10 * dfs.heartbeat.interval -->
    <property>
            <name>dfs.namenode.heartbeat.recheck-interval</name>
            <!-- unit: milliseconds -->
            <value>2000</value>
    </property>
    <property>
            <name>dfs.heartbeat.interval</name>
            <!-- unit: seconds -->
            <value>1</value>
    </property>
    <!-- A common situation in day-to-day cluster maintenance: a node is declared dead because of a network fault or a
             dead DataNode process, and HDFS immediately starts re-replicating its blocks. When the node rejoins the
    cluster with its data intact, some blocks end up with more replicas than configured. By default it takes about an
    hour before the excess replicas are cleaned up, because cleanup depends on the block report: each DataNode
    periodically reports all of its blocks to the NameNode, once per hour by default. The parameter below changes that
    report interval. -->
    <property>
            <name>dfs.blockreport.intervalMsec</name>
            <value>10000</value>
            <description>Determines block reporting interval in milliseconds.</description>
    </property>
    <!-- Reserved space per disk, in bytes, to keep disks from filling up completely -->
    <property>
        <name>dfs.datanode.du.reserved</name>
        <value>10240000</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node1:50090</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/data/hadoop/dfs/name</value>
        <description>Path on the local filesystem where theNameNode stores the namespace and transactions logs
            persistently.
        </description>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/data/hadoop/dfs/data</value>
        <description>Comma separated list of paths on the localfilesystem of a DataNode where it should store its
            blocks.
        </description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>true</value>
        <description>need not permissions</description>
    </property>
    <!-- The NameNode has a pool of worker threads that service remote procedure calls from clients and from cluster daemons. More handlers mean a larger pool for concurrent DataNode heartbeats and concurrent client metadata operations, so large clusters, or clusters with many clients, usually need to raise dfs.namenode.handler.count above its default of 10. A common rule of thumb is 20*ln(N), where N is the cluster size.
If the value is too small, the typical symptom is DataNodes timing out or being refused when connecting to the NameNode, and RPC latency growing as the NameNode call queue builds up. These symptoms interact, so raising dfs.namenode.handler.count alone may not solve a problem, but the setting is worth checking when troubleshooting. -->
    <property>
        <name>dfs.datanode.handler.count</name>
        <value>35</value>
        <description>The number of server threads for the datanode.</description>
    </property>
    <!-- Read timeout: dfs.client.socket-timeout, default 1 minute.
    Write timeout: dfs.datanode.socket.write.timeout, default 8 minutes. -->
    <property>
        <name>dfs.client.socket-timeout</name>
        <value>600000</value>
    </property>
    <property>
        <!-- Maximum number of transfer threads the DataNode may use (formerly dfs.datanode.max.xcievers); default 4096. Too small a value leads to "xcievers exceeded" errors -->
        <name>dfs.datanode.max.transfer.threads</name>
        <value>409600</value>
    </property>
    <!-- block size -->

    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
        <description>HDFS block size: 128 MB</description>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>

</configuration>
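With the values above, the DataNode dead-node timeout works out to 2 × 2000 ms + 10 × 1 s = 14 s, far shorter than the default 10 minutes 30 seconds, so failed DataNodes are detected quickly in this cluster.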

  3. Configure core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
       Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
        <description>I/O buffer size in bytes; 131072 bytes = 128 KB</description>
    </property>
    <!-- ZooKeeper quorum used for HA -->
    <property>
            <name>ha.zookeeper.quorum</name>
            <value>zk01:2181,zk02:2181,zk03:2181,zk04:2181,zk05:2181</value>
    </property>
</configuration>

  4. Configure yarn-site.xml
<?xml version="1.0"?>
<!--  Licensed under the Apache License, Version 2.0 (the "License");  you may not use this file except in compliance with the License.  You may obtain a copy of the License at    http://www.apache.org/licenses/LICENSE-2.0  Unless required by applicable law or agreed to in writing, software  distributed under the License is distributed on an "AS IS" BASIS,  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  See the License for the specific language governing permissions and  limitations under the License. See accompanying LICENSE file.-->
<configuration>
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Cluster id for the RM pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>

    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
        <value>/yarn-leader-election</value>
    </property>
    <property>
        <name>yarn.client.failover-proxy-provider</name>
        <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
    </property>
    <!-- Logical ids of the RMs -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Hostname of each RM -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>master1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>master2</value>
    </property>
    <!-- ZooKeeper cluster addresses -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>zk01:2181,zk02:2181,zk03:2181,zk04:2181,zk05:2181</value>
    </property>

    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>file:///data/hadoop/yarn/nm</value>
    </property>
    <property>
        <description>The address of the applications manager interface in the RM.</description>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8032</value>
    </property>
    <property>
        <description>The address of the applications manager interface in the RM.</description>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8032</value>
    </property>


    <property>
        <description>The address of the scheduler interface.</description>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8030</value>
    </property>
    <property>
        <description>The address of the scheduler interface.</description>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8030</value>
    </property>


    <property>
        <description>The http address of the RM1 web application.</description>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8088</value>
    </property>
    <property>
        <description>The http address of the RM2 web application.</description>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8088</value>
    </property>


    <property>
        <description>The https address of the RM web application.</description>
        <name>yarn.resourcemanager.webapp.https.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8090</value>
    </property>
    <property>
        <description>The https address of the RM web application.</description>
        <name>yarn.resourcemanager.webapp.https.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8090</value>
    </property>


    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8031</value>
    </property>


    <property>
        <description>The address of the RM admin interface.</description>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>${yarn.resourcemanager.hostname.rm1}:8033</value>
    </property>
    <property>
        <description>The address of the RM admin interface.</description>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>${yarn.resourcemanager.hostname.rm2}:8033</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>8192</value>
        <description>Maximum memory, in MB, that can be allocated to a single container; default 8192 MB</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
        <description>Minimum memory, in MB, allocated to a single container; default 1024 MB</description>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>25600</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>true</value>
    </property>
    <property>
        <description>Obtained by running: hadoop classpath</description>
        <name>yarn.application.classpath</name>
        <value>
            /home/hadoop/hadoop-3.3.1/etc/hadoop:/home/hadoop/hadoop-3.3.1/share/hadoop/common/lib/*:/home/hadoop/hadoop-3.3.1/share/hadoop/common/*:/home/hadoop/hadoop-3.3.1/share/hadoop/hdfs:/home/hadoop/hadoop-3.3.1/share/hadoop/hdfs/lib/*:/home/hadoop/hadoop-3.3.1/share/hadoop/hdfs/*:/home/hadoop/hadoop-3.3.1/share/hadoop/mapreduce/*:/home/hadoop/hadoop-3.3.1/share/hadoop/yarn:/home/hadoop/hadoop-3.3.1/share/hadoop/yarn/lib/*:/home/hadoop/hadoop-3.3.1/share/hadoop/yarn/*
        </value>
    </property>
</configuration>

  5. Configure mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
       Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master2:10020</value>
        <description>Address (host:port) on which the MapReduce JobHistory Server IPC listens</description>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master2:19888</value>
        <description>Web UI address for viewing completed MapReduce jobs; the history server must be running</description>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/data/hadoop/yarn/staging</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>${yarn.app.mapreduce.am.staging-dir}/done</value>
        <description>Where the JobHistory Server keeps completed job history files; default /mr-history/done</description>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>${yarn.app.mapreduce.am.staging-dir}/done_intermediate</value>
        <description>Where in-progress MapReduce job history files are written; default /mr-history/tmp</description>
    </property>

<!--
    <property>
        <name>mapred.job.tracker</name>
        <value>master2:49001</value>
    </property>
-->
    <property>
        <name>mapred.local.dir</name>
        <value>/data/hadoop/var</value>
    </property>
</configuration>
  6. Edit workers
    In the workers file on master1 ($HADOOP_HOME/etc/hadoop/workers), delete localhost and add:
master2
node1
node2
node3
node4

Copy /home/hadoop/hadoop-3.3.1 to the other machines in the cluster:

[hadoop@master1 ~]$ scp -r hadoop-3.3.1 master2:/home/hadoop/
[hadoop@master1 ~]$ scp -r hadoop-3.3.1 node1:/home/hadoop/
[hadoop@master1 ~]$ scp -r hadoop-3.3.1 node2:/home/hadoop/
[hadoop@master1 ~]$ scp -r hadoop-3.3.1 node3:/home/hadoop/
[hadoop@master1 ~]$ scp -r hadoop-3.3.1 node4:/home/hadoop/

Start Hadoop

Reference: https://blog.csdn.net/daoxu_hjl/article/details/85875136

Start the HA-related processes

  1. Format the hadoop-ha znode in ZooKeeper
#format
hdfs zkfc -formatZK
#verify: check that the hadoop-ha znode now exists in ZooKeeper, from any ZK node
$ZK_HOME/bin/zkCli.sh -server zk01:2181
#in the zk CLI shell, type
[zk: localhost:2181(CONNECTED) 0] ls /
[hadoop-ha,zookeeper]
[zk: zk01:2181(CONNECTED) 2] ls /hadoop-ha
[ns1]
  2. Start the JournalNode service (the NameNode edit-log sync service)
    on all of the ZooKeeper nodes
#current node
[hadoop@master2 ~]$ $HADOOP_HOME/bin/hdfs --daemon start journalnode
[hadoop@master2 ~]$ jps
348886 QuorumPeerMain
358111 JournalNode

#other nodes
[hadoop@node1 ~]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
[hadoop@node2 ~]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
[hadoop@node3 ~]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
[hadoop@node4 ~]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode

Start the NameNodes

  1. On the active NameNode host, master1, format the NameNode and start it
[hadoop@master1 ~]$ $HADOOP_HOME/bin/hdfs namenode -format
[hadoop@master1 ~]$ $HADOOP_HOME/bin/hdfs --daemon start namenode
[hadoop@master1 ~]$ jps
421228 NameNode
  2. On the standby NameNode host, sync the metadata and then start the NameNode service
[hadoop@master2 ~]$ $HADOOP_HOME/bin/hdfs namenode -bootstrapStandby
[hadoop@master2 ~]$ $HADOOP_HOME/bin/hdfs --daemon start namenode
[hadoop@master2 ~]$ jps
348886 QuorumPeerMain
45432 Jps
358111 JournalNode
44767 NameNode

Note: the NameNode on master1 must be started before this step.

启动DFSZKFailoverController

On all NameNode hosts:

[hadoop@master1 ~]$ $HADOOP_HOME/bin/hdfs --daemon start zkfc
[hadoop@master1 ~]$ jps
348012 DFSZKFailoverController
421228 NameNode
430168 Jps

[hadoop@master2 ~]$ $HADOOP_HOME/bin/hdfs --daemon start zkfc
[hadoop@master2 ~]$ jps
348886 QuorumPeerMain
358111 JournalNode
360366 DFSZKFailoverController
45964 Jps
44767 NameNode

Start the DataNode service

On any node in the cluster:

hdfs --workers --daemon start datanode # start datanodes on all worker nodes
#  hdfs --daemon start datanode  # starts a single datanode

Start YARN

#start the active resourcemanager
[hadoop@master1 ~]$ yarn --daemon start resourcemanager
[hadoop@master1 ~]$ jps
432622 Jps
348012 DFSZKFailoverController
421228 NameNode
432520 ResourceManager
#start the standby resourcemanager
[hadoop@master2 ~]$ yarn --daemon start resourcemanager
[hadoop@master2 ~]$ jps
48272 ResourceManager
348886 QuorumPeerMain
48901 Jps
358111 JournalNode
360366 DFSZKFailoverController
46492 DataNode
44767 NameNode
#start all nodemanagers
[hadoop@master2 ~]$ yarn --workers --daemon start nodemanager
#on a worker node:
[hadoop@node1 ~]$ jps
24544 QuorumPeerMain
24020 Jps
19317 DataNode
25756 JournalNode
23548 NodeManager

Local verification

NameNode status

http://master1:50070

yarn Applications

http://master2:8088/cluster

History server

On master2, start the historyserver:

$HADOOP_HOME/bin/mapred --daemon start historyserver
#once started, it can be viewed at: http://master2:19888/jobhistory
#Note: mapred-site.xml pins the history server address to master2, so starting it on any other machine fails with:
threw a non Bind IOException.BindException: Port in use: master2:19888
#a host cannot bind to an address it does not own, so the historyserver can only run on master2

MapReduce job logs can now be browsed in the web UI, which is essential when debugging failed tasks.

Miscellaneous

# DataNode report
hdfs dfsadmin -report
# To reformat HDFS: first delete everything under the logs directory, then delete and recreate the data directories created earlier, and run hdfs namenode -format again
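# a hedged sketch of that sequence for this layout (DESTROYS all HDFS data; run on the relevant nodes):
# rm -rf $HADOOP_HOME/logs/*
# rm -rf /data/hadoop/dfs/name/* /data/hadoop/dfs/data/* /data/hadoop/zk/journaldata/* /data/hadoop/tmp/*
# hdfs namenode -format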
# Get the HA state of a NameNode
[hadoop@master1 sbin]$ hdfs haadmin -getServiceState nn1
active
[hadoop@master1 sbin]$ hdfs haadmin -getServiceState nn2
standby
#Get the HA state of the ResourceManagers
[hadoop@master2 ~]$ yarn rmadmin -getServiceState rm2
standby
[hadoop@master2 ~]$ yarn rmadmin -getServiceState rm1
active
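A rough way to exercise automatic failover (not from the original write-up): kill the active NameNode, confirm the standby takes over, then restart it.

# on master1 (currently active)
[hadoop@master1 ~]$ jps | grep NameNode   # note the NameNode pid
[hadoop@master1 ~]$ kill <namenode-pid>   # <namenode-pid> is a placeholder
# from any node, nn2 should report active within a few seconds
[hadoop@master1 ~]$ hdfs haadmin -getServiceState nn2
# bring the killed NameNode back as the new standby
[hadoop@master1 ~]$ hdfs --daemon start namenode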

Test job

Run the WordCount MapReduce example on the Hadoop cluster.

  1. Mapper code
import lombok.extern.slf4j.Slf4j;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;
/**
 * @author cy
 * @since : 2019/1/29 16:44
 * Splits each line into words and emits a count of 1 for each word
 */
@Slf4j
public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {
    @Override
    protected void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String[] words = line.split("[ {}:\",]");
        for (String word : words) {
            log.info("word:{}", word);
            context.write(new Text(word), new IntWritable(1));
        }
    }
}
  2. Reducer code
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
/**
 * @author xu.dm
 * @since : 2019/1/29 16:44
 * Sums the counts passed in from the map phase
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
  3. Main class
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static void main(String[] args) throws Exception {
        if(args.length!=2)
        {
            System.err.println("Usage: WordCount <input path> <output path>");
            System.exit(-1);
        }
        //Configuration represents the job configuration; it loads mapred-site.xml, hdfs-site.xml, core-site.xml, etc.
        Configuration conf =new Configuration();

        Path outPath = new Path(args[1]);
        //FileSystem covers many filesystems, not only HDFS
        FileSystem fileSystem = outPath.getFileSystem(conf);
        //delete the output path if it already exists
        if(fileSystem.exists(outPath))
        {
            fileSystem.delete(outPath,true);
        }

        Job job = Job.getInstance(conf,"word count"); // new Job(conf, "word count");
        job.setJarByClass(WordCount.class);

        job.setMapperClass(WordCountMapper.class);
        //A Combiner must not change the final result produced by the reducer
//        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);

        //Normally the mapper and reducer output the same types; if they differ, the mapper output key/value types can be set separately
        //job.setMapOutputKeyClass(Text.class);
        //job.setMapOutputValueClass(IntWritable.class);
        //The input type is controlled by the InputFormat class. Hadoop defaults to TextInputFormat and TextOutputFormat;
        //this example processes plain text, so they need no explicit configuration.
        //job.setInputFormatClass(TextInputFormat.class);
        //job.setOutputFormatClass(TextOutputFormat.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);


        //The input path can be a single file, a directory, or a pattern matching a set of files.
        //As the method name suggests, addInputPath can be called multiple times to add multiple input paths.
        FileInputFormat.addInputPath(job,new Path(args[0]));
        //The output directory must not exist before the job runs; Hadoop refuses to run otherwise, to keep a
        //long-running job from being accidentally overwritten (handled above by deleting it explicitly).
        FileOutputFormat.setOutputPath(job,new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
  4. Run it
[hadoop@node2 ~]$ hdfs dfs -mkdir -p /demo/input /demo/output
[hadoop@node1 ~]$ hdfs dfs -put 9527.txt /demo/input
[hadoop@node3 demo]$ hadoop jar bigdata-0.0.1-SNAPSHOT.jar com.lenovo.ai.bigdata.hadoop.WordCount /demo/input /demo/output
2021-08-13 08:52:00,263 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2021-08-13 08:52:00,306 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /data/hadoop/yarn/staging/hadoop/.staging/job_1628843203875_0002
2021-08-13 08:52:00,658 INFO input.FileInputFormat: Total input files to process : 1
2021-08-13 08:52:00,896 INFO mapreduce.JobSubmitter: number of splits:1
2021-08-13 08:52:01,121 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1628843203875_0002
2021-08-13 08:52:01,124 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-08-13 08:52:01,287 INFO conf.Configuration: resource-types.xml not found
2021-08-13 08:52:01,288 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-08-13 08:52:01,565 INFO impl.YarnClientImpl: Submitted application application_1628843203875_0002
2021-08-13 08:52:01,609 INFO mapreduce.Job: The url to track the job: http://master1:8088/proxy/application_1628843203875_0002/
2021-08-13 08:52:01,609 INFO mapreduce.Job: Running job: job_1628843203875_0002
2021-08-13 08:52:07,738 INFO mapreduce.Job: Job job_1628843203875_0002 running in uber mode : false
2021-08-13 08:52:07,741 INFO mapreduce.Job:  map 0% reduce 0%
2021-08-13 08:52:12,856 INFO mapreduce.Job:  map 100% reduce 0%
2021-08-13 08:52:17,917 INFO mapreduce.Job:  map 100% reduce 100%
2021-08-13 08:52:18,943 INFO mapreduce.Job: Job job_1628843203875_0002 completed successfully
2021-08-13 08:52:19,068 INFO mapreduce.Job: Counters: 54
        File System Counters
                FILE: Number of bytes read=105430
                FILE: Number of bytes written=766739
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=45742
                HDFS: Number of bytes written=17970
                HDFS: Number of read operations=8
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
                HDFS: Number of bytes read erasure-coded=0
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Rack-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=3118
                Total time spent by all reduces in occupied slots (ms)=2587
                Total time spent by all map tasks (ms)=3118
                Total time spent by all reduce tasks (ms)=2587
                Total vcore-milliseconds taken by all map tasks=3118
                Total vcore-milliseconds taken by all reduce tasks=2587
                Total megabyte-milliseconds taken by all map tasks=3192832
                Total megabyte-milliseconds taken by all reduce tasks=2649088
        Map-Reduce Framework
                Map input records=1208
                Map output records=9966
                Map output bytes=85492
                Map output materialized bytes=105430
                Input split bytes=95
                Combine input records=0
                Combine output records=0
                Reduce input groups=1895
                Reduce shuffle bytes=105430
                Reduce input records=9966
                Reduce output records=1895
                Spilled Records=19932
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=170
                CPU time spent (ms)=2800
                Physical memory (bytes) snapshot=674127872
                Virtual memory (bytes) snapshot=5763280896
                Total committed heap usage (bytes)=1030750208
                Peak Map Physical memory (bytes)=310792192
                Peak Map Virtual memory (bytes)=2817318912
                Peak Reduce Physical memory (bytes)=363335680
                Peak Reduce Virtual memory (bytes)=2945961984
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=45647
        File Output Format Counters
                Bytes Written=17970
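To inspect the result (assuming the default TextOutputFormat, which writes part-r-* files under the output directory):

[hadoop@node3 demo]$ hdfs dfs -ls /demo/output
[hadoop@node3 demo]$ hdfs dfs -cat /demo/output/part-r-00000 | head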

Note: the job can also be run locally.
For Spark run modes, see: https://www.jianshu.com/p/65a3476757a5

#This is the Local[N] mode: Spark distributed computation is simulated with multiple threads on a single machine, normally used to verify that the application logic is correct.
#local[*] means "Run Spark locally with as many worker threads as logical cores on your machine." N is the number of threads, each with one core; if N is omitted, the default is 1 thread (with 1 core).
#Running this mode is simple: extract the Spark package and adjust a few common settings. There is no need to start the Spark Master/Worker daemons (those are only needed for Standalone cluster mode), nor any Hadoop services (unless HDFS is used); this is what distinguishes it from the other modes.
spark-submit --master local[2]     --class com.lenovo.ai.bigdata.spark.WordCount  bigdata-0.0.1-SNAPSHOT.jar  file:///home/hadoop/demo/9527.txt file:///home/hadoop/demo/result.txt


Set up Hive 3.1.2

Reference: https://blog.csdn.net/weixin_45775873/article/details/109245875

Download Hive 3.1.2

From: https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-3.1.2/

Configure environment variables

vim /etc/profile
export HADOOP_HOME=/home/hadoop/hadoop-3.3.1
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64/jre
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HIVE_HOME=/home/hadoop/hive-3.1.2
export SPARK_HOME=/home/hadoop/spark-3.1.2-bin-hadoop3.2
export CLASSPATH=$JAVA_HOME/lib:$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH
export PATH=$PATH:/usr/local/firefox:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin/:$SPARK_HOME/bin:$SPARK_HOME/sbin:$HIVE_HOME/bin

Modify the Hive configuration files

Create the configuration files:
[hadoop@master1 ~]$ cd $HIVE_HOME/conf
[hadoop@master1 ~]$ cp hive-env.sh.template hive-env.sh
[hadoop@master1 ~]$ cp hive-log4j2.properties.template hive-log4j2.properties
[hadoop@master1 ~]$ cp hive-default.xml.template hive-default.xml
Edit hive-env.sh
#line 48
HADOOP_HOME=/home/hadoop/hadoop-3.3.1
#line 51
export HIVE_CONF_DIR=/home/hadoop/hive-3.1.2/conf


Configure hive-log4j2.properties
#line 24
property.hive.log.dir = /home/hadoop/hive-3.1.2/logs


Configure the metastore in MySQL

This includes creating the database, the user, and so on.

[root@server229 ~]# mysql -uroot -p -h127.0.0.1
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 969
Server version: 5.6.41-log MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create user 'hadoop'@'%' identified by 'hadoop1234';
Query OK, 0 rows affected (0.13 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.02 sec)
mysql> create database hive;
Query OK, 1 row affected (0.02 sec)
mysql> grant all privileges on hive.* to hadoop@'%' identified by 'hadoop1234';
Query OK, 0 rows affected (0.04 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
# To make sure metastore initialization works properly, change the MySQL binlog format to row
[root@server229 ~]# vim /etc/my.cnf
binlog_format = row
#restart mysql
[root@server229 ~]# /etc/init.d/mysqld restart
Manually create hive-site.xml

Many other configuration options circulate online; possibly because of version differences, adding them actually stopped hiveserver2 from starting properly here.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>
    <!-- WARNING!!! This file is auto generated for documentation purposes ONLY! -->
    <!-- WARNING!!! Any changes you make to this file will be ignored by Hive.   -->
    <!-- WARNING!!! You must make your changes in hive-site.xml instead.         -->
    <!-- Hive Execution Parameters -->
    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
    <property>
        <!-- Default warehouse location; a path on HDFS -->
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://10.110.147.229:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <!--<value>com.mysql.jdbc.Driver</value>-->
        <value>com.mysql.cj.jdbc.Driver</value>
    </property>
    <!-- metastore connection credentials -->
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hadoop</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hadoop1234</value>
    </property>
    <property>
        <name>hive.server2.thrift.port</name>
        <value>11240</value>
    </property>
    <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>master1</value>
    </property>
    <property>
        <name>hive.server2.active.passive.ha.enable</name>
        <value>true</value>
    </property>
</configuration>
Add the MySQL connector driver jar

Local Maven dependency:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.21</version>
</dependency>

Copy the locally downloaded mysql-connector-java-8.0.21.jar into $HIVE_HOME/lib/.

#Replace the old guava-19.0.jar under $HIVE_HOME/lib/ with $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar
[hadoop@master1 ~]$ rm $HIVE_HOME/lib/guava-19.0.jar
[hadoop@master1 ~]$ cp $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar $HIVE_HOME/lib/
Modify the hadoop-3.3.1 configuration
[hadoop@master1 ~]$ vim $HADOOP_HOME/etc/hadoop/core-site.xml
<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>
<!--
In hadoop.proxyuser.${user}.hosts and hadoop.proxyuser.${user}.groups,
${user} is the user that hiveserver2 runs as; in this environment that is the hadoop user, hence "hadoop" in the property names.
-->

Distribute core-site.xml to the other machines in the cluster

[hadoop@master1 ~]$ scp $HADOOP_HOME/etc/hadoop/core-site.xml master2:/home/hadoop/hadoop-3.3.1/etc/hadoop/
[hadoop@master1 ~]$ scp $HADOOP_HOME/etc/hadoop/core-site.xml node1:/home/hadoop/hadoop-3.3.1/etc/hadoop/
[hadoop@master1 ~]$ scp $HADOOP_HOME/etc/hadoop/core-site.xml node2:/home/hadoop/hadoop-3.3.1/etc/hadoop/
[hadoop@master1 ~]$ scp $HADOOP_HOME/etc/hadoop/core-site.xml node3:/home/hadoop/hadoop-3.3.1/etc/hadoop/
[hadoop@master1 ~]$ scp $HADOOP_HOME/etc/hadoop/core-site.xml node4:/home/hadoop/hadoop-3.3.1/etc/hadoop/

Restart Hadoop

#on master2, run in order
[hadoop@master2 ~]$ hdfs --workers --daemon stop datanode
[hadoop@master2 ~]$ hdfs --daemon stop namenode
[hadoop@master2 ~]$ yarn --workers --daemon stop nodemanager
[hadoop@master2 ~]$ yarn --daemon stop resourcemanager
#on master1, stop the services
[hadoop@master1 ~]$ hdfs --daemon stop namenode
[hadoop@master1 ~]$ yarn --daemon stop resourcemanager
#on master1, start everything back up
[hadoop@master1 ~]$ hdfs --daemon start namenode
[hadoop@master1 ~]$ hdfs --workers --daemon start datanode
[hadoop@master1 ~]$ yarn --daemon start resourcemanager
[hadoop@master1 ~]$ yarn --workers --daemon start nodemanager
#on master2
[hadoop@master2 ~]$ hdfs --daemon start namenode
[hadoop@master2 ~]$ yarn --daemon start resourcemanager
Initialize the metastore database
[hadoop@master1 conf]$ schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hive-3.1.2/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-3.3.1/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://10.110.147.229:3306/hive?createDatabaseIfNotExist=true&useSSL=false
Metastore Connection Driver :    com.mysql.cj.jdbc.Driver
Metastore connection User:       hadoop
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql
.
.
.
Initialization script completed
schemaTool completed
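Optionally confirm that the metastore tables were created (plain MySQL client usage with the database, host, and credentials configured above):

[hadoop@master1 conf]$ mysql -uhadoop -phadoop1234 -h10.110.147.229 hive -e 'show tables;' | head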
Start Hive
# distribute the hive directory
[hadoop@master1 ~]$ scp -r hive-3.1.2 node3:/home/hadoop/
# start hiveserver2
[hadoop@master1 logs]$ nohup hiveserver2 > $HIVE_HOME/logs/hive.log 2>&1 &

The log shows that two ports are opened: the web UI port and the thrift RPC port.
On the command line:

[hadoop@master1 logs]$ ps -ef|grep hive
hadoop   120165  75169  4 11:43 pts/5    00:01:04 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el7_9.x86_64/jre/bin/java -Dproc_jar -Dproc_hiveserver2 -Dlog4j.configurationFile=hive-log4j2.properties
........
[hadoop@master1 logs]$ netstat -nplt|grep 120165
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::11240                :::*                    LISTEN      120165/java
tcp6       0      0 :::10002                :::*                    LISTEN      120165/java

Start the beeline client

#start the beeline client
[hadoop@node3 conf]$ beeline
Beeline version 2.3.7 by Apache Hive
beeline> !connect jdbc:hive2://master1:11240
Connecting to jdbc:hive2://master1:11240
Enter username for jdbc:hive2://master1:11240: hadoop
Enter password for jdbc:hive2://master1:11240: **********
2021-08-17 03:47:32,188 INFO jdbc.Utils: Supplied authorities: master1:11240
2021-08-17 03:47:32,189 INFO jdbc.Utils: Resolved authority: master1:11240
Connected to: Apache Hive (version 3.1.2)
Driver: Hive JDBC (version 2.3.7)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://master1:11240> show databases;
INFO  : Compiling command(queryId=hadoop_20210817114738_9f9b04d5-fb80-42cb-abf7-830799949b5f): show databases
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=hadoop_20210817114738_9f9b04d5-fb80-42cb-abf7-830799949b5f); Time taken: 0.726 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hadoop_20210817114738_9f9b04d5-fb80-42cb-abf7-830799949b5f): show databases
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hadoop_20210817114738_9f9b04d5-fb80-42cb-abf7-830799949b5f); Time taken: 0.048 seconds
INFO  : OK
INFO  : Concurrency mode is disabled, not creating a lock manager
+----------------+
| database_name  |
+----------------+
| default        |
+----------------+
1 row selected (1.203 seconds)
0: jdbc:hive2://master1:11240> create database my_test;
INFO  : Compiling command(queryId=hadoop_20210817114747_a4f02c6c-f00f-4ab4-ac8e-00ee9b03a16c): create database my_test
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO  : Completed compiling command(queryId=hadoop_20210817114747_a4f02c6c-f00f-4ab4-ac8e-00ee9b03a16c); Time taken: 0.027 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hadoop_20210817114747_a4f02c6c-f00f-4ab4-ac8e-00ee9b03a16c): create database my_test
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hadoop_20210817114747_a4f02c6c-f00f-4ab4-ac8e-00ee9b03a16c); Time taken: 0.174 seconds
INFO  : OK
INFO  : Concurrency mode is disabled, not creating a lock manager
No rows affected (0.236 seconds)
0: jdbc:hive2://master1:11240> show databases;
INFO  : Compiling command(queryId=hadoop_20210817114750_42654290-de1f-461b-8333-7de9e26a9dce): show databases
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=hadoop_20210817114750_42654290-de1f-461b-8333-7de9e26a9dce); Time taken: 0.022 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hadoop_20210817114750_42654290-de1f-461b-8333-7de9e26a9dce): show databases
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hadoop_20210817114750_42654290-de1f-461b-8333-7de9e26a9dce); Time taken: 0.019 seconds
INFO  : OK
INFO  : Concurrency mode is disabled, not creating a lock manager
+----------------+
| database_name  |
+----------------+
| default        |
| my_test        |
+----------------+
2 rows selected (0.075 seconds)
0: jdbc:hive2://master1:11240>

Web UI: http://master1:10002/
