Deploying Open-Source Big Data Components on a Domestic Operating System (openEuler)

Pre-installation Preparation

I. Configure passwordless SSH login

# configure /etc/hosts
vim /etc/hosts
10.0.16.159 hadoop01
10.0.16.160 hadoop02
10.0.16.161 hadoop03
# configure passwordless login
# after running ssh-keygen -t rsa -m PEM, just press Enter at every prompt
# the following is done as the pukka user
[pukka@Bigdata-MerleWang01 ~]$ ssh-keygen -t rsa -m PEM -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pukka/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pukka/.ssh/id_rsa
Your public key has been saved in /home/pukka/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:pyfbPgGOF+6G7UVd/uEjRLEoBwnpOoLTLlSkPiuAnoY pukka@hadoop01
The key's randomart image is:
+---[RSA 3072]----+
|       .o..  .   |
|   .   . .. . o  |
|  o   .  . o o.  |
| . .   .o o..o   |
|o +   .+Soo .... |
|o* o o. ++. . ...|
|= * . .=o o. . o.|
|E* .  . +*.   . .|
|o .    oo.o.     |
+----[SHA256]-----+
[pukka@Bigdata-MerleWang01 ~]$
ssh-copy-id -p2406 pukka@hadoop01 
ssh-copy-id -p2406 pukka@hadoop02 
ssh-copy-id -p2406 pukka@hadoop03

sudo chown -R pukka.pukka /etc/ssh/  # give the pukka user ownership of the directory
chmod 644  ssh_config  # make the config file readable (owner rw, others read-only)
systemctl restart sshd # restart the SSH service
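A quick way to confirm that passwordless login works everywhere is to loop over the hosts; a minimal check, assuming the port 2406 and host names configured above:

# should print each hostname without asking for a password
for h in hadoop01 hadoop02 hadoop03; do
    ssh -p 2406 -o BatchMode=yes pukka@$h hostname
done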

# If root cannot su to an ordinary user but a scheduled task has to run as pukka, let root ssh to the pukka user inside the cron job:
[root@hadoop01 ~]# ssh-copy-id -p2406 pukka@hadoop01
[root@hadoop01 ~]# crontab -l
0 0 * * * ssh -p 2406 pukka@hadoop01 'kinit -kt /home/pukka/keytabs/hdfs.hadoop.keytab hdfs/hadoop@PUKKA.COM'

II. Configure time synchronization (handled by the operations team)

0 * * * *   /usr/sbin/ntpdate ntp1.aliyun.com  && hwclock --systohc
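To spot-check that the nodes really stay in sync, the clocks can be compared over SSH and the NTP offset queried without touching the clock (a minimal sketch, reusing the SSH port from the previous step):

# compare the system clocks on the three nodes
for h in hadoop01 hadoop02 hadoop03; do
    ssh -p 2406 pukka@$h date "+%F %T"
done
# query the offset against the NTP server without adjusting anything
/usr/sbin/ntpdate -q ntp1.aliyun.com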

III. Disable the firewall

[root@hadoop01 hadoop]# service iptables stop   # stop iptables (legacy systems)
[root@hadoop01 hadoop]# systemctl stop firewalld.service  # stop firewalld
[root@hadoop01 hadoop]# chkconfig iptables off  # keep iptables from starting at boot (legacy systems)

IV. Raise the open-file and process limits

vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
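After logging in again, the new limits can be verified per session; for example:

# expected output after a fresh login: 65536 and 131072
ulimit -n   # max open files
ulimit -u   # max user processes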

V. Create the installation directory

mkdir -p /opt/context/software/bigdata
chown -R pukka. /opt/context/software/bigdata

JDK Installation

I. Download

Java Downloads | Oracle

Download JDK 8 or later (this guide uses 1.8.0_301).

Scala 2.12.2 | The Scala Programming Language (scala-lang.org)

Download Scala 2.12.2.

II. Installation

# remove the bundled OpenJDK
rpm -qa | grep openjdk
rpm -e --nodeps `rpm -qa | grep openjdk`
# configure the JDK environment
[root@hadoop01 ~]# vim /etc/profile
export JAVA_HOME=/opt/context/software/bigdata/jdk1.8.0_301
export CLASSPATH=$JAVA_HOME/lib/
export PATH=$PATH:$JAVA_HOME/bin
[root@hadoop01 ~]# source /etc/profile
# echo $JAVA_HOME
# verify that the environment variables are configured correctly
[root@hadoop01 jdk1.8.0_301]# java -version
java version "1.8.0_301"
Java(TM) SE Runtime Environment (build 1.8.0_301-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.301-b09, mixed mode)
# upload scala-2.12.2.tgz to /opt/context/software/bigdata
cd /opt/context/software/bigdata
tar -zxvf ./scala-2.12.2.tgz 
mv ./scala-2.12.2 scala2122
vim /etc/profile
export SCALA_HOME=/opt/context/software/bigdata/scala2122
export PATH=$PATH:$SCALA_HOME/bin
# check the Scala version
[pukka@hadoop01 bigdata]$ source /etc/profile
[pukka@hadoop01 bigdata]$ scala -version
Scala code runner version 2.12.2 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.

ZooKeeper Installation

I. Download

ZooKeeper download link (use version 3.6.3):

https://repo.huaweicloud.com/apache/zookeeper/stable/

II. Installation

Cluster deployment
# starting from the single-node setup, add the following to the configuration file
[root@hadoop01 conf]# vim zoo.cfg
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
# 2888 is the port the ZooKeeper servers use to talk to each other; 3888 is the leader-election port. Do this on all three machines.
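For reference, a complete minimal zoo.cfg for this layout might look like the following sketch; the dataDir path is an assumption and should point at your data disk (it is also where the myid file created below lives):

# write a minimal three-node zoo.cfg (the dataDir below is an assumed path)
cat > /opt/context/software/bigdata/apache-zookeeper-3.6.3-bin/conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/context/software/bigdata/apache-zookeeper-3.6.3-bin/data
clientPort=2181
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
EOF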


# copy the configured ZooKeeper directory to the other two machines

scp -r -P 2406 apache-zookeeper-3.6.3-bin root@hadoop02:/opt/context/software/bigdata/

# create a myid file on each of the three machines, containing 1, 2 and 3 respectively
[root@hadoop01 data]# touch myid
[root@hadoop01 data]# vim myid      # put 2 in this file on hadoop02
#                                     and 3 on hadoop03

# configure the ZooKeeper environment variables
[root@hadoop03 data]# vim /etc/profile
export ZOOKEEPER_HOME=/opt/context/software/bigdata/apache-zookeeper-3.6.3-bin
export PATH=$ZOOKEEPER_HOME/bin:$PATH

# start ZooKeeper: run bin/zkServer.sh start from the apache-zookeeper-3.6.3-bin directory (on all three nodes)

[root@hadoop01 apache-zookeeper-3.6.3-bin]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/context/software/bigdata/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop01 apache-zookeeper-3.6.3-bin]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/context/software/bigdata/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
[root@hadoop02 apache-zookeeper-3.6.3-bin]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/context/software/bigdata/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
[root@hadoop03 apache-zookeeper-3.6.3-bin]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/context/software/bigdata/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

# Notes:
# 1. Make sure the firewall has been stopped
# 2. Make sure ZooKeeper has been started on all three machines
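Beyond zkServer.sh status, the ensemble can also be exercised end to end with the bundled CLI, for example:

# connect through the ensemble and list the root znode; a healthy cluster answers immediately
/opt/context/software/bigdata/apache-zookeeper-3.6.3-bin/bin/zkCli.sh -server hadoop01:2181,hadoop02:2181,hadoop03:2181 ls /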

Hadoop Installation

Open-source components are downloaded from the Huawei mirror: https://repo.huaweicloud.com/

I. Download

Hadoop download link (use version 3.2.3):

https://repo.huaweicloud.com/apache/hadoop/common/hadoop-3.2.3/

II. Installation

1. Upload and extract the tarball
# the three machines are: [root@hadoop01 ~] [root@hadoop02 ~] [root@hadoop03 ~]
# upload hadoop-3.2.3.tar.gz to /opt/context/software/bigdata/
[pukka@hadoop01 ~]$ cd /opt/context/software/bigdata/
[pukka@hadoop01 bigdata]$ tar -zxvf hadoop-3.2.3.tar.gz
# check the Hadoop version; if it prints correctly, the installation succeeded
[pukka@hadoop01 hadoop-3.2.3]$ ./bin/hadoop version
Hadoop 3.2.3
Source code repository https://github.com/apache/hadoop -r abe5358143720085498613d399be3bbf01e0f131
Compiled by ubuntu on 2022-03-20T01:18Z
Compiled with protoc 2.5.0
From source with checksum 39bb14faec14b3aa25388a6d7c345fe8
This command was run using /opt/context/software/bigdata/hadoop-3.2.3/share/hadoop/common/hadoop-common-3.2.3.jar

Note: if root is not allowed to log in over SSH, change the "PermitRootLogin" parameter in /etc/ssh/sshd_config from no to yes,

then run systemctl restart sshd.

2. Configure the Hadoop environment variables
[root@hadoop01 ~]# vim /etc/profile
export HADOOP_PATH=/opt/context/software/bigdata/hadoop-3.2.3
export PATH=$PATH:$HADOOP_PATH/bin:$HADOOP_PATH/sbin
3. Edit the core configuration files
  • hadoop-env.sh
[root@hadoop01 hadoop]# vim hadoop-env.sh
export JAVA_HOME=/opt/context/software/bigdata/jdk1.8.0_301

# when Hadoop starts it connects to the worker nodes over SSH on port 22 by default; if the port was changed, add the line below
export HADOOP_SSH_OPTS="-p 2406"
  • yarn-env.sh

Note: set the JDK path explicitly for YARN.

[root@hadoop01 hadoop]# vim yarn-env.sh
export JAVA_HOME=/opt/context/software/bigdata/jdk1.8.0_301
  • core-site.xml

Note: specifies the default file system (NameNode) and the data storage directory.

<!-- hadoop01 -->
<configuration>
     <property>
        <!-- default file system -->
        <name>fs.defaultFS</name>
        <!-- HDFS is used as the file system; normally you would name the host it runs on (9000 is the default port). With HA enabled, fs.defaultFS must be the nameservice name: dfs.nameservices in hdfs-site.xml is mycluster, so the value here is hdfs://mycluster -->
        <value>hdfs://mycluster</value>
    </property>
    	<!-- base directory for Hadoop data; it is recommended to put it on a mounted data disk such as /data/hadoop -->
        <property>
             <name>hadoop.tmp.dir</name>
             <value>file:/opt/context/software/bigdata/hadoop-3.2.3/tmp</value>
             <description>Abase for other temporary directories.</description>
        </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
<!-- address and port of the ZooKeeper ensemble used by HDFS -->
    </property>
    	<property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>
    	<property>
                <name>hadoop.proxyuser.root.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.root.groups</name>
                <value>*</value>
        </property>
    	<property>
                <name>hadoop.proxyuser.pukka.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.pukka.groups</name>
                <value>*</value>
        </property>

</configuration>

Note: the temp directory in the file: path above must be created manually.

It is recommended to put the hadoop.tmp.dir directory somewhere outside the Hadoop installation tree; that makes scp much more convenient when scaling out.
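For example, if the data disk were mounted at /data (an assumed mount point, not part of this guide), the directory could be prepared like this and hadoop.tmp.dir pointed at it:

# prepare a data directory outside the Hadoop tree (the /data mount point is an assumption)
mkdir -p /data/hadoop/tmp
chown -R pukka. /data/hadoop
# then set hadoop.tmp.dir in core-site.xml to file:/data/hadoop/tmp instead of the path under hadoop-3.2.3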

  • hdfs-site.xml

Note: specifies the NameNode HA settings (NameNodes, JournalNodes, fencing) and the SecondaryNameNode address.

<configuration>
    
        <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
<!-- logical name (nameservice ID) of the HDFS cluster; must match fs.defaultFS in core-site.xml -->
    </property> 
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
<!-- IDs of the NameNodes within the nameservice -->
    </property>
     <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>hadoop01:9000</value>
<!-- RPC address of nn1 -->
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>hadoop01:50070</value>
<!-- HTTP address of nn1 -->
    </property> 
        <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>hadoop02:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>hadoop02:50070</value>
    </property>
    <property>
		<name>dfs.namenode.name.dir</name>
		<value>file:/opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/name</value>
	</property>
    <property>
		<name>dfs.datanode.data.dir</name>
		<value>file:/opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/data</value>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>2</value>
	</property>
	<property>
		<name>dfs.secondary.http.address</name>
		<value>hadoop02:50090</value>
	</property>
    <property>
        <name>dfs.datanode.max.transfer.threads</name>
        <value>4096</value>
<!-- maximum number of threads a DataNode uses for transferring data. If one DataNode has a larger value than the others it ends up storing noticeably more data than the rest, leaving the cluster unbalanced even after running the balancer -->
    </property>
       
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
  			ssh -p 2406 $target_host 'sh /home/pukka/exchange_nn.sh'
			shell(/bin/true)
        </value>
<!-- fencing methods used when the two NameNodes switch state. shell(/bin/true) is a fallback: if the first shell cannot take effect because the other server is down, the second one simply returns true so this node's NameNode can become active. The two entries must be separated by a newline, not a space. Reference: https://blog.csdn.net/a1786742005/article/details/104841078 -->
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.connect-timeout</name>
      <value>30000</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/pukka/.ssh/id_rsa</value>
<!-- private key used by the fencing method during NameNode failover -->
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
<!-- enable automatic NameNode failover for the HA HDFS cluster -->
    </property>
    
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop01:8485;hadoop02:8485/mycluster</value>
<!-- where the NameNode edit log is stored on the JournalNodes -->
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/ha/jn</value>
<!-- directory in which the JournalNodes keep their edits files -->
    </property>
    
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
<!-- proxy provider that lets HDFS clients find the active NameNode -->
    </property>
<!-- enable WebHDFS -->
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.web.ugi</name>
               	<value>supergroup</value>
        </property>
    
    
</configuration>

Notes:

When dfs.ha.fencing.methods is set to sshfence, keep in mind:

(1) If the SSH port has been changed, it must be written as sshfence(pukka:2406).

(2) Compatibility between the JSch library bundled with Hadoop and the OpenSSH version on the servers matters. The servers run SSH-2.0-OpenSSH_8.8 while Hadoop uses SSH-2.0-JSCH-0.1.54; their supported encryption algorithms differ, so the standby node fails to kill the previously active NameNode, hangs, and the failover never completes. The tutorials found online all target OpenSSH_8.0 or OpenSSH_7.x and require downgrading OpenSSH, which is not a general solution (for example, a vulnerability in an old OpenSSH would force an upgrade anyway).

For these reasons, the shell fencing method is used for failover instead.
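Once HDFS is running, the fencing configuration that is actually in effect can be read back as a quick sanity check:

# print the fencing methods the NameNodes will use during failover
hdfs getconf -confKey dfs.ha.fencing.methods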

  • mapred-site.xml

Notes:

1. Set the JDK path explicitly for MapReduce (this would go in mapred-env.sh, which is left unchanged here).

2. Run the MapReduce framework on the YARN resource scheduler.

<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>hadoop01:10020</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>hadoop01:19888</value>
	</property>
</configuration>
  • yarn-site.xml

Note: specifies the nodes on which the ResourceManager master runs, plus the YARN HA settings.

<configuration> 
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
<description>whether to enable ResourceManager high availability</description>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hadoop02</value>
  </property>
   
<!-- internal communication addresses of the ResourceManagers --> 
  <property> 
    <name>yarn.resourcemanager.address.rm1</name>  
    <value>hadoop01:8032</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.address.rm2</name>  
    <value>hadoop02:8032</value> 
  </property> 

<!-- addresses ApplicationMasters use to request resources from the ResourceManagers -->  
  <property> 
    <name>yarn.resourcemanager.scheduler.address.rm1</name>  
    <value>hadoop01:8030</value> 
  </property>    
  <property> 
    <name>yarn.resourcemanager.scheduler.address.rm2</name>  
    <value>hadoop02:8030</value> 
  </property>

<!-- addresses the NodeManagers connect to -->  
  <property> 
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>  
    <value>hadoop01:8031</value> 
  </property>  
  <property> 
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>  
    <value>hadoop02:8031</value> 
  </property>    
  
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
  </property>
  
<!-- enable automatic recovery -->  
  <property> 
    <name>yarn.resourcemanager.recovery.enabled</name>  
    <value>true</value> 
  </property>  
<!-- store the ResourceManager state in the ZooKeeper ensemble -->  
  <property> 
    <name>yarn.resourcemanager.store.class</name>  
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value> 
  </property>  
  <!-- environment variables inherited by containers -->  
  <property> 
    <name>yarn.nodemanager.env-whitelist</name>  
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value> 
  </property>  
</configuration>
  • workers

Notes:

1. Lists the DataNode worker nodes (edit the workers file under /opt/context/software/bigdata/hadoop-3.2.3/etc/hadoop, one node per line).

2. The same file also determines the NodeManager nodes.

hadoop01
hadoop02
hadoop03
scp -r -P 2406 /opt/context/software/bigdata/hadoop-3.2.3 pukka@hadoop02:/opt/context/software/bigdata
scp -r -P 2406 /opt/context/software/bigdata/hadoop-3.2.3 pukka@hadoop03:/opt/context/software/bigdata
4. Create the required directories and the fencing script
# create the following directories on all three nodes
mkdir -p /opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/data
mkdir -p /opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/name
mkdir -p /opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/ha/jn
# create this script on hadoop01 and hadoop02
vim /home/pukka/exchange_nn.sh
	ps -ef | grep NameNode | grep -v grep | awk '{print $2}' | xargs kill -9
	sleep 10s
	/opt/context/software/bigdata/hadoop-3.2.3/bin/hdfs --daemon start namenode

5. Format the NameNode on the primary node

Note: format the NameNode only before it is started for the first time.

[root@hadoop01 hadoop-3.2.3]# /opt/context/software/bigdata/hadoop-3.2.3/bin/hdfs namenode -format
# the log shows that the initial format completed
2023-05-16 18:01:42,401 INFO common.Storage: Storage directory /opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/name has been successfully formatted.

6. Start the services
# psmisc is required for automatic HA failover; install it on every NameNode host
yum install psmisc
# start the JournalNode on hadoop01
hdfs --daemon start journalnode
# start the JournalNode on hadoop02
hdfs --daemon start journalnode
# check the log: tail -f /opt/context/software/bigdata/hadoop-3.2.3/logs/hadoop-pukka-journalnode-hadoop01.log

# start the NameNode on hadoop01
hdfs --daemon start namenode
# on hadoop02, bootstrap the standby NameNode and start it
hdfs namenode -bootstrapStandby
hdfs --daemon start namenode

Starting the NameNode on hadoop02 fails with the following error:

org.apache.hadoop.hdfs.qjournal.protocol.JournalNotFormattedException: Journal Storage Directory root= /opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/ha/jn/mycluster; location= null not formatted ; journal id: mycluster
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:532)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getEditLogManifest(Journal.java:722)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getEditLogManifest(JournalNodeRpcServer.java:229)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getEditLogManifest(QJournalProtocolServerSideTranslatorPB.java:230)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:28984)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:549)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:518)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1029)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:957)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2957)

Cause: before HA was enabled, the directory /opt/context/software/bigdata/hadoop-3.2.3/tmp/dfs/ha/jn/mycluster did not exist. Since `hdfs namenode -format` had already been run earlier, it was not run again this time, so the JournalNode storage was never formatted.

# initialize the shared edits directory from the existing NameNode metadata; this does not cause HDFS data loss
hdfs namenode -initializeSharedEdits
chmod 777 /opt/context/software/bigdata/hadoop-3.2.3/logs   # make the log directory readable/writable/executable for everyone
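The guide does not show the remaining bring-up; a typical sequence for an HA cluster with automatic failover (standard Hadoop commands, not taken from the original text) would be:

# one-time: create the HA state znode in ZooKeeper (run on one NameNode host only)
hdfs zkfc -formatZK
# start HDFS and YARN with the bundled scripts
/opt/context/software/bigdata/hadoop-3.2.3/sbin/start-all.sh
# confirm that one NameNode is active and the other standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2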

  • Note

If the environment variables in /etc/profile were configured incorrectly, recover as follows.

First run:
/bin/vi /etc/profile
to open the environment file,
delete the entries that were configured incorrectly,
then add the following two lines:

export PATH=/usr/bin:/usr/sbin:/bin:/sbin:/usr/X11R6/bin
source

Press ESC and type wq to save and exit,
then run:
source /etc/profile

Ignore any errors, then run:
vi /etc/profile
and delete the second line ("source") that was just added.
Then run:
source /etc/profile

Hive Installation

I. Download

Hive download link (use version 3.1.3):

https://repo.huaweicloud.com/apache/hive/hive-3.1.3/

II. Installation

Role layout across the hosts (reconstructed from the steps below):

Host       | metastore | hiveserver2 | mysql
hadoop01   |     ✓     |             |   ✓
hadoop02   |     ✓     |      ✓      |
hadoop03   |           |      ✓      |
1. Configure the MySQL database

Create the hive database and grant privileges to the hive user.

mysql -uroot -ppukka@2023

mysql> create database hive;
Query OK, 1 row affected (0.00 sec)

mysql>
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| hive               |
| information_schema |
| jiangsumegadata    |
| megadata           |
| megadataofanhui    |
| megadataofsichuan  |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
9 rows in set (0.00 sec)

mysql> create user "hive"@"%" identified by "pukka@2023";
Query OK, 0 rows affected (0.00 sec)

mysql> grant all privileges on hive.* to "hive"@"%";
Query OK, 0 rows affected (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> quit;

mysql> create database hive;
ERROR 3680 (HY000): Failed to create schema directory 'hive' (errno: 13 - Permission denied)
Issue: the MySQL data directory had earlier been chowned to the pukka user; it must belong to the mysql user. After changing the ownership back to mysql, the command worked again.
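A sketch of that fix; the data directory below is a common default and is an assumption, so check datadir first:

# find the real data directory, then hand it back to the mysql user
mysql -uroot -ppukka@2023 -e "SELECT @@datadir;"
chown -R mysql:mysql /var/lib/mysql    # assumed datadir; use the path printed above
systemctl restart mysqld               # service name may be mysqld or mariadb depending on the package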
2. Upload and extract the tarball

Upload the Hive tarball to /opt/context/software/bigdata/.

cd /opt/context/software/bigdata/
tar -zxf apache-hive-3.1.3-bin.tar.gz
mv ./apache-hive-3.1.3-bin ./hive
vim /etc/profile
	export HIVE_HOME=/opt/context/software/bigdata/hive
	export PATH=$PATH:$HIVE_HOME/bin
cd /opt/context/software/bigdata/hive/conf
cp hive-log4j2.properties.template hive-log4j2.properties
# change the log location
vim hive-log4j2.properties
	property.hive.log.dir = /opt/context/software/bigdata/hive/logs
mkdir /opt/context/software/bigdata/hive/logs
# edit the hive-env.sh configuration
cp hive-env.sh.template hive-env.sh
	HADOOP_HOME=/opt/context/software/bigdata/hadoop-3.2.3
	export HIVE_CONF_DIR=/opt/context/software/bigdata/hive/conf
	export HIVE_AUX_JARS_PATH=/opt/context/software/bigdata/hive/lib
# configure the metastore
hdfs dfs -mkdir -p /user/hive/{warehouse,tmp,logs}
hdfs dfs -chmod -R 777 /user/hive/
vim /opt/context/software/bigdata/hive/conf/hive-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<!-- use a local metastore -->
    <property>
      <name>hive.metastore.local</name>
      <value>true</value>
    </property>
<!-- HDFS scratch (root) directory for Hive jobs -->
    <property>
      <name>hive.exec.scratchdir</name>
      <value>/user/hive/tmp</value>
    </property>
<!-- permissions used when creating the Hive scratch directory on HDFS -->
    <property>
      <name>hive.scratch.dir.permission</name>
      <value>775</value>
    </property>
<!-- location of the Hive warehouse on HDFS -->
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/user/hive/warehouse</value>
    </property>
<!-- JDBC connection URL (database address and name) -->
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
<!-- JDBC driver class -->
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>
<!-- database user name -->
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>root</value>
    </property>
<!-- database password -->
    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>pukka@2023</value>
    </property>
<!-- metastore connection addresses -->
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://hadoop01:9083,thrift://hadoop02:9083</value>
    </property>
</configuration>

# distribute Hive to the other servers
scp -r -P2406 /opt/context/software/bigdata/hive pukka@hadoop02:/opt/context/software/bigdata
scp -r -P2406 /opt/context/software/bigdata/hive pukka@hadoop03:/opt/context/software/bigdata
scp -P2406 /etc/profile pukka@hadoop02:/etc
scp -P2406 /etc/profile pukka@hadoop03:/etc
3. Install the MySQL JDBC driver

Download the MySQL driver jar and place it in /opt/context/software/bigdata/hive/lib.

下载地址:Maven Repository: com.mysql » mysql-connector-j » 8.0.33 (mvnrepository.com)

Remove log4j-slf4j-impl-2.17.1.jar from Hive's lib directory; it conflicts with the copy shipped with Hadoop.

rm -rf /opt/context/software/bigdata/hive/lib/log4j-slf4j-impl-2.17.1.jar
4. Initialize the Hive schema
[pukka@hadoop01 bin]$ /opt/context/software/bigdata/hive/bin/schematool -dbType mysql -initSchema
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
        at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:536)
        at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:554)
        at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:448)
        at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5144)
        at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5107)
        at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
        at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

Cause: the guava.jar that Hive depends on does not match the version bundled with Hadoop.

Fix:

# check the guava versions used by Hadoop and Hive
[pukka@hadoop01 lib]$ ls /opt/context/software/bigdata/hadoop-3.2.3/share/hadoop/common/lib/guava*
/opt/context/software/bigdata/hadoop-3.2.3/share/hadoop/common/lib/guava-27.0-jre.jar
[pukka@hadoop01 lib]$ ls /opt/context/software/bigdata/hive/lib/guava*
/opt/context/software/bigdata/hive/lib/guava-19.0.jar
# replace Hive's copy with Hadoop's version
rm -rf /opt/context/software/bigdata/hive/lib/guava-19.0.jar
cp /opt/context/software/bigdata/hadoop-3.2.3/share/hadoop/common/lib/guava-27.0-jre.jar /opt/context/software/bigdata/hive/lib
5. Start the metastore

(the hive CLI can only reach the query prompt after this service is running)

[pukka@hadoop01 bin]$ nohup /opt/context/software/bigdata/hive/bin/hive --service metastore >> /opt/context/software/bigdata/hive/logs/metastore.log 2>&1 &
[pukka@hadoop02 logs]$ nohup /opt/context/software/bigdata/hive/bin/hive --service metastore >> /opt/context/software/bigdata/hive/logs/metastore.log 2>&1 &

6. Configure HiveServer2

Create hiveserver2-site.xml on hadoop02 and hadoop03.

Note: hive.server2.thrift.bind.host must be the local hostname of each machine.

vim /opt/context/software/bigdata/hive/conf/hiveserver2-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<!-- metastore addresses -->
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://hadoop01:9083,thrift://hadoop02:9083</value>
    </property>     
    <!-- enable dynamic service discovery for HiveServer2 -->
    <property>
      <name>hive.server2.support.dynamic.service.discovery</name>
      <value>true</value>
      <description>When enabled, HiveServer2 registers itself in ZooKeeper and is discovered through it; this supports HA and load-balanced HiveServer2 deployments.</description>
    </property>
<!-- enable active/passive HA for HiveServer2; ZooKeeper elects one active instance to serve requests -->
    <property>
      <name>hive.server2.active.passive.ha.enable</name>
      <value>true</value>
    </property>
<!-- ZooKeeper namespace used for HiveServer2 -->
    <property>
      <name>hive.server2.zookeeper.namespace</name>
      <value>hiveserver2_zk</value>
    </property>
    <property>
      <name>hive.zookeeper.quorum</name>
      <value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
    </property>    
    <property>
      <name>hive.zookeeper.client.port</name>
      <value>2181</value>
    </property>
<!-- port the HiveServer2 Thrift service listens on -->
    <property>
      <name>hive.server2.thrift.port</name>
      <value>10001</value>
    </property>
<!-- hostname or IP address the HiveServer2 Thrift service binds to -->
    <property>
      <name>hive.server2.thrift.bind.host</name>
      <value>hadoop02</value>
    </property>
    <property>
      <name>hive.querylog.location</name>
      <value>/user/hive/logs</value>
    </property>
</configuration>
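Since dynamic service discovery is enabled, each HiveServer2 instance registers itself under the configured namespace in ZooKeeper; one way to confirm the registration (a sketch using the ZooKeeper CLI installed earlier):

# after HiveServer2 starts, its instances should show up under /hiveserver2_zk
/opt/context/software/bigdata/apache-zookeeper-3.6.3-bin/bin/zkCli.sh -server hadoop01:2181 ls /hiveserver2_zk
# clients can then connect through ZooKeeper instead of a fixed host:
# beeline -u "jdbc:hive2://hadoop01:2181,hadoop02:2181,hadoop03:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2_zk"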
7. Start HiveServer2
nohup /opt/context/software/bigdata/hive/bin/hive --service hiveserver2 >>/opt/context/software/bigdata/hive/logs/hiveserver2.log 2>&1 &
# check whether it is listening
ss -ntulp | grep 10001
# it failed to start

Fix:

# stop Hadoop and edit core-site.xml
vim /opt/context/software/bigdata/hadoop-3.2.3/etc/hadoop/core-site.xml
# add the following properties
<property>     
    <name>hadoop.proxyuser.root.hosts</name>     
    <value>*</value>
</property> 
<property>     
    <name>hadoop.proxyuser.root.groups</name>    
    <value>*</value> 
</property>

<property>     
    <name>hadoop.proxyuser.pukka.hosts</name>     
    <value>*</value>
</property> 
<property>     
    <name>hadoop.proxyuser.pukka.groups</name>    
    <value>*</value> 
</property>

# copy core-site.xml to the other servers and restart Hadoop
scp -P2406 /opt/context/software/bigdata/hadoop-3.2.3/etc/hadoop/core-site.xml pukka@hadoop02:/opt/context/software/bigdata/hadoop-3.2.3/etc/hadoop

scp -P2406 /opt/context/software/bigdata/hadoop-3.2.3/etc/hadoop/core-site.xml pukka@hadoop03:/opt/context/software/bigdata/hadoop-3.2.3/etc/hadoop
/opt/context/software/bigdata/hadoop-3.2.3/sbin/stop-all.sh
/opt/context/software/bigdata/hadoop-3.2.3/sbin/start-all.sh
# restart the Hive metastore on hadoop01 and hadoop02
# kill the running metastore
ps -ef | grep metastore | grep -v grep | awk '{print $2}' | xargs kill -9
# start the metastore again
nohup /opt/context/software/bigdata/hive/bin/hive --service metastore >> /opt/context/software/bigdata/hive/logs/metastore.log 2>&1 &
8. Start HiveServer2 again
nohup /opt/context/software/bigdata/hive/bin/hive --service hiveserver2 >>/opt/context/software/bigdata/hive/logs/hiveserver2.log 2>&1 &
# check whether it is listening
ss -ntulp | grep 10001
[pukka@hadoop02 logs]$ ss -ntulp | grep 10001
tcp   LISTEN 0      50                        *:10001            *:*    users:(("java",pid=61321,fd=544))


Verification:

[pukka@hadoop02 logs]$ beeline -u jdbc:hive2://hadoop02:10001
Error: Could not find or load main class org.apache.hive.beeline.BeeLine
[pukka@hadoop02 logs]$ /opt/context/software/bigdata/hive/bin/beeline -u jdbc:hive2://hadoop02:10001
Connecting to jdbc:hive2://hadoop02:10001
Connected to: Apache Hive (version 3.1.3)
Driver: Hive JDBC (version 3.1.3)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.3 by Apache Hive
0: jdbc:hive2://hadoop02:10001> show tables;
INFO  : Compiling command(queryId=pukka_20230829170557_644d60db-a371-4885-b3d8-6f56f89152ff): show tables
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Semantic Analysis Completed (retrial = false)
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=pukka_20230829170557_644d60db-a371-4885-b3d8-6f56f89152ff); Time taken: 1.221 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=pukka_20230829170557_644d60db-a371-4885-b3d8-6f56f89152ff): show tables
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=pukka_20230829170557_644d60db-a371-4885-b3d8-6f56f89152ff); Time taken: 0.059 seconds
INFO  : OK
INFO  : Concurrency mode is disabled, not creating a lock manager
+-----------+
| tab_name  |
+-----------+
+-----------+
No rows selected (1.537 seconds)

Flume Installation

I. Download

Flume download link (use version 1.7.0):

https://repo.huaweicloud.com/apache/flume/

II. Installation

1. Cluster layout

hadoop02 and hadoop03 act as collection agents; hadoop01 acts as the master and uploads the aggregated data to HDFS.

2. Upload and extract
Upload the apache-flume-1.7.0-bin.tar.gz package
# extract the package
[root@hadoop01 bigdata]# tar -zxf apache-flume-1.7.0-bin.tar.gz
3. Configuration
3.1 Edit flume-env.sh
# edit on all three machines
[root@hadoop01 conf]# vim flume-env.sh
export JAVA_HOME=/opt/context/software/bigdata/jdk1.8.0_301
3.2 Create slave.conf

Create a new slave.conf file under conf.

[root@hadoop03 conf]# vim slave.conf
# watches a directory for new files and forwards the collected data to an avro sink (i.e. to the master agent)
# note: running a Flume agent is essentially a matter of configuring a source, a channel and a sink
# below, a1 is the agent name; the source is r1, the channel is c1 and the sink is k1

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# define the source
a1.sources.r1.type = spooldir
# create this directory beforehand and make sure it is empty
a1.sources.r1.spoolDir = /data/logs/flume

# sink configuration: deliver the data over avro
a1.sinks.k1.type = avro
# hostname is the hostname or IP address of the machine the data is sent to
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 44444

# channel configuration: buffer the data on disk (file channel), which is more reliable
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /data/flume/checkpoint
a1.channels.c1.dataDirs = /data/flume/dataTerm

# wire source r1 and sink k1 together through channel c1
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3.3 Create master.conf

Create master.conf in the conf directory.

# collects the data from the slave agents, aggregates it, and writes it to HDFS
# note: running a Flume agent is essentially a matter of configuring a source, a channel and a sink
# below, a1 is the agent name; the source is r1, the channel is c1 and the sink is k1
 
a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
# source configuration: listen for avro
a1.sources.r1.type = avro
# bind is the hostname or IP address to listen on
a1.sources.r1.bind = master
a1.sources.r1.port = 44444
# define an interceptor that adds a timestamp to each event
a1.sources.r1.interceptors = i1  
a1.sources.r1.interceptors.i1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
 
# sink configuration: write to HDFS
a1.sinks.k1.type = hdfs  
# use the cluster nameservice in the path
# for a single node write hdfs://hostname(or ip):9000/xxx directly
# the nameservice of this cluster is mycluster (see dfs.nameservices)
a1.sinks.k1.hdfs.path = hdfs://mycluster/flume/%Y%m%d  
a1.sinks.k1.hdfs.filePrefix = events-  
a1.sinks.k1.hdfs.fileType = DataStream  
# do not roll files by event count
a1.sinks.k1.hdfs.rollCount = 0  
# roll a new file when it reaches 128 MB on HDFS
a1.sinks.k1.hdfs.rollSize = 134217728  
# roll a new file every 60 seconds
a1.sinks.k1.hdfs.rollInterval = 60  
 
# channel configuration: buffer the data in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
 
# wire source r1 and sink k1 together through channel c1
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1  
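The original text stops at the configuration files; a typical way to launch the two agent roles with them (the install path assumes the default extraction directory apache-flume-1.7.0-bin, and a1 is the agent name used in both files) is:

# on hadoop01: run the aggregator with master.conf
/opt/context/software/bigdata/apache-flume-1.7.0-bin/bin/flume-ng agent \
  --conf /opt/context/software/bigdata/apache-flume-1.7.0-bin/conf \
  --conf-file /opt/context/software/bigdata/apache-flume-1.7.0-bin/conf/master.conf \
  --name a1 -Dflume.root.logger=INFO,console
# on hadoop02 and hadoop03: run the collectors with slave.conf
/opt/context/software/bigdata/apache-flume-1.7.0-bin/bin/flume-ng agent \
  --conf /opt/context/software/bigdata/apache-flume-1.7.0-bin/conf \
  --conf-file /opt/context/software/bigdata/apache-flume-1.7.0-bin/conf/slave.conf \
  --name a1 -Dflume.root.logger=INFO,console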

Kafka Cluster Installation

I. Download

Kafka download link (use Kafka 3.2.3 built for Scala 2.12):

https://repo.huaweicloud.com/apache/kafka

kafka_2.12-3.2.3.tgz  

II. Installation

cd /opt/context/software/bigdata
tar -zxf kafka_2.12-3.2.3.tgz
mv ./kafka_2.12-3.2.3 ./kafka
cd ./kafka/config
vim server.properties
# settings that need to be changed
broker.id=0
listeners=PLAINTEXT://10.0.16.159:9092
log.dirs=/data/kafka-logs
zookeeper.connect=10.0.16.159:2181,10.0.16.160:2181,10.0.16.161:2181

# copy kafka to the other nodes
cd /opt/context/software/bigdata
scp -r -P2406 ./kafka pukka@hadoop02:$PWD
scp -r -P2406 ./kafka pukka@hadoop03:$PWD

# change the configuration on the other two nodes
# 10.0.16.160
broker.id=1
listeners=PLAINTEXT://10.0.16.160:9092
#10.0.16.161
broker.id=2
listeners=PLAINTEXT://10.0.16.161:9092

# create a start script
vim /opt/context/software/bigdata/kafka/bin/kafka-start.sh
/opt/context/software/bigdata/kafka/bin/kafka-server-start.sh -daemon /opt/context/software/bigdata/kafka/config/server.properties
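After the brokers are up on all three nodes, a quick smoke test with the tools shipped in the Kafka distribution (the topic name is arbitrary):

# create a replicated test topic and make sure all three brokers answer
/opt/context/software/bigdata/kafka/bin/kafka-topics.sh --create --topic smoke-test \
  --bootstrap-server 10.0.16.159:9092,10.0.16.160:9092,10.0.16.161:9092 \
  --partitions 3 --replication-factor 3
# the described topic should show partition leaders spread across broker ids 0, 1 and 2
/opt/context/software/bigdata/kafka/bin/kafka-topics.sh --describe --topic smoke-test \
  --bootstrap-server 10.0.16.159:9092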

Spark Installation

I. Download

Spark download link (use version 3.2.0):

https://repo.huaweicloud.com/apache/spark/spark-3.2.0/

II. Installation

2.1 Install the Scala environment

https://www.scala-lang.org/download/2.12.2.html

2.1.1 Configure the Scala environment variables
[root@Bigdata-MerleWang03 bigdata]# vim /etc/profile
export SCALA_HOME=/opt/context/software/bigdata/scala2122
[root@Bigdata-MerleWang03 bigdata]# source /etc/profile

2.2 Install Spark

[root@Bigdata-MerleWang01 spark320wot]# vim /etc/profile

export SPARK_HOME=/opt/context/software/bigdata/spark320wot
export PATH=$PATH:$SPARK_HOME/bin
export PATH=$PATH:$SPARK_HOME/sbin
[root@Bigdata-MerleWang01 spark320wot]# source /etc/profile

2.2.1 Configuration files
[root@hadoop01 spark320wot]# cd conf/
cp workers.template slaves
[root@hadoop01 conf]# vim slaves
# hostnames mapped in /etc/hosts; do this on all three machines
hadoop01
hadoop02
hadoop03
[root@hadoop01 conf]# cp spark-env.sh.template spark-env.sh

  • spark-env.sh configuration
export SPARK_PID_DIR=/opt/context/software/bigdata/spark320wot/pids
export JAVA_HOME=/opt/context/software/bigdata/jdk1.8.0_301
export HADOOP_HOME=/opt/context/software/bigdata/hadoop-3.2.3
export HADOOP_CONF_DIR=/opt/context/software/bigdata/hadoop-3.2.3/etc/hadoop
export SCALA_HOME=/opt/context/software/bigdata/scala2122
export SPARK_HOME=/opt/context/software/bigdata/spark320wot
export SPARK_MASTER_IP=hadoop01 # hostname mapped in /etc/hosts
#export MASTER=spark://hadoop01:7077
export SPARK_MASTER_PORT=7077
export SPARK_MASTER_WEBUI_PORT=8090
export SPARK_WORKER_WEBUI_PORT=8091
#export SPARK_WORKER_MEMORY=32g
#export SPARK_WORKER_CORES=16
export YARN_CONF_DIR=/opt/context/software/bigdata/hadoop-3.2.3/etc/hadoop
export SPARK_DIST_CLASSPATH=$(/opt/context/software/bigdata/hadoop-3.2.3/bin/hadoop classpath)
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=hadoop01:2181,hadoop02:2181,hadoop03:2181 -Dspark.deploy.zookeeper.dir=/opt/context/software/bigdata/apache-zookeeper-3.6.3-bin"
export SPARK_SSH_OPTS="-p 2406"

SPARK_PID_DIR: directory for PID files

JAVA_HOME: path to the Java installation

HADOOP_HOME: path to the Hadoop installation

HADOOP_CONF_DIR: path to the Hadoop configuration directory

SCALA_HOME: path to the Scala installation

SPARK_HOME: path to the Spark installation

SPARK_MASTER_IP: address of the Spark master

MASTER: the Spark master URL (IP + port)

SPARK_MASTER_WEBUI_PORT: port of the Spark master web UI; the master's web interface is reachable on this port

YARN_CONF_DIR: path to the YARN configuration directory

SPARK_DIST_CLASSPATH: the "without hadoop" build needs Hadoop's classpath specified manually

SPARK_SSH_OPTS: SSH options (the port defaults to 22)

2.2.2 Sync to the other machines
  1. Distribute Spark from hadoop01 to hadoop02 and hadoop03
 scp -r -P 2406 /opt/context/software/bigdata/spark320wot pukka@hadoop02:/opt/context/software/bigdata/
 scp -r -P 2406 /opt/context/software/bigdata/spark320wot pukka@hadoop03:/opt/context/software/bigdata/
  2. Update the environment variables on hadoop02 and hadoop03
2.2.3 Start the cluster
sh /opt/context/software/bigdata/spark320wot/sbin/start-all.sh
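To verify the standalone cluster, the bundled SparkPi example can be submitted to the master started above (the examples jar path follows the usual layout of the Spark distribution; the exact file name may differ slightly in your build):

# run SparkPi against the standalone master; the job should finish and print an estimate of Pi
/opt/context/software/bigdata/spark320wot/bin/spark-submit \
  --master spark://hadoop01:7077 \
  --class org.apache.spark.examples.SparkPi \
  /opt/context/software/bigdata/spark320wot/examples/jars/spark-examples_2.12-3.2.0.jar 100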