Hadoop Cluster Setup

Table of Contents

I. Install and deploy three networked servers
II. Turn off the firewall
III. Change the hostnames
IV. Install the JDK
V. Install and configure Hadoop
  1. Configure environment variables
  2. Edit the configuration files under hadoop-2.10.1/etc/hadoop/
    1) Edit slaves
    2) Edit core-site.xml
    3) Edit hdfs-site.xml
    4) Edit mapred-site.xml
    5) Edit yarn-site.xml
    6) Edit hadoop-env.sh
  3. Copy Hadoop to slave1 and slave2
VI. Passwordless SSH login
VII. Initialize and start/stop Hadoop
VIII. No DataNode process in jps after starting Hadoop: Incompatible clusterIDs


I. Install and deploy three networked servers

1. Download VMware Workstation Pro from the official VMware download page and install it.

2. Download CentOS-7-x86_64-DVD-2009.iso (for example from the Aliyun open-source mirror) and install CentOS 7 in VMware Workstation (detailed walkthroughs are available on 菜鸟教程 and CSDN).

3. Configure the virtual machine network in NAT mode (see the CSDN tutorial on NAT networking for CentOS 7 in VMware). If the VM cannot reach the internet and ens33 shows <NO-CARRIER,BROADCAST,MULTICAST,UP>, see the CSDN post on the no-carrier issue.

4. After configuration, the three servers have the IP addresses 192.168.126.100, 192.168.126.101, and 192.168.126.102. On master, for example, the static IP is configured in ifcfg-ens33:

(base) [root@master ~]# cd /etc/sysconfig/network-scripts/
(base) [root@master network-scripts]# ls
ifcfg-ens33  ...
(base) [root@master network-scripts]# vim ifcfg-ens33

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO=static
NM_CONTROLLED=no
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="261d2611-1417-4370-8f5e-7657cfd0ba22"
DEVICE="ens33"
ONBOOT="yes"
HWADDR=00:0c:29:21:c1:32
IPADDR=192.168.126.100
PREFIX=24
NETMASK=255.255.255.0
GATEWAY=192.168.126.2
DNS1=114.114.114.114
DNS2=8.8.8.8
(base) [root@master ~]# vi /etc/resolv.conf

# Generated by NetworkManager
nameserver 114.114.114.114
nameserver 8.8.8.8
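To apply the new network settings without a reboot, you can restart the legacy network service (an extra step, not in the original commands; it works here because the interface is set to NM_CONTROLLED=no):

(base) [root@master ~]# systemctl restart network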

II. Turn off the firewall

Stop firewalld on all three servers:

(base) [root@master ~]# systemctl stop firewalld.service
(base) [root@master ~]# firewall-cmd --state
not running
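Stopping firewalld only lasts until the next reboot; since the servers are rebooted in section III, you may also want to disable it permanently (an extra step, not in the original commands):

(base) [root@master ~]# systemctl disable firewalld.service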

III. Change the hostnames

Set the hostnames to master, slave1, and slave2 (run the corresponding command on each server; the example below is from master).

(base) [root@localhost ~]# cat /etc/hostname
localhost.localdomain

(base) [root@localhost ~]# hostnamectl set-hostname master
(base) [root@localhost ~]# cat /etc/hostname
master

Add the IP-to-hostname mappings for the three servers to /etc/hosts (on all three servers).

(base) [root@localhost ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.126.100 master
192.168.126.101 slave1
192.168.126.102 slave2

Reboot for the changes to take effect.

(base) [root@localhost ~]# reboot

IV. Install the JDK

Download the JDK: https://www.oracle.com/cn/java/technologies/javase/javase-jdk8-downloads.html

Create a /usr/local/java directory and copy jdk-8uxxx-linux-x64.tar.gz into it (the JDK is needed on all three servers).
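A minimal sketch of that step, assuming the tarball was downloaded to root's home directory:

[root@localhost ~]# mkdir -p /usr/local/java
[root@localhost ~]# cp jdk-8u271-linux-x64.tar.gz /usr/local/java/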

[root@localhost ~]# cd /usr/local/java
[root@localhost java]# tar -zxvf jdk-8u271-linux-x64.tar.gz

[root@localhost java]# vim /etc/profile
export JAVA_HOME=/usr/local/java/jdk1.8.0_271
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib

export PATH=$PATH:$JAVA_HOME/bin

[root@localhost java]# source /etc/profile

[root@localhost java]# java -version
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)

(If java -version still reports a preinstalled OpenJDK, as it does here, that is because $JAVA_HOME/bin was appended after the existing entries in PATH; Hadoop itself will use the JAVA_HOME set above and in hadoop-env.sh.)

V. Install and configure Hadoop

Download hadoop-2.10.1.tar.gz from an Apache mirror (Index of /apache/hadoop/common).

Create a /usr/local/hadoop directory and copy hadoop-x.x.x.tar.gz into it.
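A minimal sketch, assuming hadoop-2.10.1.tar.gz was downloaded to root's home directory:

(base) [root@master ~]# mkdir -p /usr/local/hadoop
(base) [root@master ~]# cp hadoop-2.10.1.tar.gz /usr/local/hadoop/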

(base) [root@master ~]# cd /usr/local/hadoop
(base) [root@master hadoop]# tar -zxvf hadoop-2.10.1.tar.gz

1. Configure environment variables

(base) [root@master hadoop]# vim /etc/profile
export HADOOP=/usr/local/hadoop/hadoop-2.10.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP/bin:$HADOOP/sbin

(base) [root@master hadoop]# source /etc/profile
(base) [root@master hadoop]# hadoop version
Hadoop 2.10.1

2. Edit the configuration files under hadoop-2.10.1/etc/hadoop/

(base) [root@master hadoop]# cd /usr/local/hadoop/hadoop-2.10.1/etc/hadoop/
(base) [root@master hadoop]# ls

1) Edit slaves

List the worker (slave) nodes:

(base) [root@master hadoop]# vim slaves

slave1
slave2

2) Edit core-site.xml

Both fs.default.name and fs.defaultFS can be used to point clients at the NameNode: fs.defaultFS is the current property name in Hadoop 2.x, and fs.default.name is its deprecated alias. This tutorial uses fs.default.name. If the NameNode address is not picked up at all (for example because the property name or value is misspelled), the NameNode fails to start with an error like:

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.

(base) [root@master hadoop]# vim core-site.xml

<configuration>
        <!-- Base directory for Hadoop temporary files -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/hadoop-2.10.1/tmp/</value>
        </property>
        <!-- NameNode address (default file system) -->
        <property>
                <name>fs.default.name</name>
                <value>hdfs://192.168.126.100:9000</value>
        </property>
</configuration>
(base) [root@master hadoop]# cd /usr/local/hadoop/hadoop-2.10.1/
(base) [root@master hadoop-2.10.1]# mkdir tmp

3) Edit hdfs-site.xml

(base) [root@master hadoop]# vim hdfs-site.xml 

<configuration>
        <!-- Number of HDFS block replicas (this cluster has only two DataNodes, so a value of 2 avoids permanently under-replicated blocks) -->
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <!-- SecondaryNameNode host and port -->
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>192.168.126.100:50090</value>
        </property>
        <!-- Where the NameNode stores its metadata -->
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/usr/local/hadoop/hadoop-2.10.1/tmp/dfs/name</value>
        </property>
        <!-- Where DataNodes store block data -->
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/usr/local/hadoop/hadoop-2.10.1/tmp/dfs/data</value>
        </property>
</configuration>

4) Edit mapred-site.xml

(base) [root@master hadoop]# cp mapred-site.xml.template mapred-site.xml
(base) [root@master hadoop]# vim mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>192.168.126.100:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>192.168.126.100:19888</value>
        </property>
</configuration>
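The two jobhistory properties above only take effect if the JobHistory server is actually running; it is not started by start-dfs.sh or start-yarn.sh. If you want it, you can start it separately (an extra step, not in the original commands):

(base) [root@master ~]# mr-jobhistory-daemon.sh start historyserver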

5) Edit yarn-site.xml

(base) [root@master hadoop]# vim yarn-site.xml 

<configuration>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>master</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>

6) Edit hadoop-env.sh

Set JAVA_HOME to the local Java installation path.

(base) [root@master hadoop]# vim hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/usr/local/java/jdk1.8.0_271/

3. Copy Hadoop to slave1 and slave2

Make sure the JDK and the /etc/profile changes from the earlier sections are also in place on slave1 and slave2, then copy the whole Hadoop directory (the commands below assume /usr/local/hadoop does not yet exist on the slaves; otherwise copy to slave1:/usr/local/ instead):

(base) [root@master hadoop]# scp -rp /usr/local/hadoop slave1:/usr/local/hadoop
(base) [root@master hadoop]# scp -rp /usr/local/hadoop slave2:/usr/local/hadoop

VI. Passwordless SSH login

Generate an SSH key pair on each of the three servers (public key id_rsa.pub, private key id_rsa):

[root@localhost java]# ssh-keygen
(press Enter at each prompt to accept the defaults)
[root@localhost java]# cd ~/.ssh/
[root@localhost .ssh]# ls
id_rsa  id_rsa.pub  known_hosts
[root@localhost .ssh]# cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDjytOhnCY/Oahczd/OaalebQA4...5 root@master

Collect the public keys of all three servers into authorized_keys on master and distribute it, so that the three servers can ssh into one another without a password:

(base) [root@master .ssh]# cp id_rsa.pub ~/.ssh/authorized_keys
(base) [root@master .ssh]# vim authorized_keys 

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ.../WverOImLhJ root@master
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCft...qOj2ABkh root@slave1
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDEE...RVsppaFn root@slave2

(base) [root@master .ssh]# scp -rp authorized_keys slave1:~/.ssh/
(base) [root@master .ssh]# scp -rp authorized_keys slave2:~/.ssh/
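As an alternative to editing authorized_keys by hand, the ssh-copy-id helper that ships with OpenSSH appends a node's public key to a remote host's authorized_keys; a sketch:

(base) [root@master ~]# ssh-copy-id root@slave1
(base) [root@master ~]# ssh-copy-id root@slave2
(repeat on slave1 and slave2 for the other two hosts so every node can reach every other node)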

Verify:

(base) [root@master .ssh]# ssh slave1
Last login: Sun Aug  8 00:22:10 2021 from 192.168.126.1
[root@slave1 ~]# ssh master
Last login: Sun Aug  8 00:22:07 2021 from 192.168.126.1
(base) [root@master ~]# ssh slave2
Last login: Sun Aug  8 00:22:12 2021 from 192.168.126.1
[root@slave2 ~]# exit
logout
Connection to slave2 closed.
(base) [root@master ~]# 

VII. Initialize and start/stop Hadoop

The start/stop scripts live under $HADOOP/bin and $HADOOP/sbin, which were already added to PATH in /etc/profile.

# Format HDFS; this initialization is needed only once
(base) [root@master ~]# hdfs namenode -format

Start Hadoop:

# Afterwards, start Hadoop with just the following commands
(base) [root@master ~]# start-dfs.sh
(base) [root@master ~]# start-yarn.sh
(base) [root@master ~]# jps
20114 SecondaryNameNode
20274 ResourceManager
19898 NameNode
20364 Jps

[root@slave1 ~]# jps
52115 Jps
51973 NodeManager
51846 DataNode

[root@slave2 ~]# jps
19764 Jps
19626 NodeManager
19499 DataNode

# Verify with the bundled MapReduce pi example
(base) [root@master ~]# hadoop jar /usr/local/hadoop/hadoop-2.10.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.1.jar pi 10 10
...
	File Input Format Counters 
		Bytes Read=1180
	File Output Format Counters 
		Bytes Written=97
Job Finished in 308.242 seconds
Estimated value of Pi is 3.20000000000000000000
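Besides the pi job, a quick HDFS smoke test can confirm that files can be written and listed (a suggested extra check, not in the original steps):

(base) [root@master ~]# hdfs dfs -mkdir -p /test
(base) [root@master ~]# hdfs dfs -put /etc/hosts /test/
(base) [root@master ~]# hdfs dfs -ls /test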

Stop Hadoop:

(base) [root@master ~]# stop-yarn.sh
(base) [root@master ~]# stop-dfs.sh

VIII. No DataNode process in jps after starting Hadoop: Incompatible clusterIDs

On a worker node (slave1 or slave2), the DataNode log reports:

[root@slave1 ~]# cd /usr/local/hadoop/hadoop-2.10.1/logs/
[root@slave1 logs]# vim hadoop-root-datanode-slave1.log
java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/tmp/dfs/data: namenode clusterID = CID-596e7224-6277-4ce9-96bc-8749a5314def; datanode clusterID = CID-5bd2d27d-2581-4550-9359-43e5856d43f7

The clusterID in master's /usr/local/hadoop/hadoop-2.10.1/tmp/dfs/name/current/VERSION differs from the clusterID in the slave's /usr/local/hadoop/hadoop-2.10.1/tmp/dfs/data/current/VERSION.

Cause:

After the first format, Hadoop was started and used, and later hdfs namenode -format was run again. Reformatting generates a new clusterID for the NameNode, while the DataNodes keep their old clusterID, so they refuse to register.

Solution 1: delete the following directories on all three nodes, then re-run hdfs namenode -format on master and restart Hadoop (note that this wipes all HDFS data):

[root@master ~]# rm -rf /usr/local/hadoop/hadoop-2.10.1/tmp/dfs/

[root@slave1 ~]# rm -rf /usr/local/hadoop/hadoop-2.10.1/tmp/dfs/

[root@slave2 ~]# rm -rf /usr/local/hadoop/hadoop-2.10.1/tmp/dfs/

Solution 2: make the clusterIDs consistent by editing the DataNode's VERSION file on each slave.
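A sketch of that fix, using the storage paths configured in hdfs-site.xml above (back up the files before editing):

# On master: read the NameNode's clusterID
(base) [root@master ~]# grep clusterID /usr/local/hadoop/hadoop-2.10.1/tmp/dfs/name/current/VERSION

# On each slave: edit the DataNode's VERSION so its clusterID matches the NameNode's, then restart Hadoop
[root@slave1 ~]# vim /usr/local/hadoop/hadoop-2.10.1/tmp/dfs/data/current/VERSION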
 

