CentOS 7 Hadoop Cluster Deployment

Preface

For a single-node pseudo-distributed Hadoop deployment, see the earlier post; this post walks through a real multi-node deployment.

Role Assignment

Three machines: cdh01, cdh02, cdh03

Node    cdh01                           cdh02                          cdh03
HDFS    NameNode, DataNode              SecondaryNameNode, DataNode    DataNode
YARN    ResourceManager, NodeManager    NodeManager                    NodeManager

Unpacking, Installation, and Configuration

  • cdh01:
tar -zxvf hadoop-2.6.0-cdh5.15.1.tar.gz -C /opt/
Edit the configuration files:

1. Edit the HDFS configuration files under /opt/hadoop-2.6.0-cdh5.15.1/etc/hadoop

hadoop-env.sh
	export JAVA_HOME=/usr/java/jdk1.8.0_261

core-site.xml
	<!-- NameNode host and port; 8020 or 9000 both work, as long as the other components use the same value -->
	<property>
	    <name>fs.defaultFS</name>
	    <value>hdfs://cdh01:8020</value>
	</property>
	<!-- directory for Hadoop's temporary files -->
	<property>
	    <name>hadoop.tmp.dir</name>
	    <value>/opt/hadoop-2.6.0-cdh5.15.1/tmp</value>
	</property>
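
hadoop.tmp.dir points at a custom path. Hadoop normally creates it when the NameNode is formatted, so this is just an optional safeguard against permission surprises:

# matches the hadoop.tmp.dir value above
mkdir -p /opt/hadoop-2.6.0-cdh5.15.1/tmp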

hdfs-site.xml
	<!-- replication factor; it is capped by the number of DataNodes, so 3 is enough here -->
	<property>
	    <name>dfs.replication</name>
	    <value>3</value>
	</property>
	<property>
	    <name>dfs.namenode.secondary.http-address</name>
	    <value>cdh02:50090</value>
	</property>


slaves      # when HDFS starts, the start scripts read this file to decide where to launch DataNodes; only the node you run the scripts from needs it
	cdh01
	cdh02
	cdh03
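
The start scripts reach every host listed in slaves over SSH by hostname, so this assumes all three names resolve and passwordless SSH from cdh01 works. A minimal sketch, with placeholder IPs you must replace with your own:

# /etc/hosts on all three nodes (example addresses)
192.168.1.101  cdh01
192.168.1.102  cdh02
192.168.1.103  cdh03

# passwordless SSH from cdh01 to every node, including itself
ssh-keygen -t rsa
ssh-copy-id cdh01 && ssh-copy-id cdh02 && ssh-copy-id cdh03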
			
2. Edit the YARN configuration files under /opt/hadoop-2.6.0-cdh5.15.1/etc/hadoop
yarn-env.sh
	export JAVA_HOME=/usr/java/jdk1.8.0_261

yarn-site.xml
	<property>
	    <name>yarn.nodemanager.aux-services</name>
	    <value>mapreduce_shuffle</value>
	</property>

	<!-- host of the YARN ResourceManager -->
	<property>
	    <name>yarn.resourcemanager.hostname</name>
	    <value>cdh01</value>
	</property>
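
With the files edited, you can sanity-check that Hadoop picks them up; hdfs getconf is part of the standard CLI (full paths used because PATH is not set up until the next step):

/opt/hadoop-2.6.0-cdh5.15.1/bin/hdfs getconf -confKey fs.defaultFS
# expect: hdfs://cdh01:8020
/opt/hadoop-2.6.0-cdh5.15.1/bin/hdfs getconf -confKey dfs.namenode.secondary.http-address
# expect: cdh02:50090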
  • cdh02, cdh03
# Copy the Hadoop directory to the other two machines (see the rsync fallback below if you have no xsync)
xsync /opt/hadoop-2.6.0-cdh5.15.1
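
xsync here is the usual custom rsync wrapper script; if you don't have one, plain rsync does the same job:

# equivalent of xsync for this directory
for host in cdh02 cdh03; do
  rsync -av /opt/hadoop-2.6.0-cdh5.15.1 ${host}:/opt/
done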
  • Add environment variables on all three machines
vi /etc/profile
# append
export HADOOP_HOME=/opt/hadoop-2.6.0-cdh5.15.1
export PATH=$PATH:$HADOOP_HOME/bin

source /etc/profile
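
A quick check that the variables took effect on each machine:

hadoop version    # should report 2.6.0-cdh5.15.1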
  • Startup
    On the NameNode machine
# Before the first start, format the NameNode on cdh01. Run this once only:
# re-formatting generates a new clusterID, and DataNodes keeping the old ID
# will refuse to start until their data directories are wiped.
hdfs namenode -format

# Start everything
[root@cdh01 sbin]# ./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [cdh01]
cdh01: starting namenode, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/hadoop-root-namenode-cdh01.out
cdh03: starting datanode, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/hadoop-root-datanode-cdh03.out
cdh02: starting datanode, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/hadoop-root-datanode-cdh02.out
cdh01: starting datanode, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/hadoop-root-datanode-cdh01.out
Starting secondary namenodes [cdh02]
cdh02: starting secondarynamenode, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/hadoop-root-secondarynamenode-cdh02.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/yarn-root-resourcemanager-cdh01.out
cdh03: starting nodemanager, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/yarn-root-nodemanager-cdh03.out
cdh02: starting nodemanager, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/yarn-root-nodemanager-cdh02.out
cdh01: starting nodemanager, logging to /opt/hadoop-2.6.0-cdh5.15.1/logs/yarn-root-nodemanager-cdh01.out

# Check the daemons on each node with jps
[root@cdh01 sbin]# jps
8055 Jps
7454 NameNode
7806 ResourceManager
7902 NodeManager
7551 DataNode

[root@cdh02 ~]# jps
3318 SecondaryNameNode
3223 DataNode
3384 NodeManager
3498 Jps

[root@cdh03 ~]# jps
2480 NodeManager
2599 Jps
2379 DataNode
# All daemons on every node started successfully
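
If a daemon is missing from jps, the corresponding log under $HADOOP_HOME/logs usually says why; the file names follow the pattern shown in the startup output, with .log instead of .out:

tail -n 50 /opt/hadoop-2.6.0-cdh5.15.1/logs/hadoop-root-datanode-cdh01.log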

Take a look at the web UI: cdh01:50070 (the NameNode's HDFS UI)

(screenshot: NameNode web UI)

# You can also check the cluster from the command line
hdfs dfsadmin -report

Configured Capacity: 119101992960 (110.92 GB)
Present Capacity: 88014856192 (81.97 GB)
DFS Remaining: 88014819328 (81.97 GB)
DFS Used: 36864 (36 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: xxx.xxx.xxx.xxx:50010 (cdh01)
Hostname: cdh01
Decommission Status : Normal
Configured Capacity: 39700664320 (36.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 20862103552 (19.43 GB)
DFS Remaining: 18838548480 (17.54 GB)
DFS Used%: 0.00%
DFS Remaining%: 47.45%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Dec 06 22:08:04 CST 2020


Name: xxx.xxx.xxx.xxx:50010 (cdh02)
Hostname: cdh02
Decommission Status : Normal
Configured Capacity: 39700664320 (36.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 5113217024 (4.76 GB)
DFS Remaining: 34587435008 (32.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Dec 06 22:08:05 CST 2020


Name: xxx.xxx.xxx.xxx:50010 (cdh03)
Hostname: cdh03
Decommission Status : Normal
Configured Capacity: 39700664320 (36.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 5111816192 (4.76 GB)
DFS Remaining: 34588835840 (32.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Dec 06 22:08:05 CST 2020
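
As a final smoke test, write a file into HDFS and run the bundled pi example to exercise YARN. A sketch; the examples jar path below is the usual tarball layout and may differ in your build:

hdfs dfs -mkdir -p /test
hdfs dfs -put /etc/hosts /test/
hdfs dfs -ls /test

# adjust the jar path if your layout differs
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10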
# Stop Hadoop
[root@cdh01 sbin]# ./stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [cdh01]
cdh01: stopping namenode
cdh01: stopping datanode
cdh03: stopping datanode
cdh02: stopping datanode
Stopping secondary namenodes [cdh02]
cdh02: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
cdh03: stopping nodemanager
cdh02: stopping nodemanager
cdh01: stopping nodemanager
no proxyserver to stop