Hadoop Distributed Cluster Setup (Part 2)

5. Hadoop Cluster Setup

5.1 Download and Extract Hadoop

Visit: https://archive.apache.org/dist/hadoop/common/
Find the hadoop-2.7.3/ directory and click into it.

Right-click the hadoop-2.7.3.tar.gz file and choose "Copy Link Address".

Download it on the hadoop001 node by running the following commands:

[root@hadoop001 software]# cd /opt/module/software
[root@hadoop001 software]# wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

After the download completes, extract the archive to a fixed location:

[root@hadoop001 software]# tar -zxvf hadoop-2.7.3.tar.gz  -C  /opt/module/
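
As a quick optional sanity check, confirm that the archive was extracted where expected:

# The hadoop-2.7.3 directory should now exist under /opt/module
[root@hadoop001 software]# ls -d /opt/module/hadoop-2.7.3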

5.2 Configure Environment Variables

[root@hadoop001 software]# cd /opt/module/hadoop-2.7.3/ 
[root@hadoop001 hadoop-2.7.3]# vim /etc/profile

# Append the following to the end of /etc/profile
Press uppercase G to jump to the end of the file, then lowercase o to open a new line and enter insert mode.

export HADOOP_HOME=/opt/module/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin 

Press Esc to leave insert mode, then Shift+ZZ to save and quit.
# Reload the profile so the changes take effect
[root@hadoop001 hadoop-2.7.3]# source /etc/profile
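
To confirm the variables took effect (optional), check that the shell now resolves the Hadoop paths:

# HADOOP_HOME should print the install path and the hadoop command should be found on PATH
[root@hadoop001 hadoop-2.7.3]# echo $HADOOP_HOME
[root@hadoop001 hadoop-2.7.3]# hadoop version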

5.3 Modify the Configuration Files

[root@hadoop001 hadoop-2.7.3]# cd /opt/module/hadoop-2.7.3/etc/hadoop/ 

5.3.1 Modify hadoop-env.sh

[root@hadoop001 hadoop]# vim  hadoop-env.sh 

# Set JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_231

# Set HADOOP_CONF_DIR
export HADOOP_CONF_DIR=/opt/module/hadoop-2.7.3/etc/hadoop

5.3.2 Modify the slaves File

[root@hadoop001 hadoop]# vim slaves  
# Delete the existing entry
localhost
# Add the following hostnames
hadoop001
hadoop002
hadoop003
Save and exit.

5.3.3 Modify core-site.xml

[root@hadoop001 hadoop]# vim core-site.xml
Add the following inside the <configuration> tag:

<!-- Default file system: the HDFS nameservice name registered in ZooKeeper -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ns</value>
</property>
<!-- Base directory for Hadoop data -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/module/hadoop-2.7.3/data</value>
</property>
<!-- ZooKeeper connection addresses -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
</property>

Save and exit.

5.3.4 Edit hdfs-site.xml

[root@hadoop001 hadoop]# vim hdfs-site.xml 

Add the following inside the <configuration> tag:

<!-- Nameservice ID (the name registered in ZooKeeper) -->
<property>
  <name>dfs.nameservices</name>
  <value>ns</value>
</property>
<!-- The ns nameservice has two NameNodes, nn1 and nn2 -->
<property>
  <name>dfs.ha.namenodes.ns</name>
  <value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
  <name>dfs.namenode.rpc-address.ns.nn1</name>
  <value>hadoop001:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
  <name>dfs.namenode.http-address.ns.nn1</name>
  <value>hadoop001:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
  <name>dfs.namenode.rpc-address.ns.nn2</name>
  <value>hadoop002:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
  <name>dfs.namenode.http-address.ns.nn2</name>
  <value>hadoop002:50070</value>
</property>
<!-- Where the NameNode edit log is shared on the JournalNodes; the standby NameNode reads from this location to stay in sync (hot standby) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/ns</value>
</property>
<!-- Local directory where each JournalNode stores its data -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/opt/module/hadoop-2.7.3/data/journal</value>
</property>
<!-- Enable automatic failover when the active NameNode fails -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!-- Proxy provider clients use to locate the active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.ns</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing method -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<!-- sshfence requires passwordless SSH; private key to use -->
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_rsa</value>
</property>
<!-- NameNode metadata directory; optional, defaults to a path under hadoop.tmp.dir -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///opt/module/hadoop-2.7.3/data/hdfs/name</value>
</property>
<!-- DataNode block data directory; optional, defaults to a path under hadoop.tmp.dir -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///opt/module/hadoop-2.7.3/data/hdfs/data</value>
</property>
<!-- Replication factor -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<!-- Permission checking; false disables it, so any user can operate on HDFS -->
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

Save and exit.

5.3.5 Edit mapred-site.xml

[root@hadoop001 hadoop]# mv mapred-site.xml.template  mapred-site.xml 
[root@hadoop001 hadoop]# vim  mapred-site.xml

Add the following inside the <configuration> tag:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

5.3.6 Edit yarn-site.xml

[root@hadoop001 hadoop]# vim  yarn-site.xml 

Add the following inside the <configuration> tag:

<!-- Enable YARN ResourceManager high availability -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<!-- IDs of the two ResourceManagers -->
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<!-- Host for rm1 -->
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>hadoop001</value>
</property>
<!-- Host for rm2 -->
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>hadoop003</value>
</property>
<!-- Enable ResourceManager state recovery -->
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<!-- Implementation class for the RM state store -->
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- ZooKeeper addresses -->
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
</property>
<!-- Cluster ID (alias) for the YARN cluster -->
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>ns-yarn</value>
</property>
<!-- Auxiliary service the NodeManager loads: the MapReduce shuffle service -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<!-- Default ResourceManager host -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop003</value>
</property>

Save and exit.
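
Before distributing the files, it can be worth checking that the edited XML is well formed. A minimal sketch, assuming the xmllint tool from libxml2 is installed (it may not be present on a minimal install); no output means the files parse cleanly:

[root@hadoop001 hadoop]# xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml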

5.4 Distribute Hadoop

Copy the Hadoop directory to hadoop002 and hadoop003:
[root@hadoop001 module]# cd /opt/module/
[root@hadoop001 module]# scp -r hadoop-2.7.3/ hadoop002:`pwd` 
[root@hadoop001 module]# scp -r hadoop-2.7.3/ hadoop003:`pwd` 
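
A quick optional way to confirm the copies landed, assuming passwordless SSH between the nodes is already configured:

[root@hadoop001 module]# ssh hadoop002 "ls -d /opt/module/hadoop-2.7.3"
[root@hadoop001 module]# ssh hadoop003 "ls -d /opt/module/hadoop-2.7.3"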

Then configure the environment variables on the other two machines.
On hadoop002:
[root@hadoop002 zookeeper-3.4.9]# vim /etc/profile 
# Add the following lines
export HADOOP_HOME=/opt/module/hadoop-2.7.3 
export PATH=.:$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin 
Save and exit.
[root@hadoop002 zookeeper-3.4.9]# source /etc/profile
[root@hadoop002 zookeeper-3.4.9]# hadoop version

On hadoop003:
[root@hadoop003 zookeeper-3.4.9]# vim /etc/profile 
# Add the following lines
export HADOOP_HOME=/opt/module/hadoop-2.7.3 
export PATH=.:$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin 
Save and exit.
[root@hadoop003 zookeeper-3.4.9]# source /etc/profile
[root@hadoop003 zookeeper-3.4.9]# hadoop version

5.5 Start the Hadoop Cluster

Note: ZooKeeper must be started first.

5.5.1 Format the HA State in ZooKeeper on hadoop001

[root@hadoop001 module]# pwd
/opt/module
[root@hadoop001 module]# hdfs zkfc -formatZK
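
If you want to verify the result, formatting creates a /hadoop-ha/ns znode in ZooKeeper. A quick check with the ZooKeeper client, assuming ZooKeeper is installed under /opt/module/zookeeper-3.4.9 as in the earlier part of this series:

# The nameservice "ns" should be listed under /hadoop-ha
[root@hadoop001 module]# /opt/module/zookeeper-3.4.9/bin/zkCli.sh -server hadoop001:2181 ls /hadoop-ha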

A success message appears at the end.

5.5.2 Start the JournalNode on All Three Nodes

[root@hadoop001 ~]# hadoop-daemon.sh start journalnode
starting journalnode, logging to  /opt/module/hadoop-2.7.3/logs/hadoop-root-journalnode-hadoop001.out
[root@hadoop001 ~]# jps 
Note: the JournalNode must be started on all three machines.
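
Instead of logging in to each machine, the JournalNodes can also be started from hadoop001 in one loop, assuming passwordless SSH is in place (a convenience sketch; starting them on each node as above works just as well):

[root@hadoop001 ~]# for h in hadoop001 hadoop002 hadoop003; do ssh $h "source /etc/profile; hadoop-daemon.sh start journalnode; jps"; done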

5.5.3 Format and Start the NameNode on hadoop001

[root@hadoop001 module]# hadoop namenode -format

# Start the NameNode on hadoop001
[root@hadoop001 module]# hadoop-daemon.sh start namenode

5.5.4 Bootstrap and Start the NameNode on hadoop002

# On hadoop002, synchronize the metadata from the active NameNode on hadoop001
[root@hadoop002 module]# hadoop namenode -bootstrapStandby

# Start the NameNode on hadoop002
[root@hadoop002 module]# hadoop-daemon.sh start namenode

Note: formatting the NameNode more than once will prevent the DataNodes from starting. Delete everything under the data directory (by default under /tmp; if hadoop.tmp.dir is set in core-site.xml, it is under that custom directory instead), reformat, and then restart the cluster.

Reason: when the NameNode is formatted, it generates a clusterID. Formatting it again generates a new clusterID, which no longer matches the clusterID recorded by the existing DataNode data, so the DataNodes fail to start.
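
If you run into this, you can compare the clusterID values directly before wiping anything, using the directories configured in hdfs-site.xml above (a diagnostic sketch):

# The clusterID in the NameNode and DataNode VERSION files must match
[root@hadoop001 ~]# grep clusterID /opt/module/hadoop-2.7.3/data/hdfs/name/current/VERSION
[root@hadoop001 ~]# grep clusterID /opt/module/hadoop-2.7.3/data/hdfs/data/current/VERSION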

5.5.5 Start the DataNode on All Three Nodes

# Run on all three machines
[root@hadoop001 ~]# hadoop-daemon.sh start datanode
[root@hadoop002 ~]# hadoop-daemon.sh start datanode
[root@hadoop003 ~]# hadoop-daemon.sh start datanode

5.5.6 Start the FailoverController (ZKFC) on hadoop001 and hadoop002

[root@hadoop001 module]# hadoop-daemon.sh start zkfc
[root@hadoop002 ~]# hadoop-daemon.sh start zkfc
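
Once both ZKFCs are up, one NameNode is elected active. This can be confirmed from the command line as well as from the Web UI (optional check; nn1 and nn2 are the IDs defined in hdfs-site.xml above):

# One should report "active", the other "standby"
[root@hadoop001 module]# hdfs haadmin -getServiceState nn1
[root@hadoop001 module]# hdfs haadmin -getServiceState nn2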

5.5.7 Run start-yarn.sh on hadoop003

[root@hadoop003 ~]# start-yarn.sh

image-20220428143736072

5.5.8 Start the ResourceManager on hadoop001

[root@hadoop001 module]# yarn-daemon.sh start  resourcemanager
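
With both ResourceManagers running, you can check which one is active (optional; rm1 and rm2 are the IDs defined in yarn-site.xml above):

# One ResourceManager should be active, the other standby
[root@hadoop001 module]# yarn rmadmin -getServiceState rm1
[root@hadoop001 module]# yarn rmadmin -getServiceState rm2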

5.5.9 Check the Process Status

[root@hadoop001 module]# jps   # 8 processes
[root@hadoop002 ~]# jps        # 7 processes
[root@hadoop003 ~]# jps        # 6 processes

5.6 Access the Hadoop Web UIs

5.6.1 HDFS Management UI

URL: http://192.168.5.101:50070/

Note: to manually switch hadoop001 from standby to active, stop the ZKFC on hadoop002:
[root@hadoop002 data]# hadoop-daemon.sh stop zkfc
# Wait a few seconds and refresh the page; hadoop001 becomes active

# Then start the ZKFC on hadoop002 again
[root@hadoop002 data]# hadoop-daemon.sh start zkfc
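
The same switch can also be confirmed from the command line instead of refreshing the page (optional):

# After the ZKFC on hadoop002 is stopped, nn1 (hadoop001) should report "active"
[root@hadoop001 ~]# hdfs haadmin -getServiceState nn1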

5.6.2 YARN Resource Management UI

URL: http://192.168.5.103:8088/

Both management UIs are accessible.

5.7 Test the HDFS Cluster

[root@hadoop001 ~]# cd /opt/module/software

[root@hadoop001 software]# hdfs dfs -put zookeeper-3.4.9.tar.gz /

[root@hadoop001 software]# hdfs dfs -ls /
Found 1 items
-rw-r--r--   3 root supergroup   22724574 2022-04-29 13:52 /zookeeper-3.4.9.tar.gz
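
For a slightly more thorough check, the file can be read back out of HDFS and compared with the original (optional round trip; /tmp is used here only as a scratch location):

[root@hadoop001 software]# hdfs dfs -get /zookeeper-3.4.9.tar.gz /tmp/zookeeper-3.4.9.tar.gz
[root@hadoop001 software]# md5sum zookeeper-3.4.9.tar.gz /tmp/zookeeper-3.4.9.tar.gz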

# The uploaded file can also be seen in the HDFS Web UI

5.8 Test the YARN Cluster

# Run a MapReduce example job (estimating Pi)

[root@hadoop001 software]# cd /opt/module/hadoop-2.7.3/

[root@hadoop001 hadoop-2.7.3]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 10 10

Number of Maps  = 10
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
22/04/29 13:57:06 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
22/04/29 13:57:07 INFO input.FileInputFormat: Total input paths to process : 10
22/04/29 13:57:07 INFO mapreduce.JobSubmitter: number of splits:10
22/04/29 13:57:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1651152082276_0001
22/04/29 13:57:08 INFO impl.YarnClientImpl: Submitted application application_1651152082276_0001
22/04/29 13:57:08 INFO mapreduce.Job: The url to track the job: http://hadoop003:8088/proxy/application_1651152082276_0001/
22/04/29 13:57:08 INFO mapreduce.Job: Running job: job_1651152082276_0001
22/04/29 13:57:22 INFO mapreduce.Job: Job job_1651152082276_0001 running in uber mode : false
22/04/29 13:57:22 INFO mapreduce.Job:  map 0% reduce 0%
22/04/29 13:57:36 INFO mapreduce.Job:  map 10% reduce 0%
22/04/29 13:57:40 INFO mapreduce.Job:  map 20% reduce 0%
22/04/29 13:57:59 INFO mapreduce.Job:  map 20% reduce 7%
22/04/29 13:58:02 INFO mapreduce.Job:  map 40% reduce 7%
22/04/29 13:58:03 INFO mapreduce.Job:  map 70% reduce 7%
22/04/29 13:58:05 INFO mapreduce.Job:  map 90% reduce 7%
22/04/29 13:58:07 INFO mapreduce.Job:  map 100% reduce 7%
22/04/29 13:58:08 INFO mapreduce.Job:  map 100% reduce 100%
22/04/29 13:58:09 INFO mapreduce.Job: Job job_1651152082276_0001 completed successfully
22/04/29 13:58:10 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=226
                FILE: Number of bytes written=1335620
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2510
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=43
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters 
                Launched map tasks=10
                Launched reduce tasks=1
                Data-local map tasks=10
                Total time spent by all maps in occupied slots (ms)=332433
                Total time spent by all reduces in occupied slots (ms)=28389
                Total time spent by all map tasks (ms)=332433
                Total time spent by all reduce tasks (ms)=28389
                Total vcore-milliseconds taken by all map tasks=332433
                Total vcore-milliseconds taken by all reduce tasks=28389
                Total megabyte-milliseconds taken by all map tasks=340411392
                Total megabyte-milliseconds taken by all reduce tasks=29070336
        Map-Reduce Framework
                Map input records=10
                Map output records=20
                Map output bytes=180
                Map output materialized bytes=280
                Input split bytes=1330
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=280
                Reduce input records=20
                Reduce output records=0
                Spilled Records=40
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=7687
                CPU time spent (ms)=13240
                Physical memory (bytes) snapshot=2431381504
                Virtual memory (bytes) snapshot=23346368512
                Total committed heap usage (bytes)=2021130240
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=1180
        File Output Format Counters 
                Bytes Written=97
Job Finished in 64.072 seconds
Estimated value of Pi is 3.20000000000000000000

# The calculation completed successfully
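
The finished job can also be confirmed from the YARN command line (optional; the same information is shown in the Web UI on port 8088):

[root@hadoop001 hadoop-2.7.3]# yarn application -list -appStates FINISHED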

5.9 Stop the Cluster

Run stop-all.sh on hadoop001:
[root@hadoop001 hadoop-2.7.3]# stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [hadoop001 hadoop002]
hadoop001: stopping namenode
hadoop002: stopping namenode
hadoop001: stopping datanode
hadoop002: stopping datanode
hadoop003: stopping datanode
Stopping journal nodes [hadoop001 hadoop002 hadoop003]
hadoop001: stopping journalnode
hadoop002: stopping journalnode
hadoop003: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop001: stopping zkfc
hadoop002: stopping zkfc
stopping yarn daemons
stopping resourcemanager
hadoop001: stopping nodemanager
hadoop003: stopping nodemanager
hadoop002: stopping nodemanager
hadoop001: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
hadoop003: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
hadoop002: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop

5.10 Start the Cluster

With ZooKeeper already running, bring the whole cluster back up by running start-all.sh on hadoop001. Before starting, jps shows only the ZooKeeper process:
[root@hadoop001 hadoop-2.7.3]# stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [hadoop001 hadoop002]
hadoop001: stopping namenode
hadoop002: stopping namenode
hadoop001: stopping datanode
hadoop002: stopping datanode
hadoop003: stopping datanode
Stopping journal nodes [hadoop001 hadoop002 hadoop003]
hadoop001: stopping journalnode
hadoop002: stopping journalnode
hadoop003: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop001: stopping zkfc
hadoop002: stopping zkfc
stopping yarn daemons
stopping resourcemanager
hadoop001: stopping nodemanager
hadoop003: stopping nodemanager
hadoop002: stopping nodemanager
hadoop001: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
hadoop003: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
hadoop002: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
[root@hadoop001 hadoop-2.7.3]# jps
1654 QuorumPeerMain
18687 Jps
[root@hadoop001 hadoop-2.7.3]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop001 hadoop002]
hadoop001: starting namenode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop001.out
hadoop002: starting namenode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop002.out
hadoop001: starting datanode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop001.out
hadoop002: starting datanode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop002.out
hadoop003: starting datanode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop003.out
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop003: starting journalnode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-journalnode-hadoop003.out
hadoop001: starting journalnode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-journalnode-hadoop001.out
hadoop002: starting journalnode, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-journalnode-hadoop002.out
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop001: starting zkfc, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-zkfc-hadoop001.out
hadoop002: starting zkfc, logging to /opt/module/hadoop-2.7.3/logs/hadoop-root-zkfc-hadoop002.out
starting yarn daemons
starting resourcemanager, logging to /opt/module/hadoop-2.7.3/logs/yarn-root-resourcemanager-hadoop001.out
hadoop002: starting nodemanager, logging to /opt/module/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop002.out
hadoop003: starting nodemanager, logging to /opt/module/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop003.out
hadoop001: starting nodemanager, logging to /opt/module/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop001.out

[root@hadoop001 hadoop-2.7.3]# jps
18944 DataNode
19620 Jps
19557 NodeManager
1654 QuorumPeerMain
19447 ResourceManager
19321 DFSZKFailoverController
19146 JournalNode
18827 NameNode

[root@hadoop002 ~]# jps
19424 Jps
19265 NodeManager
18932 DataNode
19174 DFSZKFailoverController
1624 QuorumPeerMain
19035 JournalNode
18847 NameNode

[root@hadoop003 ~]# jps
15041 JournalNode
15284 Jps
2373 QuorumPeerMain
14938 DataNode
15148 NodeManager
6445 ResourceManager

Hadoop Distributed Cluster Setup (Part 1)
Hadoop Distributed Cluster Setup (Part 3)
