[Ruozedata Big Data Advanced, Day 3] Hadoop HA and YARN HA Deployment on Alibaba Cloud

HDFS HA && YARN HA

Perform the following steps on all machines at the same time.

Unpack the installation tarball

[hadoop@hadoop001 opt]$ tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C ../app/
[hadoop@hadoop001 opt]$ cd ../app
[hadoop@hadoop001 app]$ ll
total 4
drwxr-xr-x 14 hadoop hadoop 4096 Mar 24  2016 hadoop-2.6.0-cdh5.7.0

Create a symlink

[hadoop@hadoop001 app]$ ln -s /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/ /home/hadoop/app/hadoop
[hadoop@hadoop001 app]$ ll
total 4
lrwxrwxrwx  1 hadoop hadoop   39 Apr  6 17:29 hadoop -> /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/
drwxr-xr-x 14 hadoop hadoop 4096 Mar 24  2016 hadoop-2.6.0-cdh5.7.0

Configure the Hadoop environment variables

vi ~/.bash_profile
export HADOOP_HOME=/home/hadoop/app/hadoop
export PATH=${HADOOP_HOME}/bin:$PATH
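After editing the profile, it helps to reload it and confirm that the symlinked bin directory now leads the PATH. A minimal sketch using the paths from the steps above (on a real shell you would simply run `source ~/.bash_profile` rather than setting the variables by hand):

```shell
# Simulate the PATH update from ~/.bash_profile
# (on a live node: source ~/.bash_profile)
HADOOP_HOME=/home/hadoop/app/hadoop
PATH=${HADOOP_HOME}/bin:$PATH

# The hadoop wrapper scripts should now resolve through the symlink.
case "$PATH" in
  /home/hadoop/app/hadoop/bin:*) echo "hadoop bin is first on PATH" ;;
  *)                             echo "PATH not updated" ;;
esac
```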

Edit the configuration files

  • The configuration files have been prepared in advance

  • Create the directories that the configuration files reference

[hadoop@hadoop001 hadoop]$ mkdir /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/tmp
[hadoop@hadoop001 hadoop]$ mkdir -p /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/name
[hadoop@hadoop001 hadoop]$ mkdir -p /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/data
[hadoop@hadoop001 hadoop]$ mkdir -p /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/jn
  • Delete the stock configuration files
[hadoop@hadoop001 hadoop]$ pwd
/home/hadoop/app/hadoop/etc/hadoop
[hadoop@hadoop001 hadoop]$ rm -rf hdfs-site.xml core-site.xml slaves yarn-site.xml mapred-site.xml
[hadoop@hadoop001 hadoop]$ 

Upload the prepared configuration files.
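For reference, the HA-related settings such prepared files typically carry look roughly like the sketch below. This is an assumption, not the post's actual files: the nameservice id mycluster and the NameNode ids nn1/nn2 are illustrative placeholders, while 2181 and 8485 are the ZooKeeper and JournalNode defaults; only the hostnames and directory paths come from the steps above.

```xml
<!-- core-site.xml (excerpt) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/tmp</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
</property>

<!-- hdfs-site.xml (excerpt) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/data/dfs/jn</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```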

  • Confirm that ZooKeeper is healthy
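One way to check is to run zkServer.sh status on each of the three nodes and confirm that one reports leader and the others follower. A sketch that classifies the Mode line (the output is mocked here, since this snippet has no live quorum to query):

```shell
# On a live node, capture real output instead of the mock:
#   status_output=$(zkServer.sh status 2>&1)
status_output="Mode: follower"   # mocked output for illustration

case "$status_output" in
  *"Mode: leader"*)   zk_role="leader" ;;
  *"Mode: follower"*) zk_role="follower" ;;
  *)                  zk_role="unhealthy" ;;
esac
echo "zookeeper role on this node: $zk_role"
```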
  • Format the NameNode on hadoop001
[hadoop@hadoop001 hadoop]$ hadoop namenode -format

– Formatting fails with an error:
Per the message, "Unable to check if JNs are ready for formatting", the JournalNodes (JNs) must be started first (on all machines simultaneously).

  • Start the JournalNodes
[hadoop@hadoop001 hadoop]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop001.out
[hadoop@hadoop001 hadoop]$ jps
2804 JournalNode
2855 Jps
[hadoop@hadoop001 hadoop]$ 
  • Format the NameNode on hadoop001 again; this time it succeeds
  • Sync the data directory from hadoop001 to the second NN node, hadoop002 (alternatively, hdfs namenode -bootstrapStandby can be run on hadoop002 to pull the metadata over RPC)
[hadoop@hadoop001 sbin]$ cd ../
[hadoop@hadoop001 hadoop]$ scp -r data hadoop002:/home/hadoop/app/hadoop/
in_use.lock                               100%   14     0.0KB/s   00:00    
VERSION                                   100%  155     0.2KB/s   00:00    
seen_txid                                 100%    2     0.0KB/s   00:00    
fsimage_0000000000000000000.md5           100%   62     0.1KB/s   00:00    
fsimage_0000000000000000000               100%  338     0.3KB/s   00:00    
VERSION                                   100%  206     0.2KB/s   00:00    
[hadoop@hadoop001 hadoop]$ 
  • Format the ZKFC state in ZooKeeper
hdfs zkfc -formatZK

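To confirm the format took effect, one can list the ZooKeeper root and look for the hadoop-ha znode that -formatZK creates. Sketch with mocked output (on a live node you would query zkCli.sh instead):

```shell
# On a live node:
#   zk_root=$(echo "ls /" | zkCli.sh 2>/dev/null | tail -1)
zk_root="[hadoop-ha, zookeeper]"   # mocked output for illustration

case "$zk_root" in
  *hadoop-ha*) formatzk_ok="yes" ;;
  *)           formatzk_ok="no" ;;
esac
echo "hadoop-ha znode present: $formatzk_ok"
```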

  • Set JAVA_HOME in hadoop-env.sh
[hadoop@hadoop001 hadoop]$ cd /home/hadoop/app/hadoop/etc/hadoop
[hadoop@hadoop001 hadoop]$ vi hadoop-env.sh 
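The line to add looks like the following; the JDK path is an example assumption, so substitute the actual installation directory on these hosts:

```shell
# hadoop-env.sh -- point Hadoop at an explicit JDK.
# The path below is illustrative, not the cluster's real one.
export JAVA_HOME=/usr/java/jdk1.8.0_45
echo "JAVA_HOME=$JAVA_HOME"
```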

Startup

  • Start HDFS (running the script on a single node is enough):
[hadoop@hadoop001 hadoop]$ start-dfs.sh 
19/04/10 17:29:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001 hadoop002]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop002: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
hadoop003: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop003.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop001: journalnode running as process 3102. Stop it first.
hadoop003: journalnode running as process 2848. Stop it first.
hadoop002: journalnode running as process 2845. Stop it first.
19/04/10 17:29:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop002: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop002.out
hadoop001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop001.out
[hadoop@hadoop001 hadoop]$ jps
4161 Jps
4081 DFSZKFailoverController
3618 NameNode
3732 DataNode
3102 JournalNode
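The jps output above can also be checked mechanically. A sketch that flags any missing HDFS HA daemon (the jps output is mocked from the listing above; on a live node you would capture it fresh):

```shell
# On a live node:  jps_output=$(jps)
jps_output="4081 DFSZKFailoverController
3618 NameNode
3732 DataNode
3102 JournalNode"   # mocked from the listing above

missing=""
for d in NameNode DataNode JournalNode DFSZKFailoverController; do
  echo "$jps_output" | grep -qw "$d" || missing="$missing $d"
done

if [ -z "$missing" ]; then
  echo "all HDFS HA daemons running"
else
  echo "missing:$missing"
fi
```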

  • Start YARN HA
[hadoop@hadoop001 hadoop]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop002: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop002.out
hadoop003: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop003.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out


Note that start-yarn.sh only starts the ResourceManager on the local node, so the standby ResourceManager must be started manually on the other RM node:

yarn-daemon.sh start resourcemanager
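With both ResourceManagers up, yarn rmadmin -getServiceState can confirm that exactly one is active (rm1/rm2 stand for whatever RM ids yarn-site.xml defines). A sketch with mocked states:

```shell
# On a live cluster:
#   state1=$(yarn rmadmin -getServiceState rm1)
#   state2=$(yarn rmadmin -getServiceState rm2)
state1="active"; state2="standby"   # mocked for illustration

actives=0
for s in "$state1" "$state2"; do
  [ "$s" = "active" ] && actives=$((actives + 1))
done

if [ "$actives" -eq 1 ]; then
  echo "YARN HA healthy: exactly one active RM"
else
  echo "unexpected RM states: $state1 / $state2"
fi
```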