Hadoop + ZooKeeper HA cluster deployment, and error diagnosis

http://archive-primary.cloudera.com/cdh5/cdh/5/

I. Preparation
1. Change the Linux hostname (do this on every node)
[root@h24 ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=h24
2. Change the IP address in /etc/sysconfig/network-scripts/ifcfg-eth0
3. Map hostnames to IPs (h24 and h25 are masters; h21, h22, h23 are slaves)
[root@h24 ~]# vim /etc/hosts
192.168.1.21 h21
192.168.1.22 h22
192.168.1.23 h23
192.168.1.24 h24
192.168.1.25 h25
###### NOTE ###### If you are on rented servers or cloud hosts (e.g. Huawei Cloud or Alibaba Cloud),
/etc/hosts must map the *internal* IP addresses to the hostnames.
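The mapping above can be appended in one step. A minimal sketch, which writes to a local `hosts.sample` file by default so it can be tried safely; on a real node, run it as root with `HOSTS_FILE=/etc/hosts` (the `hosts.sample` name is only an illustration):

```shell
# Append the cluster's name/IP mappings. HOSTS_FILE defaults to a scratch file
# so this can be tested without touching the system; set HOSTS_FILE=/etc/hosts
# (as root) on the real nodes.
HOSTS_FILE=${HOSTS_FILE:-./hosts.sample}
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.21 h21
192.168.1.22 h22
192.168.1.23 h23
192.168.1.24 h24
192.168.1.25 h25
EOF
```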

4. Disable the firewall
# Check the firewall status
[root@h24 ~]# service iptables status
# Stop the firewall
[root@h24 ~]# service iptables stop
# Check whether the firewall starts on boot
[root@h24 ~]# chkconfig iptables --list
# Disable the firewall on boot
[root@h24 ~]# chkconfig iptables off
Create a hadoop user on all five machines
[root@h24 ~]# useradd hadoop
[root@h24 ~]# passwd hadoop
hadoop password: 123456
The first four steps are done as root; reboot each machine afterwards.
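The two iptables commands have to run on all five nodes. A sketch that only *prints* the per-host commands for review (a dry run; remove the `echo` once root ssh access between nodes is available — the `firewall_cmds.txt` file name is just an illustration):

```shell
# Print the command that disables iptables (now and on boot) for every node.
# Dry run: review firewall_cmds.txt, then run the lines by hand.
for host in h21 h22 h23 h24 h25; do
  echo "ssh root@$host 'service iptables stop; chkconfig iptables off'"
done > firewall_cmds.txt
cat firewall_cmds.txt
```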

5. Passwordless ssh (as the hadoop user)
[root@h24 ~]# su - hadoop
# Go to the home directory and generate an ssh key pair
cd ~
ssh-keygen -t rsa   (press Enter four times)
# This creates two files under ~/.ssh: id_rsa (private key) and id_rsa.pub (public key)
# Copy the public key to every machine you want passwordless login to
[hadoop@h21 ~]$ ssh-keygen -t rsa
[hadoop@h22 ~]$ ssh-keygen -t rsa
[hadoop@h23 ~]$ ssh-keygen -t rsa
[hadoop@h24 ~]$ ssh-keygen -t rsa
[hadoop@h25 ~]$ ssh-keygen -t rsa

[hadoop@h21 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h21
[hadoop@h21 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h22
[hadoop@h21 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h23
[hadoop@h21 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h24
[hadoop@h21 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h25

[hadoop@h22 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h21
[hadoop@h22 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h22
[hadoop@h22 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h23
[hadoop@h22 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h24
[hadoop@h22 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h25

[hadoop@h23 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h21
[hadoop@h23 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h22
[hadoop@h23 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h23
[hadoop@h23 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h24
[hadoop@h23 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h25

[hadoop@h24 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h21
[hadoop@h24 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h22
[hadoop@h24 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h23
[hadoop@h24 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h24
[hadoop@h24 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h25

[hadoop@h25 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h21
[hadoop@h25 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h22
[hadoop@h25 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h23
[hadoop@h25 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h24
[hadoop@h25 ~]$ ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub h25
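The 25 `ssh-copy-id` invocations above all follow one pattern; on each node they can be generated like this (a dry run — the commands are written to a review file, assumed here to be `sshcopy_cmds.txt` — then run by hand, or the loop run directly without `echo`):

```shell
# On each node, as the hadoop user: generate the five ssh-copy-id calls that
# push this node's public key to every node, itself included.
for host in h21 h22 h23 h24 h25; do
  echo "ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub $host"
done > sshcopy_cmds.txt
cat sshcopy_cmds.txt
```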

6. Install the JDK and configure environment variables (as root; adjust the paths to your own layout)
Remove the JDK that ships with the OS (so the newly installed JDK takes effect):
[root@h24 ~]# rpm -e --nodeps java-1.4.2-gcj-compat-1.4.2.0-40jpp.115

[root@h24 tmp]# tar -zxvf jdk-7u25-linux-i586.tar.gz -C /usr/local
[root@h24 ~]# vim /etc/profile   (or, per user, vim .bash_profile)
export JAVA_HOME=/usr/local/jdk1.7.0_25
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile   (or source .bash_profile)
Check the Java version:
[root@h24 ~]# java -version
————————————————————————————————


(Alternatively, in /etc/bashrc:
export JAVA_HOME=/usr/local/jdk1.7.0_25
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$PATH)


————————————————————————————————

II. Cluster plan:
Hostname  IP            Software installed        Processes
h24       192.168.1.24  jdk, hadoop               NameNode, ResourceManager, DFSZKFailoverController (zkfc)
h25       192.168.1.25  jdk, hadoop               NameNode, ResourceManager, DFSZKFailoverController (zkfc)
h21       192.168.1.21  jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
h22       192.168.1.22  jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
h23       192.168.1.23  jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain

III. Installation steps:
1. Install and configure the ZooKeeper cluster (on h21)
1.1 Unpack
[root@h21 tmp]# tar zxvf zookeeper-3.4.5-cdh5.5.2.tar.gz -C /usr/local/
1.2 Edit the configuration
[root@h21 tmp]# cd /usr/local/zookeeper-3.4.5-cdh5.5.2/conf/
[root@h21 conf]# cp zoo_sample.cfg zoo.cfg
[root@h21 conf]# vim zoo.cfg
Change/add:
dataDir=/usr/local/zookeeper-3.4.5-cdh5.5.2/data
dataLogDir=/usr/local/zookeeper-3.4.5-cdh5.5.2/log
Append at the end (the ZooKeeper nodes are h21, h22, h23 per the cluster plan):
server.1=192.168.1.21:2888:3888
server.2=192.168.1.22:2888:3888
server.3=192.168.1.23:2888:3888
Save and quit.
Then create the data and log directories:
[root@h21 ~]# cd /usr/local/zookeeper-3.4.5-cdh5.5.2/
[root@h21 zookeeper-3.4.5-cdh5.5.2]# mkdir -pv data log
Create an empty myid file:
touch /usr/local/zookeeper-3.4.5-cdh5.5.2/data/myid
Then write this node's ID into it:
echo 1 > /usr/local/zookeeper-3.4.5-cdh5.5.2/data/myid
1.3 Copy the configured zookeeper to the other nodes
[root@h21 ~]# scp -r /usr/local/zookeeper-3.4.5-cdh5.5.2/ h22:/usr/local
[root@h21 ~]# scp -r /usr/local/zookeeper-3.4.5-cdh5.5.2/ h23:/usr/local
Note: update /usr/local/zookeeper-3.4.5-cdh5.5.2/data/myid on h22 and h23 to match:
h22:
echo 2 > /usr/local/zookeeper-3.4.5-cdh5.5.2/data/myid
h23:
echo 3 > /usr/local/zookeeper-3.4.5-cdh5.5.2/data/myid
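The `server.N` lines in zoo.cfg and the per-host `myid` values must stay in step. A small sketch that derives both from one list (IPs taken from the cluster plan, ZooKeeper on h21–h23; the `zk_plan.txt` file name is just an illustration):

```shell
# Derive the zoo.cfg server entries together with each host's myid: the N in
# server.N is exactly the number written into that host's data/myid file.
i=1
: > zk_plan.txt
for ip in 192.168.1.21 192.168.1.22 192.168.1.23; do
  echo "server.$i=$ip:2888:3888   # this host's data/myid contains: $i" >> zk_plan.txt
  i=$((i+1))
done
cat zk_plan.txt
```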

2. Install and configure the Hadoop cluster (on h24)
2.1 Unpack
[root@h24 tmp]# tar -zxvf hadoop-2.6.0-cdh5.5.2.tar.gz -C /usr/local/
[root@h24 local]# mv hadoop-2.6.0-cdh5.5.2 hadoop-2.6.0
2.2 Configure HDFS (in Hadoop 2.x all configuration files live under $HADOOP_HOME/etc/hadoop)
# Add hadoop to the environment variables
vim /etc/profile   (or, per user, vim .bash_profile)
export JAVA_HOME=/usr/local/jdk1.7.0_25
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

[root@h24 local]# cd /usr/local/hadoop-2.6.0/etc/hadoop

2.2.1 Edit hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.7.0_25

2.2.2 Edit core-site.xml
<configuration>
  <!-- Set the HDFS nameservice to ns1 -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1/</value>
  </property>
  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.6.0/tmp</value>
  </property>
  <!-- ZooKeeper quorum -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>h21:2181,h22:2181,h23:2181</value>
  </property>
</configuration>

2.2.3 Edit hdfs-site.xml (NameNode HA)
<configuration>
  <!-- The HDFS nameservice, ns1; must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- ns1 has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>h24:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>h24:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>h25:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>h25:50070</value>
  </property>
  <!-- Where the shared NameNode edit log is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://h21:8485;h22:8485;h23:8485/ns1</value>
  </property>
  <!-- Where each JournalNode stores its data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/hadoop-2.6.0/journaldata</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Proxy provider clients use to find the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods; list one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- sshfence needs passwordless ssh; point it at the private key -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <!-- sshfence connect timeout (milliseconds) -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>

2.2.4 Create the file from the template:
[root@h24 hadoop]# cp mapred-site.xml.template mapred-site.xml
Edit mapred-site.xml:
<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

2.2.5 Edit yarn-site.xml
<configuration>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Cluster id for the RM pair -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <!-- Logical names of the two RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Host for each RM -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>h24</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>h25</value>
  </property>
  <!-- ZooKeeper quorum -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>h21:2181,h22:2181,h23:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

2.2.6 Edit slaves (lists the worker nodes)
h21
h22
h23
————————————————————————————————


2.2.7 Configure passwordless login
# h24 must be able to log in to h21, h22, h23 and h25 without a password
# Generate a key pair on h24
ssh-keygen -t rsa
# Copy the public key to every node, including this one
ssh-copy-id h21
ssh-copy-id h22
ssh-copy-id h23
ssh-copy-id h24
ssh-copy-id h25
# Note: the ResourceManager nodes need passwordless login to the NodeManager nodes
# Note: the two NameNodes need passwordless login to each other; don't forget h25 -> h24
Generate a key pair on h25:
ssh-keygen -t rsa
ssh-copy-id h24



2.4 Copy the configured hadoop to the other nodes
[root@h24 ~]$ scp -r /usr/local/hadoop-2.6.0/ h21:/usr/local/
[root@h24 ~]$ scp -r /usr/local/hadoop-2.6.0/ h22:/usr/local/
[root@h24 ~]$ scp -r /usr/local/hadoop-2.6.0/ h23:/usr/local/
[root@h24 ~]$ scp -r /usr/local/hadoop-2.6.0/ h25:/usr/local/
Set ownership:
[root@h24 ~]# chown hadoop.hadoop /usr/local/hadoop-2.6.0/ -R
[root@h25 ~]# chown hadoop.hadoop /usr/local/hadoop-2.6.0/ -R
[root@h21 ~]# chown hadoop.hadoop /usr/local/hadoop-2.6.0/ -R
[root@h22 ~]# chown hadoop.hadoop /usr/local/hadoop-2.6.0/ -R
[root@h23 ~]# chown hadoop.hadoop /usr/local/hadoop-2.6.0/ -R
Configure the environment variables:
[root@h24 ~]# su - hadoop
[hadoop@h24 ~]$ vi .bash_profile   (or, system-wide, /etc/profile)

export JAVA_HOME=/usr/local/jdk1.7.0_25
export JAVA_BIN=/usr/local/jdk1.7.0_25/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH

HADOOP_HOME=/usr/local/hadoop-2.6.0
HADOOP_SBIN=/usr/local/hadoop-2.6.0/sbin
HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME HADOOP_CONF_DIR PATH

[root@h25 ~]# su - hadoop
[hadoop@h25 ~]$ vi .bash_profile   (or, system-wide, /etc/profile)

export JAVA_HOME=/usr/local/jdk1.7.0_25
export JAVA_BIN=/usr/local/jdk1.7.0_25/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH

HADOOP_HOME=/usr/local/hadoop-2.6.0
HADOOP_SBIN=/usr/local/hadoop-2.6.0/sbin
HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME HADOOP_CONF_DIR PATH

[hadoop@h24 ~]$ source .bash_profile   (or log out and back in, or reboot)

### NOTE: follow the steps below strictly in order

2.5 Start the ZooKeeper cluster (start zk on h21, h22 and h23)
[root@h21 ~]$ cd /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/
[root@h21 bin]$ ./zkServer.sh start
[root@h21 bin]$ ./zkServer.sh status

[root@h22 ~]$ cd /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/
[root@h22 bin]$ ./zkServer.sh start
[root@h22 bin]$ ./zkServer.sh status

[root@h23 ~]$ cd /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/
[root@h23 bin]$ ./zkServer.sh start
[root@h23 bin]$ ./zkServer.sh status
# Check the status: there should be one leader and two followers

2.6 Start the JournalNodes (run on h21, h22 and h23)
[hadoop@h21 hadoop]$ cd /usr/local/hadoop-2.6.0
[hadoop@h21 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start journalnode
[hadoop@h21 hadoop-2.6.0]$ jps
12744 JournalNode
4133 QuorumPeerMain
12790 Jps

[hadoop@h22 hadoop]$ cd /usr/local/hadoop-2.6.0
[hadoop@h22 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start journalnode
[hadoop@h22 hadoop-2.6.0]$ jps
12744 JournalNode
4133 QuorumPeerMain
12790 Jps

[hadoop@h23 hadoop]$ cd /usr/local/hadoop-2.6.0
[hadoop@h23 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start journalnode
[hadoop@h23 hadoop-2.6.0]$ jps
12744 JournalNode
4133 QuorumPeerMain
12790 Jps

# Running jps confirms that h21, h22 and h23 now each have a JournalNode process




2.7 Format HDFS
# On h24 run:
[hadoop@h24 hadoop]$ cd /usr/local/hadoop-2.6.0/
[hadoop@h24 hadoop-2.6.0]$ bin/hdfs namenode -format
# Formatting generates files under the hadoop.tmp.dir configured in core-site.xml,
# here /usr/local/hadoop-2.6.0/tmp. Copy /usr/local/hadoop-2.6.0/tmp to
# /usr/local/hadoop-2.6.0/ on h25:
[hadoop@h24 hadoop-2.6.0]$ scp -r tmp/ h25:/usr/local/hadoop-2.6.0/

## Alternatively (recommended), run on h25 instead: hdfs namenode -bootstrapStandby

2.8 Format ZKFC (run on h24 only)
[hadoop@h24 hadoop-2.6.0]$ bin/hdfs zkfc -formatZK

2.9 Start HDFS (run on h24; enter the password twice when prompted)
[hadoop@h24 hadoop-2.6.0]$ sbin/start-dfs.sh



[hadoop@h24 hadoop-2.6.0]$ sbin/start-dfs.sh
18/06/21 02:01:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [h24 h25]
hadoop@h25's password: h24: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-namenode-h24.out

h25: Connection closed by 192.168.1.25
h21: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-h21.out
h22: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-h22.out
h23: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-h23.out
Starting journal nodes [h21 h22 h23]
h22: journalnode running as process 12589. Stop it first.
h23: journalnode running as process 12709. Stop it first.
h21: journalnode running as process 12744. Stop it first.
18/06/21 02:06:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [h24 h25]
hadoop@h25's password: h24: starting zkfc, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-zkfc-h24.out



[hadoop@h24 hadoop-2.6.0]$ sbin/start-dfs.sh
18/06/21 02:09:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [h24 h25]
hadoop@h25's password: h24: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-namenode-h24.out

h25: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-namenode-h25.out
h22: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-h22.out
h21: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-h21.out
h23: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-datanode-h23.out
Starting journal nodes [h21 h22 h23]
h21: starting journalnode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-journalnode-h21.out
h23: starting journalnode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-journalnode-h23.out
h22: starting journalnode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-journalnode-h22.out
18/06/21 02:09:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [h24 h25]
hadoop@h25's password: h24: starting zkfc, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-zkfc-h24.out

h25: starting zkfc, logging to /usr/local/hadoop-2.6.0/logs/hadoop-hadoop-zkfc-h25.out

2.10 Start YARN (##### NOTE #####: run start-yarn.sh on h24)
[hadoop@h24 hadoop-2.6.0]$ sbin/start-yarn.sh
Start the second ResourceManager on h25 with yarn-daemon.sh start resourcemanager:
[hadoop@h25 sbin]$ ./yarn-daemon.sh start resourcemanager

At this point hadoop-2.6.0 is fully configured; open a browser and visit:
http://192.168.1.24:50070
NameNode 'h24:9000' (active)
http://192.168.1.25:50070
NameNode 'h25:9000' (standby)
Verify HDFS HA

First upload a file to HDFS from h24:
[hadoop@h24 hadoop]$ hadoop fs -mkdir /profile
hadoop fs -put /etc/profile /profile
hadoop fs -ls /
Then kill the active NameNode.
——————————————————————————————————————————————————————


[hadoop@h24 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start namenode
namenode running as process 17838. Stop it first.
(This message is another way to discover the process id.)


————————————————————————————————————————————————————————
[hadoop@h24 hadoop]$ ps -ef | grep 'NameNode' | grep -v grep
# The extra "grep -v grep" filters out the grep process itself, which would
# otherwise match (as in: hadoop 18783 16379 0 02:40 pts/1 00:00:00 grep NameNode)
kill -9 <pid of NN>
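A safer way to find the NameNode's PID is the bracket trick, which keeps the search from ever matching its own process (a sketch; run it on the active NameNode host):

```shell
# Find the NameNode JVM's PID without the search matching itself: the pattern
# [N]ameNode matches the literal string "NameNode" in the java command line,
# but the awk process's own argument "[N]ameNode" does not match it.
nn_pid=$(ps -ef | awk '/[N]ameNode/ {print $2; exit}')
echo "NameNode pid: ${nn_pid:-not running}"
# kill -9 "$nn_pid"   # uncomment to simulate the crash for the failover test
```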
Open http://192.168.1.25:50070 in a browser:
NameNode 'h25:9000' (active)
The NameNode on h25 has now become active;
refreshing h24's page shows nothing, since that NameNode is down.
Now run:
hadoop fs -ls /
-rw-r--r-- 3 root supergroup 1926 2015-06-24 15:36 /profile
The file uploaded earlier is still there!
Manually restart the NameNode that was killed:
sbin/hadoop-daemon.sh start namenode
Open http://192.168.1.24:50070:
NameNode 'h24:9000' (standby)
Verify YARN:
Run the WordCount demo that ships with Hadoop:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.5.2.jar wordcount /profile /out

OK, all done!

Some commands for checking the cluster's working state:
bin/hdfs dfsadmin -report                show status information for each HDFS node

bin/hdfs haadmin -getServiceState nn1    get the HA state of one NameNode

sbin/hadoop-daemon.sh start namenode     start a single NameNode process

./hadoop-daemon.sh start zkfc            start a single zkfc process
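The status commands above can be bundled into one sweep. A dry-run sketch that only prints the checklist (the `ha_check.txt` name is an illustration; run the lines on a node where the hadoop binaries are on PATH):

```shell
# Print a one-shot HA status checklist; nn1/nn2 are the IDs from hdfs-site.xml.
: > ha_check.txt
for c in \
  "hdfs dfsadmin -report" \
  "hdfs haadmin -getServiceState nn1" \
  "hdfs haadmin -getServiceState nn2"; do
  echo "$c" >> ha_check.txt
done
cat ha_check.txt
```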



Error:
[hadoop@h21 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

Fix:
[hadoop@h21 bin]$ vim zkServer.sh
Add the JDK environment variables near the top of the file:
export JAVA_HOME=/usr/local/jdk1.7.0_25
export HADOOP_HOME=/usr/local/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

[hadoop@h21 bin]$ ./zkServer.sh stop
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg

[hadoop@h21 bin]$ ./zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg
The other nodes must be started as well.

[hadoop@h21 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg
Mode: follower
[hadoop@h22 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg
Mode: leader
[hadoop@h23 bin]$ ./zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg
Mode: follower

Warning:
[hadoop@h24 ~]$ hadoop fs -put WordCount.txt /profile/
18/06/21 02:56:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Fix:
[hadoop@h24 hadoop]$ cd /usr/local/hadoop-2.6.0/etc/hadoop
[hadoop@h24 hadoop]$ vim log4j.properties
Append at the end of the file:
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
Then make sure all the required jars are on the classpath.

WordCount job fails:
[hadoop@h24 ~]$ hadoop jar wc.jar WordCount /profile/WordCount.txt /outt
18/06/21 04:18:56 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
18/06/21 04:18:56 INFO retry.RetryInvocationHandler: Exception while invoking getNewApplication of class ApplicationClientProtocolPBClientImpl over rm2 after 1 fail over attempts. Trying to fail over after sleeping for 25441ms.
java.net.ConnectException: Call From h24/192.168.1.24 to h25:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
at org.apache.hadoop.ipc.Client.call(Client.java:1470)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy17.getNewApplication(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:217)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy18.getNewApplication(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:206)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:214)
at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:187)
at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:231)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:156)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
at WordCount.main(WordCount.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:708)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
at org.apache.hadoop.ipc.Client.call(Client.java:1442)
... 30 more

Fix: check that the ports in the configuration files under $HADOOP_HOME/etc/hadoop are correct, then restart the cluster; the error clears.
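Before editing configs, it helps to confirm whether anything is actually listening on the ResourceManager port named in the stack trace. A dry-run triage sketch (hostname and port taken from the error above; the `triage.txt` name is an illustration; run the printed lines by hand):

```shell
# Triage for "Connection refused ... h25:8032": confirm the second
# ResourceManager process exists on h25 and that port 8032 is listening.
: > triage.txt
echo "ssh hadoop@h25 jps                          # expect a ResourceManager line" >> triage.txt
echo "ssh hadoop@h25 'netstat -tln | grep 8032'   # expect a LISTEN entry" >> triage.txt
cat triage.txt
```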


————————————————————————————————

Reposted from: https://blog.51cto.com/13749369/2132250
