Building on the existing Hadoop cluster deployment.
Upload ZooKeeper and extract it to /opt/sxt.
Set up the environment variables:
export JAVA_HOME=/usr/java/jdk1.7.0_67
export HADOOP_HOME=/opt/sxt/hadoop-2.6.5
export ZOOKEEPER_HOME=/opt/sxt/zookeeper-3.4.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
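A quick sanity check that the variables took effect (a sketch; it assumes the exports above were added to /etc/profile or an equivalent file and then sourced):
echo $ZOOKEEPER_HOME
which zkServer.sh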
Configure ZooKeeper's conf file zoo.cfg (copy zoo_sample.cfg to zoo.cfg if it does not exist yet), setting the data directory:
dataDir=/var/sxt/hadoop/zk
and append these lines at the end:
server.1=192.168.56.12:2888:3888
server.2=192.168.56.13:2888:3888
server.3=192.168.56.14:2888:3888
Distribute the configured ZooKeeper to every ZooKeeper node, e.g.:
scp -r ./zookeeper-3.4.6/ node4:`pwd`
Create the zk data directory and write the myid file (each node gets a different id):
mkdir -p /var/sxt/hadoop/zk
echo 2 > /var/sxt/hadoop/zk/myid
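Each id must match that host's server.N line in zoo.cfg. A sketch that writes all three in one pass, assuming node2/node3/node4 correspond to server.1/server.2/server.3 and passwordless SSH is already set up:
id=1
for n in node2 node3 node4; do
  # create the data dir and write this node's id
  ssh $n "mkdir -p /var/sxt/hadoop/zk && echo $id > /var/sxt/hadoop/zk/myid"
  id=$((id+1))
done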
Start ZooKeeper on every ZooKeeper node:
zkServer.sh start
Check its status:
zkServer.sh status
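To check all ensemble members at once (a convenience sketch, assuming zkServer.sh is on the PATH for non-interactive SSH sessions; one node should report Mode: leader and the rest Mode: follower):
for n in node2 node3 node4; do
  echo "== $n =="
  ssh $n "zkServer.sh status"
done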
Passwordless SSH (shown here on node2):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Go into the ~/.ssh directory and append the public key to authorized_keys:
cat id_rsa.pub >> authorized_keys
Verify passwordless login to itself:
ssh node2
Send the public key to node1, renaming it node2.pub so it does not overwrite node1's own key:
scp id_rsa.pub node1:`pwd`/node2.pub
On node1, append node2's key:
cat node2.pub >> authorized_keys
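The same exchange can also be done in one step with ssh-copy-id, which appends the key to the remote authorized_keys for you (a sketch, assuming the tool is available):
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1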
Modify hdfs-site.xml
On node1, edit Hadoop's hdfs-site.xml:
<configuration>
<!-- Replication factor -->
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<!-- The two NameNodes that make up the nameservice; adjust host names to match your cluster -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>node1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>node2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>node1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>node2:50070</value>
</property>
<!-- JournalNode configuration; adjust host names to match your cluster -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node1:8485;node2:8485;node3:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/var/sxt/hadoop/ha/jn</value>
</property>
<!-- Failover proxy provider and fencing; note that the id_rsa path must match your key -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- Enable automatic failover via ZKFC -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
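After saving, the HA keys can be spot-checked with hdfs getconf (a quick sanity check, not a step from the original procedure):
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.mycluster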
Next, modify core-site.xml:
<configuration>
<!-- The entry point for clients: the HA nameservice instead of a single NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<!-- Base directory where NameNode data is reliably stored -->
<property>
<name>hadoop.tmp.dir</name>
<value>/var/sxt/hadoop/ha</value>
</property>
<!-- ZooKeeper quorum nodes -->
<property>
<name>ha.zookeeper.quorum</name>
<value>node2:2181,node3:2181,node4:2181</value>
</property>
</configuration>
Distribute the modified configuration files to the other nodes (run from the Hadoop conf directory on node1):
scp hdfs-site.xml core-site.xml node2:`pwd`
scp hdfs-site.xml core-site.xml node3:`pwd`
scp hdfs-site.xml core-site.xml node4:`pwd`
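Equivalently, as a loop (a sketch, run from the same directory):
for n in node2 node3 node4; do
  scp hdfs-site.xml core-site.xml $n:`pwd`
done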
Deploy and format
Start the JournalNodes first (on every JournalNode host, here node1, node2 and node3):
hadoop-daemon.sh start journalnode
Format the NameNode on node1:
hdfs namenode -format
Start the NameNode on node1:
hadoop-daemon.sh start namenode
Then bootstrap the second NameNode, on node2:
hdfs namenode -bootstrapStandby
Back on node1, format the HA state in ZooKeeper:
hdfs zkfc -formatZK
On a host where ZooKeeper is deployed (node4 is used here), start the ZooKeeper client to verify the result. Start zk first:
zkServer.sh start
then open the client:
zkCli.sh
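Inside the client, the znode created by hdfs zkfc -formatZK should be visible; /hadoop-ha is the default parent znode, and mycluster comes from dfs.nameservices:
ls /
ls /hadoop-ha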
All configuration is now complete. Start the cluster:
start-dfs.sh
Run jps on each host (node1, node2, node3, node4) to check the running processes.
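For reference, a rough sketch of the daemons you should see per host (DataNode placement depends on your slaves file; node2 through node4 is assumed here):
node1: NameNode, JournalNode, DFSZKFailoverController
node2: NameNode, DataNode, JournalNode, DFSZKFailoverController, QuorumPeerMain
node3: DataNode, JournalNode, QuorumPeerMain
node4: DataNode, QuorumPeerMain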
To test automatic failover, kill the active NameNode process (2607 here is its PID as reported by jps):
kill -9 2607
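To confirm the failover, query both NameNode states with hdfs haadmin (nn1 and nn2 are the ids from dfs.ha.namenodes.mycluster); the surviving NameNode should report active:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2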