Continuing from the previous two articles:
CentOS 7: Installing ZooKeeper 3.6.3 (Atraceofviciss's blog, CSDN)
CentOS 7: Hadoop 3.1 Cluster Installation (Atraceofviciss's blog, CSDN)
Configure Hadoop's core-site.xml:
<!-- fs.default.name is deprecated; it is replaced by fs.defaultFS below,
     which now points at the nameservice rather than a single host
<property>
    <name>fs.default.name</name>
    <value>hdfs://root1:9000</value>
</property>
-->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
</property>
<!-- ZooKeeper quorum used for automatic failover -->
<property>
    <name>ha.zookeeper.quorum</name>
    <value>root1:2181,root2:2181,root3:2181</value>
</property>
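Everything that follows depends on this quorum, so it is worth checking that ZooKeeper is actually up on all three hosts before continuing. A minimal sketch, assuming zkServer.sh is on the PATH of each node (adjust if your install lives elsewhere):

    # run zkServer.sh status on every quorum member over ssh
    for h in root1 root2 root3; do
        echo "== $h =="
        ssh "$h" "zkServer.sh status"
    done

One node should report Mode: leader and the other two Mode: follower.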
Configure hdfs-site.xml:
<!-- Logical name of the HA nameservice -->
<property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
</property>
<!-- The two NameNodes in nameservice ns1 -->
<property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
</property>
<!-- RPC and HTTP addresses for nn1 -->
<property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>root1:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>root1:50070</value>
</property>
<!-- RPC and HTTP addresses for nn2 -->
<property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>root2:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>root2:50070</value>
</property>
<!-- JournalNode quorum through which the NameNodes share edit logs -->
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://root1:8485;root2:8485;root3:8485/ns1</value>
</property>
<!-- Local directory where each JournalNode stores its edits -->
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/hadoop/data/journal</value>
</property>
<!-- Let the ZKFC daemons fail over automatically -->
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
<!-- How HDFS clients locate the active NameNode -->
<property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fence the old active NameNode over ssh; fall back to shell(/bin/true)
     so failover still proceeds if ssh is unreachable -->
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/vagrant/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
</property>
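core-site.xml and hdfs-site.xml must be identical on all three machines. A small sketch for pushing them out from root1, assuming Hadoop is installed at /hadoop/hadoop-3.1 on every node (substitute your own install path):

    # copy the two edited config files to the other nodes
    for h in root2 root3; do
        scp /hadoop/hadoop-3.1/etc/hadoop/core-site.xml \
            /hadoop/hadoop-3.1/etc/hadoop/hdfs-site.xml \
            "$h":/hadoop/hadoop-3.1/etc/hadoop/
    done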
Configuration complete.
Start ZooKeeper on each node:
zkServer.sh start
Then initialize the HA state in ZooKeeper:
hdfs zkfc -formatZK
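If the format succeeds, ZooKeeper now holds a znode for the nameservice. A quick check from any node (assuming zkCli.sh is on the PATH):

    # list the HA znode that zkfc -formatZK creates
    zkCli.sh -server root1:2181 ls /hadoop-ha

which should print [ns1].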
If a format step fails with the error below (typically hdfs namenode -format, which has to contact the JournalNodes), the JournalNodes are not running yet; start one on every node with hadoop-daemon.sh start journalnode first:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 exceptions thrown:
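A sketch for starting the JournalNode daemon on all three machines from root1, again assuming /hadoop/hadoop-3.1 (on Hadoop 3 this wrapper script still works, but prints a deprecation warning suggesting hdfs --daemon start journalnode instead):

    # start a JournalNode on every node, then retry the format
    for h in root1 root2 root3; do
        ssh "$h" "/hadoop/hadoop-3.1/sbin/hadoop-daemon.sh start journalnode"
    done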
Initialize HDFS on root1. start-dfs.sh also brings up the JournalNodes and ZKFC daemons; the NameNodes cannot stay up until one of them has been formatted:
./start-dfs.sh
hdfs namenode -format
Sync the metadata to the standby NameNode:
hadoop-daemon.sh start namenode      # on root1, the freshly formatted NameNode
hdfs namenode -bootstrapStandby      # on root2
hadoop-daemon.sh start namenode      # on root2
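Both NameNodes should now be up, one active and one standby. This can be confirmed with haadmin, using the NameNode IDs from hdfs-site.xml:

    # query the HA state of each NameNode by its configured ID
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2

If both report standby, the ZKFC daemons have not started yet.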
Error when starting the cluster as the root user:
but there is no HDFS_ZKFC_USER defined. Aborting operation
For this error, add to hadoop-env.sh whichever of the following variables the message complains about:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_ZKFC_USER=root
export HDFS_JOURNALNODE_USER=root
The NameNode web UI is served on port 50070, as configured above (note that Hadoop 3.x changed the default from 50070 to 9870).
Remember that hadoop-daemon.sh start journalnode must be run on every node; hadoop-daemons.sh (plural) runs it across all hosts in the workers file at once.
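As a final check, it is worth verifying that automatic failover really works: kill the active NameNode and watch the standby take over. A rough sketch, assuming nn1 on root1 is currently active (fill in the pid placeholder from the jps output):

    ssh root1 "jps | grep -w NameNode"      # note the NameNode pid
    ssh root1 "kill -9 <pid>"               # simulate a crash of the active NameNode
    sleep 10
    hdfs haadmin -getServiceState nn2       # should now report active

Afterwards, restart the killed NameNode with hadoop-daemon.sh start namenode; it rejoins the cluster as the standby.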