1. Component Deployment Architecture
- tunabook102: NameNode, ResourceManager, ZooKeeper, ZKFC
- tunabook103: NameNode, ZKFC, ZooKeeper
- tunabook104: DataNode, JournalNode (QJM), ZooKeeper, NodeManager
- tunabook105: DataNode, JournalNode (QJM), ZooKeeper, NodeManager
- tunabook106: DataNode, JournalNode (QJM), ZooKeeper, NodeManager
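For reference, these roles appear under different JVM process names in `jps` output: ZKFC runs as DFSZKFailoverController, a QJM member as JournalNode, and ZooKeeper as QuorumPeerMain. A small sketch encoding the table above, useful for post-start sanity checks (the helper name is mine, not part of any Hadoop tool):

```shell
# expected_daemons HOST: print the jps process names the deployment table
# above implies for HOST. Illustrative helper only.
expected_daemons() {
  case "$1" in
    tunabook102) echo "NameNode ResourceManager QuorumPeerMain DFSZKFailoverController" ;;
    tunabook103) echo "NameNode DFSZKFailoverController QuorumPeerMain" ;;
    tunabook104|tunabook105|tunabook106)
      echo "DataNode JournalNode QuorumPeerMain NodeManager" ;;
    *) echo "unknown host: $1" >&2; return 1 ;;
  esac
}

expected_daemons tunabook104
```

Comparing this against actual `jps` output on each node is a quick way to spot a daemon that failed to start.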
2. Configuration Files (identical on tunabook102 through tunabook106)
- core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cluster1</value>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>false</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>tunabook102:2181,tunabook103:2181,tunabook104:2181,tunabook105:2181,tunabook106:2181</value>
  </property>
</configuration>
- hadoop-env.sh
export JAVA_HOME=/export/server/jdk1.8.0_65
export HADOOP_MAPRED_HOME=/export/server/hadoop-3.3.0
export HDFS_JOURNALNODE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_ZKFC_USER=root
- hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <!-- dfs.permissions is deprecated in Hadoop 3; the current key is dfs.permissions.enabled -->
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/export/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/export/hdfs/datanode</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.cluster1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster1.nn1</name>
    <value>tunabook102:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.cluster1.nn2</name>
    <value>tunabook103:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster1.nn1</name>
    <value>tunabook102:9870</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.cluster1.nn2</name>
    <value>tunabook103:9870</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://tunabook104:8485;tunabook105:8485;tunabook106:8485/cluster1</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/export/qjmdata</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.cluster1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- sshfence usually also requires dfs.ha.fencing.ssh.private-key-files,
       pointing at the key used for passwordless SSH between the two NameNodes. -->
</configuration>
- mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
</configuration>
- workers
tunabook104
tunabook105
tunabook106
- yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
- zoo.cfg
dataDir=/export/data/zkdata
clientPort=2181
server.1=tunabook102:2888:3888
server.2=tunabook103:2888:3888
server.3=tunabook104:2888:3888
server.4=tunabook105:2888:3888
server.5=tunabook106:2888:3888
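Besides the settings above (a complete zoo.cfg also sets `tickTime`, `initLimit`, and `syncLimit`; values omitted here), each ZooKeeper node needs a `myid` file under `dataDir` whose content matches its `server.N` index. A sketch that prints the per-node setup commands, assuming passwordless root SSH (the helper name is mine; pipe the output to `sh` to actually run it):

```shell
# gen_myid_cmds: emit one ssh command per node that writes that node's myid
# file (1..5, matching server.1..server.5 in zoo.cfg).
gen_myid_cmds() {
  i=1
  for host in tunabook102 tunabook103 tunabook104 tunabook105 tunabook106; do
    echo "ssh $host \"mkdir -p /export/data/zkdata && echo $i > /export/data/zkdata/myid\""
    i=$((i + 1))
  done
}

gen_myid_cmds
```

Without a matching `myid` file, the ZooKeeper instance will refuse to join the quorum.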
3. Starting the Cluster
- Start ZooKeeper (on tunabook102 through tunabook106)
zkServer.sh start
- Initialize the HA state znode in ZooKeeper (on tunabook102)
hdfs zkfc -formatZK
- Start the JournalNodes (QJM) (on tunabook104, tunabook105, tunabook106)
hdfs --daemon start journalnode
- Format HDFS (on tunabook102)
hdfs namenode -format
- Start the NameNode (on tunabook102)
hdfs --daemon start namenode
- Copy the formatted metadata to the standby NameNode (on tunabook103)
hdfs namenode -bootstrapStandby
- Stop the NameNode (on tunabook102)
hdfs --daemon stop namenode
- Start all Hadoop services
start-all.sh
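Once `start-all.sh` completes, a few standard checks confirm the HA setup is healthy. These need the live cluster, so no expected output is shown; `nn1` and `nn2` are the NameNode IDs defined in hdfs-site.xml above:

```shell
# Expect exactly one "active" and one "standby" between nn1 and nn2.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# Live DataNodes and overall capacity.
hdfs dfsadmin -report
# NodeManagers registered with YARN.
yarn node -list
```

To exercise automatic failover, killing the active NameNode process should cause ZKFC to promote the standby within a few seconds.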
4. Results