After some fine-tuning, the fully distributed environment has passed hands-on testing. Below is a summary of, and notes on, the startup procedure for the environment I built.
Three hosts make up the distributed cluster:
- master: primary node
- slave01: worker node 1
- slave02: worker node 2
Startup steps
- Start ZooKeeper on master, slave01, and slave02:
zkServer.sh start
- On the master node, run:
start-dfs.sh
- Then on slave01, run:
start-yarn.sh
- Then on slave02, separately start the standby ResourceManager:
yarn-daemon.sh start resourcemanager
- On master, start HBase:
start-hbase.sh
- On slave01 and slave02, start a backup HMaster process on each:
hbase-daemon.sh start master
These start scripts read the configuration files set up in the earlier posts and automatically launch the service processes on the appropriate machines across the cluster; see the detailed breakdown below.
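The steps above can be sketched as a single driver script. This is a minimal sketch, not the tooling actually used here: it assumes passwordless SSH between the nodes and that the ZooKeeper/Hadoop/HBase bin directories are on each node's PATH. By default it only prints the commands; set DRY_RUN=0 to actually execute them.

```shell
#!/usr/bin/env bash
# Sketch of the startup sequence above. Assumes passwordless SSH and that
# zkServer.sh / start-dfs.sh / start-yarn.sh / start-hbase.sh are on PATH
# on every node. DRY_RUN=1 (default) prints each command instead of running it.
set -eu
DRY_RUN="${DRY_RUN:-1}"

run() {  # run HOST COMMAND
  if [ "$DRY_RUN" = "1" ]; then
    echo "ssh $1 '$2'"
  else
    ssh "$1" "$2"
  fi
}

start_cluster() {
  # 1. ZooKeeper on every node
  for host in master slave01 slave02; do
    run "$host" "zkServer.sh start"
  done
  # 2. HDFS (NameNodes, DataNodes, JournalNodes, ZKFCs) from master
  run master  "start-dfs.sh"
  # 3. YARN (ResourceManager + NodeManagers) from slave01
  run slave01 "start-yarn.sh"
  # 4. Standby ResourceManager on slave02
  run slave02 "yarn-daemon.sh start resourcemanager"
  # 5. HBase master + regionservers from master
  run master  "start-hbase.sh"
  # 6. Backup HMasters on slave01 and slave02
  for host in slave01 slave02; do
    run "$host" "hbase-daemon.sh start master"
  done
}

start_cluster
```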
Startup output on each node
--> [elon@master|slave01|slave02 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/elon/opt/modules/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
–> [elon@master ~]$ start-dfs.sh
Starting namenodes on [master slave01]
slave01: starting namenode, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-namenode-slave01.out
master: starting namenode, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-namenode-master.out
slave02: starting datanode, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-datanode-slave02.out
master: starting datanode, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-datanode-master.out
slave01: starting datanode, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-datanode-slave01.out
Starting journal nodes [master slave01 slave02]
slave02: starting journalnode, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-journalnode-slave02.out
master: starting journalnode, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-journalnode-master.out
slave01: starting journalnode, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-journalnode-slave01.out
Starting ZK Failover Controllers on NN hosts [master slave01]
master: starting zkfc, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-zkfc-master.out
slave01: starting zkfc, logging to /home/elon/app/hadoop-2.4.1/logs/hadoop-elon-zkfc-slave01.out
--> [elon@slave01 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/elon/app/hadoop-2.4.1/logs/yarn-elon-resourcemanager-slave01.out
slave02: starting nodemanager, logging to /home/elon/app/hadoop-2.4.1/logs/yarn-elon-nodemanager-slave02.out
master: starting nodemanager, logging to /home/elon/app/hadoop-2.4.1/logs/yarn-elon-nodemanager-master.out
slave01: starting nodemanager, logging to /home/elon/app/hadoop-2.4.1/logs/yarn-elon-nodemanager-slave01.out
--> [elon@slave02 ~]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/elon/app/hadoop-2.4.1/logs/yarn-elon-resourcemanager-slave02.out
--> [elon@master ~]$ start-hbase.sh
starting master, logging to /home/elon/opt/modules/hbase-0.96.2-hadoop2/logs/hbase-elon-master-master.out
slave02: starting regionserver, logging to /home/elon/opt/modules/hbase-0.96.2-hadoop2/bin/../logs/hbase-elon-regionserver-slave02.out
slave01: starting regionserver, logging to /home/elon/opt/modules/hbase-0.96.2-hadoop2/bin/../logs/hbase-elon-regionserver-slave01.out
--> [elon@slave01|slave02 ~]$ hbase-daemon.sh start master
starting master, logging to /home/elon/opt/modules/hbase-0.96.2-hadoop2/logs/hbase-elon-master-slave01.out
Running processes on each node
**master:**
[elon@master ~]$ jps
5873 JournalNode
5589 NameNode
5688 DataNode
6268 Jps
6157 NodeManager
6043 DFSZKFailoverController
5430 QuorumPeerMain
**slave01:**
[elon@slave01 ~]$ jps
3990 JournalNode
4303 NodeManager
3897 DataNode
3828 NameNode
4925 HMaster
4104 DFSZKFailoverController
3771 QuorumPeerMain
4990 Jps
4201 ResourceManager
**slave02:**
[elon@slave02 ~]$ jps
3990 HMaster
4064 Jps
3779 HRegionServer
3674 ResourceManager
3287 QuorumPeerMain
3434 JournalNode
3344 DataNode
3530 NodeManager
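To sanity-check the jps listings above, a small helper can diff the daemons actually running against the set expected on a node. A sketch (the expected set below is taken from the master listing above; the helper ignores PIDs and the Jps process itself, and `missing_daemons` is a hypothetical name, not a Hadoop tool):

```shell
#!/usr/bin/env bash
# missing_daemons "EXPECTED NAMES" < jps-output
# Prints any expected daemon name that does not appear in the jps output.
missing_daemons() {
  local expected="$1" jps_output name missing=""
  jps_output=$(cat)
  for name in $expected; do
    # jps prints "PID Name"; compare against the second column only
    if ! echo "$jps_output" | awk '{print $2}' | grep -qx "$name"; then
      missing="$missing $name"
    fi
  done
  echo "$missing" | sed 's/^ //'
}

# Expected daemons on master, per the listing above:
MASTER_EXPECTED="QuorumPeerMain NameNode DataNode JournalNode DFSZKFailoverController NodeManager"

# On master you would run:
#   jps | missing_daemons "$MASTER_EXPECTED"
# An empty result means every expected daemon is up.
```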
Cluster web consoles
NameNode
- master:9000 (active) http://master:50070/dfshealth.html
- slave01:9000 (standby) http://slave01:50070/dfshealth.html
ResourceManager
- All Applications http://slave01:8088/cluster
- When you open the ResourceManager console on slave02 at
http://slave02:8088/cluster
it shows:
This is standby RM. Redirecting to the current active RM: http://slave01:8088/cluster
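Instead of probing the web UIs, the active/standby roles can also be queried from the command line with `hdfs haadmin -getServiceState` and `yarn rmadmin -getServiceState`. A hedged sketch: the service IDs nn1/nn2 and rm1/rm2 are assumptions that must match `dfs.ha.namenodes.*` in hdfs-site.xml and `yarn.resourcemanager.ha.rm-ids` in yarn-site.xml; the `pick_active` helper is a hypothetical convenience that selects whichever candidate reports "active".

```shell
#!/usr/bin/env bash
# pick_active NAME STATE [NAME STATE]...
# Prints the first NAME whose STATE is "active"; fails if none is active.
pick_active() {
  while [ "$#" -ge 2 ]; do
    if [ "$2" = "active" ]; then
      echo "$1"
      return 0
    fi
    shift 2
  done
  return 1
}

# On a live cluster (service IDs depend on your HA configuration):
#   nn1_state=$(hdfs haadmin -getServiceState nn1)   # master
#   nn2_state=$(hdfs haadmin -getServiceState nn2)   # slave01
#   pick_active master "$nn1_state" slave01 "$nn2_state"
#
#   rm1_state=$(yarn rmadmin -getServiceState rm1)   # slave01
#   rm2_state=$(yarn rmadmin -getServiceState rm2)   # slave02
#   pick_active slave01 "$rm1_state" slave02 "$rm2_state"
```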
HBase
- HBase Master http://master:60010/master-status
Hive
- How to start Hive: http://elon33.com/2017/Hive/#Hive的启动方式
- Hive Web Interface http://slave02:9999/hwi/