Hadoop HA Deployment Guide

一、 Software Versions

| Component | Version | Notes |
| --------- | ------- | ----- |
| CentOS | CentOS Linux release 7.2.1511 (Core) | check with cat /etc/redhat-release |
| JDK | jdk-8u45-linux-x64 | check with java -version |
| Hadoop | hadoop-2.6.0-cdh5.15.1.tar (pre-built with native compression support) | |
| Zookeeper | zookeeper-3.4.6 | |

二、 Cluster Plan

| IP | Hostname | Software | Processes |
| -- | -------- | -------- | --------- |
| 192.168.146.100 | hadoop001 | Hadoop, Zookeeper | QuorumPeerMain, NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, NodeManager, JobHistoryServer |
| 192.168.146.110 | hadoop002 | Hadoop, Zookeeper | QuorumPeerMain, NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, NodeManager |
| 192.168.146.120 | hadoop003 | Hadoop, Zookeeper | QuorumPeerMain, JournalNode, DataNode, NodeManager |

三、 Directory Layout

Create the following directories under the hadoop user's home directory:

| Directory | Purpose |
| --------- | ------- |
| ~/app | installed software |
| ~/data | data storage |
| ~/lib | jars developed in-house |
| ~/maven_repos | local Maven repository |
| ~/software | software packages |
| ~/script | scripts |
| ~/source | source code |
| ~/tmp | temporary files |
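
Once the hadoop user exists (it is created in section 四), the whole layout can be created in one command, for example:

[hadoop@hadoop001 ~]$ mkdir -p ~/app ~/data ~/lib ~/maven_repos ~/software ~/script ~/source ~/tmp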

四、 Environment Preparation

  1. Set the hostname of the three hosts to hadoop001, hadoop002, and hadoop003 respectively
[root@hadoop001 ~]# vi /etc/hostname
hadoop001
  2. Configure the hosts file on all three hosts
[root@hadoop001 ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.146.100 hadoop001
192.168.146.110 hadoop002
192.168.146.120 hadoop003
  3. Install the OpenSSL development libraries on all three hosts (on CentOS 7 the package is openssl-devel; libssl-dev is the Debian/Ubuntu name). Once Hadoop is installed in section 六, hadoop checknative should report every native library as true:
[root@hadoop001 ~]# yum install -y openssl-devel

[hadoop@hadoop001 ~]$ hadoop checknative
19/08/24 11:15:00 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
19/08/24 11:15:00 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /lib64/libsnappy.so.1
lz4:     true revision:10301
bzip2:   true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so
  4. Configure passwordless SSH between the three hosts for the hadoop user (which has no password)
  • Create the hadoop user and switch to it
[root@hadoop001 ~]# useradd hadoop
[root@hadoop001 ~]# su - hadoop
Last login: Fri Aug 23 21:41:06 CST 2019 on pts/0
  • Run ssh-keygen to generate the public/private key pair, accepting the defaults
[hadoop@hadoop001 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
bc:8c:27:15:34:a9:42:98:cf:86:23:49:5c:61:6d:14 hadoop@hadoop001
The key's randomart image is:
+--[ RSA 2048]----+
|. .*+E. o.       |
| o+ .o ...       |
|.. =.  ..        |
|o o = .. .       |
| . o .  S        |
|       + .       |
|      o +        |
|       o         |
|                 |
+-----------------+

  • Next, append the public keys of all three hosts into a single file and distribute it to ~/.ssh on all three machines, which enables passwordless login
# On hadoop001, append its own public key to authorized_keys
[hadoop@hadoop001 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Download the public keys of hadoop002 and hadoop003 to your workstation, upload them to ~/.ssh on hadoop001, and append them to authorized_keys (the hadoop user has no password, so this manual route works around password prompts); here id_rsa.pub2 and id_rsa.pub3 are the public keys of hadoop002 and hadoop003
[hadoop@hadoop001 .ssh]$ cat ~/.ssh/id_rsa.pub2 >> ~/.ssh/authorized_keys
[hadoop@hadoop001 .ssh]$ cat ~/.ssh/id_rsa.pub3 >> ~/.ssh/authorized_keys

# Distribute the authorized_keys file to ~/.ssh on hadoop002 and hadoop003
[hadoop@hadoop001 .ssh]$ ll
total 20
-rw-rw-r-- 1 hadoop hadoop 1194 Aug 23 22:08 authorized_keys
-rw------- 1 hadoop hadoop 1679 Aug 23 22:02 id_rsa
-rw-r--r-- 1 hadoop hadoop  398 Aug 23 22:02 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop  398 Aug 23 22:02 id_rsa.pub2
-rw-r--r-- 1 hadoop hadoop  398 Aug 23 22:02 id_rsa.pub3

[hadoop@hadoop002 .ssh]$ ll
total 12
-rw-r--r-- 1 hadoop hadoop 1194 Aug 23 22:08 authorized_keys
-rw------- 1 hadoop hadoop 1675 Aug 23 22:02 id_rsa
-rw-r--r-- 1 hadoop hadoop  398 Aug 23 22:02 id_rsa.pub

[hadoop@hadoop003 .ssh]$ ll
total 12
-rw-r--r-- 1 hadoop hadoop 1194 Aug 23 22:08 authorized_keys
-rw------- 1 hadoop hadoop 1675 Aug 23 22:02 id_rsa
-rw-r--r-- 1 hadoop hadoop  398 Aug 23 22:02 id_rsa.pub


# On all three machines, change the permissions of authorized_keys to 600
[hadoop@hadoop001 .ssh]$ chmod 600 authorized_keys


# Run the following commands on all three machines
[hadoop@hadoop001 .ssh]$ ssh hadoop001 date
[hadoop@hadoop001 .ssh]$ ssh hadoop002 date
[hadoop@hadoop001 .ssh]$ ssh hadoop003 date


# The first run of each command asks you to confirm with yes and then returns the date without prompting for a password; subsequent runs return the result immediately, which means passwordless login is configured correctly
[hadoop@hadoop001 .ssh]$ ssh hadoop002 date
The authenticity of host 'hadoop002 (192.168.146.110)' can't be established.
ECDSA key fingerprint is 88:af:35:0a:9f:b5:de:3d:b9:e8:f2:d4:70:1f:90:7c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop002,192.168.146.110' (ECDSA) to the list of known hosts.
Fri Aug 23 22:11:23 CST 2019

Note: after running ssh hadoop002 date, the host key of hadoop002 is recorded in ~/.ssh/known_hosts on hadoop001. If hadoop002 later regenerates its keys with ssh-keygen, delete that entry from known_hosts on hadoop001 and run ssh hadoop002 date again to record the new key.
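
If giving the hadoop user a temporary password is acceptable, ssh-copy-id avoids the manual download/upload round-trip described above. A sketch (not part of the original procedure; run on hadoop002 and hadoop003, then lock the password again):

[root@hadoop002 ~]# passwd hadoop                    # set a temporary password
[root@hadoop002 ~]# su - hadoop
[hadoop@hadoop002 ~]$ ssh-copy-id hadoop@hadoop001   # appends this host's id_rsa.pub to authorized_keys on hadoop001
[hadoop@hadoop002 ~]$ exit
[root@hadoop002 ~]# passwd -l hadoop                 # lock the password again

The combined authorized_keys on hadoop001 still has to be distributed back to the other two hosts afterwards.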

  5. Install the JDK
# Switch to root
[hadoop@hadoop001 ~]$ su - root
Password:
Last login: Fri Aug 23 21:55:12 CST 2019 on pts/0

# Create the /usr/java directory
[root@hadoop001 ~]# mkdir /usr/java

# Extract the JDK into /usr/java
[root@hadoop001 ~]# cd software/
[root@hadoop001 software]# tar -xzvf jdk-8u45-linux-x64.gz -C /usr/java/

# Change the owner and group of the extracted directory to root
[root@hadoop001 software]# cd /usr/java
[root@hadoop001 java]# ll
total 4
drwxr-xr-x. 8 10 143 4096 Apr 11  2015 jdk1.8.0_45

[root@hadoop001 java]# chown -R root:root jdk1.8.0_45/
[root@hadoop001 java]# ll
total 4
drwxr-xr-x. 8 root root 4096 Apr 11  2015 jdk1.8.0_45
# Configure environment variables
[root@hadoop001 java]# vi /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=${JAVA_HOME}/bin:$PATH

# Reload the profile so the variables take effect
[root@hadoop001 java]# source /etc/profile

# Verify that it took effect
[root@hadoop001 java]# which java
/usr/java/jdk1.8.0_45/bin/java
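
As a final sanity check, confirm the JDK itself runs:

[root@hadoop001 java]# java -version
# should report: java version "1.8.0_45"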

五、 Installing Zookeeper

  1. Extract the Zookeeper tarball into the app directory
[root@hadoop001 ~]# su - hadoop
Last login: Fri Aug 23 22:36:06 CST 2019 on pts/0
[hadoop@hadoop001 ~]$ tar -xzvf software/zookeeper-3.4.6.tar.gz -C app/
  2. Create a symlink
[hadoop@hadoop001 app]$ ln -s zookeeper-3.4.6/ zookeeper
  3. Edit the Zookeeper configuration file
[hadoop@hadoop001 app]$ cd zookeeper/conf/
[hadoop@hadoop001 conf]$ mv zoo_sample.cfg zoo.cfg
[hadoop@hadoop001 conf]$ vi zoo.cfg

# Change dataDir
dataDir=/home/hadoop/data/zookeeper
# Append the server list
server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888
  4. Create the dataDir directory on all three machines, then assign each host an id and write it into the myid file
[hadoop@hadoop001 conf]$ mkdir -p /home/hadoop/data/zookeeper

[hadoop@hadoop001 conf]$ echo 1 > /home/hadoop/data/zookeeper/myid
[hadoop@hadoop002 conf]$ echo 2 > /home/hadoop/data/zookeeper/myid
[hadoop@hadoop003 conf]$ echo 3 > /home/hadoop/data/zookeeper/myid

Note: keep a space between the digit and the > (echo 1 > myid); without it, 1> is parsed as a file-descriptor redirection and an empty file is written.
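
The steps above only show hadoop001. One way to get the same installation onto the other two hosts is to copy it over and recreate the symlink (a sketch, assuming the same ~/app layout; the per-host myid values above still differ):

[hadoop@hadoop001 ~]$ scp -r ~/app/zookeeper-3.4.6 hadoop002:~/app/
[hadoop@hadoop001 ~]$ scp -r ~/app/zookeeper-3.4.6 hadoop003:~/app/
[hadoop@hadoop002 app]$ ln -s zookeeper-3.4.6/ zookeeper
[hadoop@hadoop003 app]$ ln -s zookeeper-3.4.6/ zookeeper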
  5. Configure the hadoop user's Zookeeper environment variables on all three machines
[hadoop@hadoop001 conf]$ vi ~/.bash_profile

export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
export PATH=${ZOOKEEPER_HOME}/bin:$PATH

[hadoop@hadoop001 conf]$ source ~/.bash_profile
[hadoop@hadoop001 conf]$ which zkServer.sh
~/app/zookeeper/bin/zkServer.sh
  6. Start Zookeeper on all three machines
[hadoop@hadoop001 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[hadoop@hadoop002 ~]$ zkServer.sh start

[hadoop@hadoop003 ~]$ zkServer.sh start
  7. Check the Zookeeper status; wait one to two minutes after starting and check again, to make sure it came up properly
[hadoop@hadoop001 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[hadoop@hadoop002 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Mode: leader

六、 Installing Hadoop (NameNode HA and ResourceManager HA)

  1. Extract the Hadoop tarball into ~/app and create a symlink
[hadoop@hadoop001 ~]$ tar -xzvf software/hadoop-2.6.0-cdh5.15.1.tar.gz -C app/
[hadoop@hadoop001 ~]$ cd app
[hadoop@hadoop001 app]$ ln -s hadoop-2.6.0-cdh5.15.1/ hadoop
  2. Configure the Hadoop environment variables
[hadoop@hadoop001 hadoop]$ vi ~/.bash_profile

export HADOOP_HOME=/home/hadoop/app/hadoop
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${ZOOKEEPER_HOME}/bin:$PATH

[hadoop@hadoop001 hadoop]$ source ~/.bash_profile
  3. Edit the Hadoop configuration files
  • hadoop-env.sh
[hadoop@hadoop001 app]$ cd hadoop/etc/hadoop/
[hadoop@hadoop001 hadoop]$ vi hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_45
  • Delete core-site.xml, hdfs-site.xml, slaves, and yarn-site.xml, then upload the pre-configured versions:
    https://www.lanzous.com/i5rrksh
[hadoop@hadoop001 hadoop]$ rm -f core-site.xml hdfs-site.xml slaves yarn-site.xml
  • slaves: list the hostnames of the worker nodes
[hadoop@hadoop001 hadoop]$ vi slaves
hadoop001
hadoop002
hadoop003
  • core-site.xml
# Default filesystem (the HA nameservice)
<property>
	<name>fs.defaultFS</name>
	<value>hdfs://nameservice1</value>
</property>

# tmp directory; must be created manually
<property>
	<name>hadoop.tmp.dir</name>
	<value>/home/hadoop/tmp/hadoop</value>
</property>

[hadoop@hadoop001 hadoop]$ mkdir -p /home/hadoop/tmp/hadoop
[hadoop@hadoop001 hadoop]$ chmod -R 777 /home/hadoop/tmp/hadoop

# Zookeeper quorum address
<property>
	<name>ha.zookeeper.quorum</name>
	<value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
</property>

# Hosts and groups allowed through the proxy user; the second "hadoop" in the property name is the deployment user. If you deploy as root, use hadoop.proxyuser.root.hosts and hadoop.proxyuser.root.groups instead
<property>
	<name>hadoop.proxyuser.hadoop.hosts</name>
	<value>*</value>
</property>

<property>
	<name>hadoop.proxyuser.hadoop.groups</name>
	<value>*</value>
</property>

# Compression codecs
<property>
	<name>io.compression.codecs</name>
	<value>org.apache.hadoop.io.compress.GzipCodec,
		org.apache.hadoop.io.compress.DefaultCodec,
		org.apache.hadoop.io.compress.BZip2Codec,
		org.apache.hadoop.io.compress.SnappyCodec
	</value>
</property>
  • hdfs-site.xml
# HDFS superuser group
<property>
	<name>dfs.permissions.superusergroup</name>
	<value>hadoop</value>
</property>

# NameNode data paths
<property>
	<name>dfs.namenode.name.dir</name>
	<value>/home/hadoop/data/dfs/name</value>
	<description>Local directory where the NameNode stores the name table (fsimage)</description>
</property>

<property>
	<name>dfs.namenode.edits.dir</name>
	<value>${dfs.namenode.name.dir}</value>
	<description>Local directory where the NameNode stores the transaction log (edits)</description>
</property>

<property>
	<name>dfs.datanode.data.dir</name>
	<value>/home/hadoop/data/dfs/data</value>
	<description>Local directory where the DataNode stores blocks</description>
</property>

# Replication factor
<property>
	<name>dfs.replication</name>
	<value>3</value>
</property>

# Block size (default 134217728, i.e. 128 MB)
<property>
	<name>dfs.blocksize</name>
	<value>134217728</value>
</property>

# HDFS nameservice; must match the value configured in core-site.xml
<property>
	<name>dfs.nameservices</name>
	<value>nameservice1</value>
</property>

# NameNodes that make up the nameservice
<property>
	<name>dfs.ha.namenodes.nameservice1</name>
	<value>nn1,nn2</value>
</property>

# Private key used for SSH fencing; for a root deployment this would be /root/.ssh/id_rsa
<property>
	<name>dfs.ha.fencing.ssh.private-key-files</name>
	<value>/home/hadoop/.ssh/id_rsa</value>
</property>
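
The fragments above cover only part of hdfs-site.xml; the pre-configured file from the link must also carry the rest of the standard NameNode HA wiring. A sketch of what those properties typically look like for this cluster plan (hostnames and ports are assumptions based on section 二, not taken from the uploaded file):

# RPC and HTTP addresses of the two NameNodes
<property>
	<name>dfs.namenode.rpc-address.nameservice1.nn1</name>
	<value>hadoop001:8020</value>
</property>
<property>
	<name>dfs.namenode.rpc-address.nameservice1.nn2</name>
	<value>hadoop002:8020</value>
</property>
<property>
	<name>dfs.namenode.http-address.nameservice1.nn1</name>
	<value>hadoop001:50070</value>
</property>
<property>
	<name>dfs.namenode.http-address.nameservice1.nn2</name>
	<value>hadoop002:50070</value>
</property>

# JournalNode quorum that stores the shared edit log
<property>
	<name>dfs.namenode.shared.edits.dir</name>
	<value>qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/nameservice1</value>
</property>

# Client-side failover, fencing, and automatic failover
<property>
	<name>dfs.client.failover.proxy.provider.nameservice1</name>
	<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
	<name>dfs.ha.fencing.methods</name>
	<value>sshfence</value>
</property>
<property>
	<name>dfs.ha.automatic-failover.enabled</name>
	<value>true</value>
</property>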
  • mapred-site.xml
# Compress map output; Snappy is used here
<property>
	<name>mapreduce.map.output.compress</name>
	<value>true</value>
</property>

<property>
	<name>mapreduce.map.output.compress.codec</name>
	<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
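
mapred-site.xml also has to point MapReduce at YARN and at the JobHistory server started on hadoop001 later; a sketch with the conventional ports (19888 matches the JobHistory web UI used in section 七):

<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>
<property>
	<name>mapreduce.jobhistory.address</name>
	<value>hadoop001:10020</value>
</property>
<property>
	<name>mapreduce.jobhistory.webapp.address</name>
	<value>hadoop001:19888</value>
</property>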
  • yarn-site.xml
# ResourceManager ids for YARN HA
<property>
	<name>yarn.resourcemanager.ha.rm-ids</name>
	<value>rm1,rm2</value>
</property>

# YARN memory and vcore settings
<property>
	<name>yarn.nodemanager.resource.memory-mb</name>
	<value>2048</value>
</property>
<property>
	<name>yarn.scheduler.minimum-allocation-mb</name>
	<value>1024</value>
	<description>Minimum memory a single container can request; default 1024 MB</description>
</property>

# Enable log aggregation
<property>
	<name>yarn.log-aggregation-enable</name>
	<value>true</value>
</property>

<property>
	<name>yarn.scheduler.maximum-allocation-mb</name>
	<value>2048</value>
	<description>Maximum memory a single container can request; default 8192 MB</description>
</property>

<property>
	<name>yarn.nodemanager.resource.cpu-vcores</name>
	<value>2</value>
</property>
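
As with hdfs-site.xml, the pre-configured yarn-site.xml must also enable ResourceManager HA itself; a sketch of the usual properties under this cluster plan (values are assumptions, not taken from the uploaded file; yarn-cluster is a hypothetical cluster id):

<property>
	<name>yarn.resourcemanager.ha.enabled</name>
	<value>true</value>
</property>
<property>
	<name>yarn.resourcemanager.cluster-id</name>
	<value>yarn-cluster</value>
</property>
<property>
	<name>yarn.resourcemanager.hostname.rm1</name>
	<value>hadoop001</value>
</property>
<property>
	<name>yarn.resourcemanager.hostname.rm2</name>
	<value>hadoop002</value>
</property>
<property>
	<name>yarn.resourcemanager.zk-address</name>
	<value>hadoop001:2181,hadoop002:2181,hadoop003:2181</value>
</property>
<property>
	<name>yarn.resourcemanager.recovery.enabled</name>
	<value>true</value>
</property>
<property>
	<name>yarn.resourcemanager.store.class</name>
	<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>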

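All of the configuration above was edited on hadoop001 only; before the first startup, the same installation and configuration must exist on all three hosts. A sketch of one way to distribute it (assuming the same ~/app layout; ~/.bash_profile must also be set up on hadoop002 and hadoop003):

[hadoop@hadoop001 ~]$ scp -r ~/app/hadoop-2.6.0-cdh5.15.1 hadoop002:~/app/
[hadoop@hadoop001 ~]$ scp -r ~/app/hadoop-2.6.0-cdh5.15.1 hadoop003:~/app/
[hadoop@hadoop002 app]$ ln -s hadoop-2.6.0-cdh5.15.1/ hadoop
[hadoop@hadoop003 app]$ ln -s hadoop-2.6.0-cdh5.15.1/ hadoop
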
七、 First Startup of the Cluster

  1. Start Zookeeper on all three machines
[hadoop@hadoop001 ~]$ zkServer.sh start
[hadoop@hadoop002 ~]$ zkServer.sh start
[hadoop@hadoop003 ~]$ zkServer.sh start
  2. On all three machines, start the JournalNode process first
[hadoop@hadoop001 ~]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-journalnode-hadoop001.out

[hadoop@hadoop001 ~]$ jps
3665 JournalNode
1935 QuorumPeerMain
3711 Jps
  3. Before the first startup, HDFS must be formatted; do this on the first machine only
[hadoop@hadoop001 ~]$ hdfs namenode -format
  4. Sync the NameNode metadata: scp the directory in which the NameNode stores its metadata to hadoop002
[hadoop@hadoop001 ~]$ cd /home/hadoop/data/dfs
[hadoop@hadoop001 dfs]$ scp -r name/ hadoop002:$PWD
fsimage_0000000000000000000.md5                                                             100%   62     0.1KB/s   00:00
fsimage_0000000000000000000                                                                 100%  308     0.3KB/s   00:00
seen_txid                                                                                   100%    2     0.0KB/s   00:00
VERSION                                                                                     100%  206     0.2KB/s   00:00
  5. On the first machine, initialize the ZKFC state in Zookeeper
[hadoop@hadoop001 dfs]$ hdfs zkfc -formatZK
…
19/08/24 10:03:29 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/nameservice1 in ZK.
…
  6. Start HDFS, then check the processes with jps
[hadoop@hadoop001 ~]$ start-dfs.sh
Starting namenodes on [hadoop001 hadoop002]
hadoop002: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-namenode-hadoop002.out
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop003: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-datanode-hadoop003.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-datanode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-datanode-hadoop001.out
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop002: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-journalnode-hadoop002.out
hadoop003: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-journalnode-hadoop003.out
hadoop001: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-journalnode-hadoop001.out
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-zkfc-hadoop001.out
hadoop002: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/hadoop-hadoop-zkfc-hadoop002.out


[hadoop@hadoop001 ~]$ jps
7876 DataNode
8404 DFSZKFailoverController
8155 JournalNode
8700 Jps
1935 QuorumPeerMain

[hadoop@hadoop002 ~]$ jps
6448 DataNode
7042 Jps
6841 DFSZKFailoverController
1916 QuorumPeerMain
6638 JournalNode

[hadoop@hadoop003 ~]$ jps
5344 Jps
5187 JournalNode
5013 DataNode
1929 QuorumPeerMain

  7. Open the web UIs to verify that the NameNodes started correctly
  • http://192.168.146.100:50070/

  • http://192.168.146.110:50070/
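The active/standby roles can also be checked from the command line:

[hadoop@hadoop001 ~]$ hdfs haadmin -getServiceState nn1
[hadoop@hadoop001 ~]$ hdfs haadmin -getServiceState nn2
# one NameNode should report active and the other standby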

  8. Start YARN
[hadoop@hadoop001 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop002: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/yarn-hadoop-nodemanager-hadoop002.out
hadoop003: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/yarn-hadoop-nodemanager-hadoop003.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/yarn-hadoop-nodemanager-hadoop001.out

The ResourceManager on hadoop002 must be started manually:
[hadoop@hadoop002 ~]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/yarn-hadoop-resourcemanager-hadoop002.out

Start the JobHistory server:
[hadoop@hadoop001 ~]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.15.1/logs/mapred-hadoop-historyserver-hadoop001.out

[hadoop@hadoop001 ~]$ jps
7876 DataNode
8404 DFSZKFailoverController
10533 Jps
9032 NameNode
9881 NodeManager
10457 JobHistoryServer
8155 JournalNode
1935 QuorumPeerMain
9759 ResourceManager

[hadoop@hadoop002 ~]$ jps
6448 DataNode
8098 Jps
7960 ResourceManager
6841 DFSZKFailoverController
7225 NameNode
1916 QuorumPeerMain
6638 JournalNode
7742 NodeManager


[hadoop@hadoop003 ~]$ jps
5187 JournalNode
5013 DataNode
6040 Jps
1929 QuorumPeerMain
5723 NodeManager
  9. Open the web UIs to verify that YARN started correctly
  • http://192.168.146.100:8088/

  • Standby RM: http://192.168.146.110:8088/cluster/cluster

  • JobHistory web UI: http://192.168.146.100:19888
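
The RM roles can likewise be checked from the command line:

[hadoop@hadoop001 ~]$ yarn rmadmin -getServiceState rm1
[hadoop@hadoop001 ~]$ yarn rmadmin -getServiceState rm2
# one ResourceManager should report active and the other standby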

八、 Shutting Down the Cluster

  1. Stop Hadoop
  • Stop the JobHistory server
[hadoop@hadoop001 ~]$ mr-jobhistory-daemon.sh stop historyserver
  • Stop YARN
[hadoop@hadoop001 ~]$ stop-yarn.sh
[hadoop@hadoop002 ~]$ yarn-daemon.sh stop resourcemanager
  • Stop HDFS
[hadoop@hadoop002 ~]$ stop-dfs.sh
  2. Stop Zookeeper
[hadoop@hadoop001 ~]$ zkServer.sh stop
[hadoop@hadoop002 ~]$ zkServer.sh stop
[hadoop@hadoop003 ~]$ zkServer.sh stop

九、 Restarting the Cluster

  1. Start Zookeeper
[hadoop@hadoop001 ~]$ zkServer.sh start
[hadoop@hadoop002 ~]$ zkServer.sh start
[hadoop@hadoop003 ~]$ zkServer.sh start
  2. Start HDFS
[hadoop@hadoop001 ~]$ start-dfs.sh
  3. Start YARN
[hadoop@hadoop001 ~]$ start-yarn.sh
[hadoop@hadoop002 ~]$ yarn-daemon.sh start resourcemanager
  4. Start the JobHistory server
[hadoop@hadoop001 ~]$ mr-jobhistory-daemon.sh start historyserver
  5. Check the processes on all three machines
[hadoop@hadoop001 ~]$ jps
18546 NameNode
18387 QuorumPeerMain
19272 DFSZKFailoverController
20008 JobHistoryServer
19581 NodeManager
20093 Jps
18702 DataNode
19006 JournalNode
19455 ResourceManager

[hadoop@hadoop002 ~]$ jps
17904 NameNode
18162 QuorumPeerMain
19046 Jps
18984 ResourceManager
18536 DFSZKFailoverController
18073 DataNode
18317 JournalNode
18766 NodeManager

[hadoop@hadoop003 ~]$ jps
16081 DataNode
16693 Jps
14518 QuorumPeerMain
16488 NodeManager
16287 JournalNode