Highly available (HA) Hadoop installation: passwordless SSH login

[hadoop@hadoop002 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
0b:b6:2a:1c:04:0f:d0:41:c7:01:3a:90:91:56:46:9a hadoop@hadoop002
The key's randomart image is:
+--[ RSA 2048]----+
|=**Bo.           |
|*o=...           |
|+E               |
| .o              |
|  . o S          |
|   . . o .       |
|    . . . .      |
|     o .         |
|      ...        |
+-----------------+
[hadoop@hadoop002 ~]$
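The interactive prompts above can be skipped entirely. A minimal sketch (the helper name gen_key is ours, for illustration only):

```shell
# gen_key FILE: generate an RSA key pair at FILE non-interactively,
# with an empty passphrase -- the same result as the prompts above.
gen_key() {
  ssh-keygen -q -t rsa -N "" -f "$1"
}

# Usage (run as the hadoop user on each node):
#   gen_key ~/.ssh/id_rsa
```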

-rw------- 1 hadoop hadoop 1675 Nov 26 20:12 id_rsa
-rw-r--r-- 1 hadoop hadoop 398 Nov 26 20:12 id_rsa.pub
-rw-r--r-- 1 root root 398 Nov 26 20:16 id_rsa.pub2
-rw-r--r-- 1 root root 398 Nov 26 20:19 id_rsa.pub3
[hadoop@hadoop001 ~]$ cd .ssh/
[hadoop@hadoop001 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@hadoop001 .ssh]$ cat id_rsa.pub2 >> authorized_keys
[hadoop@hadoop001 .ssh]$ cat id_rsa.pub3 >> authorized_keys
[hadoop@hadoop001 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvMTc9H1fcYpyluMUCIw/dkAWmSqBgH8ZKVCghs5p8HzhHekBmW6uNbd54rROcOrIWir/wyGPkm/O9at6SOYpusXp+pPm/XbrwJH7hmxoAgeYg16gwvAv7bwoC9FKiuNw6KuwKG9jyuRKhpvfv9r2PornjvgLapnSOdGb0WSlRO8dK4WKCmgHBGD6Ijm4a8AmopeMRBMjEkAxSKJCxob6V+bX3A9dVVTJKaXJq116wIZDyw59BaSHAAYSj/aI8IWJ1L3gcdjLBy1FMD+AxnBsyz9ze1WbrB9ztd+IQkWm9Rsp+GQZ7eUvg9eUQxcQLqKF3EhSPvNqnZQ6/7hAuPDHkQ== hadoop@hadoop001
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwwVGuAw3sCgQ+zcSo2vLyuHTxr1VP54th4mQCFwN4J2xOCHAOrX6qLcp2/DGs9p3cqrPnw0O+T0WBRWLfpc695SPrbLaTs7nGAl7R/+3N4A5QYS9a1dqsqvR4cp/qqvsWZ5nZzbf0OIyop35TY+exU65C3UuoRHoew34/wd0mEdbc+3+uHjzhOdKCJbdNNkVffKJu8/fLQwAbq38yrMQYRxJo2WAkhjxs9yySmHVltIBtcFBYV1mhKFzk3b2cTbjmdo2B9LpCpgWmeb14OMlC9f9nhbvInDJXqQNIpHv34bRqlUuCzQcKXQyJ+N8H+ufsx4PPTBdhzgEfuVf6d6/NQ== hadoop@hadoop002
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAr9kT9mKwS8Zfp8WjeA9jbWeN29NgYo9DHW2PLlHK1nH/KsGHXj4WUkm0IcZ2Qf/Nua6O7XvGvm2UYbo3acAScRtkx7Lchb7gZbuz4FcBnRqPieWyZPLxh/z5an/In1mT+YldNVjLW0lllRq+Gx4m/TbJG3zWrNV0AEQpuqsa5YTKzJ6bOBazAFHaTyrA6i7YmEDrI0x3QOLtIHWVLE/iT0TlkOa0BqTe/jADcmeyQgmGuJBhoONO10HlbXDuT1edBYWgspZ0nGo5ejoXU+DsDbcts3LH1o3lhn0na5yDSYG2jP9Nsgo7x2LHaZhUa0n5PHs9N8paVLpO/fTgrI/jWw== hadoop@hadoop003

[hadoop@hadoop001 .ssh]$ scp authorized_keys root@hadoop002:/home/hadoop/.ssh/
The authenticity of host 'hadoop002 (172.26.252.119)' can't be established.
RSA key fingerprint is 69:c8:37:3e:31:21:67:05:9a:15:56:be:f9:f6:1b:76.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop002,172.26.252.119' (RSA) to the list of known hosts.
root@hadoop002's password:
authorized_keys 100% 1194 1.2KB/s 00:00
[hadoop@hadoop001 .ssh]$ scp authorized_keys root@hadoop003:/home/hadoop/.ssh/
The authenticity of host 'hadoop003 (172.26.252.120)' can't be established.
RSA key fingerprint is 94:87:fe:a1:c3:5b??4c:8a:b4:72:7f:4b:8d:ee:fd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop003,172.26.252.120' (RSA) to the list of known hosts.
root@hadoop003's password:
authorized_keys 100% 1194 1.2KB/s 00:00
[hadoop@hadoop001 .ssh]$
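The merge-and-lock-down steps above can be wrapped in a small helper (merge_keys is a name we invented for illustration). Copying the result as the hadoop user, rather than as root like the transcript does, also avoids the ownership fix-up shown below:

```shell
# merge_keys DIR: concatenate every id_rsa.pub* file in DIR into
# DIR/authorized_keys and restrict it to the owner (mode 600).
merge_keys() {
  local dir="$1"
  cat "$dir"/id_rsa.pub* > "$dir"/authorized_keys
  chmod 600 "$dir"/authorized_keys
}

# On hadoop001, after the other nodes' keys were copied in as
# id_rsa.pub2 / id_rsa.pub3:
#   merge_keys ~/.ssh
#   scp ~/.ssh/authorized_keys hadoop@hadoop002:~/.ssh/
#   scp ~/.ssh/authorized_keys hadoop@hadoop003:~/.ssh/
```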
Note: do all of this in production as well, or problems will surface later.
Steps:
[root@hadoop001 ~]# chown -R hadoop:hadoop /home/hadoop/.ssh/*
[root@hadoop001 ~]# chown -R hadoop:hadoop /home/hadoop/.ssh

The authorized_keys file must have mode 600:
[hadoop@hadoop001 .ssh]$ chmod 600 authorized_keys

Warning: Permanently added 'hadoop002,172.26.252.119' (RSA) to the list of known hosts.
Mon Nov 26 20:37:11 CST 2018
[hadoop@hadoop002 .ssh]$ ssh hadoop001 date
Mon Nov 26 20:37:18 CST 2018
[hadoop@hadoop002 .ssh]$ ssh hadoop002 date
Mon Nov 26 20:37:27 CST 2018
[hadoop@hadoop002 .ssh]$ ssh hadoop003 date
Mon Nov 26 20:37:36 CST 2018
[hadoop@hadoop002 .ssh]$
Chaining the checks with &&:
[hadoop@hadoop001 .ssh]$ ssh hadoop001 date && ssh hadoop002 date && ssh hadoop003 date
Mon Nov 26 20:39:53 CST 2018
Mon Nov 26 20:39:53 CST 2018
Mon Nov 26 20:39:53 CST 2018
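The date checks above can be automated. A sketch (check_ssh is an illustrative helper; BatchMode makes ssh fail instead of prompting for a password):

```shell
# check_ssh HOST...: print ok/FAILED for passwordless SSH to each host.
check_ssh() {
  local host
  for host in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
      echo "$host: ok"
    else
      echo "$host: FAILED"
    fi
  done
}

# check_ssh hadoop001 hadoop002 hadoop003
```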

Environment variables: settings shared by every user belong under /etc (e.g. /etc/profile); per-user settings belong in the home directory (e.g. ~/.bash_profile). Choose global vs. personal according to whether all users need the variable.

Note that the colon appends a directory to PATH rather than replacing it.

Remember to add Hadoop's sbin directory to PATH as well.
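For example, a per-user ~/.bash_profile might look like the following sketch (the JAVA_HOME path is an assumption; substitute your own install locations):

```shell
# ~/.bash_profile -- per-user environment for the hadoop account
export JAVA_HOME=/usr/java/jdk1.8.0_45                    # assumed path
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.6
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
# The colon appends to the existing PATH; include sbin as well as bin.
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$PATH
```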

Check ZooKeeper's status on each node:

$ZOOKEEPER_HOME/bin/zkServer.sh status

Before formatting the NameNode, start the JournalNodes:
sbin/hadoop-daemon.sh start journalnode
Start them on every node.
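Starting a daemon on every node can be scripted once passwordless SSH works; run_on_all is a hypothetical helper, using the hostnames from this post:

```shell
# run_on_all CMD: execute CMD on every cluster node over SSH.
run_on_all() {
  local cmd="$1" node
  for node in hadoop001 hadoop002 hadoop003; do
    echo "== $node =="
    ssh "$node" "$cmd"
  done
}

# run_on_all '$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode'
```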

The standby NameNode must hold the same metadata as the active one, but must NOT be formatted itself. Copy the data directory over from the formatted NameNode instead:
scp -r data hadoop002:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/
Once it is transferred, the standby is ready.

Initialize ZKFC (run hdfs zkfc -formatZK once, on one of the NameNode hosts).

For YARN, the second ResourceManager now has to be started manually on the standby host.

Last login: Mon Nov 26 23:50:07 2018 from 223.78.248.18

Welcome to Alibaba Cloud Elastic Compute Service !

[root@hadoop001 ~]# su - hadoop
[hadoop@hadoop001 ~]$ cd app/zookeeper-3.4.6/bin/
[hadoop@hadoop001 bin]$ cd app/zookeeper-3.4.6/bin/
-bash: cd: app/zookeeper-3.4.6/bin/: No such file or directory
[hadoop@hadoop001 bin]$ zkServer.sh stop
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[hadoop@hadoop001 bin]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop001 bin]$ cd ..
[hadoop@hadoop001 zookeeper-3.4.6]$ cd ..
[hadoop@hadoop001 app]$ cd hadoop-2.6.0-cdh5.7.0/sbin/
[hadoop@hadoop001 sbin]$ ./start-dfs.sh
18/11/26 23:58:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001 hadoop002]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop002: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop002.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
hadoop003: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop003.out
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop003: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop003.out
hadoop001: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop001.out
hadoop002: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop002.out
18/11/26 23:58:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop002: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop002.out
hadoop001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop001.out
[hadoop@hadoop001 sbin]$ start-yarn.sh
-bash: start-yarn.sh: command not found
[hadoop@hadoop001 sbin]$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop003: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop003.out
hadoop002: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop002.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out
[hadoop@hadoop001 sbin]$ ./yarn-daemon.sh start resourcemanager
resourcemanager running as process 5340. Stop it first.
[hadoop@hadoop001 sbin]$ ./$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
-bash: .//home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin/mr-jobhistory-daemon.sh: No such file or directory
[hadoop@hadoop001 sbin]$ cd
[hadoop@hadoop001 ~]$ $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/mapred-hadoop-historyserver-hadoop001.out
[hadoop@hadoop001 ~]$ hdfs dfsadmin -report
18/11/27 00:00:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 126418354176 (117.74 GB)
Present Capacity: 111909064704 (104.22 GB)
DFS Remaining: 111908978688 (104.22 GB)
DFS Used: 86016 (84 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0


Live datanodes (3):

Name: 172.26.252.120:50010 (hadoop003)
Hostname: hadoop003
Decommission Status : Normal
Configured Capacity: 42139451392 (39.25 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 4835282944 (4.50 GB)
DFS Remaining: 37304139776 (34.74 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.53%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Nov 27 00:00:09 CST 2018

Name: 172.26.252.118:50010 (hadoop001)
Hostname: hadoop001
Decommission Status : Normal
Configured Capacity: 42139451392 (39.25 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 4837081088 (4.50 GB)
DFS Remaining: 37302341632 (34.74 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.52%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Nov 27 00:00:07 CST 2018

Name: 172.26.252.119:50010 (hadoop002)
Hostname: hadoop002
Decommission Status : Normal
Configured Capacity: 42139451392 (39.25 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 4836925440 (4.50 GB)
DFS Remaining: 37302497280 (34.74 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.52%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Nov 27 00:00:09 CST 2018

Last login: Mon Nov 26 23:56:15 2018 from 223.78.248.18

Welcome to Alibaba Cloud Elastic Compute Service !

[root@hadoop003 ~]# su - hadoop
[hadoop@hadoop003 ~]$ cd app/zookeeper-3.4.6/bin/
[hadoop@hadoop003 bin]$ zkServer.sh stop
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[hadoop@hadoop003 bin]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop003 bin]$

Last login: Mon Nov 26 23:54:41 2018 from 223.78.248.18

Welcome to Alibaba Cloud Elastic Compute Service !

[root@hadoop002 ~]# su - hadoop
[hadoop@hadoop002 ~]$ cd app/zookeeper-3.4.6/bin/
[hadoop@hadoop002 bin]$ zkServer.sh stop
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[hadoop@hadoop002 bin]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop002 bin]$ cd ..
[hadoop@hadoop002 zookeeper-3.4.6]$ cd ..
[hadoop@hadoop002 app]$ cd sb
-bash: cd: sb: No such file or directory
[hadoop@hadoop002 app]$ cd hadoop-2.6.0-cdh5.7.0/sbin/
[hadoop@hadoop002 sbin]$ ./yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop002.out
[hadoop@hadoop002 sbin]$

