Initializing a 5-Machine CentOS 7 Cluster Environment

I. Preparation

We will build a Hadoop cluster environment on 5 machines.

The hostnames and IPs of the 5 machines are:

Hostname      IP
nn1.hadoop    192.168.109.151
nn2.hadoop    192.168.109.152
s1.hadoop     192.168.109.153
s2.hadoop     192.168.109.154
s3.hadoop     192.168.109.155

The overall approach:

1. Use VMware to build one minimal-install CentOS 7 virtual machine.

2. Finish configuring that single VM, then use VMware's cloning feature to create the other four VMs.

3. Change each VM's static IP and hostname.

4. Generate an SSH key pair on each VM and copy every public key to one machine.

5. Copy the combined authorized_keys file back to every machine, so that all 5 machines can log in to each other without passwords.

II. Configuring a Single Machine

1. Configure a static IP

Steps:

1.1 Run the ip add command; the ens33 entry below is the interface with the real IP

[root@localhost ~]# ip add 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e8:76:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.109.145/24 brd 192.168.109.255 scope global noprefixroute dynamic ens33
       valid_lft 1188sec preferred_lft 1188sec
    inet6 fe80::2839:870b:b85b:c218/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

1.2 Edit the configuration of the ens33 interface

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# vi ifcfg-ens33

The original contents look like this:

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="45741c09-63f3-44ad-b4f9-6bcdd2e5d4f0"
DEVICE="ens33"
ONBOOT="yes"

Modify it as follows:

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"     # changed to static
IPADDR="192.168.109.151" # IP address
NETMASK="255.255.255.0" # netmask
GATEWAY="192.168.109.2" # gateway
DNS1="192.168.109.2"    # DNS (ifcfg files use the keys DNS1, DNS2, ...)
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="45741c09-63f3-44ad-b4f9-6bcdd2e5d4f0"
DEVICE="ens33"
ONBOOT="yes"
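The hand edit above can also be scripted. The sketch below applies the same changes with sed and a here-doc; it runs against a throwaway copy here rather than the real /etc/sysconfig/network-scripts/ifcfg-ens33, so you can review the result first.

```shell
# Sketch only: operate on a temp copy; point CFG at the real ifcfg-ens33 to apply.
CFG=$(mktemp)
printf 'BOOTPROTO="dhcp"\nONBOOT="yes"\n' > "$CFG"   # stand-in for the original file

sed -i 's/^BOOTPROTO=.*/BOOTPROTO="static"/' "$CFG"  # dhcp -> static
cat >> "$CFG" <<'EOF'
IPADDR="192.168.109.151"
NETMASK="255.255.255.0"
GATEWAY="192.168.109.2"
DNS1="192.168.109.2"
EOF
grep '^BOOTPROTO' "$CFG"   # BOOTPROTO="static"
```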

1.3 Stop the NetworkManager service so it does not override the static configuration

[root@localhost network-scripts]# systemctl stop NetworkManager.service

1.4 Disable it from starting at boot

[root@localhost network-scripts]# systemctl disable NetworkManager.service
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.

1.5 Edit the /etc/resolv.conf file

[root@localhost network-scripts]# vi /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.109.2 # if missing, add it manually; set it to the gateway

1.6 Restart the network service

[root@localhost network-scripts]# systemctl restart network.service

1.7 Check the interface configuration

[root@localhost network-scripts]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e8:76:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.109.151/24 brd 192.168.109.255 scope global ens33 # after the change
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee8:7684/64 scope link 
       valid_lft forever preferred_lft forever

1.8 Verify that the machine can reach the Internet

[root@localhost network-scripts]# ping www.baidu.com
PING www.a.shifen.com (119.75.217.109) 56(84) bytes of data.
64 bytes from 119.75.217.109 (119.75.217.109): icmp_seq=1 ttl=128 time=5.59 ms
64 bytes from 119.75.217.109 (119.75.217.109): icmp_seq=2 ttl=128 time=7.22 ms
64 bytes from 119.75.217.109 (119.75.217.109): icmp_seq=3 ttl=128 time=5.44 ms

2. Set the hostname

[root@localhost ~]# hostnamectl set-hostname nn1.hadoop
[root@localhost ~]# hostname
nn1.hadoop
[root@localhost ~]# reboot    # reboot

Configure the hosts file mapping IPs to hostnames

[root@nn1 ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# add the following entries
192.168.109.151 nn1.hadoop
192.168.109.152 nn2.hadoop
192.168.109.153 s1.hadoop
192.168.109.154 s2.hadoop
192.168.109.155 s3.hadoop
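Before copying these entries to every machine, it can help to sanity-check them. A small sketch, run here against a temp copy of the mapping rather than the real /etc/hosts:

```shell
# Sketch: check that every cluster hostname appears exactly once in the mapping.
HOSTS=$(mktemp)                       # stand-in for /etc/hosts
cat > "$HOSTS" <<'EOF'
192.168.109.151 nn1.hadoop
192.168.109.152 nn2.hadoop
192.168.109.153 s1.hadoop
192.168.109.154 s2.hadoop
192.168.109.155 s3.hadoop
EOF
for h in nn1.hadoop nn2.hadoop s1.hadoop s2.hadoop s3.hadoop; do
    n=$(grep -c " $h\$" "$HOSTS")
    [ "$n" -eq 1 ] && echo "$h ok" || echo "$h MISSING"
done
```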

3. Disable the firewall

[root@nn1 ~]# systemctl stop firewalld.service 
[root@nn1 ~]# systemctl disable firewalld.service 

4. Disable SELinux

SELinux (Security-Enhanced Linux) is the NSA's implementation of mandatory access control, and one of the most notable security subsystems in the history of Linux.

4.1 Check the current status:

[root@nn1 ~]# /usr/sbin/sestatus -v
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31

4.2 To disable it:

[root@nn1 ~]# vim /etc/selinux/config 

4.3 Set SELINUX to disabled in the file

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
# SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
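The same edit can be made with a single sed command. A sketch against a temp copy; on the real system, note that the config change only takes effect after a reboot (setenforce 0 turns enforcement off immediately for the current session):

```shell
# Sketch: flip SELINUX=enforcing to disabled; CFG stands in for /etc/selinux/config.
CFG=$(mktemp)
echo 'SELINUX=enforcing' > "$CFG"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$CFG"
cat "$CFG"   # SELINUX=disabled
```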

5. Add a hadoop user and add it to the wheel group

5.1 Add the hadoop user

[root@nn1 ~]# useradd hadoop
[root@nn1 ~]# passwd hadoop
Changing password for user hadoop.
New password:               # password: 123456
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:        # password: 123456
passwd: all authentication tokens updated successfully.
[root@nn1 ~]# 

5.2 Add the hadoop user to the wheel group so hadoop can gain root privileges

[root@nn1 ~]# gpasswd -a hadoop wheel
Adding user hadoop to group wheel
[root@nn1 ~]# cat /etc/group | grep wheel
wheel:x:10:hadoop
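The interactive steps above can be collected into a small non-interactive script, which is handy when preparing several machines. A sketch: 123456 is the example password used throughout this guide, and passwd --stdin is a CentOS/RHEL-specific option. The script is only written out for review here; run it as root to apply.

```shell
# Sketch: write the provisioning steps to a script for review before running as root.
cat > /tmp/add_hadoop_user.sh <<'EOF'
#!/bin/bash
useradd hadoop
echo '123456' | passwd --stdin hadoop   # --stdin: read password from stdin (CentOS/RHEL)
gpasswd -a hadoop wheel                 # grant su-to-root rights via the wheel group
EOF
chmod +x /tmp/add_hadoop_user.sh
cat /tmp/add_hadoop_user.sh
```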

5.3 Allow only wheel-group users to switch to root, and let wheel members switch without a password

[root@nn1 ~]# vim /etc/pam.d/su

5.3.1 Find the following two lines and uncomment them:
#auth           sufficient      pam_wheel.so trust use_uid    # switch to root without a password
#auth           required        pam_wheel.so use_uid          # only wheel members may switch with su

#%PAM-1.0
auth            sufficient      pam_rootok.so
# Uncomment the following line to implicitly trust users in the "wheel" group.
#auth           sufficient      pam_wheel.so trust use_uid
auth            sufficient      pam_wheel.so trust use_uid
# Uncomment the following line to require a user to be in the "wheel" group.
#auth           required        pam_wheel.so use_uid
auth            required        pam_wheel.so use_uid
auth            substack        system-auth
auth            include         postlogin
account         sufficient      pam_succeed_if.so uid = 0 use_uid quiet
account         include         system-auth
password        include         system-auth
session         include         system-auth
session         include         postlogin
session         optional        pam_xauth.so

5.3.2 Edit the /etc/login.defs file

Only the wheel group may su to root:

[root@nn1 ~]# cp /etc/login.defs  /etc/login_defs.bak     # back it up first
[root@nn1 ~]# echo "SU_WHEEL_ONLY yes" >> /etc/login.defs  # append at the end of the file
[root@nn1 ~]# tail /etc/login.defs # view the end of the file
UMASK           077

# This enables userdel to remove user groups if no members exist.
#
USERGROUPS_ENAB yes

# Use SHA512 to encrypt password.
ENCRYPT_METHOD SHA512 

SU_WHEEL_ONLY yes
[root@nn1 ~]# 

5.3.3 Verify the passwordless switch

[root@nn1 ~]# su - hadoop
Last login: Wed May  8 14:16:26 CST 2019 on pts/0
[hadoop@nn1 ~]$ su - root
Last login: Wed May  8 14:15:30 CST 2019 on pts/0
Last failed login: Wed May  8 14:16:35 CST 2019 on pts/0
There was 1 failed login attempt since the last successful login.
[root@nn1 ~]# 

6. Configure the Aliyun yum repository

6.1 File needed: https://pan.baidu.com/s/16fgL0dY-QVk6PnNL1GyglA     extraction code: wivt

6.2 First install lrzsz, a tool for transferring files between Linux and Windows

[root@nn1 ~]# yum install -y lrzsz

6.3 Run rz and select the Centos-7.repo file downloaded earlier

Replace the original CentOS-Base.repo with it:

[root@nn1 ~]# cd /usr
[root@nn1 usr]# rz
.......
[root@nn1 usr]# cp Centos-7.repo /etc/yum.repos.d/ 
[root@nn1 usr]# cd /etc/yum.repos.d/ 
[root@nn1 yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak 
[root@nn1 yum.repos.d]# mv Centos-7.repo CentOS-Base.repo

6.4 Run the yum refresh commands in order

[root@nn1~]# yum clean all   # clear the caches of the old mirrors
[root@nn1~]# yum makecache   # build the cache from the Aliyun mirror
[root@nn1~]# yum update -y   # update from the Aliyun mirror

6.5 Install common software

[root@nn1~]# yum install -y openssh-server vim gcc gcc-c++ glibc-headers bzip2-devel lzo-devel curl wget openssh-clients zlib-devel autoconf automake cmake libtool openssl-devel fuse-devel snappy-devel telnet unzip zip net-tools.x86_64 firewalld systemd ntp psmisc rsync

        openssh: the SSH protocol        --depends on-->  openssl: cryptography library

        curl: open-source command-line file-transfer tool driven by URL syntax

        wget: download tool

        zlib, snappy-devel, unzip, zip: compression tools

        autoconf, automake, cmake, fuse-devel: source-build tooling

        libtool: shared-library support

        psmisc: must be installed, otherwise NameNode high availability will not work

If the install fails, you can retry with yum reinstall to overwrite the packages:

[root@nn1~]# yum reinstall -y openssh-server vim gcc gcc-c++ glibc-headers bzip2-devel lzo-devel curl wget openssh-clients zlib-devel autoconf automake cmake libtool openssl-devel fuse-devel snappy-devel telnet unzip zip net-tools.x86_64 firewalld systemd ntp ntpdate psmisc

6.6 Use yum list installed to review the installed packages

7. Install JDK 8

7.1 File needed: https://pan.baidu.com/s/1z6WaVzdb-RoW5G6gf87sFQ  extraction code: 1xof

7.2 Upload it with rz

[root@nn1~]# cd /usr/tmp
[root@nn1 tmp]# rz

7.3 Install the rpm

[root@nn1 tmp]# rpm -ivh jdk-8u144-linux-x64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8.0_144-2000:1.8.0_144-fcs  ################################# [100%]
Unpacking JAR files...
        tools.jar...
        plugin.jar...
.............

7.4 Configure the JDK environment variables

[root@localhost tmp]# vim /etc/profile

Append the following at the end of the file:

# append at the end of the file
export JAVA_HOME=/usr/java/jdk1.8.0_144
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

7.5 Source the file to apply it, then check the JDK installation with java -version

[root@nn1 ~]# source /etc/profile
[root@nn1 ~]# java -version 
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
[root@nn1 ~]#

8. Schedule an hourly time sync (cron job)

crontab -l  # list the current user's scheduled jobs
crontab -r  # remove the current user's scheduled jobs; running it deletes every job under that user
crontab -e  # create/edit scheduled jobs

[root@nn1 ~]# crontab -e

Add a job that runs ntpdate time1.aliyun.com on the hour to keep the clock synchronized:

 0 * * * *  /usr/sbin/ntpdate time1.aliyun.com >> /tmp/autontpdate 2>&1
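An alternative to the interactive crontab -e is a drop-in file under /etc/cron.d, which uses the same five time fields plus an extra "user" field. A sketch, written to a temp file here for inspection rather than installed:

```shell
# Sketch: hourly ntpdate job in /etc/cron.d format (minute=0 -> runs on the hour).
FRAG=$(mktemp)
echo '0 * * * * root /usr/sbin/ntpdate time1.aliyun.com >> /tmp/autontpdate 2>&1' > "$FRAG"
cat "$FRAG"
# To activate: cp "$FRAG" /etc/cron.d/ntpdate-hourly
```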

III. Cloning the Other 4 Machines

With the commands above done, a basic Linux system is configured. Export this nn1.hadoop virtual machine, then create 4 more Linux systems from the exported image.

Of these: nn2.hadoop is the standby node

s1.hadoop, s2.hadoop, s3.hadoop are the three worker nodes

1. Configure the hostname and static IP of nn2, s1, s2, and s3

Using nn2 as the example; the steps are the same for the other hosts:

*** When configuring the other hosts' IPs, shut nn1 down first and bring the clones up one at a time to avoid IP conflicts.

[root@nn1 ~]# hostnamectl set-hostname nn2.hadoop 
[root@nn1 ~]# hostname
nn2.hadoop
[root@nn1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"     # changed to static
#IPADDR="192.168.109.151" # old IP
IPADDR="192.168.109.152" # change 151 to 152, nn2's address from the hosts file
NETMASK="255.255.255.0" # netmask
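The only per-clone difference is the last octet of IPADDR, so the edit can be scripted. A sketch operating on a temp copy; adjust the target octet per host (nn2→152, s1→153, s2→154, s3→155):

```shell
# Sketch: patch IPADDR in place; CFG stands in for ifcfg-ens33 on the clone.
CFG=$(mktemp)
echo 'IPADDR="192.168.109.151"' > "$CFG"
sed -i 's/109\.151/109.152/' "$CFG"   # on nn2; use 153/154/155 for s1/s2/s3
cat "$CFG"   # IPADDR="192.168.109.152"
```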

After configuring the IP, verify that ping works:

[hadoop@nn2 ~]$ ping www.baidu.com
PING www.baidu.com (182.61.200.7) 56(84) bytes of data.
64 bytes from 182.61.200.7 (182.61.200.7): icmp_seq=1 ttl=128 time=4.25 ms
64 bytes from 182.61.200.7 (182.61.200.7): icmp_seq=2 ttl=128 time=4.74 ms

IV. Passwordless SSH Login Configuration

Switch to the hadoop user on all 5 machines and configure passwordless login between the hadoop users.

1. Run ssh-keygen -t rsa on each of nn1, nn2, s1, s2, and s3

[hadoop@nn1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:PIM/73UWa/RV+8GbeMUsHPKVDoVE4OezVkFAZ1+UEY4 hadoop@s2.hadoop
The key's randomart image is:
+---[RSA 2048]----+
|           .==oO*|
|          .  .B.+|
|           ..E.++|
|       o    o++=+|
|      . S    o**=|
|       . o   .==B|
|        o   .+=+o|
|         o ..+.  |
|         .o      |
+----[SHA256]-----+
[hadoop@nn1 ~]$ 

This generates the id_rsa private key and id_rsa.pub public key under the .ssh directory in the home directory:

[hadoop@nn1 ~]$ cd .ssh
[hadoop@nn1 .ssh]$ ll
total 16
-rw-------. 1 hadoop hadoop 1675 May  8 15:07 id_rsa
-rw-r--r--. 1 hadoop hadoop  398 May  8 15:07 id_rsa.pub

2. On nn1, create an all_id directory

[hadoop@nn1 ~]$ mkdir all_id
[hadoop@nn1 ~]$ ll
total 0
drwxrwxr-x. 2 hadoop hadoop 6 May  8 15:09 all_id

3. On each of nn1, nn2, s1, s2, and s3, run scp to copy that machine's id_rsa.pub into the all_id directory on nn1

[hadoop@nn2 ~]$ scp .ssh/id_rsa.pub hadoop@nn1.hadoop:/home/hadoop/all_id/nn2_rsa_pub
hadoop@nn1.hadoop's password: 
id_rsa.pub                                                                                                    100%  398   185.1KB/s   00:00    
[hadoop@nn2 ~]$ 
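The command differs on each node only in the file name it sends, so it can be expressed once. A dry-run sketch: the loop body echoes the command instead of running it, because the uploaded name depends on the local host; drop the echo (or eval the string) to execute:

```shell
# Sketch (dry run): name the uploaded key after the local host so the five
# keys don't collide in all_id/.
ME=$(hostname -s)                   # short hostname, e.g. nn2 on nn2.hadoop
CMD="scp ~/.ssh/id_rsa.pub hadoop@nn1.hadoop:/home/hadoop/all_id/${ME}_rsa_pub"
echo "$CMD"                         # dry run; eval "$CMD" to execute
```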

4. Append the public keys of nn1, nn2, s1, s2, and s3 to the authorized_keys file

[hadoop@nn1 ~]$ cd all_id
[hadoop@nn1 all_id]$ cat nn1_rsa_pub > ~/.ssh/authorized_keys
[hadoop@nn1 all_id]$ cat nn2_rsa_pub >> ~/.ssh/authorized_keys
[hadoop@nn1 all_id]$ cat s1_rsa_pub >> ~/.ssh/authorized_keys
[hadoop@nn1 all_id]$ cat s2_rsa_pub >> ~/.ssh/authorized_keys
[hadoop@nn1 all_id]$ cat s3_rsa_pub >> ~/.ssh/authorized_keys
[hadoop@nn1 all_id]$ cat ~/.ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKCSNDL6TPDU1dVkNe3ss0dPkl8/OV9pMsOGccTKgUQKb4iJFt3PdnFn9EO+YcKX0wQzDbZkG0TDSHpOhdpgnVPXYBUNv1A6YEnEH2xdJ2Nt5ZGzclRaWuesak+iPJiWF/9AWMhWjw/DCuxh9eDGz4F7iVsvlP557yFAQR6bJM0uESMSdkA/72SCrJT7H0PQ+scdSg6lAdb3ij5EMsjWcyY5kAm7wJMX53R0w2sg9kTgpq2a/HP6hp81N9zjvckGKBNCsy6SrsOguyO0tlo13VpdBqUaYRxBpeKW1YIxfHTBCtVD2ha4dm03PGqpd2as3rnHvl/HTkrFB099fzC0n5 hadoop@nn1.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDHowws1HajWVCC8TOzVdIj1zdWgHLaJXVRzu0RyRsv6fS5xL0HMDzpwrdH6fzc7U32t/dyTa1uAi5a5gwXm6pydyKCrmlrECX2g4dm72ZiiBpFP3oETHePIlk09AImcFlV72tXDOFmmVH3QxwnM2JkmmQUO3hZkrDF/0TtHRKHEj4b1yntTZebPgTavPK4M0Tu2NXXXhW00kcwLYcI8VCwDYBcxMQ6KWBkkQg5G6aZfXT3uGyYCSdWaDM0YFOnv5GJpaD+Ha1PfwcLrRhtuKhOsS8VoSICg6Af1O8uBfQukEy281fT7rL08Ba/SztjAWGuZ9iDZTZjaY/P8cWczC7z hadoop@nn2.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC6ti5YCzMssYilq0f6Mg6hVKkYxho1ldlOVQrOMYgccI4k//SkfXdBoWZQcUa2Vlj8L2HmP/u3JGbUF51ANXBirWzoBC+gQ4yiK4ErjiW82u//obwZ4lte3k7FeKHqcgw32P3G82sH12ftKAl/1ni7gJBT0FdYM70cd3Z4RLQRtfW+fr+1R+J8iSBunT8SFIFw6D+fhZAEmqT26p5Dy++KnMx5nrZfgrHCMuLUmSr9WRm4PkGBxrzObmVRbhnYR0/W+wyoFtlwBQg/r2x3dwap6/Akt0Twipt3z7N4rV/8XM86NO6YugB5fuUT/L4O3xUHUc9TCHdk0GUnBq1vU4wr hadoop@s1.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFJLVOb7KTrIZQrAP13AH7SPES4MNAY5JhGY8hdP3W1IF33rAvwgBMJNPGGDmv//pbEdhMnOS45+jXr92cDdSyKuhAjZA33QquBYapGNAznUQzim5NB1cd+IxD0m7CQQSz+jMRh917JpiHGV1ZmAj3tQ8mSaYYmSAIx5MabPGJgmTR0Rlml+wCSLnHECUPSyusQ9+KcdyN9WdQk9C9vSGaloUOj4lyjL5zYracuzemuAVk6uTLjQiy3nzk2lZM/7EzbP6I1bK4+wmFDoAHYU0gV6NIc2mH0KF4MIN32qSZA6CSEbEaaU+F3e3xADTNIMdm2T6J86aj2iFCF31FgrTP hadoop@s2.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5uizSo1VqwQIRxlvC2wAyDKCr1YSdHwA1eS2pCnn0gjrSPlglIe5qkKj9WyUKYkqJVFKdpkcTAsNsp/G9RRGSrZKAW8gY20kMq8ggBjyfb5JcOHao7wTqwY834EZUslM9J4eZUAqQWO5IdeUnncW+mNdAl+oVSpF51uQoe1kLVkGF0vQ3bfmtT+jFyptmEmumggSHuNadvdshG3w2crJZQB8KgM+5xiA8ydOPXYpeu/i3J+R7mqhQVlrU/IQGzSyXI99dbRzeUWmAVh1UfUhO17qY+PqC6SaGcfI/9Z5RgGsxBRByI0Lj5xgDUPF1kf4PF5kiBN+5CTC3x+31say3 hadoop@s3.hadoop

5. Copy nn1's authorized_keys into the .ssh directory on nn2, s1, s2, and s3

[hadoop@nn1 .ssh]$ scp authorized_keys hadoop@s3.hadoop:~/.ssh/
The authenticity of host 's3.hadoop (192.168.109.155)' can't be established.
ECDSA key fingerprint is SHA256:9jCRb+TPi342pR2nLLpDmzmI/ulZQMTRFQAXVBKxwGk.
ECDSA key fingerprint is MD5:0c:42:a6:0e:00:79:86:c4:8d:e5:a2:71:eb:30:2e:c0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 's3.hadoop,192.168.109.155' (ECDSA) to the list of known hosts.
hadoop@s3.hadoop's password: 
authorized_keys   

6. Fix file and directory permissions

On each of nn1, nn2, s1, s2, and s3, set authorized_keys to mode 600.

On each of nn1, nn2, s1, s2, and s3, set the home directory to mode 700.

On each of nn1, nn2, s1, s2, and s3, set the .ssh directory to mode 700.

[hadoop@nn1 .ssh]$ chmod 600 authorized_keys 
[hadoop@nn1 .ssh]$ chmod 700 ~
[hadoop@nn1 .ssh]$ chmod 700 ~/.ssh
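Since the fix must be applied on every node (sshd silently ignores authorized_keys when these modes are looser), it can be looped. A dry-run sketch that prints one command per node; remove the echo to execute, and note the real invocation needs the remote command quoted, as shown:

```shell
# Sketch (dry run): print the per-node permission fix; remove echo to execute.
for h in nn1 nn2 s1 s2 s3; do
    echo ssh "hadoop@${h}.hadoop" 'chmod 700 ~ ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
done | tee /tmp/fix_perms.txt
```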

Alternatively, ssh-copy-id (for example, ssh-copy-id hadoop@nn2.hadoop, run once per target host) installs a key and sets these permissions in one step.

7. Verify passwordless SSH

The first login to each host has to establish the connection; just type yes at the prompt.

[hadoop@nn2 ~]$ ssh s1.hadoop
The authenticity of host 's1.hadoop (192.168.109.153)' can't be established.
ECDSA key fingerprint is SHA256:9jCRb+TPi342pR2nLLpDmzmI/ulZQMTRFQAXVBKxwGk.
ECDSA key fingerprint is MD5:0c:42:a6:0e:00:79:86:c4:8d:e5:a2:71:eb:30:2e:c0.
Are you sure you want to continue connecting (yes/no)? yues
Please type 'yes' or 'no': yes
Warning: Permanently added 's1.hadoop,192.168.109.153' (ECDSA) to the list of known hosts.
Last login: Wed May  8 15:28:52 2019 from s3.hadoop
[hadoop@s1 ~]$ ssh s2.hadoop
Last login: Wed May  8 15:30:56 2019 from s3.hadoop
[hadoop@s2 ~]$ ssh s3.hadoop
Last login: Wed May  8 15:34:55 2019 from nn2.hadoop
[hadoop@s3 ~]$ ssh nn1.hadoop
Last login: Wed May  8 15:35:08 2019 from s3.hadoop
[hadoop@nn1 ~]$ ssh nn1.hadoop
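To check the whole mesh without typing each hop by hand, ssh's BatchMode option makes a would-be password prompt fail outright instead of blocking. A dry-run sketch to run from any one node; remove the echo to execute, and a healthy mesh then prints each remote hostname with no prompts:

```shell
# Sketch (dry run): one connectivity test per node; remove echo to execute.
for h in nn1 nn2 s1 s2 s3; do
    echo ssh -o BatchMode=yes "hadoop@${h}.hadoop" hostname
done | tee /tmp/mesh_check.txt
```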
