Setting Up a Big Data Environment on macOS (2): Installing the JDK, Configuring Passwordless SSH, and Installing Hadoop
Preparing the Machines and Creating a User
After logging in as root, create a new user:
useradd hadoop
Set a password for it:
passwd hadoop
Grant the hadoop user root privileges via sudoers:
chmod u+w /etc/sudoers
vim /etc/sudoers
Below the existing line root ALL=(ALL) ALL, add hadoop ALL=(ALL) ALL, then save and remove the write permission again:
chmod u-w /etc/sudoers
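For reference, the relevant section of /etc/sudoers should read like this after the edit:
root    ALL=(ALL)       ALL
hadoop  ALL=(ALL)       ALL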
Installing the JDK and Maven
Installing the JDK
[root@localhost ~]# yum search jdk
Then pick any one of the listed versions and install it:
[root@localhost ~]# yum install java-1.7.0-openjdk-devel.x86_64 -y
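To confirm the install worked (yum's alternatives mechanism links java onto the PATH automatically), print the version; the exact version string depends on the build you chose:
[root@localhost ~]# java -version
[root@localhost ~]# javac -version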
Configure the Java environment variables
- Find the JDK path:
[root@localhost ~]# whereis java
java: /usr/bin/java /usr/lib/java /etc/java /usr/share/java /usr/share/man/man1/java.1.gz
[root@localhost ~]# ll /usr/bin/java
lrwxrwxrwx. 1 root root 22 Feb 7 16:26 /usr/bin/java -> /etc/alternatives/java
[root@localhost ~]# ll /etc/alternatives/java
lrwxrwxrwx. 1 root root 76 Feb 7 16:26 /etc/alternatives/java -> /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.261-2.6.22.2.el7_8.x86_64/jre/bin/java
The final target of the symlink chain is the JDK path; the directory above jre/bin/java is what JAVA_HOME should point to.
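As a shortcut, readlink can resolve the whole symlink chain in one step instead of following each link by hand:
[root@localhost ~]# readlink -f /usr/bin/java
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.261-2.6.22.2.el7_8.x86_64/jre/bin/java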
- Edit the configuration file:
[root@localhost ~]# vi /etc/profile
- Append the following at the end (note: shell variable assignments must have no spaces around =):
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.261-2.6.22.2.el7_8.x86_64
export MAVEN_HOME=/home/hadoop/local/opt/apache-maven-3.3.1
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
- After saving, run the source command to make the configuration take effect immediately:
[root@localhost ~]# source /etc/profile
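A quick sanity check that the variables took effect; the values should match the paths set above:
[root@localhost ~]# echo $JAVA_HOME
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.261-2.6.22.2.el7_8.x86_64
[root@localhost ~]# java -version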
Installing Maven
- Install the wget download tool:
[root@localhost ~]# yum -y install wget
- Download the Maven repository file and install Maven:
[root@localhost ~]# wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
[root@localhost ~]# yum -y install apache-maven
[root@localhost ~]# mvn -version
If the version prints correctly, the installation succeeded.
Configuring Passwordless SSH
Since Java and Maven are already installed, the other two node VMs can be cloned directly from this one; then just change each clone's static IP address:
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s8
Find the static IP entry, change it, save, and quit, then restart the network service:
[root@localhost ~]# service network restart
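For reference, a minimal static-IP stanza for ifcfg-enp0s8; the address below assumes the 192.168.56.x host-only network used in the hosts file later, so adjust IPADDR per machine:
TYPE=Ethernet
BOOTPROTO=static
DEVICE=enp0s8
ONBOOT=yes
IPADDR=192.168.56.121
NETMASK=255.255.255.0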
- Boot the three machines and rename them master, slave1, and slave2 respectively, then reboot the systems.
[root@localhost ~]# vi /etc/sysconfig/network
Edit the contents as follows:
#Created by anaconda
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master
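On CentOS 7 the rename can also be done with hostnamectl, which applies the new hostname without editing the file by hand:
[root@localhost ~]# hostnamectl set-hostname master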
- Edit /etc/hosts on master:
[hadoop@localhost root]$ sudo vi /etc/hosts
Add the following entries:
192.168.56.121 master
192.168.56.122 slave1
192.168.56.123 slave2
Copy the hosts file to slave1 and slave2:
[hadoop@localhost root]$ sudo scp /etc/hosts root@slave1:/etc
[hadoop@localhost root]$ sudo scp /etc/hosts root@slave2:/etc
To confirm the copy succeeded, view the file on the slave1 node, as in the check below.
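For example (the prompt assumes slave1 has already been renamed; the output should include the three entries added on master alongside the default localhost lines):
[hadoop@slave1 root]$ cat /etc/hosts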
- On the master machine, log in as the hadoop user (make sure all of the following steps are executed as hadoop) and run the ssh-keygen -t rsa command to generate a key pair; press Enter at each prompt to accept the defaults and an empty passphrase:
[hadoop@localhost root]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:R5+1qt1CeHP3Y25Gy1mwoS2rTkX/R27ZR6gEQd6A5aQ hadoop@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
| +* |
| .= + |
| E = o . |
| . + +oo |
| S ..=o++o|
| ..o*o+*=|
| .oo=oo%|
| . oo. X+|
| .+...*..|
+----[SHA256]-----+
[hadoop@localhost root]$
- Copy the public key to slave1 and slave2:
ssh-copy-id -i ~/.ssh/id_rsa.pub slave1
ssh-copy-id -i ~/.ssh/id_rsa.pub slave2
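ssh-copy-id appends the public key to ~/.ssh/authorized_keys on each remote host, prompting once for the hadoop user's password there. If ssh-copy-id is unavailable, a manual sketch that does the same thing:
cat ~/.ssh/id_rsa.pub | ssh hadoop@slave1 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'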
- Log in again: slave1 and slave2 can now be reached without a password, as in the test below.
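A quick test from master; the absence of a password prompt means the setup worked (hostnames assume the renaming done earlier):
[hadoop@master ~]$ ssh slave1 hostname
slave1
[hadoop@master ~]$ ssh slave2 hostname
slave2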