1. Change the IP address:
cd /etc/sysconfig/network-scripts/
ls
vim ifcfg-ens33 (opens the file for editing)
Move the cursor onto the trailing 0 of IPADDR=192.168.182.130, press lowercase r and then 1, changing 130 to 131.
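For reference, the relevant lines of a static-IP ifcfg-ens33 typically look like the sketch below; the values shown are examples, and the UUID, gateway, and DNS entries on your machine will differ:
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.182.131
NETMASK=255.255.255.0
After saving, restart networking so the new address takes effect (on CentOS 7): systemctl restart network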
2. Change the hostname
Format: hostnamectl set-hostname <machine-name>
Here: hostnamectl set-hostname master
3. Check the hostname
hostname
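master is only one of the three machines; run the corresponding command on each of the other nodes as well:
hostnamectl set-hostname slave1 (on the slave1 machine)
hostnamectl set-hostname slave2 (on the slave2 machine)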
4. Update the IP-to-hostname mappings
vim /etc/hosts
Add the following entries:
192.xxx.xxx.130 master
192.xxx.xxx.131 slave1
192.xxx.xxx.132 slave2
5. Distribute the hosts file
scp -r /etc/hosts root@slave1:/etc/
scp -r /etc/hosts root@slave2:/etc/
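Once distributed, the mappings can be verified from any node by pinging each machine by name, for example:
ping -c 1 slave1
ping -c 1 slave2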
6. Passwordless login (mutual trust)
ssh-keygen -t rsa -P ''
This command generates a public/private key pair; what we need is the key in the public key file. Each node's public key must be appended to authorized_keys,
and the public keys of all nodes have to be collected there for passwordless login (mutual trust) to work.
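A non-interactive variant that can be run once on each of the three nodes (master, slave1, slave2); the -f flag just names the key file explicitly:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa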
7. Collect the public keys of all three nodes in the authorized_keys file, then distribute that file to the other nodes:
Connect to slave1 over ssh, read its public key file, and append it to authorized_keys (the >> redirection runs locally, so the key lands in master's authorized_keys):
[root@master .ssh]# ssh slave1 cat /root/.ssh/id_rsa.pub >> authorized_keys
root@slave1's password:
The same operation appends slave2's public key to authorized_keys:
[root@master .ssh]# ssh slave2 cat /root/.ssh/id_rsa.pub >> authorized_keys
root@slave2's password:
Append the master node's own public key to authorized_keys:
[root@master .ssh]# cat id_rsa.pub >> authorized_keys
Inspect the public keys collected in authorized_keys:
[root@master .ssh]# cat authorized_keys
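If the three appends succeeded, authorized_keys now holds three lines, one ssh-rsa public key per node.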
8. Distribute the authorized_keys file to the other nodes so that passwordless login (mutual trust) takes effect
scp -r authorized_keys slave1:`pwd` (note: those are backticks)
scp -r authorized_keys slave2:`pwd`
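If passwordless login still fails afterwards, sshd is usually rejecting the key files because of their permissions; on each node the customary settings are:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
To verify mutual trust, ssh from master to each node; no password prompt should appear:
ssh slave1 hostname
ssh slave2 hostname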
9. Format the Hadoop cluster
hadoop namenode -format
(In Hadoop 2.x this spelling is deprecated; hdfs namenode -format is the current equivalent. Format only once: reformatting wipes the NameNode metadata.)
Notes:
(1) Run the format on the node that is to act as the master node (the NameNode); here that is master.
(2) Start the cluster: [root@master hadoop-2.6.1]# sbin/start-all.sh
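(In Hadoop 2.x, sbin/start-all.sh itself prints a deprecation notice; sbin/start-dfs.sh followed by sbin/start-yarn.sh is the preferred equivalent and starts the same daemons.)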
(3) Check that the cluster is healthy
(a) Check that the expected processes are running
On the master node:
[root@master hadoop-2.6.1]# jps
2640 NameNode
2944 ResourceManager
3010 Jps
2810 SecondaryNameNode
On the slave1 node (worker):
[root@slave1 ~]# jps
1626 DataNode
1722 NodeManager
1838 Jps
On the slave2 node (worker):
[root@slave2 ~]# jps
1521 DataNode
1706 Jps
1611 NodeManager
(b) Report command (run on the master node): hadoop dfsadmin -report
[root@master hadoop-2.6.1]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Configured Capacity: 79401328640 (73.95 GB)
Present Capacity: 74557636608 (69.44 GB)
DFS Remaining: 74557603840 (69.44 GB)
DFS Used: 32768 (32 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (2):
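(As the DEPRECATED notice above indicates, hdfs dfsadmin -report is the current spelling of the same command.)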
(c) Check the cluster by uploading a file
[root@master hadoop-2.6.1]# hadoop fs -put README.txt /
[root@master hadoop-2.6.1]# hadoop fs -ls /
Found 1 items
-rw-r--r-- 2 root supergroup 1366 2023-02-11 20:49 /README.txt
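As a further check, the uploaded file can be read back from HDFS:
hadoop fs -cat /README.txt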
(4) Shut down the cluster: stop all processes
[root@master hadoop-2.6.1]# sbin/stop-all.sh