Environment:
(See the previous post for server1's setup.)
Hostname | IP | Role
---|---|---
server1 | 172.25.254.1 | master node
server2 | 172.25.254.2 | slave node
server3 | 172.25.254.3 | slave node
[hadoop@server1 hadoop-2.7.3]$ sbin/stop-yarn.sh ## stop the services on server1
[hadoop@server1 hadoop-2.7.3]$ sbin/stop-dfs.sh
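Before reconfiguring, it is worth confirming that the daemons really exited. A minimal sketch (assuming the standard Hadoop 2.x daemon names) greps the process list:

```shell
# Count any remaining HDFS/YARN daemon processes; after stop-dfs.sh and
# stop-yarn.sh this should be zero.
remaining=$(ps -ef | grep -E 'NameNode|DataNode|SecondaryNameNode|ResourceManager|NodeManager' | grep -v grep | wc -l)
if [ "$remaining" -eq 0 ]; then
    echo "all Hadoop daemons stopped"
else
    echo "$remaining daemon process(es) still running"
fi
```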
Setting up the Hadoop cluster nodes:
Master node server1:
[root@server1 ~]# yum install -y nfs-utils
[root@server1 ~]# vim /etc/exports
[root@server1 ~]# cat /etc/exports
/home/hadoop *(rw,anonuid=800,anongid=800)
[root@server1 ~]# /etc/init.d/rpcbind start ## start rpcbind first, otherwise nfs will fail to start
Starting rpcbind: [ OK ]
[root@server1 ~]# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
[root@server1 ~]# showmount -e ## list the exported directories
Export list for server1:
/home/hadoop *
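The export line written to /etc/exports above has three parts: the shared path, the clients allowed to mount it (`*` means any host), and the options — `rw` grants read-write access, and `anonuid=800`/`anongid=800` map anonymous requests to uid/gid 800, which is why the hadoop user is created with `-u 800` on every node. A small sketch pulls the line apart on a copy of the string, so it touches nothing on disk:

```shell
# Split an exports line into path, client spec, and option list using
# plain parameter expansion (no changes to the real /etc/exports).
line='/home/hadoop *(rw,anonuid=800,anongid=800)'
path=${line%% *}          # everything before the first space
rest=${line#* }           # the client+options part
clients=${rest%%"("*}     # client spec: "*"
options=${rest#*"("}      # strip up to the opening "("
options=${options%)}      # strip the trailing ")"
echo "path=$path clients=$clients options=$options"
```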
Slave nodes server2 and server3 (the steps are identical on both):
[root@server2 ~]# yum install -y nfs-utils ## install nfs
[root@server2 ~]# /etc/init.d/rpcbind start ## start the services
[root@server2 ~]# /etc/init.d/nfs start
[root@server2 ~]# useradd -u 800 hadoop ## create the user (uid 800 matches the anonuid/anongid in the export)
[root@server2 ~]# mount 172.25.254.1:/home/hadoop/ /home/hadoop/ ## mount the shared directory to keep it in sync with the master
[root@server2 ~]# su - hadoop
[hadoop@server2 ~]$
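Before doing any Hadoop work as the hadoop user, it pays to verify that /home/hadoop is really an NFS mount — if the mount failed, files land on the local disk and silently diverge from the master. A small helper (the name `check_mount` is just for this sketch) greps the kernel mount table:

```shell
# check_mount DIR [TABLE]: succeed if DIR appears as a mount point in the
# given mount table (defaults to /proc/mounts).
check_mount() {
    dir=$1
    table=${2:-/proc/mounts}
    grep -q " $dir " "$table"
}

# On server2, after the mount command above:
#   check_mount /home/hadoop && echo "NFS mount active"
```

To make the mount survive reboots, the equivalent line can also go into /etc/fstab, e.g. `172.25.254.1:/home/hadoop /home/hadoop nfs defaults 0 0`.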