Fully distributed mode deploys Hadoop across multiple real Linux hosts: the cluster machines are planned out in advance so that the individual Hadoop daemons run on separate machines.
Lab environment: SELinux and iptables are disabled on all hosts.
Add name resolution on all three hosts:
172.25.0.117 server2
172.25.0.118 server3
172.25.0.119 server4
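The three entries above can be appended to /etc/hosts on each host, for example:

```shell
# Append the cluster hostnames to /etc/hosts (run as root on all three hosts)
cat >> /etc/hosts <<'EOF'
172.25.0.117 server2
172.25.0.118 server3
172.25.0.119 server4
EOF
```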
Stop the Hadoop services running on server2:
[hadoop@server2 hadoop-2.7.3]$ sbin/stop-yarn.sh
[hadoop@server2 hadoop-2.7.3]$ sbin/stop-dfs.sh
1. Install the nfs-utils package (on all three hosts):
[root@server2 hadoop]# yum install nfs-utils -y
[root@server2 hadoop]# id hadoop
uid=500(hadoop) gid=500(hadoop) groups=500(hadoop)
[root@server2 hadoop]# vim /etc/exports
/home/hadoop *(rw,anonuid=500,anongid=500)
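Here rw grants read-write access, and anonuid/anongid map anonymous requests to uid/gid 500, i.e. the hadoop user shown above. If /etc/exports is edited later while the NFS server is already running, the export table can be reloaded without a restart using the standard nfs-utils tool:

```shell
# Re-read /etc/exports and apply any changes without restarting NFS
exportfs -ra
# List the currently active exports together with their options
exportfs -v
```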
[root@server2 hadoop]# /etc/init.d/rpcbind start
Starting rpcbind: [ OK ]
[root@server2 hadoop]# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]
Verify the export:
[root@server2 hadoop]# showmount -e 172.25.0.117
Export list for 172.25.0.117:
/home/hadoop *
Install nfs-utils on server3 and server4 and mount the export (note that useradd -u 500 creates a hadoop user whose uid matches the one on server2):
[root@server3 ~]# yum install nfs-utils -y
[root@server3 ~]# useradd -u 500 hadoop
[root@server3 ~]# /etc/init.d/rpcbind start
Starting rpcbind: [ OK ]
[root@server3 ~]# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS mountd:
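The transcript above is cut off; once the NFS services are up on the clients, the export would be mounted into the hadoop home directory. A minimal sketch, assuming the export path and server address shown earlier:

```shell
# Mount server2's NFS export onto the local hadoop home directory
# (run as root on server3 and server4)
mount 172.25.0.117:/home/hadoop /home/hadoop
# Confirm the mount is in place
df -h /home/hadoop
```

Sharing /home/hadoop over NFS means the Hadoop installation and SSH keys exist once on server2 and appear identically on every node, so no per-node copying is needed.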