Hadoop NFS Gateway
1. In Hadoop's core-site.xml, add:
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
<description>Allow the proxy user (root, which runs the NFS gateway here) to impersonate users from any group</description>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>localhost</value>
<description>Host(s) from which the proxy user may impersonate users, i.e. the host(s) running the NFS gateway</description>
</property>
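If the NameNode is already running, the proxy-user change can also be pushed to it without a restart (the full restart in step 3 covers this anyway); a minimal sketch:
hdfs dfsadmin -refreshSuperUserGroupsConfiguration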
2. In Hadoop's hdfs-site.xml, add:
<property>
<name>nfs.dump.dir</name>
<value>/tmp/.hdfs-nfs</value>
</property>
<property>
<name>nfs.rtmax</name>
<value>1048576</value>
<description>This is the maximum size in bytes of a READ request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's rsize (add rsize=# of bytes to the mount directive).</description>
</property>
<property>
<name>nfs.wtmax</name>
<value>65536</value>
<description>This is the maximum size in bytes of a WRITE request supported by the NFS gateway. If you change this, make sure you also update the nfs mount's wsize (add wsize=# of bytes to the mount directive).</description>
</property>
<property>
<name>nfs.exports.allowed.hosts</name>
<value>* rw</value>
<description>Allow all hosts read-write (rw) access to the export</description>
</property>
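Before restarting, hdfs getconf can confirm the new keys are visible on this node; a quick sanity check, assuming it is run on the gateway host:
hdfs getconf -confKey nfs.dump.dir
hdfs getconf -confKey nfs.rtmax
hdfs getconf -confKey nfs.exports.allowed.hosts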
3. Restart Hadoop
/etc/init.d/hadoop-hdfs-namenode restart
/etc/init.d/hadoop-hdfs-datanode restart
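A quick way to confirm both daemons came back and HDFS is healthy (commands assumed to run on the NameNode host):
jps | grep -E 'NameNode|DataNode'   # both JVMs should be listed
hdfs dfsadmin -report               # the DataNode should report in with capacity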
4. Stop the system NFS and rpcbind services
service nfs stop
service rpcbind stop
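Before starting Hadoop's own portmap, it is worth checking that nothing is still listening on the portmapper and NFS ports (111 and 2049); one hedged way:
netstat -lnp | grep -E ':111 |:2049 ' || echo 'ports 111 and 2049 are free'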
5. Start Hadoop's portmap and nfs3
# run as root; both commands stay in the foreground, so start each in its own terminal (or via nohup), portmap first
hdfs portmap
hdfs nfs3
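Once both are up, the registrations can be verified from the gateway host with the standard RPC tools; the output should list portmapper, mountd and nfs, and an export of /:
rpcinfo -p localhost
showmount -e localhost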
6. Mount
mount -t nfs -o vers=3,proto=tcp,nolock,noacl $server:/ $mount_point
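Here $server is the host running the NFS gateway and $mount_point an existing empty directory. A small smoke test after mounting (the path /tmp/hello.txt is only an illustration):
df -h $mount_point                        # should report HDFS capacity, not a local disk
echo hello > $mount_point/tmp/hello.txt   # write goes through the gateway
hdfs dfs -cat /tmp/hello.txt              # the same file, read directly from HDFS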
7. Unmount
umount $mount_point
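If the unmount fails with "device is busy", some process still holds a file or working directory under the mount; a hedged way to find it and, as a last resort, detach anyway:
fuser -vm $mount_point   # list processes using the mount
umount -l $mount_point   # lazy unmount once nothing important remains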