1. Start HDFS
start-dfs.sh (after it starts, use jps to confirm the daemons are running)
2. Start the NFS gateway processes:
hdfs portmap start
hdfs nfs3 start
Use jps to confirm that portmap and nfs3 are running.
3. Mount HDFS onto a local directory:
mount -t nfs -o vers=3,proto=tcp,nolock 10.211.55.38:/ /hdfs
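The three steps above can be collected into one script. A minimal sketch, assuming the Hadoop binaries are on PATH, the NameNode address 10.211.55.38 from step 3, and an existing /hdfs mount point:

```shell
#!/usr/bin/env bash
# Sketch of the three setup steps above (assumes hdfs/start-dfs.sh are
# on PATH and /hdfs already exists as the local mount point).

start_hdfs_nfs() {
    start-dfs.sh                  # step 1: start the HDFS daemons
    hdfs portmap start            # step 2: start the gateway's portmap
    hdfs nfs3 start               #         and the NFS3 server
    jps                           # Portmap and Nfs3 should be listed
    # step 3: mount the HDFS root onto the local directory /hdfs
    mount -t nfs -o vers=3,proto=tcp,nolock 10.211.55.38:/ /hdfs
}

# Only attempt the setup when the Hadoop binaries are actually present.
if command -v hdfs >/dev/null 2>&1; then
    start_hdfs_nfs
else
    echo "hdfs not on PATH; skipping"
fi
```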
Alternatively, stop the system portmap and run the gateway processes in the foreground, verifying before mounting:
sudo service portmap stop
sudo hdfs portmap 2>~/portmap.err &
sudo -u hdfs hdfs nfs3 2>~/nfs3.err &
rpcinfo -p xxx.xxx.xxx.xxx
showmount -e xxx.xxx.xxx.xxx
sudo mount -t nfs -o vers=3,proto=tcp,nolock $HOSTNAME:/ /mnt/hdfs
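After mounting, a quick sanity check can confirm each layer is working. A sketch, assuming the gateway runs locally and /mnt/hdfs is the mount point from the command above:

```shell
# Sanity checks for the NFS mount (assumes the gateway is on localhost
# and /mnt/hdfs is the mount point).
check_hdfs_mount() {
    rpcinfo -p localhost | grep -w nfs   # gateway registered with portmap?
    showmount -e localhost               # the export list should show "/"
    ls /mnt/hdfs                         # HDFS root should be visible locally
    df -h /mnt/hdfs                      # size should reflect HDFS capacity
}
```

Running `check_hdfs_mount` after the mount command points at whichever layer (portmap, gateway, or mount) is misconfigured via the first failing line.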
Notes:
Hadoop configuration changes
1.
core-site.xml
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
<description>
The user running the NFS gateway (here 'root') is allowed to proxy members of
the listed groups. Set this to '*' to allow it to proxy any group.
</description>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
<description>
This is the host where the NFS gateway is running. Set this to '*' to allow
requests from any host to be proxied.
</description>
</property>
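The 'root' in both property names must match the Unix user that starts the NFS gateway. If the gateway runs as a different account, say a hypothetical nfsserver user, the keys change accordingly:

```xml
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>*</value>
</property>
```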
2.
hdfs-site.xml
<property>
<name>dfs.namenode.accesstime.precision</name>
<value>3600000</value>
<description>The access time for an HDFS file is precise up to this value.
The default value is 1 hour. Setting a value of 0 disables
access times for HDFS.
</description>
</property>
<property>
<name>dfs.nfs3.dump.dir</name>
<value>/tmp/.hdfs-nfs</value>
<description>
Directory where the NFS gateway temporarily dumps out-of-order writes
before flushing them to HDFS; make sure it has enough free space.
</description>
</property>
3.
log4j.properties
log4j.logger.org.apache.hadoop.hdfs.nfs=DEBUG
log4j.logger.org.apache.hadoop.oncrpc=DEBUG