I ran into several problems while enabling the NFS gateway on HDFS, so I am recording the complete procedure here for future reference:
1. Edit core-site.xml (the part between the NFS comments below is the addition):
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop:9000</value>
</property>
<!-- The nfs settings begin-->
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
<description>
The 'hadoop' user (the proxy user running the NFS gateway) is allowed to
proxy all members of the listed groups. Note that in most cases you will
need to include the group "root", because the user "root" (which usually
belongs to the "root" group) will generally be the user that initially
executes the mount on the NFS client system.
Set this to '*' to allow the 'hadoop' user to proxy any group.
</description>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
<description>
This is the host where the nfs gateway is running. Set this to '*' to allow
requests from any hosts to be proxied.
</description>
</property>
<!-- The nfs settings end-->
</configuration>
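As a quick sanity check, you can grep the file for the two proxy-user keys. This is only a sketch: it checks a temporary copy of the snippet above, and on a real node you would point CONF at $HADOOP_CONF_DIR/core-site.xml instead.

```shell
# Sanity check: both proxyuser keys must be present in core-site.xml.
# Here we check a temporary copy of the snippet above; on a real node,
# set CONF to $HADOOP_CONF_DIR/core-site.xml instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property><name>hadoop.proxyuser.hadoop.groups</name><value>*</value></property>
  <property><name>hadoop.proxyuser.hadoop.hosts</name><value>*</value></property>
</configuration>
EOF
found=0
for key in hadoop.proxyuser.hadoop.groups hadoop.proxyuser.hadoop.hosts; do
  if grep -q "<name>$key</name>" "$CONF"; then
    echo "$key: present"
    found=$((found + 1))
  fi
done
rm -f "$CONF"
```

If either key is missing (or misspelled), the gateway will later reject mounts with a proxy-user authorization error, so this check is worth the few seconds.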
2. 编辑 hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/data/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/data/datanode</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:50010</value>
</property>
<!-- The nfs settings begin-->
<property>
<name>dfs.namenode.accesstime.precision</name>
<value>3600000</value>
<description>The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS.</description>
</property>
<property>
<name>nfs.dump.dir</name>
<value>/tmp/.hdfs-nfs</value>
</property>
<property>
<name>nfs.exports.allowed.hosts</name>
<value>* rw</value>
</property>
<property>
<name>nfs.rtmax</name>
<value>1048576</value>
<description>This is the maximum size in bytes of a READ request supported by the NFS gateway. If you change this, make sure you also update the NFS mount's rsize (add rsize=# of bytes to the mount options).</description>
</property>
<property>
<name>nfs.wtmax</name>
<value>65536</value>
<description>This is the maximum size in bytes of a WRITE request supported by the NFS gateway. If you change this, make sure you also update the NFS mount's wsize (add wsize=# of bytes to the mount options).</description>
</property>
<!-- The nfs settings end-->
</configuration>
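The nfs.rtmax and nfs.wtmax values above cap the gateway's READ/WRITE request sizes, and the client's rsize/wsize mount options must not exceed them. A small sketch of the arithmetic, mirroring the values set above:

```shell
# Mirror the values from hdfs-site.xml above.
nfs_rtmax=1048576   # max READ request: 1048576 B = 1024 KiB = 1 MiB
nfs_wtmax=65536     # max WRITE request: 65536 B = 64 KiB
echo "rtmax = $((nfs_rtmax / 1024)) KiB, wtmax = $((nfs_wtmax / 1024)) KiB"
echo "matching mount options: rsize=$nfs_rtmax,wsize=$nfs_wtmax"
```

If the client asks for a larger rsize/wsize than the gateway supports, NFSv3 negotiation normally shrinks it, but matching the values explicitly avoids surprises.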
3. Add the hadoop user
$ useradd -d /home/hadoop -s /bin/bash hadoop
$ passwd hadoop
(enter the new password twice; 'hadoop' is used here)
Change the owner of /usr/hadoop to hadoop:
chown -R hadoop:hadoop /usr/hadoop
cd /usr/hadoop
mkdir tmp
4. Add the hadoop user to sudoers
[root@hadoop ~]# visudo
Below the existing root entry (around line 91 of the file), add:
hadoop ALL=(ALL) NOPASSWD:ALL
5. Disable SELinux and the firewall
vi /etc/selinux/config
SELINUX=disabled
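This step also mentions the firewall. Assuming a systemd-based distribution such as CentOS 7 (which the systemctl commands later in this document suggest), the usual commands are, run as root:

```shell
# Put SELinux into permissive mode immediately (the config edit above
# only takes effect after a reboot):
setenforce 0
# Stop the firewall now and keep it off across reboots:
systemctl stop firewalld
systemctl disable firewalld
```

Disabling the firewall entirely is the simplest option for a test box; on a production host you would instead open only the NFS-related ports (111, 2049, 4242).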
6. Reboot the machine
7. Run the following commands one by one
cd /usr/hadoop-2.7.2
systemctl stop nfs
systemctl stop portmap
su hadoop (must NOT be the root user, otherwise the mount will fail)
/usr/hadoop-2.7.2/sbin/hadoop-daemon.sh --script /usr/hadoop-2.7.2/bin/hdfs stop nfs3
sudo su
/usr/hadoop-2.7.2/sbin/hadoop-daemon.sh --script /usr/hadoop-2.7.2/bin/hdfs stop portmap
/usr/hadoop-2.7.2/sbin/stop-dfs.sh
/usr/hadoop-2.7.2/sbin/stop-yarn.sh
/usr/hadoop-2.7.2/sbin/hadoop-daemon.sh --script /usr/hadoop-2.7.2/bin/hdfs start portmap
su hadoop (must NOT be the root user, otherwise the mount will fail)
/usr/hadoop-2.7.2/sbin/hadoop-daemon.sh --script /usr/hadoop-2.7.2/bin/hdfs start nfs3
sudo su
/usr/hadoop-2.7.2/sbin/start-dfs.sh
/usr/hadoop-2.7.2/sbin/start-yarn.sh
rpcinfo -p localhost
showmount -e localhost
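The rpcinfo output should list both the nfs and mountd programs; by default the Hadoop NFS gateway serves nfs on port 2049 and mountd on 4242. A sketch that checks an illustrative (trimmed, hand-written) rpcinfo capture for the two services:

```shell
# Illustrative rpcinfo -p lines (not real output); the ports are the
# Hadoop NFS gateway defaults (2049 for nfs, 4242 for mountd).
rpc_out='100005 3 tcp 4242 mountd
100003 3 tcp 2049 nfs'
ok=0
echo "$rpc_out" | grep -q ' nfs$'    && ok=$((ok + 1))
echo "$rpc_out" | grep -q ' mountd$' && ok=$((ok + 1))
echo "services found: $ok of 2"
```

If either service is missing from the real rpcinfo output, the gateway did not register with portmap, usually because portmap was not started first or nfs3 was started as the wrong user.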
8. Mount the NFS export
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync localhost:/ /home/hadoop/tmp/
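Since hdfs-site.xml above sets nfs.rtmax and nfs.wtmax, the mount can also pass matching rsize/wsize options. A sketch that assembles the full command (echoed here rather than executed, since mounting needs root and a running gateway):

```shell
# Match the gateway's nfs.rtmax / nfs.wtmax from hdfs-site.xml.
rsize=1048576
wsize=65536
opts="vers=3,proto=tcp,nolock,noacl,sync,rsize=$rsize,wsize=$wsize"
echo "mount -t nfs -o $opts localhost:/ /home/hadoop/tmp/"
```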
If you follow the steps above, things should mostly work. The crucial point is that the nfs3 service must NOT be started as the root user; otherwise you will keep getting:
mount.nfs: mount system call failed