There are two solutions:
Option 1: disable HDFS's user-permission check by setting dfs.permissions.enabled to false in hdfs-site.xml.
Option 2: package the code and run it on the Linux cluster itself.
For convenience during local testing, we will use the first option here.
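For context on what this switch turns off: HDFS permission checks follow the POSIX owner/group/other model, which is why a local client running as a user that does not match the HDFS file owner gets an AccessControlException. Below is a minimal, illustrative sketch of that check logic — this is not Hadoop's actual code, and the class and function names are made up for the example:

```python
# Illustrative sketch of POSIX-style permission checking as HDFS applies it.
# NOT Hadoop source code; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class FileStatus:
    owner: str
    group: str
    mode: int          # POSIX bits, e.g. 0o755

def check_access(status: FileStatus, user: str, groups: set,
                 want: int, permissions_enabled: bool = True) -> bool:
    """Return True if `user` may access the file with permission bits `want`
    (4 = read, 2 = write, 1 = execute)."""
    if not permissions_enabled:          # dfs.permissions.enabled=false
        return True                      # -> every request is allowed
    if user == status.owner:
        bits = (status.mode >> 6) & 0o7  # owner bits
    elif status.group in groups:
        bits = (status.mode >> 3) & 0o7  # group bits
    else:
        bits = status.mode & 0o7         # "other" bits
    return (bits & want) == want

# A file owned by root with mode 755: a foreign user cannot write (want=2)...
f = FileStatus(owner="root", group="supergroup", mode=0o755)
print(check_access(f, "local_user", set(), 2))                             # False
# ...but with the permission check disabled, the same write is allowed.
print(check_access(f, "local_user", set(), 2, permissions_enabled=False))  # True
```

This also shows why option 1 is only acceptable for local development: disabling the check allows every request from every user.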
1: Stop the Hadoop cluster
[root@bigdata01 ~]# cd /data/soft/hadoop-3.2.0
[root@bigdata01 hadoop-3.2.0]# sbin/stop-all.sh
Stopping namenodes on [bigdata01]
Last login: Wed Apr 8 20:25:17 CST 2020 from 192.168.182.1 on pts/1
Stopping datanodes
Last login: Wed Apr 8 20:25:40 CST 2020 on pts/1
Stopping secondary namenodes [bigdata01]
Last login: Wed Apr 8 20:25:41 CST 2020 on pts/1
Stopping nodemanagers
Last login: Wed Apr 8 20:25:44 CST 2020 on pts/1
Stopping resourcemanager
Last login: Wed Apr 8 20:25:47 CST 2020 on pts/1
2: Modify the hdfs-site.xml configuration file
Note: this file must be changed on every node in the cluster. Modify it on bigdata01 first, then sync it to the other two nodes.
On bigdata01:
[root@bigdata01 hadoop-3.2.0]# vi etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>bigdata01:50090</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
Sync the file to the other two nodes:
[root@bigdata01 hadoop-3.2.0]# scp -rq etc/hadoop/hdfs-site.xml bigdata02:/data/soft/hadoop-3.2.0/etc/hadoop/
[root@bigdata01 hadoop-3.2.0]# scp -rq etc/hadoop/hdfs-site.xml bigdata03:/data/soft/hadoop-3.2.0/etc/hadoop/
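The edit performed above can also be scripted, which is handy when the same property must be flipped on many configs. A hypothetical Python helper (not part of Hadoop; the function name is an assumption for this sketch) that sets a property in a Hadoop *-site.xml file, appending it if missing:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def set_hadoop_property(path: str, name: str, value: str) -> None:
    """Set (or add) a <property> entry in a Hadoop *-site.xml file in place."""
    tree = ET.parse(path)
    root = tree.getroot()                     # the <configuration> element
    for prop in root.findall("property"):
        if prop.findtext("name") == name:     # property already present
            prop.find("value").text = value   # -> just update its value
            break
    else:                                     # property missing -> append it
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    tree.write(path)

# Demo on a throwaway copy of a minimal hdfs-site.xml:
demo = os.path.join(tempfile.mkdtemp(), "hdfs-site.xml")
with open(demo, "w") as fh:
    fh.write("<configuration><property><name>dfs.replication</name>"
             "<value>2</value></property></configuration>")
set_hadoop_property(demo, "dfs.permissions.enabled", "false")
print(open(demo).read())
```

The property only takes effect after the cluster is restarted, which is what step 3 below does.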
3: Start the Hadoop cluster
[root@bigdata01 hadoop-3.2.0]# sbin/start-all.sh
Starting namenodes on [bigdata01]
Last login: Wed Apr 8 20:25:49 CST 2020 on pts/1
Starting datanodes
Last login: Wed Apr 8 20:29:57 CST 2020 on pts/1
Starting secondary namenodes [bigdata01]
Last login: Wed Apr 8 20:29:59 CST 2020 on pts/1
Starting resourcemanager
Last login: Wed Apr 8 20:30:04 CST 2020 on pts/1
Starting nodemanagers
Last login: Wed Apr 8 20:30:10 CST 2020 on pts/1