Notes before we start:
1. On both master and slave, all four files need the change: start-dfs.sh, stop-dfs.sh, start-yarn.sh, stop-yarn.sh.
2. If your Hadoop is started by a different user, replace root below with that user.
After formatting HDFS, starting dfs failed with an error.
I searched Baidu and found a fellow blogger's post with an FAQ for exactly this, so I followed it and am recording the fix here for reference.
Original post: https://blog.csdn.net/lglglgl/article/details/80553828
In the /hadoop/sbin directory, add the following parameters at the top of both start-dfs.sh and stop-dfs.sh:
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
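Instead of editing each file by hand, the insertion can be scripted. Below is a minimal sketch (my own helper, not from the original post) that prepends the four variables right after the shebang, and skips files that are already patched; it assumes GNU sed (Linux) and uses the same root/hdfs values as above. The real targets would be /hadoop/sbin/start-dfs.sh and stop-dfs.sh; the demo runs on a throwaway copy.

```shell
#!/usr/bin/env bash
set -e

# Idempotently insert the user variables after line 1 (the shebang) of a
# script. "root" follows this article; change it if you start Hadoop as
# another user. Requires GNU sed for -i.
patch_script() {
  local f="$1"
  if grep -q 'HDFS_NAMENODE_USER' "$f"; then
    return 0   # already patched, do nothing
  fi
  sed -i '1a\
HDFS_DATANODE_USER=root\
HADOOP_SECURE_DN_USER=hdfs\
HDFS_NAMENODE_USER=root\
HDFS_SECONDARYNAMENODE_USER=root' "$f"
}

# Demo on a throwaway copy (real targets: /hadoop/sbin/{start,stop}-dfs.sh)
demo=$(mktemp)
printf '#!/usr/bin/env bash\necho hi\n' > "$demo"
patch_script "$demo"
patch_script "$demo"   # second call is a no-op
sed -n '1,6p' "$demo"
```

Running patch_script twice shows the grep guard: the variables are inserted exactly once, so the helper is safe to re-run on every node.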
Likewise, add the following at the top of start-yarn.sh and stop-yarn.sh:
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
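Since these are ordinary environment variables read by the launcher scripts, an alternative I have seen (not what the original post does, but supported by the Hadoop 3 shell scripts) is to set them once in $HADOOP_HOME/etc/hadoop/hadoop-env.sh instead of editing all four sbin scripts on every node:

```shell
# $HADOOP_HOME/etc/hadoop/hadoop-env.sh (alternative to editing the sbin scripts)
# "root" follows this article; change it if you start Hadoop as another user.
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```

hadoop-env.sh is sourced by the start/stop scripts, so this keeps the fix in one config file that can be copied to all nodes.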
After the changes, rerun ./start-dfs.sh. Success!
[root@master sbin]# ./start-dfs.sh
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Starting namenodes on [master]
Last login: Sun Jun  3 03:01:37 CST 2018 from slave1 on pts/2
master: Warning: Permanently added 'master,192.168.43.161' (ECDSA) to the list of known hosts.
Starting datanodes
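The WARNING in the output above says that HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. To silence it on newer Hadoop 3 releases, use the new name with the same value (a config fragment for start-dfs.sh/stop-dfs.sh, same as the setting above):

```shell
# New-style name for the secure DataNode user; replaces the
# deprecated HADOOP_SECURE_DN_USER shown in the warning.
HDFS_DATANODE_SECURE_USER=hdfs
```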