While learning how to start a Hadoop cluster today, I ran into a problem: executing the start-dfs.sh command failed with the following errors:
[root@hadoop101 hadoop-3.1.1]# start-dfs.sh
Starting namenodes on [hadoop101]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [hadoop103]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
As the output shows, HDFS_NAMENODE_USER, HDFS_DATANODE_USER, and HDFS_SECONDARYNAMENODE_USER are not defined. The likely cause is that the user running the command (root here) is not the same user Hadoop was installed as, so the corresponding environment variables need to be exported in the etc/hadoop/hadoop-env.sh script:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
For convenience, the YARN variables are included as well. After saving the file, we can sync it to the other machines in the cluster, as sketched below.
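A minimal sketch of the sync step, assuming passwordless SSH as root and that every node keeps Hadoop under the same install path (hadoop102 is a hypothetical hostname following the naming pattern here; hadoop103 appears in the log above):

# Hostnames and identical install paths on each node are assumptions; adjust for your cluster.
for host in hadoop102 hadoop103; do
  scp etc/hadoop/hadoop-env.sh "root@${host}:$(pwd)/etc/hadoop/"
done

After syncing, rerunning start-dfs.sh should bring up the NameNode, DataNodes, and SecondaryNameNode without the aborting errors.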
The fix in this post comes from StackOverflow; if you're interested, take a look at the original thread.