1. Problem 1
Running the following commands on the Hadoop cluster:
sbin/start-dfs.sh
sbin/start-yarn.sh
produces the errors below:
[root@hadoop01 hadoop-3.1.4]# sbin/start-dfs.sh
Starting namenodes on [hadoop01]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [hadoop03]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
2. Solution
(1) Enter the sbin directory:
/opt/module/hadoop-3.1.4/sbin
(2) Open each of the following scripts and add the lines below near the top of the file:
vim start-dfs.sh
vim stop-dfs.sh
Add:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
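Placement matters: the definitions must be set before the script's main logic runs, so put them directly below the shebang line. A minimal sketch of the edit, demonstrated on a throwaway stand-in file rather than the real script (on the cluster the target is /opt/module/hadoop-3.1.4/sbin/start-dfs.sh):

```shell
# Demonstrate the edit on a temporary stand-in for start-dfs.sh.
script=$(mktemp)
printf '%s\n' '#!/usr/bin/env bash' '# ... original script body ...' > "$script"

# The four definitions from step (2):
fix='HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root'

# Splice them in directly after line 1 (the shebang), keeping the rest intact.
{ head -n 1 "$script"; printf '%s\n' "$fix"; tail -n +2 "$script"; } > "$script.new"
mv "$script.new" "$script"

head -n 5 "$script"   # shebang followed by the four definitions
```

The same splice applies to stop-dfs.sh, so stopping the cluster as root works too.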
(3) Likewise, open each of the following scripts and add the lines below near the top:
vim start-yarn.sh
vim stop-yarn.sh
Add:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
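As an aside, Hadoop 3.x also picks up these *_USER variables from etc/hadoop/hadoop-env.sh, so an alternative to patching all four start/stop scripts is a single export block there (a sketch; the install path matches this cluster's layout):

```shell
# Alternative: append once to /opt/module/hadoop-3.1.4/etc/hadoop/hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```

This keeps the stock scripts untouched, which survives re-copying sbin during upgrades.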
(4) Distribute the modified sbin directory to the other worker nodes:
xsync sbin /opt/module/hadoop-3.1.4/sbin/
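Note that xsync is not a stock Linux tool; in most Hadoop tutorials it is a small custom rsync wrapper placed on the PATH. A hypothetical minimal version is sketched below (the hostnames hadoop02/hadoop03 are taken from this cluster; this preview variant only prints the rsync commands instead of running them):

```shell
# Hypothetical sketch of an xsync-style helper: loop over the worker
# hosts and rsync the given path to the same location on each.
# This preview variant only echoes the commands it would run.
xsync_preview() {
  local path=$1
  local host
  for host in hadoop02 hadoop03; do        # assumed worker hostnames
    echo "rsync -av $path root@$host:$path"
  done
}

xsync_preview /opt/module/hadoop-3.1.4/sbin/
```

A real xsync would execute rsync instead of echoing it, and typically reads the host list from a workers file.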
3. Problem 2
On the next startup attempt, the worker nodes report:
hadoop02: ERROR: JAVA_HOME is not set and could not be found.
hadoop03: ERROR: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [hadoop03]
4. Solution
The worker nodes are missing the JDK environment variables. Distribute the master node's JDK environment file to the workers:
(1)
[root@hadoop01 hadoop-3.1.4]# cd /etc/profile.d
(2)
[root@hadoop01 profile.d]# xsync my_env.sh /etc/profile.d/
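For reference, my_env.sh here is the file that holds the JDK exports. A typical example is shown below; the JDK path is an assumption for illustration, so use whatever the master node actually has installed:

```shell
# Example /etc/profile.d/my_env.sh (the JDK path is an assumption)
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin
```

After distributing the file, run `source /etc/profile` on each node (or log in again) so the variables take effect before restarting the cluster.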
Startup now succeeds.