Setting up Hadoop on a virtual machine: fixing "ERROR: Attempting to operate on hdfs namenode" when starting Hadoop with ./sbin/start-dfs.sh

The following errors appear when starting Hadoop:

```
[root@localhost ~]# cd /usr/local
[root@localhost local]# cd hadoop
[root@localhost hadoop]# ./sbin/start-dfs.sh
Starting namenodes on [localhost]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [localhost.localdomain]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
```

Solution: Hadoop 3.x refuses to run the HDFS and YARN daemons as root unless the corresponding *_USER environment variables are defined, so the fix is to define them.

1. Run the following command:

vi /etc/profile

Press i to enter insert mode.

2. Add the following lines:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

(Press Esc to leave insert mode, then type :wq! and press Enter to save.)
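If you prefer to keep the change scoped to Hadoop rather than the global profile, the start scripts also pick these variables up from Hadoop's own hadoop-env.sh. A minimal equivalent sketch, assuming Hadoop is installed at /usr/local/hadoop as in this post:

```
# Append the same user settings to Hadoop's env file instead of /etc/profile
cat >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh <<'EOF'
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
EOF
```

With this variant, step 3 below is unnecessary, since hadoop-env.sh is sourced every time the start scripts run.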
3. Run the following command to make the changes take effect:

source /etc/profile
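To confirm the fix took effect, restart HDFS and list the running Java processes with jps (shipped with the JDK); the daemon names below are what a successful single-node start should show:

```
cd /usr/local/hadoop
./sbin/start-dfs.sh
jps   # expect NameNode, DataNode and SecondaryNameNode in the output
```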

Personally tested and it works!

A follow-up question: how do you install and configure Hadoop in a Docker container and bring up a Hadoop cluster? Here are the steps:

1. Install Docker and start the Docker service.

2. Pull the Ubuntu 16.04 image:

```
docker pull ubuntu:16.04
```

3. Create a new container and run it:

```
docker run -it --name hadoop ubuntu:16.04 /bin/bash
```

4. Inside the container, install Java and SSH:

```
apt-get update
apt-get install -y openjdk-8-jdk
apt-get install -y ssh
service ssh start   # sshd is not started automatically inside a container
```

5. Create a new user and generate an SSH key pair for passwordless login:

```
useradd -m hduser
su - hduser
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
```

6. Download and unpack the Hadoop tarball, then move it to /usr/local:

```
wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.7/hadoop-2.7.7.tar.gz
tar -xzvf hadoop-2.7.7.tar.gz
mv hadoop-2.7.7 /usr/local/hadoop
```

7. Configure Hadoop's environment variables (single quotes keep $PATH from being expanded when the line is written; it expands later, when hadoop-env.sh is sourced):

```
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh
echo 'export HADOOP_HOME=/usr/local/hadoop' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh
echo 'export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh
```

8. Configure Hadoop's core configuration files. In Hadoop 2.7 only mapred-site.xml ships as a .template; yarn-site.xml, core-site.xml and hdfs-site.xml already exist, so back those up before editing:

```
cd /usr/local/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml
cp core-site.xml core-site.xml.bak
cp hdfs-site.xml hdfs-site.xml.bak
```

The settings below assume a five-node cluster of containers with hostnames h01 through h05 that can resolve and SSH to one another, with h01 as the master.

Edit core-site.xml:

```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://h01:9000</value>
  </property>
</configuration>
```

Edit hdfs-site.xml:

```
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/hdfs/datanode</value>
  </property>
</configuration>
```

Edit yarn-site.xml:

```
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

9. Set up the masters and slaves files:

```
echo "h01" > /usr/local/hadoop/etc/hadoop/masters
echo "h02" > /usr/local/hadoop/etc/hadoop/slaves
echo "h03" >> /usr/local/hadoop/etc/hadoop/slaves
echo "h04" >> /usr/local/hadoop/etc/hadoop/slaves
echo "h05" >> /usr/local/hadoop/etc/hadoop/slaves
```

Then copy the slaves file to all the nodes:

```
scp /usr/local/hadoop/etc/hadoop/slaves hduser@h02:/usr/local/hadoop/etc/hadoop/
scp /usr/local/hadoop/etc/hadoop/slaves hduser@h03:/usr/local/hadoop/etc/hadoop/
scp /usr/local/hadoop/etc/hadoop/slaves hduser@h04:/usr/local/hadoop/etc/hadoop/
scp /usr/local/hadoop/etc/hadoop/slaves hduser@h05:/usr/local/hadoop/etc/hadoop/
```

10. Start the Hadoop cluster. On the very first start, the NameNode must be formatted first, otherwise start-dfs.sh will fail:

```
/usr/local/hadoop/bin/hdfs namenode -format   # first start only
cd /usr/local/hadoop/sbin
./start-dfs.sh
./start-yarn.sh
```

11. Get familiar with basic HDFS operations: uploading, downloading, and listing files.

Upload files to HDFS:

```
hdfs dfs -mkdir /input
hdfs dfs -put /usr/local/hadoop/etc/hadoop/*.xml /input
```

Download files to the local filesystem:

```
hdfs dfs -get /input/*.xml /usr/local/hadoop/etc/hadoop/
```

List files:

```
hdfs dfs -ls /input
```

12. Run the built-in WordCount example. Do not create /output beforehand; MapReduce refuses to run if the output directory already exists:

```
hdfs dfs -put /usr/local/hadoop/LICENSE.txt /input
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar wordcount /input /output
hdfs dfs -cat /output/part-r-00000
```

Those are the steps for installing and configuring Hadoop in a Docker container and starting the cluster. Hope this helps!
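As an optional sanity check after step 10 (standard Hadoop and JDK tools; hostnames follow the h01-h05 example above), confirm that the daemons and DataNodes came up:

```
jps                     # on h01: NameNode, SecondaryNameNode, ResourceManager
hdfs dfsadmin -report   # lists live DataNodes and their reported capacity
# Default web UIs in Hadoop 2.7: NameNode at http://h01:50070, YARN at http://h01:8088
```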
