Problems encountered while installing Hadoop

Reference:

Hadoop 3.0 fully distributed cluster setup (CentOS 7 + Hadoop 3.2.0)

 

I installed three CentOS 7 virtual machines and, as a normal (non-root) user, installed Hadoop under /usr/local.

1. On the first start of Hadoop, formatting with hadoop namenode -format produced the errors below.

Error summary:

(1) Insufficient permissions on the /tmp folder; opening up /tmp fixes it: sudo chmod -R 777 /tmp

(2) When running start-dfs.sh on hadoop01 as a normal user, the NameNode process shows up on hadoop01, but no DataNode process appears on hadoop02 or hadoop03. The cause: my configuration files place the data directories under /data, and the worker nodes have no permission to create that folder under the root directory. Creating /data manually and granting it sufficient permissions fixes it.

(3) After formatting HDFS several times, the DataNode fails to start because the NameNode and DataNode cluster IDs no longer match. Either edit the ID to match, or delete the dfs folder (provided it contains nothing important). Reference: https://blog.csdn.net/qq_42239069/article/details/83513948
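The three fixes above can be sketched as shell commands. This is a sketch based on my setup: the /data path, the hadoop user/group name, and the hosts hadoop02/hadoop03 are assumptions from this post and may differ on your cluster.

```shell
# (1) Open up /tmp so the daemon pid files can be written there
sudo chmod -R 777 /tmp

# (2) On hadoop02 and hadoop03: create the /data directory from the
#     config by hand and give the hadoop user ownership of it
sudo mkdir -p /data
sudo chown -R hadoop:hadoop /data

# (3) After repeated formats: remove the stale dfs folder so the NameNode
#     and DataNode cluster IDs match again, then re-format.
#     WARNING: this wipes all HDFS data -- only do it on a disposable cluster.
rm -rf /data/dfs
hdfs namenode -format
```

chmod 777 on /tmp is the quick fix; a tighter alternative is to move the pid files out of /tmp entirely (see the HADOOP_PID_DIR note further down).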

 

How to start and stop Hadoop

sbin/start-all.sh Starts all Hadoop daemons: NameNode, SecondaryNameNode, DataNode, ResourceManager and NodeManager

sbin/stop-all.sh Stops all Hadoop daemons: NameNode, SecondaryNameNode, DataNode, ResourceManager and NodeManager

sbin/start-dfs.sh Starts the HDFS daemons: NameNode, SecondaryNameNode and DataNode

sbin/stop-dfs.sh Stops the HDFS daemons: NameNode, SecondaryNameNode and DataNode

sbin/hadoop-daemons.sh start namenode Starts only the NameNode daemon

sbin/hadoop-daemons.sh stop namenode Stops only the NameNode daemon

sbin/hadoop-daemons.sh start datanode Starts only the DataNode daemons

sbin/hadoop-daemons.sh stop datanode Stops only the DataNode daemons

sbin/hadoop-daemons.sh start secondarynamenode Starts only the SecondaryNameNode daemon

sbin/hadoop-daemons.sh stop secondarynamenode Stops only the SecondaryNameNode daemon

sbin/start-yarn.sh Starts ResourceManager and NodeManager

sbin/stop-yarn.sh Stops ResourceManager and NodeManager

sbin/yarn-daemon.sh start resourcemanager Starts only the ResourceManager

sbin/yarn-daemons.sh start nodemanager Starts only the NodeManagers

sbin/yarn-daemon.sh stop resourcemanager Stops only the ResourceManager

sbin/yarn-daemons.sh stop nodemanager Stops only the NodeManagers

 

sbin/mr-jobhistory-daemon.sh start historyserver Manually starts the JobHistory server

sbin/mr-jobhistory-daemon.sh stop historyserver Manually stops the JobHistory server
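With the commands above, a typical bring-up and sanity check on this three-node cluster can be sketched as follows (run from the Hadoop install directory; the expected jps process lists are based on the node roles described in this post):

```shell
# Start HDFS and YARN separately (clearer than start-all.sh)
sbin/start-dfs.sh
sbin/start-yarn.sh

# Check which daemons actually came up on each node:
# on hadoop01 expect NameNode, SecondaryNameNode and ResourceManager;
# on hadoop02/hadoop03 expect DataNode and NodeManager
jps
```

If jps shows a missing DataNode on a worker, that points back to problems (2) or (3) above.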

 

 

 

 

The first-start error was as follows; it corresponds to problem (1) above:

WARNING: Use of this script to execute namenode is deprecated.
WARNING: Attempting to execute replacement "hdfs namenode" instead.

/usr/local/hadoop-3.2.0/libexec/hadoop-functions.sh: line 1801: /tmp/hadoop-hadoop-namenode.pid: Permission denied
ERROR:  Cannot write namenode pid /tmp/hadoop-hadoop-namenode.pid.
2019-06-28 22:34:38,804 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop01/192.168.161.128
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.2.0
STARTUP_MSG:   classpath = /usr/local/hadoop-3.2.0/etc/hadoop:/usr/local/hadoop-3.2.0/share/hadoop/common/lib/... (long jar list, truncated in the original log)
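Rather than chmod 777 on /tmp, an alternative fix for the "Cannot write namenode pid" error above (my suggestion, not from the original post) is to move the pid files out of /tmp entirely by setting HADOOP_PID_DIR in etc/hadoop/hadoop-env.sh to a directory the hadoop user owns; the pids path below is an assumed example.

```shell
# In etc/hadoop/hadoop-env.sh: keep daemon pid files out of the shared /tmp
export HADOOP_PID_DIR=/usr/local/hadoop-3.2.0/pids

# Create the directory once, owned by the user that runs the daemons
mkdir -p /usr/local/hadoop-3.2.0/pids
```

This avoids loosening permissions on a world-shared directory and survives /tmp cleanup on reboot.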