Fixing Hadoop's "no namenode to stop" error (or: jps shows no NameNode)

1. The problem

Running jps shows no NameNode process.
[screenshot: jps output with no NameNode]
Running stop-all.sh prints "No namenode to stop".

[screenshot: stop-all.sh printing "No namenode to stop"]

2. Looking for a solution

1. Searched related blog posts.
2. Found the key passage in the daemon script:

[hadoop@hadoop001 sbin]$ cat hadoop-daemon.sh
...
namenode|secondarynamenode|datanode|journalnode|dfs|dfsadmin|fsck|balancer|zkfc)
    if [ -z "$HADOOP_HDFS_HOME" ]; then
    ...
    if [ -f $pid ]; then
      TARGET_PID=`cat $pid`
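
The "No namenode to stop" message itself comes from the stop branch of hadoop-daemon.sh. In the Hadoop 2.x scripts that branch looks roughly like the sketch below (paraphrased, not a verbatim copy of the CDH file): when the pid file is missing, the script gives up immediately without even checking for a live process.

# Sketch of the (stop) case in hadoop-daemon.sh (Hadoop 2.x, paraphrased)
(stop)
    if [ -f $pid ]; then
      TARGET_PID=`cat $pid`
      if kill -0 $TARGET_PID > /dev/null 2>&1; then
        echo stopping $command
        kill $TARGET_PID
      else
        echo no $command to stop   # pid file exists but the process is gone
      fi
    else
      echo no $command to stop     # <-- our case: the pid file itself is missing
    fi
    ;;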

The crux: the pid file is missing.
Background: what is this pid file?
When Hadoop starts a daemon, it writes the daemon's process ID into a pid file; when you later run the stop-dfs.sh (or stop-all.sh) script, it reads that file to know which PID to kill.
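
On a healthy node you can watch this mechanism directly. A minimal check, assuming the default pid location under /tmp and the hadoop user (files follow the hadoop-<user>-<daemon>.pid pattern):

[hadoop@hadoop001 sbin]$ cat /tmp/hadoop-hadoop-namenode.pid          # prints the recorded PID
[hadoop@hadoop001 sbin]$ ps -p `cat /tmp/hadoop-hadoop-namenode.pid`  # confirms that PID is still alive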

Now the cause is clear: the hadoop-*.pid files under /tmp are gone.
This happens because Linux periodically purges old files from /tmp (the interval, e.g. 30 days, depends on the distro's cleanup policy).
Check /tmp to confirm:
[screenshot: /tmp listing with no hadoop-*.pid files]
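
If you want to see which cleanup policy is at work on your machine, the usual suspects are tmpwatch and systemd-tmpfiles (both paths below are assumptions about common setups, not taken from this cluster):

# RHEL/CentOS 6: a daily tmpwatch cron job prunes old files under /tmp
[root@hadoop001 ~]# cat /etc/cron.daily/tmpwatch
# systemd-based distros: systemd-tmpfiles enforces an age limit on /tmp
[root@hadoop001 ~]# cat /usr/lib/tmpfiles.d/tmp.conf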

3. Fixing it

1. Kill every remaining Hadoop process; make sure nothing is left over.
2. Restart Hadoop.
3. Verify with jps.
4. Run a job and check it in the web UI.

Note on the transcript below: the NameNode is re-formatted because the default hadoop.tmp.dir also lives under /tmp (see the hadoop-hadoop directory in the later listing), so its metadata may have been purged along with the pid files. hadoop namenode -format wipes HDFS metadata, so only do this on a cluster whose data you can afford to lose.

[hadoop@hadoop001 sbin]$ jps
18514 Jps
18196 NodeManager
17943 SecondaryNameNode
18093 ResourceManager
17791 DataNode
[hadoop@hadoop001 sbin]$ kill -9 18196 17943 18093 17791
[hadoop@hadoop001 sbin]$ jps
18533 Jps
[hadoop@hadoop001 sbin]$ hadoop namenode -format

[hadoop@hadoop001 sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [hadoop001]
hadoop001: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out
[hadoop@hadoop001 sbin]$ jps
19568 Jps
18832 DataNode
18705 NameNode
19250 NodeManager
18995 SecondaryNameNode
19146 ResourceManager
[hadoop@hadoop001 sbin]$ ls /tmp/
Aegis-<Guid(5A2C30A2-A87D-490A-9281-6765EDAD7CBA)>
hadoop-hadoop
hadoop-hadoop-datanode.pid
hadoop-hadoop-namenode.pid
hadoop-hadoop-secondarynamenode.pid
hsperfdata_hadoop
hsperfdata_root
Jetty_0_0_0_0_50070_hdfs____w2cu08


4. Preventing the problem permanently

The fix above only restores the pid files; /tmp will eventually be cleaned again. To stop the problem for good, point HADOOP_PID_DIR at a directory outside /tmp:

[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ vi etc/hadoop/hadoop-env.sh
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by 
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR} 

First create the directory (on every node), then change the HADOOP_PID_DIR line in hadoop-env.sh:

mkdir -p /data/tmp
chmod -R 777 /data/tmp

export HADOOP_PID_DIR=/data/tmp

(Per the warning quoted above from hadoop-env.sh itself, a stricter setup would make /data/tmp owned by the hadoop user with 755 permissions rather than world-writable 777.)
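
After editing hadoop-env.sh, restart the daemons and confirm the pid files now land in the new directory (a quick sanity check, assuming the setup above; the listing is illustrative):

[hadoop@hadoop001 sbin]$ ./stop-all.sh
[hadoop@hadoop001 sbin]$ ./start-all.sh
[hadoop@hadoop001 sbin]$ ls /data/tmp/
hadoop-hadoop-datanode.pid  hadoop-hadoop-namenode.pid  hadoop-hadoop-secondarynamenode.pid

Note that the YARN daemons use a separate setting, YARN_PID_DIR, read via etc/hadoop/yarn-env.sh; set it as well if you want the resourcemanager/nodemanager pid files out of /tmp too.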
