I. The problem:
jps does not show the NameNode process.
Running stop-all.sh prints "No namenode to stop".
II. Looking for a fix:
1. Searched blog posts.
2. Found the key spot in the daemon script:
[hadoop@hadoop001 sbin]$ cat hadoop-daemon.sh
...
namenode|secondarynamenode|datanode|journalnode|dfs|dfsadmin|fsck|balancer|zkfc)
  if [ -z "$HADOOP_HDFS_HOME" ]; then
...
if [ -f $pid ]; then
  TARGET_PID=`cat $pid`
The key point: the PID file is missing.
Note: what is a PID file?
When Hadoop starts a daemon, it writes the daemon's process ID into a file; when the stop-dfs.sh (or stop-all.sh) script runs, it reads that file to find which process to kill.
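The stop branch quoted above boils down to logic like the following simplified sketch (function and variable names are mine, not the real script's; the real script also sends the kill signal and waits):

```shell
#!/bin/sh
# Simplified sketch of the stop logic in hadoop-daemon.sh.
# If the PID file is missing, the script has no way to find the
# process, and prints "no <daemon> to stop" -- the exact symptom above.
stop_daemon() {
  pid_file=$1
  daemon=$2
  if [ -f "$pid_file" ]; then
    echo "stopping $daemon (pid $(cat "$pid_file"))"
    # the real script here runs: kill "$(cat "$pid_file")"
  else
    echo "no $daemon to stop"
  fi
}

# On a machine where the PID file has been cleaned away, this prints
# "no namenode to stop":
stop_daemon /tmp/hadoop-hadoop-namenode.pid namenode
```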
Now the cause is clear: the hadoop-*.pid files under /tmp are gone.
That is because Linux cleans /tmp periodically (commonly via a tmpwatch cron job or systemd-tmpfiles, which delete files older than a configured age, e.g. 30 days).
Let's check the /tmp directory to confirm.
III. Fixing the problem:
1. Kill all remaining Hadoop processes; make sure nothing is left over.
2. Restart Hadoop.
3. Check with jps.
4. Run a job and check the web UI.
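In the transcript below the PIDs are typed by hand; the same selection can be sketched with awk over jps output. The sample output is inlined here so the sketch is self-contained; on a live node you would use jps_output=$(jps) instead:

```shell
#!/bin/sh
# Collect the PIDs of every JVM except the Jps tool itself.
# Sample jps output inlined from the session below; on a real node
# replace the assignment with: jps_output=$(jps)
jps_output="18514 Jps
18196 NodeManager
17943 SecondaryNameNode
18093 ResourceManager
17791 DataNode"

# Keep the first column (PID) of every line whose second column is not "Jps".
pids=$(printf '%s\n' "$jps_output" | awk '$2 != "Jps" {print $1}')
echo $pids
# then run: kill -9 $pids   (not executed in this sketch)
```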
[hadoop@hadoop001 sbin]$ jps
18514 Jps
18196 NodeManager
17943 SecondaryNameNode
18093 ResourceManager
17791 DataNode
[hadoop@hadoop001 sbin]$ kill -9 18196 17943 18093 17791
[hadoop@hadoop001 sbin]$ jps
18533 Jps
(Caution: hadoop namenode -format wipes the existing HDFS metadata. The missing-PID problem itself only requires killing the leftover processes and restarting, so only run the format if losing the data is acceptable.)
[hadoop@hadoop001 sbin]$ hadoop namenode -format
[hadoop@hadoop001 sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [hadoop001]
hadoop001: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out
[hadoop@hadoop001 sbin]$ jps
19568 Jps
18832 DataNode
18705 NameNode
19250 NodeManager
18995 SecondaryNameNode
19146 ResourceManager
[hadoop@hadoop001 sbin]$ ls /tmp/
Aegis-<Guid(5A2C30A2-A87D-490A-9281-6765EDAD7CBA)>
hadoop-hadoop
hadoop-hadoop-datanode.pid
hadoop-hadoop-namenode.pid
hadoop-hadoop-secondarynamenode.pid
hsperfdata_hadoop
hsperfdata_root
Jetty_0_0_0_0_50070_hdfs____w2cu08
IV. Preventing the problem permanently:
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ vi etc/hadoop/hadoop-env.sh
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
# the user that will run the hadoop daemons. Otherwise there is the
# potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
Create a PID directory outside /tmp and point HADOOP_PID_DIR at it. Per the NOTE above, the directory must be writable only by the user running the Hadoop daemons (a world-writable 777 mode would defeat that protection):
mkdir -p /data/tmp
chown hadoop:hadoop /data/tmp
chmod 755 /data/tmp
export HADOOP_PID_DIR=/data/tmp
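The directory preparation can be sketched as follows, runnable as the hadoop user. $HOME/hadoop-pids is a placeholder path standing in for /data/tmp, so the sketch needs no root privileges:

```shell
#!/bin/sh
# Prepare a PID directory that only the current (hadoop) user can write to.
# $HOME/hadoop-pids is a placeholder; the post uses /data/tmp.
PID_DIR="$HOME/hadoop-pids"
mkdir -p "$PID_DIR"
chmod 755 "$PID_DIR"   # owner-writable only, per the hadoop-env.sh note
ls -ld "$PID_DIR"
# After setting HADOOP_PID_DIR in hadoop-env.sh and restarting Hadoop,
# the *.pid files should appear here instead of /tmp.
```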