Solutions for a Hadoop NameNode That Fails to Start

Copyright notice: this is the author's original article, licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/yangzongzhuan/article/details/50820643
Every time the machine reboots, the NameNode fails to start. The likely cause:

     the directory configured as hadoop.tmp.dir in core-site.xml is cleared at system startup (the default lives under /tmp, which many systems wipe on reboot)

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop/hadoop-${user.name}</value>
  </property>
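A quick way to confirm this is the cause is to look for the NameNode's metadata after a reboot. This is a minimal sketch: the path is the default implied by the configuration above, and HADOOP_TMP_DIR is a hypothetical override variable used here for illustration, not a Hadoop setting.

```shell
# Check whether the NameNode metadata under hadoop.tmp.dir survived the reboot.
# Default path assumed from the core-site.xml above; HADOOP_TMP_DIR is a
# hypothetical override for convenience, not read by Hadoop itself.
TMP_DIR="${HADOOP_TMP_DIR:-/tmp/hadoop/hadoop-$(whoami)}"
NAME_DIR="$TMP_DIR/dfs/name"
if [ -f "$NAME_DIR/current/VERSION" ]; then
  STATUS="present"
else
  STATUS="missing"
fi
# "missing" right after a reboot suggests /tmp was cleared at startup.
echo "NameNode metadata $STATUS in $NAME_DIR"
```

If the metadata is reported missing immediately after a reboot, one of the two solutions below applies.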

Two solutions:

     1. Go into the Hadoop directory and reformat the NameNode:

           > bin/stop-all.sh

           > hadoop namenode -format

           > bin/start-all.sh

           > jps   (confirm the NameNode process is now running)

          Note: this approach formats away all of the previous HDFS data.
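Since `hadoop namenode -format` wipes the metadata, it may be worth snapshotting whatever remains of the old storage directory first. This is a rough sketch: the source path is an assumption based on the default hadoop.tmp.dir, and DFS_DIR is a hypothetical override used here for illustration.

```shell
# Back up the remains of the old DFS directory before reformatting.
# SRC is assumed from the default hadoop.tmp.dir; DFS_DIR is a
# hypothetical override variable, not a Hadoop setting.
SRC="${DFS_DIR:-/tmp/hadoop/hadoop-$(whoami)/dfs}"
BACKUP="$HOME/namenode-backup-$(date +%Y%m%d)"
if [ -d "$SRC" ]; then
  mkdir -p "$BACKUP"
  cp -r "$SRC" "$BACKUP"
  echo "backed up $SRC to $BACKUP"
else
  echo "nothing to back up at $SRC"
fi
```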

   2. Change hadoop.tmp.dir to a directory that survives reboots:

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/leecho(your username)/tmp</value>
  </property>

   Any directory that is not cleared on reboot will do.
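The new location can be prepared before editing core-site.xml. A minimal sketch, where the directory name is only an example (substitute your own account and path):

```shell
# Create a tmp directory under $HOME so it survives reboots.
# The path is an example; any location outside /tmp works.
PERSIST_TMP="$HOME/hadoop-tmp"
mkdir -p "$PERSIST_TMP"
chmod 755 "$PERSIST_TMP"
echo "set hadoop.tmp.dir to $PERSIST_TMP in core-site.xml, then restart Hadoop"
# Restart sequence (requires a Hadoop install):
#   bin/stop-all.sh
#   hadoop namenode -format   # needed once, since the storage location moved
#   bin/start-all.sh
```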