Resolving JournalNode-to-NameNode connection problems


How to resolve the org.apache.hadoop.ipc.Client: Retrying connect to server error.

1. Problem description
With HA configured as planned, the NameNode fails to stay up after startup.
Right after startup, jps shows the NameNode process, but a minute or two later the NameNode is gone.
After some testing, I found the following two cases:
1) If the JournalNodes are started first and HDFS afterwards, the NameNode starts and runs normally.
2) If everything is started with start-dfs.sh, all the services come up, but the NameNode exits after about two minutes; starting it again separately with hadoop-daemon.sh start namenode then runs stably.
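The working startup order from case 1) can be sketched as follows. This is a minimal sketch, not the article's exact commands; the host names node1–node3 are assumptions (only node1 appears in the log below), so adjust them to your own JournalNode topology:

```shell
# Sketch of the startup order that works (case 1).
# Assumption: JournalNodes run on node1, node2 and node3 -- adjust to your cluster.

# 1. Start the JournalNodes first, on every JournalNode host:
for host in node1 node2 node3; do
  ssh "$host" 'hadoop-daemon.sh start journalnode'
done

# 2. Only then start HDFS (NameNodes, DataNodes, ZKFCs):
start-dfs.sh

# 3. Verify the NameNode is still alive after a couple of minutes:
sleep 120
jps | grep NameNode
```

The point of the ordering is that the active NameNode must reach the JournalNode quorum during startup; if the JournalNodes are not yet listening, the NameNode retries the connection and eventually aborts, which matches the two-minute exit seen in case 2).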

Now look at the NameNode log. Don't be put off by its length; the clues to the error are all in there:
2016-03-09 10:50:27,123 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1/192.168.56.201
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.5.1
STARTUP_MSG: build = Unknown -r Unknown; compiled by 'root' on 2014-10-20T05:53Z
STARTUP_MSG: java = 1.7.0_09
************************************************************/
2016-03-09 10:50:27,132 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-03-09 10:50:27,138 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2016-03-09 10:50:27,465 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-03-09 10:50:27,623 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-03-09 10:50:27,623 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2016-03-09 10:50:27,625 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://hadoopha
2016-03-09 10:50:27,626 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use hadoopha to access this namenode/service.
2016-03-09 10:50:28,048 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
2016-03-09 10:50:28,048 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://node1:50070
2016-03-09 10:50:28,121 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-03-09 10:50:28,128 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2016-03-09 10:50:28,145 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-03-09 10:50:28,149 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-03-09 10:50:28,149 INFO org.apache.hadoop.http.HttpSe
