A small detail of the ToolRunner.run driver method when MapReduce interacts with HBase

This post focuses on the driver method ToolRunner.run used when a MapReduce job interacts with HBase. It does not spell out the specifics, but it makes clear that the core topic is a small detail of this driver method during that interaction, which may be of some reference value for related work.
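
Since the post names the detail only in its title, here is a minimal, hypothetical sketch of what a ToolRunner-based driver for a MapReduce job that reads and writes HBase typically looks like. The table names source_table and target_table, the column family info, and the CellCountMapper/CellCountReducer classes are illustrative assumptions, not something given in the original post. One detail that is easy to get wrong in such drivers: main() should hand an HBaseConfiguration to ToolRunner.run, and run() should reuse getConf() rather than building a fresh Configuration, otherwise hbase-site.xml settings and command-line -D overrides are lost.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class HBaseMrDriver extends Configured implements Tool {

    /** Counts the number of cells in each row of the source table. */
    public static class CellCountMapper extends TableMapper<ImmutableBytesWritable, IntWritable> {
        @Override
        protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
                throws IOException, InterruptedException {
            context.write(rowKey, new IntWritable(row.size()));
        }
    }

    /** Sums the per-row counts and writes each total back to the target table as a Put. */
    public static class CellCountReducer
            extends TableReducer<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> {
        @Override
        protected void reduce(ImmutableBytesWritable rowKey, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int total = 0;
            for (IntWritable c : counts) {
                total += c.get();
            }
            // Column family "info" is an assumption for this sketch.
            Put put = new Put(rowKey.get());
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("cell_count"), Bytes.toBytes(total));
            context.write(rowKey, put);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // Reuse the configuration ToolRunner injected via setConf(), so that
        // command-line -D overrides survive instead of being thrown away.
        Configuration conf = getConf();
        Job job = Job.getInstance(conf, "hbase-cell-count");
        job.setJarByClass(HBaseMrDriver.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // fewer RPC round trips for a full-table MR scan
        scan.setCacheBlocks(false);  // don't churn the RegionServer block cache from MR

        // Table names here are illustrative placeholders.
        TableMapReduceUtil.initTableMapperJob(
                "source_table", scan, CellCountMapper.class,
                ImmutableBytesWritable.class, IntWritable.class, job);
        TableMapReduceUtil.initTableReducerJob(
                "target_table", CellCountReducer.class, job);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // Pass an HBaseConfiguration (hadoop config plus hbase-site.xml) to
        // ToolRunner.run and let ToolRunner parse the generic options.
        int exit = ToolRunner.run(HBaseConfiguration.create(), new HBaseMrDriver(), args);
        System.exit(exit);
    }
}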

Checking the HBase log, I found the following:

2025-11-11 05:10:01,994 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2025-11-11 05:10:01,996 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2025-11-11 05:10:02,022 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=master:16000 connecting to ZooKeeper ensemble=hadoop1:2181,hadoop2:2181,hadoop3:2181
2025-11-11 05:10:02,027 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.5.7-f0fdd52973d373ffd9c86b81d99842dc2c7f660e, built on 02/10/2020 11:30 GMT
2025-11-11 05:10:02,027 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=hadoop1
2025-11-11 05:10:02,027 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_212
2025-11-11 05:10:02,027 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2025-11-11 05:10:02,028 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/home/hadoop/module/jdk1.8.0_212/jre
2025-11-11 05:10:02,028 INFO [main] zookeeper.ZooKeeper: hare/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.10.2-tests.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.10.2.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.10.2.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.10.2.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.2.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.10.2.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.10.2.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.10.2.jar:/home/hadoop/module/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.10.2.jar:/home/hadoop/module/hadoop-2.10.2/contrib/capacity-scheduler/*.jar
2025-11-11 05:10:02,028 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/module/hadoop-2.10.2/lib/native
2025-11-11 05:10:02,028 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2025-11-11 05:10:02,028 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2025-11-11 05:10:02,028 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2025-11-11 05:10:02,028 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2025-11-11 05:10:02,029 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-1160.el7.x86_64
2025-11-11 05:10:02,029 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hadoop
2025-11-11 05:10:02,029 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
2025-11-11 05:10:02,029 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/module/hbase
2025-11-11 05:10:02,029 INFO [main] zookeeper.ZooKeeper: Client environment:os.memory.free=189MB
2025-11-11 05:10:02,029 INFO [main] zookeeper.ZooKeeper: Client environment:os.memory.max=3959MB
2025-11-11 05:10:02,029 INFO [main] zookeeper.ZooKeeper: Client environment:os.memory.total=239MB
2025-11-11 05:10:02,030 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=hadoop1:2181,hadoop2:2181,hadoop3:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@1128620c
2025-11-11 05:10:02,044 INFO [main] common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2025-11-11 05:10:02,046 INFO [main] zookeeper.ClientCnxnSocket: jute.maxbuffer value is 4194304 Bytes
2025-11-11 05:10:02,052 INFO [main] zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=
2025-11-11 05:10:02,079 INFO [main-SendThread(hadoop2:2181)] zookeeper.ClientCnxn: Opening socket connection to server hadoop2/192.168.249.162:2181. Will not attempt to authenticate using SASL (unknown error)
2025-11-11 05:10:02,082 INFO [main-SendThread(hadoop2:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.249.161:50466, server: hadoop2/192.168.249.162:2181
2025-11-11 05:10:02,117 INFO [main-SendThread(hadoop2:2181)] zookeeper.ClientCnxn: Session establishment complete on server hadoop2/192.168.249.162:2181, sessionid = 0x200004582a10001, negotiated timeout = 40000
2025-11-11 05:10:02,197 INFO [main] util.log: Logging initialized @2907ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2025-11-11 05:10:02,299 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2025-11-11 05:10:02,300 INFO [main] http.HttpServer: Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2025-11-11 05:10:02,300 INFO [main] http.HttpServer: Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2025-11-11 05:10:02,301 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2025-11-11 05:10:02,301 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2025-11-11 05:10:02,301 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2025-11-11 05:10:02,319 INFO [main] http.HttpServer: ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2025-11-11 05:10:02,358 INFO [main] http.HttpServer: Jetty bound to port 16010
2025-11-11 05:10:02,359 INFO [main] server.Server: jetty-9.4.41.v20210516; built: 2021-05-16T23:56:28.993Z; git: 98607f93c7833e7dc59489b13f3cb0a114fb9f4c; jvm 1.8.0_212-b10
2025-11-11 05:10:02,374 INFO [main] http.SecurityHeadersFilter: Added security headers filter
2025-11-11 05:10:02,376 INFO [main] handler.ContextHandler: Started o.a.h.t.o.e.j.s.ServletContextHandler@216914{logs,/logs,file:///home/hadoop/module/hbase/logs/,AVAILABLE}
2025-11-11 05:10:02,376 INFO [main] http.SecurityHeadersFilter: Added security headers filter
2025-11-11 05:10:02,376 INFO [main] handler.ContextHandler: Started o.a.h.t.o.e.j.s.ServletContextHandler@b835727{static,/static,file:///home/hadoop/module/hbase/hbase-webapps/static/,AVAILABLE}
2025-11-11 05:10:02,517 INFO [main] webapp.StandardDescriptorProcessor: NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2025-11-11 05:10:02,524 INFO [main] server.session: DefaultSessionIdManager workerName=node0
2025-11-11 05:10:02,524 INFO [main] server.session: No SessionScavenger set, using defaults
2025-11-11 05:10:02,524 INFO [main] server.session: node0 Scavenging every 660000ms
2025-11-11 05:10:02,541 INFO [main] http.SecurityHeadersFilter: Added security headers filter
2025-11-11 05:10:02,572 INFO [main] handler.ContextHandler: Started o.a.h.t.o.e.j.w.WebAppContext@69fe0ed4{master,/,file:///home/hadoop/module/hbase/hbase-webapps/master/,AVAILABLE}{file:/home/hadoop/module/hbase/hbase-webapps/master}
2025-11-11 05:10:02,591 INFO [main] server.AbstractConnector: Started ServerConnector@36c0d0bd{HTTP/1.1, (http/1.1)}{0.0.0.0:16010}
2025-11-11 05:10:02,591 INFO [main] server.Server: Started @3300ms
2025-11-11 05:10:02,603 INFO [main] master.HMaster: hbase.rootdir=hdfs://hadoop1:9000/hbase, hbase.cluster.distributed=true
2025-11-11 05:10:02,662 INFO [master/hadoop1:16000:becomeActiveMaster] master.HMaster: Adding backup master ZNode /hbase/backup-masters/hadoop1,16000,1762866599968
2025-11-11 05:10:02,725 ERROR [master/hadoop1:16000:becomeActiveMaster] master.HMaster: Failed to become Active Master
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/backup-masters/hadoop1,16000,1762866599968
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
    at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:546)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:525)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.createEphemeralNodeAndWatch(ZKUtil.java:744)
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.setMasterAddress(MasterAddressTracker.java:216)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2162)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:511)
    at java.lang.Thread.run(Thread.java:748)
2025-11-11 05:10:02,727 ERROR [master/hadoop1:16000:becomeActiveMaster] master.HMaster: ***** ABORTING master hadoop1,16000,1762866599968: Failed to become Active Master *****
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/backup-masters/hadoop1,16000,1762866599968
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:118)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
    at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:546)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:525)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.createEphemeralNodeAndWatch(ZKUtil.java:744)
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.setMasterAddress(MasterAddressTracker.java:216)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2162)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:511)
    at java.lang.Thread.run(Thread.java:748)
2025-11-11 05:10:02,727 INFO [master/hadoop1:16000:becomeActiveMaster] regionserver.HRegionServer: ***** STOPPING region server 'hadoop1,16000,1762866599968' *****
2025-11-11 05:10:02,727 INFO [master/hadoop1:16000:becomeActiveMaster] regionserver.HRegionServer: STOPPED: Stopped by master/hadoop1:16000:becomeActiveMaster
2025-11-11 05:10:02,741 WARN [master/hadoop1:16000:becomeActiveMaster] master.ActiveMasterManager: Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2025-11-11 05:10:05,668 INFO [master/hadoop1:16000] ipc.NettyRpcServer: Stopping server on /192.168.249.161:16000
2025-11-11 05:10:05,670 INFO [master/hadoop1:16000] regionserver.HRegionServer: Stopping infoServer
2025-11-11 05:10:05,679 INFO [master/hadoop1:16000] handler.ContextHandler: Stopped o.a.h.t.o.e.j.w.WebAppContext@69fe0ed4{master,/,null,STOPPED}{file:/home/hadoop/module/hbase/hbase-webapps/master}
2025-11-11 05:10:05,692 INFO [master/hadoop1:16000] server.AbstractConnector: Stopped ServerConnector@36c0d0bd{HTTP/1.1, (http/1.1)}{0.0.0.0:16010}
2025-11-11 05:10:05,692 INFO [master/hadoop1:16000] server.session: node0 Stopped scavenging
2025-11-11 05:10:05,692 INFO [master/hadoop1:16000] handler.ContextHandler: Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b835727{static,/static,file:///home/hadoop/module/hbase/hbase-webapps/static/,STOPPED}
2025-11-11 05:10:05,693 INFO [master/hadoop1:16000] handler.ContextHandler: Stopped o.a.h.t.o.e.j.s.ServletContextHandler@216914{logs,/logs,file:///home/hadoop/module/hbase/logs/,STOPPED}
2025-11-11 05:10:05,694 INFO [master/hadoop1:16000] regionserver.HRegionServer: aborting server hadoop1,16000,1762866599968
2025-11-11 05:10:05,703 INFO [master/hadoop1:16000] regionserver.HRegionServer: stopping server hadoop1,16000,1762866599968; all regions closed.
2025-11-11 05:10:05,703 INFO [master/hadoop1:16000] hbase.ChoreService: Chore service for: master/hadoop1:16000 had [] on shutdown
2025-11-11 05:10:05,706 WARN [master/hadoop1:16000] master.ActiveMasterManager: Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2025-11-11 05:10:05,824 INFO [master/hadoop1:16000] zookeeper.ZooKeeper: Session: 0x200004582a10001 closed
2025-11-11 05:10:05,824 INFO [master/hadoop1:16000] regionserver.HRegionServer: Exiting; stopping=hadoop1,16000,1762866599968; zookeeper connection closed.
2025-11-11 05:10:05,824 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:261)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:149)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:152)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2962)
2025-11-11 05:10:05,825 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x200004582a10001

After HMaster is started, the HMaster node dies on its own a short while later. What is the problem, and, based on the log above, how exactly can it be fixed?
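
The immediate failure in the log is the NoNodeException thrown while creating the ephemeral znode /hbase/backup-masters/hadoop1,16000,1762866599968. In ZooKeeper, create() fails with NoNode when the parent path does not exist, so a reasonable first diagnostic step is to check whether the /hbase tree (the value of zookeeper.znode.parent) is actually present on this ensemble. Below is a minimal, hypothetical Java sketch against the same quorum shown in the log; the class name CheckHBaseZnodes and the hard-coded paths are illustrative assumptions, not from the original post.

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

/** Checks whether the znodes HMaster needs exist on the ensemble from the log. */
public class CheckHBaseZnodes {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connection string taken from the log above; session timeout is arbitrary.
        ZooKeeper zk = new ZooKeeper(
                "hadoop1:2181,hadoop2:2181,hadoop3:2181", 40000,
                (WatchedEvent e) -> {
                    if (e.getState() == Watcher.Event.KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                });
        connected.await();
        // Paths to probe; /hbase is the default zookeeper.znode.parent.
        for (String path : new String[] {"/hbase", "/hbase/backup-masters", "/hbase/master"}) {
            Stat stat = zk.exists(path, false);
            System.out.println(path + " -> " + (stat == null ? "MISSING" : "exists"));
        }
        zk.close();
    }
}

If /hbase or /hbase/backup-masters comes back MISSING, the usual suspects are a zookeeper.znode.parent in hbase-site.xml that does not match what the cluster actually uses, or a /hbase tree that was removed or never initialized on this ensemble; verifying that configuration against all three ZooKeeper servers is a safer first move than deleting or recreating anything by hand.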