HBase query timeouts: troubleshooting an HBase scan timeout

This post walks through a timeout hit while scanning HBase from a MapReduce job, records the full exception (a SocketTimeoutException), discusses possible causes such as filter usage and a change in the cluster environment, and lists the troubleshooting steps: checking table health, adjusting the timeout, and considering a different query strategy.

The exception was:

2018-11-08 16:55:52,361 INFO [main] org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: recovered from org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:

Thu Nov 08 16:55:52 CST 2018, null, java.net.SocketTimeoutException: callTimeout=180000, callDuration=180111: row '120180358862' on table 'ubas:stats_job_user_analysis' at region=ubas:stats_job_user_analysis,1\x11201803\x1158862,1536809361468.aa5027e279ba39d6505d8b507c6aa3a0., hostname=foxlog02.engine.wx,16020,1538805121280, seqNum=61831

at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)

at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)

at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)

at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)

at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:413)

at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:371)

at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:210)

at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:147)

at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:216)

at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)

at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)

at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)

at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)

at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)

at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)

at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:422)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)

at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Caused by: java.net.SocketTimeoutException: callTimeout=180000, callDuration=180111: row '120180358862' on table 'ubas:stats_job_user_analysis' at region=ubas:stats_job_user_analysis,1\x11201803\x1158862,1536809361468.aa5027e279ba39d6505d8b507c6aa3a0., hostname=foxlog02.engine.wx,16020,1538805121280, seqNum=61831

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)

at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.io.IOException: Call to FoxLog02.engine.wx/192.168.202.2:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=2, waitTime=180001, operationTimeout=180000 expired.

at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:292)

at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1271)

at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)

at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)

at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)

at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:220)

at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)

at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)

at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)

... 4 more

Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=2, waitTime=180001, operationTimeout=180000 expired.

at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)

at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1245)

... 13 more

2018-11-08 16:55:52,362 WARN [main] org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: We are restarting the first next() invocation, if your mapper has restarted a few other times like this then you should consider killing this job and investigate why it's taking so long.

2018-11-08 17:01:45,893 INFO [main] com.tracker.offline.business.job.user.JobAnalMonthTopMR: count :0

2018-11-08 17:01:45,893 INFO [main] com.tracker.offline.business.job.user.JobAnalMonthTopMR: tablename execute end :ubas:stats_job_user_analysis

2018-11-08 17:04:45,896 WARN [main] org.apache.hadoop.hbase.client.ScannerCallable: Ignore, probably already closed

Parameters set in the code:

Scan scan = new Scan();
scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, tablename.getBytes());
scan.setCaching(2000);
scan.setCacheBlocks(false);

// One prefix filter per month; a row passes if any prefix matches
FilterList filter = new FilterList(FilterList.Operator.MUST_PASS_ONE);
for (int i = 0; i < monthsList.size(); i++) {
    filter.addFilter(new PrefixFilter(Bytes.toBytes(j + RowUtil.ROW_SPLIT + monthsList.get(i))));
}
scan.setFilter(filter);

HbaseMapReduce mr_1 = new HbaseMapReduce(JobAnalMonthTopMR.class, JobMonthMapper.class,
        Text.class, Text.class, trackerconfig, outputPath + "/01");
mr_1.setReducerClass(JobMonthReducer.class)
    .setScan(scan)
    .setParameter("hbase.client.scanner.timeout.period", "180000")
    .setParameter("hbase.rpc.timeout", "180000")
    .setJarName("JobAnalMonthTopMR-statistic");
mr_1.waitForCompletion();

Basic configuration and the scenario in which the problem appeared:

1- Timeout: 180 s;

2- Caching: 2000 rows per fetch, each row with three columns holding Long values;

3- Size of the scanned table: 54 GB;

4- One filter, doing prefix matching;

5- The cluster had just been migrated: MapReduce and HBase used to run on the same cluster, but after the migration they run on separate clusters;

Troubleshooting:

1- Check whether the table is corrupted, e.g. with `hbase hbck`;

2- Addressing the timeout itself:

(1) increase the timeout; (2) reduce the caching size. But the problem was not here: each cell holds a small Long value, so even 2000 cached rows do not take up much memory;
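For remedy (1), the scanner and RPC timeouts the job already sets programmatically can also be raised in the client's hbase-site.xml. A sketch using illustrative 5-minute values (pick values to suit your workload):

```xml
<!-- Client-side scanner timeout in ms; applies to each scanner RPC -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>300000</value>
</property>
<!-- General client RPC timeout in ms -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>300000</value>
</property>
```

Lowering scan.setCaching() (remedy 2) works on the same budget from the other side: fewer rows per scanner RPC makes each call more likely to finish inside the timeout.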

3- The prefix filter: filtered scans and prefix matching are comparatively slow, so when the proportion of matching rows within the scanned blocks is very small, the server struggles to return 2000 rows inside the timeout window. The fix is to replace the prefix filter with a range scan (explicit start and stop rows).
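The range-scan replacement needs a stop row derived from each prefix. A minimal, self-contained sketch of the usual technique (increment the last non-0xFF byte of the prefix and truncate after it); the class and method names here are illustrative, not from the original job:

```java
import java.util.Arrays;

public class PrefixRange {

    // Smallest row key strictly greater than every key starting with
    // `prefix`: increment the last byte that is not 0xFF and cut there.
    public static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;
                return Arrays.copyOf(stop, i + 1);
            }
        }
        return new byte[0]; // prefix is all 0xFF: scan to end of table
    }

    public static void main(String[] args) {
        byte[] prefix = {1, 2, 3};
        System.out.println(Arrays.toString(stopRowForPrefix(prefix))); // [1, 2, 4]
    }
}
```

With the HBase 1.x client API the scan then becomes `scan.setStartRow(prefix); scan.setStopRow(stopRowForPrefix(prefix));`, so the region server seeks directly to the first matching row instead of filtering row by row.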
