Troubleshooting an Impala Hang

Symptom: when Impala hangs, every job on the cluster stalls, and even the most basic queries return no results.

Investigation:

1. Judging from when the scheduled jobs began to stall, the problem started shortly after 08:12.

2. The catalog server log contained no exceptions that could explain the problem.

3. The statestore log shows that at around 08:14 there was a catalog (metadata) update for the impalad node on bshadoop14, 4.13 GB in size.

The RPC for this update timed out (the timeout is 300 s). From then on, the statestore stayed stuck preparing the initial catalog update over and over until Impala was restarted:

I0317 08:14:15.408370  6762 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:19:22.105964  6762 client-cache.h:313] RPC recv timed out: dest address: bshadoop14:23000, rpc: N6impala20TUpdateStateResponseE
I0317 08:19:22.106006  6762 client-cache.cc:170] Broken Connection, destroy client for bshadoop14:23000
I0317 08:19:22.109560  6762 statestore.cc:891] Unable to send topic update message to subscriber impalad@bshadoop14:22000, received error: RPC recv timed out: dest address: bshadoop14:23000, rpc: N6impala20TUpdateStateResponseE
Subscriber impalad@bshadoop14:22000 timed-out during topic-update RPC. Timeout is 300s.
I0317 08:19:26.121605  6761 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:19:37.015830  6758 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:19:48.061498  6763 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:19:59.049192  6762 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:20:10.151131  6763 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:20:20.877066  6764 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:20:31.708010  6762 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:20:42.453140  6767 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:20:53.458396  6763 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:21:04.629040  6758 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:21:15.740864  6759 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:21:26.485383  6761 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:21:37.233273  6767 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:21:48.155161  6763 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:21:59.084264  6762 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:22:10.235431  6766 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:22:21.428581  6763 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:22:32.263168  6759 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:22:42.722151  6767 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:22:53.583761  6762 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:23:04.101799  6761 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:23:14.911124  6763 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:23:25.436506  6758 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:23:35.926985  6764 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:23:46.918821  6763 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:23:57.561246  6761 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:24:08.552958  6759 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
I0317 08:24:19.591955  6760 statestore.cc:259] Preparing initial catalog-update topic update for impalad@bshadoop14:22000. Size = 4.13 GB
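
The repeating "Preparing initial catalog-update" lines above are the signature of a subscriber that never manages to apply the update. As an illustration (this is a sketch, not an Impala tool; the log format is assumed to match the excerpt above), a short script can flag subscribers stuck in this loop:

```python
import re
from collections import Counter

# Matches the "Preparing initial catalog-update" statestore log line as it
# appears in the excerpt above; any other format is an assumption.
PREP_RE = re.compile(
    r"Preparing initial catalog-update topic update for (\S+)\. Size = (\S+ \S+)"
)

def find_stuck_subscribers(log_lines, threshold=5):
    """Return subscribers that re-entered the initial catalog-update loop
    at least `threshold` times -- a sign they never applied the update."""
    counts = Counter()
    for line in log_lines:
        m = PREP_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return {sub: n for sub, n in counts.items() if n >= threshold}
```

Run against the statestore log from this incident, it would immediately single out impalad@bshadoop14:22000.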

So what went wrong on bshadoop14 at that time?

4. Why did bshadoop14 request a full metadata update at that point?

Its impalad log shows that the node hit repeated OutOfMemoryErrors during this period:

I0317 08:14:12.478821 299166 krpc-data-stream-mgr.cc:293] DeregisterRecvr(): fragment_instance_id=2d437bee0eed902a:a7415302000003f0, node=221
I0317 08:14:12.478853 299166 krpc-data-stream-recvr.cc:557] cancelled stream: fragment_instance_id=2d437bee0eed902a:a7415302000003f0 node_id=221
I0317 08:14:13.023018 49277 krpc-data-stream-mgr.cc:407] Reduced stream ID cache from 412 items, to 397, eviction took: 0
I0317 08:14:13.384285 299882 hdfs-scan-node.cc:557] Error preparing scanner for scan range hdfs://nameservice1/AIRFLOW/DB/TASK_INSTANCE/dt=2021-01-15/35414e3d9108793a-29b6d72e00000000_654409867_data.0.parq(1546032:102400).
I0317 08:14:13.384629 300401 jni-util.cc:230] java.lang.OutOfMemoryError: Java heap space
        at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
        at java.lang.StringCoding.decode(StringCoding.java:193)
        at java.lang.String.<init>(String.java:426)
        at java.lang.String.<init>(String.java:491)
        at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:355)
        at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:347)
        at org.apache.impala.thrift.THdfsPartition$THdfsPartitionStandardScheme.read(THdfsPartition.java:1935)
        at org.apache.impala.thrift.THdfsPartition$THdfsPartitionStandardScheme.read(THdfsPartition.java:1777)
        at org.apache.impala.thrift.THdfsPartition.read(THdfsPartition.java:1598)
        at org.apache.impala.thrift.THdfsTable$THdfsTableStandardScheme.read(THdfsTable.java:1144)
        at org.apache.impala.thrift.THdfsTable$THdfsTableStandardScheme.read(THdfsTable.java:1071)
        at org.apache.impala.thrift.THdfsTable.read(THdfsTable.java:936)
        at org.apache.impala.thrift.TTable$TTableStandardScheme.read(TTable.java:1502)
        at org.apache.impala.thrift.TTable$TTableStandardScheme.read(TTable.java:1399)
        at org.apache.impala.thrift.TTable.read(TTable.java:1204)
        at org.apache.impala.thrift.TCatalogObject$TCatalogObjectStandardScheme.read(TCatalogObject.java:1147)
        at org.apache.impala.thrift.TCatalogObject$TCatalogObjectStandardScheme.read(TCatalogObject.java:1098)
        at org.apache.impala.thrift.TCatalogObject.read(TCatalogObject.java:934)
        at org.apache.impala.catalog.ImpaladCatalog.updateCatalog(ImpaladCatalog.java:166)
        at org.apache.impala.service.Frontend.updateCatalogCache(Frontend.java:223)
        at org.apache.impala.service.JniFrontend.updateCatalogCache(JniFrontend.java:175)
I0317 08:14:13.385059 298935 query-state.cc:412] Instance completed. instance_id=db468fb29e1c10d7:f9034a300000051 #in-flight=164 status=OK
I0317 08:14:13.385074 298935 query-exec-mgr.cc:155] ReleaseQueryState(): query_id=db468fb29e1c10d7:f9034a300000000 refcnt=21
I0317 08:14:13.385756 298834 query-state.cc:412] Instance completed. instance_id=4c4b18eae2399966:a61a021200000037 #in-flight=163 status=OK
I0317 08:14:13.385792 298834 query-exec-mgr.cc:155] ReleaseQueryState(): query_id=4c4b18eae2399966:a61a021200000000 refcnt=1
I0317 08:14:13.387920 300401 status.cc:125] OutOfMemoryError: Java heap space
    @           0x966e5a
    @           0xcef29d
    @           0xbdf140
    @           0xbfdfa7
    @           0xca56da
    @           0xcaae83
    @          0x11559a6
    @          0x115461d
    @           0x934aea
    @           0xb19009
    @           0xb19009
    @           0xb11029
    @           0xb11e22
    @           0xd61ebf
    @           0xd626ba
    @          0x12dae0a
    @       0x3475a07aa1
    @       0x34752e8c4d
E0317 08:14:13.387948 300401 impala-server.cc:1379] There was an error processing the impalad catalog update. Requesting a full topic update to recover: OutOfMemoryError: Java heap space
I0317 08:14:13.388512 268489 RetryInvocationHandler.java:150] Exception while invoking renewLease of class ClientNamenodeProtocolTranslatorPB over bshadoop13/10.3.0.13:8020. Trying to fail over immediately.
Java exception follows:
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "bshadoop14/10.3.0.14"; destination host is: "bshadoop13":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
        at org.apache.hadoop.ipc.Client.call(Client.java:1508)
        at org.apache.hadoop.ipc.Client.call(Client.java:1441)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy10.renewLease(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:607)
        at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy11.renewLease(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:987)
        at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:398)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:423)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$600(LeaseRenewer.java:72)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:302)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:826)
        at org.apache.hadoop.ipc.Client$Connection.access$3000(Client.java:396)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1557)
        at org.apache.hadoop.ipc.Client.call(Client.java:1480)
        ... 16 more
Caused by: java.lang.OutOfMemoryError: Java heap space
        at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:76)
        at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:59)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:803)
        ... 19 more

The OOMs continued from 08:15 through 08:50:

I0317 08:15:02.190376 298378 status.cc:125] Failed to write data (length: 383270) to Hdfs file: hdfs://nameservice1/RTC/DWD/DWD_F_ORDER_LINE_DETAIL/_impala_insert_staging/c94bff3c15a312cb_18f23a8600000000/.c94bff3c15a312cb-18f23a860000046e_2137245465_dir/c94bff3c15a312cb-18f23a860000046e_84652990_data.0.
Error(12): Cannot allocate memory
Root cause: OutOfMemoryError: Java heap space
    @           0x966e5a
    @          0x10800bb
    @           0xe0a4a5
I0317 08:16:16.792361 298309 query-state.cc:412] Instance completed. instance_id=db456d408a852a28:358d151b00000684 #in-flight=78 status=GENERAL: Failed to write data (length: 202210) to Hdfs file: hdfs://nameservice1/RTC/DWD/DWD_F_CONTRACT_LINE_DETAIL/_impala_insert_staging/db456d408a852a28_358d151b00000000/.db456d408a852a28-358d151b00000684_1799736371_dir/db456d408a852a28-358d151b00000684_1886623842_data.0.
Error(12): Cannot allocate memory
Root cause: OutOfMemoryError: Java heap space
I0317 08:25:13.305672 298796 query-state.cc:412] Instance completed. instance_id=7e4c0c39c2ecbbb9:364e6b7e0000044e #in-flight=174 status=GENERAL: Failed to write data (length: 792236) to Hdfs file: hdfs://nameservice1/RTC/DWD/DWD_F_OUT_DETAIL/_impala_insert_staging/7e4c0c39c2ecbbb9_364e6b7e00000000/.7e4c0c39c2ecbbb9-364e6b7e0000044e_232631440_dir/7e4c0c39c2ecbbb9-364e6b7e0000044e_1522903463_data.0.
Error(12): Cannot allocate memory
Root cause: OutOfMemoryError: Java heap space
I0317 08:34:20.747110 300459 jni-util.cc:230] java.lang.OutOfMemoryError: Java heap space
E0317 08:34:20.747313 49141 DomainSocketWatcher.java:506] org.apache.hadoop.net.unix.DomainSocketWatcher$2@5848050 terminating on exception
Java exception follows:
java.lang.OutOfMemoryError: Java heap space
E0317 08:42:04.733140 302869 impala-server.cc:1379] There was an error processing the impalad catalog update. Requesting a full topic update to recover: OutOfMemoryError: Java heap space
E0317 08:46:01.162238 300401 impala-server.cc:1379] There was an error processing the impalad catalog update. Requesting a full topic update to recover: OutOfMemoryError: Java heap space
E0317 08:48:15.085005 314607 impala-server.cc:1379] There was an error processing the impalad catalog update. Requesting a full topic update to recover: OutOfMemoryError: Java heap space
E0317 08:49:01.197199 300401 impala-server.cc:1379] There was an error processing the impalad catalog update. Requesting a full topic update to recover: OutOfMemoryError: Java heap space

5. Host memory usage was not high.

Impala memory:
Impala's mem_limit is set to 256 GB and was nowhere near exhausted,
so according to the error messages it is the JVM heap that filled up:

[root@bshadoop14 impalad]# jinfo -flags 321935
Attaching to process ID 321935, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.191-b12
Non-default VM flags: -XX:CICompilerCount=15 -XX:InitialHeapSize=2147483648 -XX:MaxHeapSize=32210157568 -XX:MaxNewSize=10736369664 -XX:MinHeapDeltaBytes=524288 -XX:NewSize=715653120 -XX:OldSize=1431830528 -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseParallelGC
Command line:  -Djava.library.path=/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/bin/../lib/impala/lib

The JVM heap maximum is 32 GB (MaxHeapSize=32210157568).
In the production CDH 5.15.2, this parameter is not exposed for modification; CDH 5.16 makes it configurable.
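
The heap ceiling can be read straight out of the jinfo output. A small helper (a sketch; it assumes the "Non-default VM flags" line format shown above):

```python
import re

def max_heap_gib(jinfo_output):
    """Extract -XX:MaxHeapSize (in bytes) from `jinfo -flags` output and
    return it in GiB; returns None if the flag is absent."""
    m = re.search(r"-XX:MaxHeapSize=(\d+)", jinfo_output)
    if m is None:
        return None
    return int(m.group(1)) / 2**30

# The value from this incident, 32210157568 bytes, is ~30 GiB
# (~32.2 GB in decimal units, hence the "32G" figure above).
```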

6. Other Impala charts:
(chart omitted)
At the time, the Impalad role's CPU usage was above 100%.
(chart omitted)
Role CPU usage on the other nodes was not high:
(chart omitted)
Of course, it is also possible that the metadata sync failed first and the CPU spiked afterwards.

7. Putting the sequence together:

First, at around 08:14 the impalad JVM on bshadoop14 was under very high CPU load (the other nodes were not), and its heap was close to the maximum, so it could not allocate any more memory:

I0317 08:14:13.384629 300401 jni-util.cc:230] java.lang.OutOfMemoryError: Java heap space

bshadoop14 then hit an error while processing the impalad catalog update and requested a full topic update to recover, but the JVM kept running out of memory while applying the update.

As a result, metadata across the cluster was out of sync from then on, and the query fragments scheduled onto this node could not complete.

E0317 08:14:13.387948 300401 impala-server.cc:1379] There was an error processing the impalad catalog update. Requesting a full topic update to recover: OutOfMemoryError: Java heap space

I0317 08:14:13.388512 268489 RetryInvocationHandler.java:150] Exception while invoking renewLease of class ClientNamenodeProtocolTranslatorPB over bshadoop13/10.3.0.13:8020. Trying to fail over immediately.
Java exception follows:
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "bshadoop14/10.3.0.14"; destination host is: "bshadoop13":8020;

8. Fixes:
① When an impalad hits OOM, restart that Impala node to free memory so that the metadata can be loaded.

② Reduce metadata INVALIDATE operations to ease the pressure.

③ Add the following to the impalad advanced configuration snippet:
JAVA_TOOL_OPTIONS=-Xmx64g
then restart. From CDH 5.16 onwards there is a dedicated configuration option for this.
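
Note that raising -Xmx adds to, rather than carving out of, the C++ side's mem_limit, so the combined budget should stay under host RAM. A back-of-the-envelope check (the numbers and the 16 GB OS headroom are illustrative assumptions, not a rule):

```python
def memory_budget_ok(host_ram_gb, mem_limit_gb, jvm_heap_gb, headroom_gb=16):
    """The impalad C++ process (mem_limit) and its embedded JVM (-Xmx) are
    separate budgets; both must fit in host RAM with some OS headroom."""
    return mem_limit_gb + jvm_heap_gb + headroom_gb <= host_ram_gb

# e.g. a 512 GB host comfortably fits mem_limit=256 GB plus -Xmx64g.
```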

9. Follow-up:

① Add monitoring charts:

Impalad memory / JVM memory / role CPU / Catalog JVM

② Investigate why the JVM heap on bshadoop14 filled up.

mem_limit: 256G
Impala Daemon JVM Heap: 32G

The Impala Daemon is actually composed of two different processes. One is written in C++ and mainly executes queries; the other is written in Java and mainly compiles statements and stores metadata. The Java process is embedded in the C++ process, so the two share one process ID. Of the two memory parameters above, mem_limit caps the memory of the C++ process, while Impala Daemon JVM Heap caps the Java process.

The Java process mainly compiles statements and holds metadata; here metadata accounted for 4 GB of the 32 GB heap. Note, however, that receiving and deserializing a 4.13 GB serialized full topic update transiently needs well over 4 GB of free heap, which is consistent with the updates failing over and over.
