Hadoop-Impala Top Ten Optimizations, Part 4: Performance Tuning and Best Practices Based on the Execution Plan

http://blog.csdn.net/seeyouc/article/details/53581725

1.1  Hadoop-Impala Top Ten Optimizations, Part 4: Performance Tuning and Best Practices Based on the Execution Plan


1.1.1  Using the EXPLAIN Plan for Performance Tuning

The EXPLAIN statement gives you an outline of the logical steps that a query will perform, for example how the work will be distributed among the nodes and how intermediate results will be combined to produce the final result set. You can see these details before actually running the query, and use this information to verify that the query will not operate in some unexpected or inefficient way.

[impalad-host:21000] > explain select count(*) from customer_address;
+----------------------------------------------------------+
| Explain String                                           |
+----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=42.00MB VCores=1 |
|                                                          |
| 03:AGGREGATE [MERGE FINALIZE]                            |
| |  output: sum(count(*))                                 |
| |                                                        |
| 02:EXCHANGE [PARTITION=UNPARTITIONED]                    |
| |                                                        |
| 01:AGGREGATE                                             |
| |  output: count(*)                                      |
| |                                                        |
| 00:SCAN HDFS [default.customer_address]                  |
|    partitions=1/1 size=5.25MB                            |
+----------------------------------------------------------+

Read the EXPLAIN plan from bottom to top:

- The last part of the plan shows the low-level details, such as the expected amount of data to be read, from which you can judge the effectiveness of your partitioning strategy and estimate how long it will take to scan a table, based on the total data size and the size of the cluster.

- Next you see the operations that Impala will perform in parallel on each node.

- At the higher levels, you can see how the data flows when intermediate result sets are combined and transmitted from one node to another.

- See the EXPLAIN_LEVEL query option for details; it lets you customize how much detail the EXPLAIN plan shows, depending on whether you are doing high-level or low-level tuning, and whether you are concerned with the logical or physical aspects of the query.

The EXPLAIN plan is also printed at the beginning of the query profile described in Using the Query Profile for Performance Tuning, for convenience in examining the logical and physical aspects of the query side by side.

The EXPLAIN_LEVEL query option controls the amount of detail shown in the EXPLAIN output. You would typically increase this setting from normal to verbose (or from 0 to 1) when double-checking the presence of table and column statistics during performance tuning, or when estimating query resource usage in conjunction with the resource management features in CDH 5.
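When planning resource usage, the memory figure in the "Estimated Per-Host Requirements" line of EXPLAIN output can also be extracted programmatically. A minimal sketch; the `parse_estimated_memory` helper is hypothetical and not part of any Impala library:

```python
import re

# Hypothetical helper (not part of Impala): parse the estimated per-host
# memory figure from an EXPLAIN header line into a byte count.
UNITS = {"B": 1, "KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_estimated_memory(line):
    match = re.search(r"Memory=([\d.]+)(B|KB|MB|GB)", line)
    if match is None:
        return None
    return float(match.group(1)) * UNITS[match.group(2)]

print(parse_estimated_memory(
    "Estimated Per-Host Requirements: Memory=42.00MB VCores=1"))  # prints 44040192.0
```

A figure like this could then be compared against per-host memory limits before admitting the query.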

1.1.2  Using the SUMMARY Report for Performance Tuning

The SUMMARY command in the impala-shell interpreter gives you an easy-to-digest overview of the time taken by the different phases of query execution. Like the EXPLAIN plan, it makes it easy to see potential performance bottlenecks. Like the PROFILE output, it is only available after the query has run, so it displays actual timing figures.

The SUMMARY report is also printed at the beginning of the query profile described in Using the Query Profile for Performance Tuning, for convenience in examining the high-level and low-level aspects of the query side by side.

For example, here is a query involving an aggregate function, run on a single-node virtual machine. The different stages of the query and their timings are shown (rolled up across all nodes), along with the estimated values the planner used versus the actual values. In this case, the AVG() function is computed for a subset of the data on each node (stage 01), and then the partial results from all nodes are combined at the end (stage 03). You can see which stage took the most time, and whether any estimates differed substantially from the actual data distribution. (When examining the time values, consider the suffixes, such as us for microseconds and ms for milliseconds, rather than just looking for the largest number.)


[localhost:21000] > select avg(ss_sales_price) from store_sales where ss_coupon_amt = 0;
+---------------------+
| avg(ss_sales_price) |
+---------------------+
| 37.80770926328327   |
+---------------------+
[localhost:21000] > summary;
+--------------+--------+----------+----------+-------+------------+----------+---------------+-----------------+
| Operator     | #Hosts | Avg Time | Max Time | #Rows | Est. #Rows | Peak Mem | Est. Peak Mem | Detail          |
+--------------+--------+----------+----------+-------+------------+----------+---------------+-----------------+
| 03:AGGREGATE | 1      | 1.03ms   | 1.03ms   | 1     | 1          | 48.00 KB | -1 B          | MERGE FINALIZE  |
| 02:EXCHANGE  | 1      | 0ns      | 0ns      | 1     | 1          | 0 B      | -1 B          | UNPARTITIONED   |
| 01:AGGREGATE | 1      | 30.79ms  | 30.79ms  | 1     | 1          | 80.00 KB | 10.00 MB      |                 |
| 00:SCAN HDFS | 1      | 5.45s    | 5.45s    | 2.21M | -1         | 64.05 MB | 432.00 MB     | tpc.store_sales |
+--------------+--------+----------+----------+-------+------------+----------+---------------+-----------------+

 

Notice how the longest initial phase of the query is measured in seconds (s), while the later phases, which work on smaller intermediate results, are measured in milliseconds (ms) or even nanoseconds (ns).
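Because the suffixes differ from stage to stage, it can help to normalize every duration to one unit before comparing. A small sketch; the `to_seconds` helper is hypothetical, covering the simple and compound formats that appear in these reports, such as `5.45s`, `30.79ms`, and `2s165ms`:

```python
import re

# Hypothetical helper: normalize Impala duration strings ("5.45s",
# "30.79ms", "79.738us", "0ns", compound "2s165ms") to seconds so that
# stages with different suffixes can be compared directly.
SCALE = {"s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}

def to_seconds(duration):
    total = 0.0
    # Compound values like "2s165ms" yield multiple (value, unit) pairs.
    for value, unit in re.findall(r"([\d.]+)(ns|us|ms|s)", duration):
        total += float(value) * SCALE[unit]
    return total

print(to_seconds("5.45s"), to_seconds("2s165ms"), to_seconds("0ns"))
```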

Here is an example from a more complex query, as it would appear in the PROFILE output:

Operator              #Hosts   Avg Time   Max Time    #Rows  Est. #Rows  Peak Mem  Est. Peak Mem  Detail
------------------------------------------------------------------------------------------------------------------------
09:MERGING-EXCHANGE        1   79.738us   79.738us        5           5         0        -1.00 B  UNPARTITIONED
05:TOP-N                   3   84.693us   88.810us        5           5  12.00 KB       120.00 B
04:AGGREGATE               3    5.263ms    6.432ms        5           5  44.00 KB       10.00 MB  MERGE FINALIZE
08:AGGREGATE               3   16.659ms   27.444ms   52.52K     600.12K   3.20 MB       15.11 MB  MERGE
07:EXCHANGE                3    2.644ms      5.1ms   52.52K     600.12K         0              0  HASH(o_orderpriority)
03:AGGREGATE               3  342.913ms  966.291ms   52.52K     600.12K  10.80 MB       15.11 MB
02:HASH JOIN               3    2s165ms    2s171ms  144.87K     600.12K  13.63 MB      941.01 KB  INNER JOIN, BROADCAST
|--06:EXCHANGE             3    8.296ms    8.692ms   57.22K      15.00K         0              0  BROADCAST
|  01:SCAN HDFS            2    1s412ms    1s978ms   57.22K      15.00K  24.21 MB      176.00 MB  tpch.orders o
00:SCAN HDFS               3    8s032ms    8s558ms    3.79M     600.12K  32.29 MB      264.00 MB  tpch.lineitem l
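A large gap between #Rows and Est. #Rows, such as the 3.79M actual rows versus the 600.12K estimate on the final scan above, usually points to missing or stale table statistics. A rough sketch of flagging such mismatches from SUMMARY values; the helper names are hypothetical:

```python
# Hypothetical sketch: flag SUMMARY rows whose actual row count (#Rows)
# is far from the planner's estimate (Est. #Rows); -1 means no estimate.
# Large mismatches often indicate missing or stale table statistics,
# typically addressed by running COMPUTE STATS on the table.
MULT = {"K": 1e3, "M": 1e6, "B": 1e9}

def row_count(text):
    text = text.strip()
    if text == "-1":          # the planner had no estimate at all
        return None
    if text[-1] in MULT:
        return float(text[:-1]) * MULT[text[-1]]
    return float(text)

def misestimated(actual, estimated, factor=5.0):
    a, e = row_count(actual), row_count(estimated)
    if a is None or e is None or e == 0:
        return True           # no usable estimate: worth investigating
    ratio = a / e
    return ratio > factor or ratio < 1 / factor

print(misestimated("3.79M", "600.12K"))   # True: ~6.3x more rows than estimated
print(misestimated("57.22K", "15.00K"))   # False: within 5x of the estimate
```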

 

1.1.3  Using the Query Profile for Performance Tuning

The PROFILE statement, available in the impala-shell interpreter, produces a detailed low-level report showing how the most recent query was executed. Unlike the EXPLAIN plan described in Using the EXPLAIN Plan for Performance Tuning, this information is only available after the query has finished. It shows physical details such as the number of bytes read and the maximum memory usage for each node. You can use this information to determine whether the query is I/O-bound or CPU-bound, whether some network condition is imposing a bottleneck, whether a slowdown is affecting some nodes but not others, and to check that recommended configuration settings, such as short-circuit local reads, are in effect.

By default, the time values in the profile output reflect the wall-clock time taken by an operation. For values denoting system time or user time, the measurement unit is reflected in the metric name, such as ScannerThreadsSysTime or ScannerThreadsUserTime. For example, a multi-threaded I/O operation might show a small wall-clock time, while the corresponding system time is larger, representing the sum of the CPU time used by each thread. Alternatively, a wall-clock time figure might be larger because it counts time spent waiting, while the corresponding system and user time figures only measure the time during which the operation was actively using CPU cycles.
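The wall-clock versus CPU-time distinction is easy to demonstrate outside Impala. A minimal Python illustration, not Impala-specific: a sleep advances wall-clock time but consumes almost no CPU time, while computation advances both.

```python
import time

# Minimal illustration (not Impala-specific) of the distinction the profile
# draws: wall-clock time includes waiting, process CPU time does not.
start_wall = time.perf_counter()
start_cpu = time.process_time()

time.sleep(0.2)                             # waiting: wall clock advances, CPU time barely does
total = sum(i * i for i in range(200_000))  # computing: both advance

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"wall={wall:.3f}s cpu={cpu:.3f}s")   # wall includes the 0.2s sleep; cpu does not
```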

The EXPLAIN plan is also printed at the beginning of the query profile report, for convenience in examining the logical and physical aspects of the query side by side. The EXPLAIN_LEVEL query option also controls the amount of detail printed in this output.

Here is an example of a query profile, from a relatively simple query on a single-node pseudo-distributed cluster, chosen to keep the output relatively brief.

[localhost:21000]> profile;

Query Runtime Profile:

Query(id=6540a03d4bee0691:4963d6269b210ebd):

  Summary:

    Session ID:ea4a197f1c7bf858:c74e66f72e3a33ba

    Session Type: BEESWAX

    Start Time: 2013-12-02 17:10:30.263067000

    End Time: 2013-12-02 17:10:50.932044000

    Query Type: QUERY

    Query State: FINISHED

    Query Status: OK

    Impala Version: impalad version 1.2.1 RELEASE (build edb5af1bcad63d410bc5d47cc203df3a880e9324)

    User: cloudera

    Network Address: 127.0.0.1:49161

    Default Db: stats_testing

    Sql Statement: select t1.s, t2.s from t1 join t2 on (t1.id = t2.parent)

    Plan:

----------------

Estimated Per-Host Requirements: Memory=2.09GB VCores=2

 

PLAN FRAGMENT 0

  PARTITION: UNPARTITIONED

 

  4:EXCHANGE

     cardinality: unavailable

     per-host memory: unavailable

     tuple ids: 0 1

 

PLAN FRAGMENT 1

  PARTITION: RANDOM

 

  STREAM DATA SINK

    EXCHANGE ID: 4

    UNPARTITIONED

 

  2:HASH JOIN

  |  join op: INNER JOIN (BROADCAST)

  |  hash predicates:

  |    t1.id = t2.parent

  |  cardinality: unavailable

  |  per-host memory: 2.00GB

  |  tuple ids: 0 1

  |

  |----3:EXCHANGE

  |      cardinality: unavailable

  |      per-host memory: 0B

  |      tuple ids: 1

  |

  0:SCAN HDFS

     table=stats_testing.t1 #partitions=1/1 size=33B

     table stats: unavailable

     column stats: unavailable

     cardinality: unavailable

     per-host memory: 32.00MB

     tuple ids: 0

 

PLAN FRAGMENT 2

  PARTITION: RANDOM

 

  STREAM DATA SINK

    EXCHANGE ID: 3

    UNPARTITIONED

 

  1:SCAN HDFS

     table=stats_testing.t2 #partitions=1/1 size=960.00KB

     table stats: unavailable

     column stats: unavailable

     cardinality: unavailable

     per-host memory: 96.00MB

     tuple ids: 1

----------------

    Query Timeline: 20s670ms

       - Start execution: 2.559ms (2.559ms)

       - Planning finished: 23.587ms (21.27ms)

       - Rows available: 666.199ms (642.612ms)

       - First row fetched: 668.919ms (2.719ms)

       - Unregister query: 20s668ms (20s000ms)

  ImpalaServer:

     - ClientFetchWaitTimer: 19s637ms

     - RowMaterializationTimer: 167.121ms

  Execution Profile 6540a03d4bee0691:4963d6269b210ebd: (Active: 837.815ms, % non-child: 0.00%)

    Per Node Peak Memory Usage: impala-1.example.com:22000(7.42MB)

     - FinalizationTimer: 0ns

    Coordinator Fragment:(Active: 195.198ms, %non-child: 0.00%)

      MemoryUsage(500.0ms): 16.00 KB, 7.42 MB,7.33 MB, 7.10 MB, 6.94 MB, 6.71 MB, 6.56 MB, 6.40 MB, 6.17 MB, 6.02 MB, 5.79MB, 5.63 MB, 5.48 MB, 5.25 MB, 5.09 MB, 4.86 MB, 4.71 MB, 4.47 MB, 4.32 MB,4.09 MB, 3.93 MB, 3.78 MB, 3.55 MB, 3.39 MB, 3.16 MB, 3.01 MB, 2.78 MB, 2.62MB, 2.39 MB, 2.24 MB, 2.08 MB, 1.85 MB, 1.70 MB, 1.54 MB, 1.31 MB, 1.16 MB,948.00 KB, 790.00 KB, 553.00 KB, 395.00 KB, 237.00 KB

      ThreadUsage(500.0ms): 1

       - AverageThreadTokens: 1.00

       - PeakMemoryUsage: 7.42 MB

       -PrepareTime: 36.144us

       - RowsProduced: 98.30K (98304)

       - TotalCpuTime: 20s449ms

       - TotalNetworkWaitTime: 191.630ms

       - TotalStorageWaitTime: 0ns

      CodeGen:(Active: 150.679ms, % non-child:77.19%)

         - CodegenTime: 0ns

         - CompileTime: 139.503ms

         - LoadTime: 10.7ms

         - ModuleFileSize: 95.27 KB

      EXCHANGE_NODE (id=4):(Active: 194.858ms,% non-child: 99.83%)

         - BytesReceived: 2.33 MB

         - ConvertRowBatchTime: 2.732ms

         - DataArrivalWaitTime: 191.118ms

         - DeserializeRowBatchTimer: 14.943ms

         - FirstBatchArrivalWaitTime: 191.117ms

         - PeakMemoryUsage: 7.41 MB

         - RowsReturned: 98.30K (98304)

         - RowsReturnedRate: 504.49 K/sec

         - SendersBlockedTimer: 0ns

         - SendersBlockedTotalTimer(*): 0ns

    Averaged Fragment 1:(Active: 442.360ms, %non-child: 0.00%)

      split sizes:  min: 33.00 B, max: 33.00 B, avg: 33.00 B,stddev: 0.00

      completion times: min:443.720ms  max:443.720ms mean: 443.720ms  stddev:0ns

      execution rates: min:74.00 B/sec  max:74.00 B/sec  mean:74.00 B/sec  stddev:0.00 /sec

      num instances: 1

       - AverageThreadTokens: 1.00

       - PeakMemoryUsage: 6.06 MB

       - PrepareTime: 7.291ms

       - RowsProduced: 98.30K (98304)

       - TotalCpuTime: 784.259ms

       - TotalNetworkWaitTime: 388.818ms

       - TotalStorageWaitTime: 3.934ms

      CodeGen:(Active: 312.862ms, % non-child:70.73%)

         - CodegenTime: 2.669ms

         - CompileTime: 302.467ms

         - LoadTime: 9.231ms

         - ModuleFileSize: 95.27 KB

      DataStreamSender (dst_id=4):(Active:80.63ms, % non-child: 18.10%)

         - BytesSent: 2.33 MB

         - NetworkThroughput(*): 35.89 MB/sec

         - OverallThroughput: 29.06 MB/sec

         - PeakMemoryUsage: 5.33 KB

         - SerializeBatchTime: 26.487ms

         - ThriftTransmitTime(*): 64.814ms

         - UncompressedRowBatchSize: 6.66 MB

      HASH_JOIN_NODE (id=2):(Active: 362.25ms,% non-child: 3.92%)

         - BuildBuckets: 1.02K (1024)

         - BuildRows: 98.30K (98304)

         - BuildTime: 12.622ms

         - LoadFactor: 0.00

         - PeakMemoryUsage: 6.02 MB

         - ProbeRows: 3

         - ProbeTime: 3.579ms

         - RowsReturned: 98.30K (98304)

         - RowsReturnedRate: 271.54 K/sec

        EXCHANGE_NODE (id=3):(Active:344.680ms, % non-child: 77.92%)

           - BytesReceived: 1.15 MB

           - ConvertRowBatchTime: 2.792ms

           - DataArrivalWaitTime: 339.936ms

           - DeserializeRowBatchTimer: 9.910ms

           - FirstBatchArrivalWaitTime:199.474ms

           - PeakMemoryUsage: 156.00 KB

           - RowsReturned: 98.30K (98304)

           - RowsReturnedRate: 285.20 K/sec

           - SendersBlockedTimer: 0ns

           - SendersBlockedTotalTimer(*): 0ns

      HDFS_SCAN_NODE (id=0):(Active: 13.616us,% non-child: 0.00%)

         - AverageHdfsReadThreadConcurrency:0.00

         - AverageScannerThreadConcurrency:0.00

         - BytesRead: 33.00 B

         - BytesReadLocal: 33.00 B

         - BytesReadShortCircuit: 33.00 B

         - NumDisksAccessed: 1

         - NumScannerThreadsStarted: 1

         - PeakMemoryUsage: 46.00 KB

         - PerReadThreadRawHdfsThroughput:287.52 KB/sec

         - RowsRead: 3

         - RowsReturned: 3

         - RowsReturnedRate: 220.33 K/sec

         - ScanRangesComplete: 1

         -ScannerThreadsInvoluntaryContextSwitches: 26

         - ScannerThreadsTotalWallClockTime:55.199ms

           - DelimiterParseTime: 2.463us

           - MaterializeTupleTime(*): 1.226us

           - ScannerThreadsSysTime: 0ns

           - ScannerThreadsUserTime: 42.993ms

         -ScannerThreadsVoluntaryContextSwitches: 1

         - TotalRawHdfsReadTime(*): 112.86us

         - TotalReadThroughput: 0.00 /sec

    Averaged Fragment 2:(Active: 190.120ms, %non-child: 0.00%)

      split sizes:  min: 960.00 KB, max: 960.00 KB, avg: 960.00KB, stddev: 0.00

      completion times: min:191.736ms  max:191.736ms mean: 191.736ms  stddev:0ns

      execution rates: min:4.89 MB/sec  max:4.89 MB/sec  mean:4.89 MB/sec  stddev:0.00 /sec

      num instances: 1

       - AverageThreadTokens: 0.00

       - PeakMemoryUsage: 906.33 KB

       - PrepareTime: 3.67ms

       - RowsProduced: 98.30K (98304)

       - TotalCpuTime: 403.351ms

       - TotalNetworkWaitTime: 34.999ms

       - TotalStorageWaitTime: 108.675ms

      CodeGen:(Active: 162.57ms, % non-child:85.24%)

         - CodegenTime: 3.133ms

         - CompileTime: 148.316ms

         - LoadTime: 12.317ms

         - ModuleFileSize: 95.27 KB

      DataStreamSender (dst_id=3):(Active:70.620ms, % non-child: 37.14%)

         - BytesSent: 1.15 MB

         - NetworkThroughput(*): 23.30 MB/sec

         - OverallThroughput: 16.23 MB/sec

         - PeakMemoryUsage: 5.33 KB

         - SerializeBatchTime: 22.69ms

         - ThriftTransmitTime(*): 49.178ms

         - UncompressedRowBatchSize: 3.28 MB

      HDFS_SCAN_NODE (id=1):(Active: 118.839ms,% non-child: 62.51%)

         - AverageHdfsReadThreadConcurrency:0.00

         - AverageScannerThreadConcurrency:0.00

         - BytesRead: 960.00 KB

         - BytesReadLocal: 960.00 KB

         - BytesReadShortCircuit: 960.00 KB

         - NumDisksAccessed: 1

         - NumScannerThreadsStarted: 1

         - PeakMemoryUsage: 869.00 KB

         - PerReadThreadRawHdfsThroughput:130.21 MB/sec

         - RowsRead: 98.30K (98304)

         - RowsReturned: 98.30K (98304)

         - RowsReturnedRate: 827.20 K/sec

         - ScanRangesComplete: 15

         -ScannerThreadsInvoluntaryContextSwitches: 34

         - ScannerThreadsTotalWallClockTime:189.774ms

           - DelimiterParseTime: 15.703ms

           - MaterializeTupleTime(*): 3.419ms

           - ScannerThreadsSysTime: 1.999ms

           - ScannerThreadsUserTime: 44.993ms

         -ScannerThreadsVoluntaryContextSwitches: 118

         - TotalRawHdfsReadTime(*): 7.199ms

         - TotalReadThroughput: 0.00 /sec

    Fragment 1:

      Instance 6540a03d4bee0691:4963d6269b210ebf (host=impala-1.example.com:22000): (Active: 442.360ms, % non-child: 0.00%)

        Hdfs split stats (<volumeid>:<# splits>/<split lengths>): 0:1/33.00 B

        MemoryUsage(500.0ms): 69.33 KB

        ThreadUsage(500.0ms): 1

         - AverageThreadTokens: 1.00

         - PeakMemoryUsage: 6.06 MB

         - PrepareTime: 7.291ms

         - RowsProduced: 98.30K (98304)

         - TotalCpuTime: 784.259ms

         - TotalNetworkWaitTime: 388.818ms

         - TotalStorageWaitTime: 3.934ms

        CodeGen:(Active: 312.862ms, %non-child: 70.73%)

           - CodegenTime: 2.669ms

           - CompileTime: 302.467ms

           - LoadTime: 9.231ms

           - ModuleFileSize: 95.27 KB

        DataStreamSender (dst_id=4):(Active:80.63ms, % non-child: 18.10%)

           - BytesSent: 2.33 MB

           - NetworkThroughput(*): 35.89 MB/sec

           - OverallThroughput: 29.06 MB/sec

           - PeakMemoryUsage: 5.33 KB

           - SerializeBatchTime: 26.487ms

           - ThriftTransmitTime(*): 64.814ms

           - UncompressedRowBatchSize: 6.66 MB

        HASH_JOIN_NODE (id=2):(Active:362.25ms, % non-child: 3.92%)

          ExecOption: Build Side Codegen Enabled,Probe Side Codegen Enabled, Hash Table Built Asynchronously

           - BuildBuckets: 1.02K (1024)

           - BuildRows: 98.30K (98304)

           - BuildTime: 12.622ms

           - LoadFactor: 0.00

           - PeakMemoryUsage: 6.02 MB

           - ProbeRows: 3

           - ProbeTime: 3.579ms

           - RowsReturned: 98.30K (98304)

           - RowsReturnedRate: 271.54 K/sec

          EXCHANGE_NODE (id=3):(Active:344.680ms, % non-child: 77.92%)

             - BytesReceived: 1.15 MB

             - ConvertRowBatchTime: 2.792ms

             - DataArrivalWaitTime: 339.936ms

             - DeserializeRowBatchTimer:9.910ms

             - FirstBatchArrivalWaitTime:199.474ms

             - PeakMemoryUsage: 156.00 KB

             - RowsReturned: 98.30K (98304)

             - RowsReturnedRate: 285.20 K/sec

             - SendersBlockedTimer: 0ns

             - SendersBlockedTotalTimer(*): 0ns

        HDFS_SCAN_NODE (id=0):(Active:13.616us, % non-child: 0.00%)

          Hdfs split stats (<volumeid>:<# splits>/<split lengths>): 0:1/33.00 B

          Hdfs Read Thread Concurrency Bucket:0:0% 1:0%

          File Formats: TEXT/NONE:1

          ExecOption: Codegen enabled: 1 out of1

           - AverageHdfsReadThreadConcurrency:0.00

           - AverageScannerThreadConcurrency: 0.00

           - BytesRead: 33.00 B

           - BytesReadLocal: 33.00 B

           - BytesReadShortCircuit: 33.00 B

           - NumDisksAccessed: 1

           - NumScannerThreadsStarted: 1

           - PeakMemoryUsage: 46.00 KB

           - PerReadThreadRawHdfsThroughput:287.52 KB/sec

           - RowsRead: 3

           - RowsReturned: 3

           - RowsReturnedRate: 220.33 K/sec

           - ScanRangesComplete: 1

           -ScannerThreadsInvoluntaryContextSwitches: 26

           - ScannerThreadsTotalWallClockTime:55.199ms

             - DelimiterParseTime: 2.463us

             - MaterializeTupleTime(*): 1.226us

             - ScannerThreadsSysTime: 0ns

             - ScannerThreadsUserTime: 42.993ms

           -ScannerThreadsVoluntaryContextSwitches: 1

           - TotalRawHdfsReadTime(*): 112.86us

           - TotalReadThroughput: 0.00 /sec

    Fragment 2:

      Instance 6540a03d4bee0691:4963d6269b210ec0 (host=impala-1.example.com:22000): (Active: 190.120ms, % non-child: 0.00%)

        Hdfs split stats (<volumeid>:<# splits>/<split lengths>): 0:15/960.00 KB

         - AverageThreadTokens: 0.00

         - PeakMemoryUsage: 906.33 KB

         - PrepareTime: 3.67ms

         - RowsProduced: 98.30K (98304)

         - TotalCpuTime: 403.351ms

         - TotalNetworkWaitTime: 34.999ms

         - TotalStorageWaitTime: 108.675ms

        CodeGen:(Active: 162.57ms, % non-child:85.24%)

           - CodegenTime: 3.133ms

           - CompileTime: 148.316ms

           - LoadTime: 12.317ms

           - ModuleFileSize: 95.27 KB

        DataStreamSender (dst_id=3):(Active:70.620ms, % non-child: 37.14%)

           - BytesSent: 1.15 MB

           - NetworkThroughput(*): 23.30 MB/sec

           - OverallThroughput: 16.23 MB/sec

           - PeakMemoryUsage: 5.33 KB

           - SerializeBatchTime: 22.69ms

           - ThriftTransmitTime(*): 49.178ms

           - UncompressedRowBatchSize: 3.28 MB

        HDFS_SCAN_NODE (id=1):(Active:118.839ms, % non-child: 62.51%)

          Hdfs split stats (<volumeid>:<# splits>/<split lengths>): 0:15/960.00 KB

          Hdfs Read Thread Concurrency Bucket:0:0% 1:0%

          File Formats: TEXT/NONE:15

          ExecOption: Codegen enabled: 15 outof 15

           - AverageHdfsReadThreadConcurrency:0.00

           - AverageScannerThreadConcurrency:0.00

           - BytesRead: 960.00 KB

           - BytesReadLocal: 960.00 KB

           - BytesReadShortCircuit: 960.00 KB

           - NumDisksAccessed: 1

           - NumScannerThreadsStarted: 1

           - PeakMemoryUsage: 869.00 KB

           - PerReadThreadRawHdfsThroughput:130.21 MB/sec

           - RowsRead: 98.30K (98304)

           - RowsReturned: 98.30K (98304)

           - RowsReturnedRate: 827.20 K/sec

           - ScanRangesComplete: 15

           -ScannerThreadsInvoluntaryContextSwitches: 34

           - ScannerThreadsTotalWallClockTime:189.774ms

             - DelimiterParseTime: 15.703ms

             - MaterializeTupleTime(*): 3.419ms

             - ScannerThreadsSysTime: 1.999ms

             - ScannerThreadsUserTime: 44.993ms

           -ScannerThreadsVoluntaryContextSwitches: 118

           - TotalRawHdfsReadTime(*): 7.199ms

           - TotalReadThroughput: 0.00 /sec
