The statement executed is:
explain analyze select count(*) from L_T054 a join L_T120 b on a.d020=b.d020
The execution plan is:
Aggregate  (cost=1520822218.88..1520822218.89 rows=1 width=8)
  Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:hawq2/seg-1:hawq2) 1/1 rows with 6860764/6860764 ms to end, start offset by 1.168/1.168 ms.
  ->  Gather Motion 6:1 (slice1; segments: 6)  (cost=1520822218.78..1520822218.86 rows=1 width=8)
        Rows out: Avg 6.0 rows x 1 workers at destination. Max/Last(seg-1:hawq2/seg-1:hawq2) 6/6 rows with 3320407/3320407 ms to first row, 6860764/6860764 ms to end, start offset by 1.170/1.170 ms.
        ->  Aggregate  (cost=1520822218.78..1520822218.79 rows=1 width=8)
              Rows out: Avg 1.0 rows x 6 workers. Max/Last(seg5:hawq2/seg0:hawq2) 1/1 rows with 4544506/6860758 ms to end, start offset by 6.410/6.929 ms.
              ->  Hash Join  (cost=7349612.80..1269920578.75 rows=16726776002 width=0)
                    Hash Cond: b.d020 = a.d020
                    Rows out: Avg 26902546221.0 rows x 6 workers. Max/Last(seg0:hawq2/seg0:hawq2) 40017551225/40017551225 rows with 20615/20615 ms to first row, 4431187/4431187 ms to end, start offset by 6.929/6.929 ms.
                    Executor memory: 450570K bytes avg, 450570K bytes max (seg5:hawq2).
                    Work_mem used: 206760K bytes avg, 207884K bytes max (seg0:hawq2). Workfile: (6 spilling, 0 reused)
                    Work_mem wanted: 1633616K bytes avg, 1638375K bytes max (seg4:hawq2) to lessen workfile I/O affecting 6 workers.
                    (seg0) Initial batch 0:
                      (seg0) Wrote 1003408K bytes to inner workfile.
                      (seg0) Wrote 1805360K bytes to outer workfile.
                    (seg0) Initial batches 1..7:
                      (seg0) Read 2006660K bytes from inner workfile: 286666K avg x 7 nonempty batches, 291073K max.
                      (seg0) Read 3610383K bytes from outer workfile: 515769K avg x 7 nonempty batches, 529994K max.
                    (seg0) Hash chain length 6.7 avg, 24681 max, using 6266927 of 16777688 buckets.
                    (seg4) Initial batch 0:
                      (seg4) Wrote 1003360K bytes to inner workfile.
                      (seg4) Wrote 1797552K bytes to outer workfile.
                    (seg4) Initial batches 1..7:
                      (seg4) Read 2006596K bytes from inner workfile: 286657K avg x 7 nonempty batches, 289127K max.
                      (seg4) Read 3594787K bytes from outer workfile: 513541K avg x 7 nonempty batches, 527403K max.
                    (seg4) Hash chain length 6.7 avg, 33890 max, using 6269397 of 16777688 buckets.
                    ->  Parquet table Scan on l_t120 b  (cost=0.00..5267903.24 rows=74506971 width=17)
                          Rows out: Avg 74506971.2 rows x 6 workers. Max/Last(seg0:hawq2/seg0:hawq2) 75291213/75291213 rows with 16/16 ms to first row, 11700/11700 ms to end, start offset by 6.930/6.930 ms.
                    ->  Hash  (cost=3845507.80..3845507.80 rows=41820547 width=17)
                          Rows in: Avg 5245166.5 rows x 6 workers. Max/Last(seg5:hawq2/seg5:hawq2) 5310331/5310331 rows with 21133/21133 ms to end, start offset by 41/41 ms.
                          ->  Parquet table Scan on l_t054 a  (cost=0.00..3845507.80 rows=41820547 width=17)
                                Rows out: Avg 41820546.7 rows x 6 workers. Max/Last(seg4:hawq2/seg4:hawq2) 41942400/41942400 rows with 6.674/6.674 ms to first row, 12494/12494 ms to end, start offset by 48/48 ms.
Slice statistics:
  (slice0) Executor memory: 273K bytes.
  (slice1) * Executor memory: 451832K bytes avg x 6 workers, 452502K bytes max (seg5:hawq2). Work_mem: 207884K bytes max, 1638375K bytes wanted.
Statement statistics:
  Memory used: 262144K bytes
  Memory wanted: 1639807K bytes
Settings: default_hash_table_bucket_number=6
Dispatcher statistics:
  executors used(total/cached/new connection): (6/6/0); dispatcher time(total/connection/dispatch data): (0.443 ms/0.000 ms/0.279 ms).
  dispatch data time(max/min/avg): (0.156 ms/0.017 ms/0.045 ms); consume executor data time(max/min/avg): (0.047 ms/0.034 ms/0.039 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
Data locality statistics:
  data locality ratio: 1.000; virtual segment number: 6; different host number: 1; virtual segment number per host(avg/min/max): (6/6/6); segment size(avg/min/max): (11653172379.833 B/11604600658 B/11715153390 B); segment size with penalty(avg/min/max): (0 (...)
Total runtime: 6860774.422 ms
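Reading the plan: the scans feed only ~75M (l_t120) and ~42M (l_t054) rows per worker into the Hash Join, yet the join emits an average of ~26.9 billion rows per worker (~161 billion rows total) before the Aggregate counts them, and the 1.6 GB of wanted work_mem spills to workfiles. Output that much larger than either input means d020 is heavily duplicated on both sides: an equi-join emits, for each key value, (matches on the left) × (matches on the right) rows. A minimal sketch of that fan-out effect, with made-up toy keys rather than real d020 values:

```python
from collections import Counter

def join_output_rows(left_keys, right_keys):
    """Rows an inner equi-join produces: sum over each key of
    (occurrences on the left) * (occurrences on the right)."""
    left, right = Counter(left_keys), Counter(right_keys)
    return sum(n * right[k] for k, n in left.items())

# Each side has only 4 rows, but the duplicated key 'x' dominates the output:
# 'x' contributes 3 * 2 = 6 rows, 'y' contributes 1 * 1 = 1, 'z' matches nothing.
a = ['x', 'x', 'x', 'y']
b = ['x', 'x', 'z', 'y']
print(join_output_rows(a, b))  # 7
```

If the intent was only `count(*)` over matching keys rather than over every matched pair, de-duplicating d020 on one or both sides before joining would shrink the intermediate result by orders of magnitude and likely eliminate the workfile spill.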