Optimizing count(distinct xx)

postgres=# CREATE TABLE t1(f1 serial not null unique,f2 text,f3 text,f4 text,f5 text,f6 text,f7 text,f8 text,f9 text,f10 text,f11 text,f12 text) distribute by shard(f1);
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE
Time: 89.938 ms
postgres=# insert into t1 select t,md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text) from generate_series(1,1000000) as t;
INSERT 0 1000000
Time: 14849.045 ms (00:14.849)
postgres=# analyze t1;
ANALYZE
Time: 1340.387 ms (00:01.340)
postgres=# explain (verbose) select count(distinct f2) from t1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=103320.00..103320.01 rows=1 width=8)
   Output: count(DISTINCT f2)
   ->  Remote Subquery Scan on all (dn01,dn02,dn03,dn04,dn05,dn06,dn07,dn08,dn09,dn10)  (cost=100.00..100820.00 rows=1000000 width=33)
         Output: f2
         ->  Seq Scan on public.t1  (cost=0.00..62720.00 rows=1000000 width=33)
               Output: f2
(6 rows)

Time: 0.748 ms
postgres=# select count(distinct f2) from t1;
count
---------
 1000000
(1 row)

Time: 6274.684 ms (00:06.275)
postgres=# select count(distinct f2) from t1 where f1 <10;
 count
-------
     9
(1 row)

Time: 19.261 ms

As the plan above shows, count(distinct f2) is evaluated on the CN (coordinator) node. For TP workloads that operate on only a small amount of data per query, this overhead is acceptable, and is in fact lower than pushing the execution down. But for AP workloads that process a large amount of data in a single query, the network transfer to the CN becomes the bottleneck. The query can be rewritten so that the work is pushed down to the DNs, as shown below.
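The rewrite moves the de-duplication into a GROUP BY subquery, which the planner can distribute; both statements below are taken verbatim from this section's transcript:

-- Original form: DISTINCT values are collected and counted on the CN,
-- so every f2 value crosses the network to the coordinator.
select count(distinct f2) from t1;

-- Rewritten form: each DN groups its own rows first, and only the
-- de-duplicated groups feed the final count.
select count(1) from (select f2 from t1 group by f2) as t;

The execution plan for the rewritten query: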
postgres=# explain (verbose) select count(1) from (select f2 from t1 group by f2) as t ;
                                                                            QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=355600.70..355600.71 rows=1 width=8)
   Output: count(1)
   ->  Remote Subquery Scan on all (dn01,dn02,dn03,dn04,dn05,dn06,dn07,dn08,dn09,dn10)  (cost=355600.69..355600.70 rows=1 width=0)
         Output: PARTIAL count(1)
         ->  Partial Aggregate  (cost=355500.69..355500.70 rows=1 width=8)
               Output: PARTIAL count(1)
               ->  Group  (cost=340500.69..345500.69 rows=1000000 width=33)
                     Output: t1.f2
                     Group Key: t1.f2
                     ->  Sort  (cost=340500.69..343000.69 rows=1000000 width=0)
                           Output: t1.f2
                           Sort Key: t1.f2
                           ->  Remote Subquery Scan on all (dn01,dn02,dn03,dn04,dn05,dn06,dn07,dn08,dn09,dn10)  (cost=216192.84..226192.84 rows=1000000 width=0)
                                 Output: t1.f2
                                 Distribute results by S: f2
                                 ->  Group  (cost=216092.84..221092.84 rows=1000000 width=33)
                                       Output: t1.f2
                                       Group Key: t1.f2
                                       ->  Sort  (cost=216092.84..218592.84 rows=1000000 width=33)
                                             Output: t1.f2
                                             Sort Key: t1.f2
                                             ->  Seq Scan on public.t1  (cost=0.00..62720.00 rows=1000000 width=33)
                                                   Output: t1.f2
(23 rows)

After the rewrite, the work is pushed down to the DNs and executed in parallel. Timing the rewritten query shows that for AP workloads computing over a large data set, performance improves about 5x (6,274 ms down to 1,328 ms):
postgres=# select count(1) from (select f2 from t1 group by f2) as t ;
  count
---------
 1000000
(1 row)

Time: 1328.431 ms (00:01.328)
postgres=# select count(1) from (select f2 from t1 where f1<10 group by f2) as t ;
 count
-------
     9
(1 row)

Time: 24.991 ms

Increasing work_mem to reduce IO access

After increasing work_mem, performance in the test below improved about 40x. Once work_mem is large enough to hold the data being filtered, the executor no longer needs a Materialize step: the filter changes from a plain SubPlan into a hashed SubPlan, filtering directly against an in-memory hash table. Note that the server-wide default for work_mem should not be set too high; it is better to raise it only where needed, for the specific query in question.
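A minimal sketch of scoping such a change to one session, using standard PostgreSQL syntax (the 256MB value is illustrative, not from the original):

set work_mem = '256MB';   -- illustrative value; size it to the query at hand
-- ... run the query that needs the extra sort/hash memory ...
reset work_mem;           -- return to the server default

-- Inside a transaction, SET LOCAL confines the change to that transaction:
begin;
set local work_mem = '256MB';
-- ... query ...
commit;

The transcript below sets up the test tables, t1 with 1,000 rows and t2 with 50,000: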
postgres=# CREATE TABLE t1(f1 serial not null unique,f2 text,f3 text,f4 text,f5 text,f6 text,f7 text,f8 text,f9 text,f10 text,f11 text,f12 text) distribute by shard(f1);
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE
Time: 70.545 ms
postgres=# CREATE TABLE t2(f1 serial not null unique,f2 text,f3 text,f4 text,f5 text,f6 text,f7 text,f8 text,f9 text,f10 text,f11 text,f12 text) distribute by shard(f1);
NOTICE: Replica identity is needed for shard table, please add to this table through "alter table" command.
CREATE TABLE
Time: 61.913 ms
postgres=# insert into t1 select t,md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text) from generate_series(1,1000) as t;
INSERT 0 1000
Time: 48.866 ms
postgres=# insert into t2 select t,md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text),md5(t::text) from generate_series(1,50000) as t;
INSERT 0 50000
Time: 792.858 ms
postgres=# analyze t1;
ANALYZE
Time: 175.946 ms
postgres=# analyze t2;
ANALYZE
Time: 318.802 ms
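The original transcript breaks off after this setup. As a hedged sketch only (the exact statement is an assumption, not from the source), the SubPlan-versus-hashed-SubPlan behavior described above can be observed with a subquery filter such as:

-- Hypothetical illustration: with a small work_mem, EXPLAIN shows the
-- NOT IN filter as a plain SubPlan over a Materialize node, and the
-- materialized t2 rows are re-scanned for every row of t1. With work_mem
-- large enough to hold the 50,000 f2 values, the plan shows a hashed
-- SubPlan instead: t2's values are loaded into an in-memory hash table
-- and each t1 row is probed against it.
explain (verbose)
select count(1) from t1 where f2 not in (select f2 from t2);

set work_mem = '64MB';    -- illustrative value
explain (verbose)
select count(1) from t1 where f2 not in (select f2 from t2);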