PostgreSQL Series: Parallel Query

This article is a set of reading notes on 《postgresql实战》; if you are interested, see the corresponding chapter of that book.

1. Parallel Query

PostgreSQL introduced parallel query support in 9.6, but its scope was quite limited; it was further enhanced in PostgreSQL 10.

1.1 Parallel Query Parameters

  • max_worker_processes (integer): the maximum number of background worker processes the system supports; default 8. Changing it requires a database restart. On a standby server this must be set at least as high as on the primary.
  • max_parallel_workers (integer): the maximum number of workers the system can use for parallel queries; default 8. It must not exceed max_worker_processes.
  • max_parallel_workers_per_gather (integer): the maximum number of parallel workers a single Gather node is allowed to start; default 2. Setting it to 0 disables parallel query.
  • parallel_setup_cost (floating point): the planner's estimated cost of launching parallel worker processes; default 1000.
  • parallel_tuple_cost (floating point): the planner's estimated cost of passing one row from a parallel worker to the leader; default 0.1.
  • min_parallel_table_scan_size (integer): one of the thresholds for using parallelism; a table smaller than this will not be scanned in parallel. For a parallel sequential scan the amount of data scanned is usually the whole table; default 8MB.
  • min_parallel_index_scan_size (integer): one of the thresholds for using parallelism; a parallel index scan does not read every block of the index, only the blocks relevant to the scan; default 512kB.
  • force_parallel_mode (enum): forces the use of parallel plans; enabling it in an OLTP production environment should be considered carefully and is generally not recommended.

Parallel queries consume more CPU, I/O, and memory, which can have a noticeable impact on production systems; keep these factors in mind when using them.
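To see which backend processes are currently acting as parallel workers, pg_stat_activity can be queried; PostgreSQL 10 adds a backend_type column that identifies them. A minimal sketch (not from the book):

-- list active parallel worker backends and the query they are assisting
SELECT pid, backend_type, state, query
FROM pg_stat_activity
WHERE backend_type = 'parallel worker';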

All of these parameters are set in the postgresql.conf configuration file:

# max_worker_processes = 8
# max_parallel_workers_per_gather = 2
# max_parallel_workers = 8
# cpu_tuple_cost = 0.01	
# parallel_tuple_cost = 0.1		
# parallel_setup_cost = 1000.0	
# min_parallel_table_scan_size = 8MB
# min_parallel_index_scan_size = 512kB
# force_parallel_mode = off

They can also be changed with a command, for example: SET max_parallel_workers_per_gather = 4; changes the number of parallel workers allowed per Gather node to 4 for the current session.
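A change can also be persisted across sessions without editing postgresql.conf by hand. A small sketch, assuming superuser privileges (ALTER SYSTEM writes the value to postgresql.auto.conf; note that max_worker_processes would still require a restart):

-- check the current value
SHOW max_parallel_workers_per_gather;
-- persist a new value and reload the configuration (no restart needed for this parameter)
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;
SELECT pg_reload_conf();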


Note: the examples below were run on CentOS 7 with 4 CPUs, 4 GB of RAM, and PostgreSQL 10.

1.2 Parallel Sequential Scan

Before discussing parallel sequential scans, let's start with the plain sequential scan. A sequential scan, often called a full table scan, reads every row in the table. When the table is large, a full table scan uses a lot of CPU, memory, and I/O and has a significant impact on database performance, so it should be avoided as much as possible in OLTP systems.

-- create the table
CREATE TABLE test_big(
	id int4,
	name character varying(32),
	create_time timestamp without time zone DEFAULT clock_timestamp()
);
-- insert data
INSERT INTO test_big(id,name)
SELECT n, n||'_test' FROM generate_series(1,50000000) n;
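After loading 50 million rows it is worth refreshing the planner statistics and checking how much space the table occupies, since min_parallel_table_scan_size is one of the criteria for choosing a parallel scan. A small sketch (the reported size will vary by environment):

-- refresh planner statistics for test_big
ANALYZE test_big;
-- check the space occupied by the table
SELECT pg_size_pretty(pg_total_relation_size('test_big'));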

A sequential scan example:

EXPLAIN SELECT * FROM test_big WHERE name = '1_test';
-- the execution plan looks like this
 Seq Scan on test_big  (cost=0.00..492704.75 rows=50416 width=94)
        Filter: ((name)::text = '1_test'::text)

Seq Scan on test_big indicates that a sequential scan was performed on the test_big table. Starting with 9.6, the sequential scan can run in parallel: a parallel sequential scan launches several worker processes and uses multiple logical CPUs to scan the whole table in parallel.

-- a parallel sequential scan plan looks like this:
EXPLAIN ANALYSE SELECT * FROM test_big WHERE name = '1_test';

-- execution plan
Gather  (cost=1000.00..628070.20 rows=1 width=25) (actual time=38.430..23407.448 rows=1 loops=1)
  Workers Planned: 2
  Workers Launched: 2
  ->  Parallel Seq Scan on test_big  (cost=0.00..627070.10 rows=1 width=25) (actual time=15478.344..23268.016 rows=0 loops=3)
        Filter: ((name)::text = '1_test'::text)
        Rows Removed by Filter: 16666666
Planning time: 0.254 ms
Execution time: 23745.391 ms
  • Workers Planned: the number of parallel workers the planner expected to use
  • Workers Launched: the number of parallel workers the query actually obtained at run time
  • Parallel Seq Scan on test_big: a parallel sequential scan
  • Planning time: the time spent generating the execution plan
  • Execution time: the actual SQL execution time
-- plan with parallelism disabled
set max_parallel_workers_per_gather = 0;

-- run the query
EXPLAIN ANALYSE SELECT * FROM test_big WHERE name = '1_test';

Seq Scan on test_big  (cost=0.00..991638.65 rows=1 width=25) (actual time=0.328..8264.828 rows=1 loops=1)
  Filter: ((name)::text = '1_test'::text)
  Rows Removed by Filter: 49999999
Planning time: 0.084 ms
Execution time: 8264.860 ms

In this run the query with parallelism disabled was actually much faster than the parallel run (23745.391 ms --> 8264.860 ms); the parallel execution ran first, so part of the difference is likely due to the data already being cached for the second run.
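Besides the session-level parameters, the number of workers the planner assumes for scans of a particular table can be overridden with the parallel_workers storage parameter. A hedged sketch (the number of workers actually launched is still capped by max_parallel_workers_per_gather and max_parallel_workers):

-- ask the planner to assume 4 workers for scans of this table
ALTER TABLE test_big SET (parallel_workers = 4);
-- remove the override and fall back to the size-based default
ALTER TABLE test_big RESET (parallel_workers);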


1.3 Parallel Index Scan

Now create an index on the id column of test_big.

-- create an index
CREATE INDEX idx_test_big_id ON test_big USING btree(id);
EXPLAIN SELECT * FROM test_big WHERE id = 1;
-- execution plan
Index Scan using idx_test_big_id on test_big  (cost=0.56..8.58 rows=1 width=25)
  Index Cond: (id = 1)

Index Scan using ... indicates that the plan performs an index scan. Index scans also support parallelism, which is called a parallel index scan.

(1) Run the following SQL to count the records with id smaller than 10 million.

EXPLAIN ANALYSE SELECT count(name) FROM test_big WHERE id < 10000000;

-- execution plan
Finalize Aggregate  (cost=288429.73..288429.74 rows=1 width=8) (actual time=2424.169..2424.169 rows=1 loops=1)
  ->  Gather  (cost=288429.31..288429.72 rows=4 width=8) (actual time=2423.668..2424.161 rows=5 loops=1)
        Workers Planned: 4
        Workers Launched: 4
        ->  Partial Aggregate  (cost=287429.31..287429.32 rows=1 width=8) (actual time=2308.612..2308.612 rows=1 loops=5)
              ->  Parallel Index Scan using idx_test_big_id on test_big  (cost=0.56..281219.03 rows=2484112 width=13) (actual time=0.284..1937.246 rows=2000000 loops=5)
                    Index Cond: (id < 10000000)
Planning time: 8.895 ms
Execution time: 2444.675 ms

The plan shows a parallel index scan with 4 parallel workers launched. The execution time was 2444.675 ms.

(2) Disable parallel query and look at the plan.

-- disable parallel query at the session level
SET max_parallel_workers_per_gather = 0;
-- run the query
EXPLAIN ANALYSE SELECT count(name) FROM test_big WHERE id < 10000000;

Aggregate  (cost=380583.48..380583.50 rows=1 width=8) (actual time=3634.703..3634.703 rows=1 loops=1)
  ->  Index Scan using idx_test_big_id on test_big  (cost=0.56..355742.37 rows=9936446 width=13) (actual time=0.202..2687.545 rows=9999999 loops=1)
        Index Cond: (id < 10000000)
Planning time: 0.123 ms
Execution time: 3634.746 ms

This time a plain (non-parallel) index scan was used; the execution time was 3634.746 ms, slower than with parallelism enabled.


Parallel index scan support in PostgreSQL 10 improves the performance of range-scan SQL, but since parallelism consumes more CPU, memory, and I/O, the number of parallel workers should be chosen carefully. As of PostgreSQL 10, parallel index scans are only supported for btree indexes.


1.4 Parallel Index-Only Scan

An index-only scan is a scan that only needs to read the index: the SQL statement can retrieve all the data it needs from the index alone, without going back to the table through the index.

(1) Count the records with id smaller than 1 million.

-- disable parallelism at the session level
SET max_parallel_workers_per_gather = 0;
-- run the query
EXPLAIN SELECT count(*) FROM test_big WHERE id < 1000000;
-- execution plan
Aggregate  (cost=39101.47..39101.48 rows=1 width=8)
  ->  Index Only Scan using idx_test_big_id on test_big  (cost=0.56..36549.48 rows=1020795 width=0)
        Index Cond: (id < 1000000)

Because the id column is indexed and counting the rows does not require any other columns from the table, an index-only scan is used.

-- SET max_parallel_workers_per_gather = 0;
EXPLAIN ANALYSE SELECT count(*) FROM test_big WHERE id < 1000000;

-- execution plan
Aggregate  (cost=39101.47..39101.48 rows=1 width=8) (actual time=251.160..251.160 rows=1 loops=1)
  ->  Index Only Scan using idx_test_big_id on test_big  (cost=0.56..36549.48 rows=1020795 width=0) (actual time=0.039..185.308 rows=999999 loops=1)
        Index Cond: (id < 1000000)
        Heap Fetches: 999999
Planning time: 0.096 ms
Execution time: 251.196 ms
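Note the Heap Fetches: 999999 line above: even the index-only scan had to visit the heap for every row, because the visibility map of this freshly loaded table is not yet up to date. A sketch of how to address that (Heap Fetches should drop to or near 0 afterwards):

-- update the visibility map so a true index-only scan becomes possible
VACUUM test_big;
-- rerun the query and compare the Heap Fetches line
EXPLAIN ANALYSE SELECT count(*) FROM test_big WHERE id < 1000000;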

Index-only scans also support parallelism, which is called a parallel index-only scan.

(2) Enable parallelism and look at the plan.

-- enable parallelism
SET max_parallel_workers_per_gather = 4;
EXPLAIN ANALYSE SELECT count(*) FROM test_big WHERE id < 1000000;

-- execution plan
Finalize Aggregate  (cost=26783.21..26783.22 rows=1 width=8) (actual time=180.263..180.263 rows=1 loops=1)
  ->  Gather  (cost=26782.79..26783.20 rows=4 width=8) (actual time=180.242..187.945 rows=5 loops=1)
        Workers Planned: 4
        Workers Launched: 4
        ->  Partial Aggregate  (cost=25782.79..25782.80 rows=1 width=8) (actual time=169.890..169.890 rows=1 loops=5)
              ->  Parallel Index Only Scan using idx_test_big_id on test_big  (cost=0.56..25225.83 rows=222782 width=0) (actual time=0.109..138.962 rows=200000 loops=5)
                    Index Cond: (id < 1000000)
                    Heap Fetches: 160308
Planning time: 0.098 ms
Execution time: 187.998 ms


1.5 Parallel Bitmap Heap Scan

Before looking at the parallel bitmap heap scan, let's first look at the bitmap index scan and the bitmap heap scan. When the WHERE clause of a SQL statement contains OR, a bitmap index scan is very likely to be used.

SET max_parallel_workers_per_gather = 0;
-- OR in the WHERE clause
EXPLAIN ANALYSE SELECT * FROM test_big WHERE id = 1 OR id = 2;
-- execution plan
Bitmap Heap Scan on test_big  (cost=9.15..17.16 rows=2 width=25) (actual time=0.215..0.216 rows=2 loops=1)
  Recheck Cond: ((id = 1) OR (id = 2))
  Heap Blocks: exact=1
  ->  BitmapOr  (cost=9.15..9.15 rows=2 width=0) (actual time=0.172..0.172 rows=0 loops=1)
        ->  Bitmap Index Scan on idx_test_big_id  (cost=0.00..4.57 rows=1 width=0) (actual time=0.158..0.158 rows=1 loops=1)
              Index Cond: (id = 1)
        ->  Bitmap Index Scan on idx_test_big_id  (cost=0.00..4.57 rows=1 width=0) (actual time=0.007..0.007 rows=1 loops=1)
              Index Cond: (id = 2)
Planning time: 0.202 ms
Execution time: 0.317 ms

Two bitmap index scans are executed first to collect the matching index entries; their results are then combined (BitmapOr) and used to fetch the corresponding rows from the table, which appears as a bitmap heap scan on test_big. The bitmap heap scan also supports parallelism.


(1) Enable parallelism and widen the range of selected ids.

-- enable parallelism
SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYSE SELECT count(*) FROM test_big WHERE id < 1000000 OR id > 49000000;

-- execution plan
Finalize Aggregate  (cost=413005.69..413005.70 rows=1 width=8) (actual time=382.901..382.901 rows=1 loops=1)
  ->  Gather  (cost=413005.27..413005.68 rows=4 width=8) (actual time=381.860..386.543 rows=5 loops=1)
        Workers Planned: 4
        Workers Launched: 4
        ->  Partial Aggregate  (cost=412005.27..412005.28 rows=1 width=8) (actual time=360.527..360.527 rows=1 loops=5)
              ->  Parallel Bitmap Heap Scan on test_big  (cost=36852.15..410802.58 rows=481075 width=0) (actual time=103.308..288.004 rows=400000 loops=5)
                    Recheck Cond: ((id < 1000000) OR (id > 49000000))
                    Heap Blocks: exact=2518
                    ->  BitmapOr  (cost=36852.15..36852.15 rows=1943049 width=0) (actual time=120.364..120.364 rows=0 loops=1)
                          ->  Bitmap Index Scan on idx_test_big_id  (cost=0.00..16460.02 rows=891127 width=0) (actual time=55.150..55.150 rows=999999 loops=1)
                                Index Cond: (id < 1000000)
                          ->  Bitmap Index Scan on idx_test_big_id  (cost=0.00..19429.98 rows=1051922 width=0) (actual time=65.211..65.211 rows=1000000 loops=1)
                                Index Cond: (id > 49000000)
Planning time: 0.133 ms
Execution time: 386.611 ms

The plan shows a parallel bitmap heap scan with 4 parallel workers; the execution time was 386.611 ms.

(2) Disable parallelism and look at the resulting plan.

-- disable parallelism for the session
SET max_parallel_workers_per_gather = 0;
-- run the query
EXPLAIN ANALYSE SELECT count(*) FROM test_big WHERE id < 1000000 OR id > 49000000;
-- execution plan

Aggregate  (cost=442077.03..442077.04 rows=1 width=8) (actual time=624.710..624.711 rows=1 loops=1)
  ->  Bitmap Heap Scan on test_big  (cost=39247.64..436956.43 rows=2048239 width=0) (actual time=168.749..494.715 rows=1999999 loops=1)
        Recheck Cond: ((id < 1000000) OR (id > 49000000))
        Heap Blocks: exact=13724
        ->  BitmapOr  (cost=39247.64..39247.64 rows=2069653 width=0) (actual time=166.711..166.711 rows=0 loops=1)
              ->  Bitmap Index Scan on idx_test_big_id  (cost=0.00..18852.53 rows=1020795 width=0) (actual time=84.026..84.026 rows=999999 loops=1)
                    Index Cond: (id < 1000000)
              ->  Bitmap Index Scan on idx_test_big_id  (cost=0.00..19370.99 rows=1048857 width=0) (actual time=82.682..82.682 rows=1000000 loops=1)
                    Index Cond: (id > 49000000)
Planning time: 0.220 ms
Execution time: 624.987 ms

The plan shows a (non-parallel) bitmap heap scan with an execution time of 624.987 ms, noticeably slower than the parallel run (386.611 ms).


2. Parallel Aggregation

Aggregation here means SQL that uses aggregate functions such as count() and sum(). The following query uses count() to count the total number of rows in the table.

EXPLAIN ANALYSE SELECT count(*) FROM test_big;

-- execution plan
Finalize Aggregate  (cost=523914.42..523914.43 rows=1 width=8) (actual time=6533.963..6533.963 rows=1 loops=1)
  ->  Gather  (cost=523914.00..523914.41 rows=4 width=8) (actual time=6533.284..6533.955 rows=5 loops=1)
        Workers Planned: 4
        Workers Launched: 4
        ->  Partial Aggregate  (cost=522914.00..522914.01 rows=1 width=8) (actual time=6393.259..6393.259 rows=1 loops=5)
              ->  Parallel Seq Scan on test_big  (cost=0.00..491664.00 rows=12500000 width=0) (actual time=0.042..4975.841 rows=10000000 loops=5)
Planning time: 0.126 ms
Execution time: 6554.860 ms

Each of the four parallel workers first performs a Partial Aggregate, and the leader combines the partial results in a Finalize Aggregate at the end. This SQL executed in 6554.860 ms. Aggregate functions such as sum(), min(), and max() also support parallel query.
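For an aggregate to be split into Partial and Finalize steps it needs a combine function registered in the system catalogs. A rough way to check this against the PostgreSQL 10 catalogs (a sketch, not from the book):

-- aggregates with a combine function ('-' means none) can take part in parallel aggregation
SELECT p.proname, a.aggcombinefn
FROM pg_aggregate a
JOIN pg_proc p ON p.oid = a.aggfnoid
WHERE p.proname IN ('count', 'sum', 'min', 'max', 'avg');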


3. Multi-Table Joins

Joins can also make use of parallel scans, for example with nested loop, merge join, and hash join plans. Being able to use parallelism in a join scenario does not mean the join itself runs in parallel; it means that the scans that retrieve the data for the join can be processed in parallel.

3.1 Nested Loop Join

A nested loop join is essentially a nested loop: for every row produced by the outer relation, the inner relation is probed for matching rows.

  • Preparation: create a table named test_small.
CREATE TABLE test_small(
	id int4,
	name character varying(32)
);
INSERT INTO test_small(id,name)
SELECT n, n || '_small' FROM generate_series(1,8000000) n;
-- create an index
CREATE INDEX idx_test_small_id ON test_small USING btree(id);

(1) With parallelism enabled, look at the join plan.

EXPLAIN ANALYSE 
SELECT test_small.name
FROM test_big,test_small
WHERE test_big.id = test_small.id 
AND test_small.id < 10000;

-- execution plan
Gather  (cost=1171.64..57375.90 rows=9115 width=13) (actual time=2.129..98.716 rows=9999 loops=1)
  Workers Planned: 2
  Workers Launched: 2
  ->  Nested Loop  (cost=171.64..55464.40 rows=3798 width=13) (actual time=0.556..31.328 rows=3333 loops=3)
        ->  Parallel Bitmap Heap Scan on test_small  (cost=171.07..23503.44 rows=3798 width=17) (actual time=0.502..1.706 rows=3333 loops=3)
              Recheck Cond: (id < 10000)
              Heap Blocks: exact=44
              ->  Bitmap Index Scan on idx_test_small_id  (cost=0.00..168.80 rows=9115 width=0) (actual time=1.292..1.292 rows=9999 loops=1)
                    Index Cond: (id < 10000)
        ->  Index Only Scan using idx_test_big_id on test_big  (cost=0.56..8.41 rows=1 width=4) (actual time=0.007..0.008 rows=1 loops=9999)
              Index Cond: (id = test_small.id)
              Heap Fetches: 8140
Planning time: 0.333 ms
Execution time: 104.107 ms

Reading the plan: the outer side of the nested loop is a parallel bitmap heap scan on test_small that retrieves the rows with id < 10000; for each of those rows, the inner side probes test_big with an index-only scan on idx_test_big_id to find the matching id (note loops=9999 on the inner scan).


(2) Disable parallelism and look at the plan.

-- disable parallelism
SET max_parallel_workers_per_gather = 0;

EXPLAIN ANALYSE 
SELECT test_small.name
FROM test_big,test_small
WHERE test_big.id = test_small.id 
AND test_small.id < 10000;

-- execution plan
Nested Loop  (cost=1.00..77025.58 rows=9115 width=13) (actual time=0.027..29.018 rows=9999 loops=1)
  ->  Index Scan using idx_test_small_id on test_small  (cost=0.43..320.95 rows=9115 width=17) (actual time=0.014..2.300 rows=9999 loops=1)
        Index Cond: (id < 10000)
  ->  Index Only Scan using idx_test_big_id on test_big  (cost=0.56..8.41 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=9999)
        Index Cond: (id = test_small.id)
        Heap Fetches: 9999
Planning time: 0.365 ms
Execution time: 29.761 ms

Surprisingly, performance is better with parallelism disabled here (104.107 ms --> 29.761 ms); for such a small number of matching rows, the overhead of setting up the parallel workers likely outweighs the benefit.
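If the planner keeps picking a parallel plan for small lookups like this one, the cost parameters from section 1.1 can be raised for the session to make parallel plans less attractive; a hedged sketch:

-- make parallel plans look more expensive to the planner for this session
SET parallel_setup_cost = 10000;
SET parallel_tuple_cost = 1;
EXPLAIN ANALYSE
SELECT test_small.name
FROM test_big, test_small
WHERE test_big.id = test_small.id
AND test_small.id < 10000;
-- restore the defaults
RESET parallel_setup_cost;
RESET parallel_tuple_cost;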


3.2 Merge Join

A merge join first puts both inputs into sorted order (here via index scans) and then matches rows on the join column. A merge join example follows.

(1) Enable parallelism.

SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYSE 
SELECT test_small.name
FROM test_big,test_small
WHERE test_big.id = test_small.id 
AND test_small.id < 200000;

-- execution plan
Gather  (cost=1002.01..240217.66 rows=180749 width=13) (actual time=0.914..264.491 rows=199999 loops=1)
  Workers Planned: 4
  Workers Launched: 4
  ->  Merge Join  (cost=2.01..221142.76 rows=45187 width=13) (actual time=5.045..186.567 rows=40000 loops=5)
        Merge Cond: (test_big.id = test_small.id)
        ->  Parallel Index Only Scan using idx_test_big_id on test_big  (cost=0.56..1290063.56 rows=12500000 width=4) (actual time=0.057..26.560 rows=40001 loops=5)
              Heap Fetches: 24889
        ->  Index Scan using idx_test_small_id on test_small  (cost=0.43..6300.54 rows=180749 width=17) (actual time=0.051..93.843 rows=199999 loops=5)
              Index Cond: (id < 200000)
Planning time: 0.437 ms
Execution time: 280.914 ms

Four parallel workers were launched.

(2) Disable parallelism.

-- disable parallelism
SET max_parallel_workers_per_gather = 0;

EXPLAIN ANALYSE 
SELECT test_small.name
FROM test_big,test_small
WHERE test_big.id = test_small.id 
AND test_small.id < 200000;

-- execution plan
Merge Join  (cost=2.01..298395.15 rows=180749 width=13) (actual time=0.035..209.833 rows=199999 loops=1)
  Merge Cond: (test_big.id = test_small.id)
  ->  Index Only Scan using idx_test_big_id on test_big  (cost=0.56..1665063.56 rows=50000000 width=4) (actual time=0.017..58.203 rows=200000 loops=1)
        Heap Fetches: 200000
  ->  Index Scan using idx_test_small_id on test_small  (cost=0.43..6300.54 rows=180749 width=17) (actual time=0.014..53.690 rows=199999 loops=1)
        Index Cond: (id < 200000)
Planning time: 0.306 ms
Execution time: 221.971 ms

With parallelism disabled, the execution time is actually shorter (280.914 ms --> 221.971 ms).


3.3 Hash Join

PostgreSQL joins also support hash join; when the join columns have no index, a join between two tables is usually executed as a hash join. Next, let's look at hash join plans. First drop the indexes on both tables and disable parallelism.

-- drop the indexes
DROP INDEX idx_test_big_id;
DROP INDEX idx_test_small_id;
-- disable parallelism
SET max_parallel_workers_per_gather = 0;

-- run the query (parallelism disabled)
EXPLAIN ANALYSE 
SELECT test_small.name
FROM test_big JOIN test_small ON test_big.id = test_small.id
AND test_small.id < 100;

-- execution plan
Hash Join  (cost=150864.60..1205036.60 rows=800 width=13) (actual time=1036.150..12603.367 rows=99 loops=1)
  Hash Cond: (test_big.id = test_small.id)
  ->  Seq Scan on test_big  (cost=0.00..866664.00 rows=50000000 width=4) (actual time=0.028..7157.523 rows=50000000 loops=1)
  ->  Hash  (cost=150854.60..150854.60 rows=800 width=17) (actual time=1036.071..1036.071 rows=99 loops=1)
        Buckets: 1024  Batches: 1  Memory Usage: 13kB
        ->  Seq Scan on test_small  (cost=0.00..150854.60 rows=800 width=17) (actual time=0.020..1036.049 rows=99 loops=1)
              Filter: (id < 100)
              Rows Removed by Filter: 7999901
Planning time: 0.396 ms
Execution time: 12603.426 ms

(2) Enable parallelism and look at the result.

-- enable parallelism
SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYSE 
SELECT test_small.name
FROM test_big JOIN test_small ON test_big.id = test_small.id
AND test_small.id < 100;

-- execution plan
Gather  (cost=151864.60..690485.60 rows=800 width=13) (actual time=3204.610..9727.023 rows=99 loops=1)
  Workers Planned: 4
  Workers Launched: 4
  ->  Hash Join  (cost=150864.60..689405.60 rows=200 width=13) (actual time=8290.986..9595.376 rows=20 loops=5)
        Hash Cond: (test_big.id = test_small.id)
        ->  Parallel Seq Scan on test_big  (cost=0.00..491664.00 rows=12500000 width=4) (actual time=0.036..4637.234 rows=10000000 loops=5)
        ->  Hash  (cost=150854.60..150854.60 rows=800 width=17) (actual time=2991.532..2991.532 rows=99 loops=5)
              Buckets: 1024  Batches: 1  Memory Usage: 13kB
              ->  Seq Scan on test_small  (cost=0.00..150854.60 rows=800 width=17) (actual time=2241.200..2991.434 rows=99 loops=5)
                    Filter: (id < 100)
                    Rows Removed by Filter: 7999901
Planning time: 0.161 ms
Execution time: 9756.984 ms

With parallelism enabled, the execution time drops considerably (12603.426 ms --> 9756.984 ms).


Summary

On this PC the performance with parallelism enabled was sometimes better and sometimes worse than with it disabled. This article covered the configuration parameters related to parallel query, the parallel sequential scan, parallel index scan, parallel index-only scan, and parallel bitmap heap scan, and showed how parallelism is used in multi-table join scenarios.
