During system development we often run into this problem: the interface tables keep accumulating more and more data, and performance gets slower and slower. If we want to improve integration efficiency, is trimming down the data in the interface tables a good approach? Let's verify it:
1. Use test2 to simulate an interface table
SQL> select count(1) from test2;
COUNT(1)
----------
4980400
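The post does not show how test2 was populated. A typical way to build a test table of this size, stated here purely as an assumption for anyone reproducing the experiment, is to copy dba_objects and double it a few times:

-- Hypothetical setup, not from the original post: build a multi-million row
-- copy of dba_objects. Actual row counts depend on the database.
create table test2 as select * from dba_objects;

begin
  for i in 1 .. 6 loop
    insert into test2 select * from test2;  -- roughly doubles the row count each pass
  end loop;
  commit;
end;
/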
2. Query object_id=100 and check the elapsed time, logical reads, and physical reads
SQL> select * from test2 where object_id=100;
100 rows selected.
Elapsed: 00:00:00.55
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
23527 consistent gets
16685 physical reads
0 redo size
1775 bytes sent via SQL*Net to client
451 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
100 rows processed
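The Elapsed line and the Statistics section above are standard SQL*Plus autotrace output. The post does not show the session settings; presumably something along these lines was in effect:

-- Assumed SQL*Plus settings for the output shown above (not shown in the original post):
set timing on                        -- prints the Elapsed line
set autotrace traceonly statistics   -- prints the Statistics section without listing the rows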
3. Delete roughly half of the data
SQL> delete/*+parallel(t 8)*/from test2 where object_id>30000;
2025000 rows deleted.
SQL> select count(1) from test2;
COUNT(1)
----------
2955400
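A side note on the statement above: the /*+parallel(t 8)*/ hint names an alias t that the DELETE does not define, so it is most likely ignored, and parallel DML is disabled by default at the session level anyway. If a genuinely parallel delete were intended, a sketch would look like this (not what the original session did):

alter session enable parallel dml;

delete /*+ parallel(t 8) */ from test2 t
 where object_id > 30000;

commit;   -- a parallel DML transaction must be committed before test2 can be queried again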
4. Query object_id=100 again
SQL> select * from test2 where object_id=100;
100 rows selected.
Elapsed: 00:00:00.69
Statistics
----------------------------------------------------------
164 recursive calls
0 db block gets
23544 consistent gets
21810 physical reads
0 redo size
1775 bytes sent via SQL*Net to client
451 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
100 rows processed
Comparing the two runs shows that for a query executed as a full table scan like this, deleting data does not improve execution efficiency; if anything, the numbers show the second run was slightly slower. This is consistent with the fact that a conventional DELETE does not lower the table's high water mark, so the full scan still has to visit the same number of blocks (consistent gets stayed at about 23.5K in both runs). What I still don't understand is why scanning the same number of blocks became slower after the delete (more recursive calls, sorts, and physical reads). Any pointers from the experts would be appreciated.
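For what it's worth, if the goal is for full scans to read fewer blocks after a mass delete, the segment itself has to be shrunk so that the high water mark comes down. None of the following was done in this test; it is only a sketch of the usual options:

-- Ways to lower the high water mark after a mass delete (not run in this test):

-- 1) Online segment shrink (requires row movement and an ASSM tablespace)
alter table test2 enable row movement;
alter table test2 shrink space;

-- 2) Rebuild the segment (any indexes must be rebuilt afterwards)
alter table test2 move;

-- 3) If all rows can go, TRUNCATE resets the high water mark immediately
-- truncate table test2;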