A real customer case: speeding up SQL query performance with a partitioned table and partitioned indexes

Preface

1. Background
      While recently tuning a customer's core database system, we found one SQL statement that consumed very heavy physical and logical reads and was also executed very frequently.
Its performance data is as follows:
  
  The SQL runs roughly 100 times per hour:
#   Plan Hash Value   Total Elapsed Time (ms)   Executions   1st Capture Snap ID   Last Capture Snap ID
1   2301468429        10,456,910                3,297        61159                 61184

  The SQL returns only about 4 rows per execution, yet its logical reads reach roughly 121,659 per execution (401,108,302 buffer gets / 3,297 executions):
Stat Name                     Statement Total   Per Execution   % Snap Total
Elapsed Time (ms)                  10,456,910        3,171.64           0.14
CPU Time (ms)                       7,466,224        2,264.55           0.63
Executions                              3,297
Buffer Gets                       401,108,302      121,658.57           0.81
Disk Reads                            771,154          233.90           0.02
Parse Calls                             3,242            0.98           0.06
Rows                                   14,587            4.42
User I/O Wait Time (ms)             2,744,342
Cluster Wait Time (ms)                263,173
Application Wait Time (ms)                  0
Concurrency Wait Time (ms)                163
Invalidations                               0
Version Count                              34
Sharable Mem (KB)                       3,100
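
For reference, figures like these can also be pulled straight from AWR. Below is a hedged sketch against dba_hist_sqlstat; the sql_id is a placeholder to be replaced with the real value, and the snapshot range comes from the capture snap IDs above:

-- Hedged sketch: per-statement totals between the two capture snapshots.
select sum(executions_delta)                                           executions,
       round(sum(elapsed_time_delta) / 1000)                           elapsed_ms,
       round(sum(buffer_gets_delta) / nullif(sum(executions_delta),0)) buffer_gets_per_exec
from   dba_hist_sqlstat
where  sql_id = '&sql_id'
and    snap_id between 61159 and 61184;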


Analysis


2. In the SQL execution plan, the step highlighted in red spends more than 3 minutes on a single table. Does a full table scan really make sense here, folks?



3. SQL text of the highlighted part


SELECT
      DECODE(cp.pick_type, 'M6', '?????', 'A6', '???????', '?????', 'M13',
      '??-????') transfer_type,
      cpd.pick_id,
      cpd.card_type,
      cpd.count,
      cpd.par_value,
      cpd.min_card_id,
      cpd.max_card_id,
      cp.from_depot,
      cp.TO_DEPOT,
      cp.from_area_id,
      cp.from_county_id,
      cp.creator,
      cp.create_time,
      NULL,
      '01'
    FROM
      CARD_PICK_DETAIL cpd,
      CARD_PICK cp
    WHERE
      cp.pick_id            = cpd.pick_id
    AND card_type           = :16
    AND :17                >= min_card_id
    AND :18                <= max_card_id
    AND cp.FROM_PROVINCE_ID = :19 
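
The plan for this statement can be re-checked directly from the library cache or from AWR. A hedged sketch follows; the sql_id placeholder must be replaced with the real value from the report, and the plan hash value is the one captured above:

-- If the cursor is still in the shared pool:
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));

-- Or from AWR, pinned to the captured plan hash value:
select * from table(dbms_xplan.display_awr('&sql_id', 2301468429));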


4. The two tables above, CARD_PICK_DETAIL and CARD_PICK, hold roughly 6.27 million and 1.3 million rows respectively, so the data volume is fairly large:
OWNER  TABLE_NAME        TABLESPACE_NAME     STATUS  NUM_ROWS  BLOCKS  AVG_ROW_LEN  CHAIN_CNT  PAR  LAST_ANALYZED
PCRM2  CARD_PICK_DETAIL  TS_CRM2_CARD_4M_02  VALID   6273085   72328   77           0          NO   2014-05-12 21:40:18
PCRM2  CARD_PICK         TS_CRM2_CARD_4M_02  VALID   1304520   29024   131          0          NO   2012-11-24 02:22:49
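
These figures presumably come from the data dictionary; a query along the following lines (a hedged sketch, not necessarily the exact statement used) reproduces them:

select owner, table_name, tablespace_name, status, num_rows, blocks,
       avg_row_len, chain_cnt, partitioned, last_analyzed
from   dba_tables
where  owner = 'PCRM2'
and    table_name in ('CARD_PICK', 'CARD_PICK_DETAIL');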


5. Test
  • Database version
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.4.0      Production
TNS for AIX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

  • Create two test tables and insert data
SQL> create table card_pick(pick_id int,province_id int,pick_type int,primary key(pick_id));

Table created.

SQL> insert into card_pick select level,mod(level,31),mod(level,10) from dual connect by level<=1300000;

1300000 rows created.

SQL> commit;

Commit complete.

SQL> create table card_pick_detail(pick_id int,card_type int,min_card_id int,max_card_id int);

Table created.

SQL> insert into card_pick_detail select level,mod(level,15),mod(level,1000000),mod(level,1000000)  from dual connect by level<=1300000;

1300000 rows created.

SQL> commit;

Commit complete.

SQL> insert into card_pick_detail select level+1300000,mod(level+1300000,15),mod(level+1300000,1000000),mod(level+1300000,1000000)  from dual connect by level<=1300000;

1300000 rows created.

SQL> insert into card_pick_detail select level+1300000+1300000,mod(level+1300000+1300000,15),mod(level+1300000+1300000,1000000),mod(level+1300000+1300000,1000000)  from dual connect by level<=1300000;

1300000 rows created.

SQL> insert into card_pick_detail select level+1300000+1300000+1300000,mod(level+1300000+1300000+1300000,15),mod(level+1300000+1300000+1300000,1000000),mod(level+1300000+1300000+1300000,1000000)  from dual connect by level<=1300000;

1300000 rows created.

SQL> commit;

Commit complete.

SQL> insert into card_pick_detail select level+1300000+1300000+1300000+1300000,mod(level+1300000+1300000+1300000+1300000,15),mod(level+1300000+1300000+1300000+1300000,1000000),mod(level+1300000+1300000+1300000+1300000,1000000) from dual connect by level<=1300000;

1300000 rows created.

SQL> commit;

Commit complete.

  • Back up card_pick_detail
SQL> create table card_pick_detail_bak as select * from card_pick_detail;

Table created.

  • Create an index on column pick_id of table card_pick_detail
SQL> create index idx_card_pick_detail on card_pick_detail(pick_id) nologging;

Index created.

SQL> alter index idx_card_pick_detail nologging;

Index altered.

  • Gather statistics
SQL> exec dbms_stats.gather_table_stats(user,'card_pick',cascade=>true);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'card_pick_detail',cascade=>true);

PL/SQL procedure successfully completed.

  • Logical reads come to 21,361
SQL> set autot traceonly
SQL> select card_pick.pick_id,card_pick.province_id,card_pick_detail.min_card_id from card_pick,card_pick_detail where card_pick.pick_id=card_pick_detail.pick_id and card_pick.province_id=20 and 20>=min_card_id and 30<=max_card_id and card_type=11;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 1590952718

---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    27 |  5431   (2)| 00:01:06 |
|*  1 |  HASH JOIN         |                  |     1 |    27 |  5431   (2)| 00:01:06 |
|*  2 |   TABLE ACCESS FULL| CARD_PICK_DETAIL |     1 |    18 |  4774   (2)| 00:00:58 |
|*  3 |   TABLE ACCESS FULL| CARD_PICK        | 42063 |   369K|   657   (2)| 00:00:08 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("CARD_PICK"."PICK_ID"="CARD_PICK_DETAIL"."PICK_ID")
   2 - filter("MIN_CARD_ID"<=20 AND "CARD_TYPE"=11 AND "MAX_CARD_ID">=30)
   3 - filter("CARD_PICK"."PROVINCE_ID"=20)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      21361  consistent gets
          0  physical reads
          0  redo size
        462  bytes sent via SQL*Net to client
        481  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

SQL> 

  • After discussion with the customer's business staff: the card_type column of card_pick_detail has only 15 distinct values, and the data in this column is static and will never change. On that basis, convert card_pick_detail into a table list-partitioned on card_type.
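
Before settling on card_type as the list-partition key, its value distribution can be confirmed with a simple check (a minimal sketch; on the real system this would be run against the PCRM2 schema):

-- Confirm that card_type really has a small, fixed set of values and see how
-- evenly the rows are spread across them.
select card_type, count(*) cnt
from   card_pick_detail
group  by card_type
order  by card_type;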


  • Rebuild the test table as a partitioned table
 SQL> drop table card_pick_detail;

Table dropped. 

SQL> create table card_pick_detail(pick_id int,card_type int,min_card_id int,max_card_id int)
  2  partition by list(card_type)
  3  (partition p1 values(0),
  4   partition p2 values(1),
  5   partition p3 values(2),
  6   partition p4 values(3),
  7   partition p5 values(4),
  8   partition p6 values(5),
  9   partition p7 values(6),
 10   partition p8 values(7),
 11   partition p9 values(8),
 12   partition p10 values(9),
 13   partition p11 values(10),
 14   partition p12 values(11),
 15   partition p13 values(12),
 16   partition p14 values(13),
 17   partition p15 values(14));

Table created.

SQL> insert into card_pick_detail select * from card_pick_detail_bak;

6500000 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(user,'card_pick_detail',cascade=>true);

PL/SQL procedure successfully completed.
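
The test simply drops, recreates, and reloads the table. On the production system the same list-partitioned layout could be reached without an outage via online redefinition; below is a hedged sketch only, assuming a pre-created list-partitioned interim table CARD_PICK_DETAIL_PART (a hypothetical name) and rowid-based redefinition because the test table has no primary key:

-- Hedged sketch: online conversion with dbms_redefinition.
begin
  dbms_redefinition.can_redef_table(
    uname        => user,
    tname        => 'CARD_PICK_DETAIL',
    options_flag => dbms_redefinition.cons_use_rowid);

  dbms_redefinition.start_redef_table(
    uname        => user,
    orig_table   => 'CARD_PICK_DETAIL',
    int_table    => 'CARD_PICK_DETAIL_PART',
    options_flag => dbms_redefinition.cons_use_rowid);

  -- Indexes, constraints, and grants would be carried over here with
  -- dbms_redefinition.copy_table_dependents before finishing.

  dbms_redefinition.finish_redef_table(
    uname        => user,
    orig_table   => 'CARD_PICK_DETAIL',
    int_table    => 'CARD_PICK_DETAIL_PART');
end;
/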


  • Run the query again: logical reads drop to 1,451 from the earlier 21,361, roughly a 15-fold improvement. The plan below shows the scan pruned to a single partition (Pstart/Pstop = 12, i.e. partition p12, card_type = 11).
SQL> set autot traceonly
SQL> select card_pick.pick_id,card_pick.province_id,card_pick_detail.min_card_id from card_pick,card_pick_detail where card_pick.pick_id=card_pick_detail.pick_id and card_pick.province_id=20 and 20>=min_card_id and 30<=max_card_id and card_type=11;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 746139065

-----------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name             | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                  |     6 |   162 |   992   (2)| 00:00:12 |       |       |
|*  1 |  HASH JOIN             |                  |     6 |   162 |   992   (2)| 00:00:12 |       |       |
|   2 |   PARTITION LIST SINGLE|                  |     6 |   108 |   334   (2)| 00:00:05 |    12 |    12 |
|*  3 |    TABLE ACCESS FULL   | CARD_PICK_DETAIL |     6 |   108 |   334   (2)| 00:00:05 |    12 |    12 |
|*  4 |   TABLE ACCESS FULL    | CARD_PICK        | 42063 |   369K|   657   (2)| 00:00:08 |       |       |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("CARD_PICK"."PICK_ID"="CARD_PICK_DETAIL"."PICK_ID")
   3 - filter("MIN_CARD_ID"<=20 AND "MAX_CARD_ID">=30)
   4 - filter("CARD_PICK"."PROVINCE_ID"=20)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1451  consistent gets
          0  physical reads
          0  redo size
        462  bytes sent via SQL*Net to client
        481  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

  • Build a local partitioned index on min_card_id. Note that the plan still has a filter operation (on max_card_id at the table-access step), which still costs resources.
SQL> set autot off
SQL> create index idx_card_pick_detail_min on card_pick_detail(min_card_id) local;

Index created.

SQL> exec dbms_stats.gather_table_stats(user,'card_pick_detail',cascade=>true);

PL/SQL procedure successfully completed.

SQL> set autot traceonly
SQL> select card_pick.pick_id,card_pick.province_id,card_pick_detail.min_card_id from card_pick,card_pick_detail where card_pick.pick_id=card_pick_detail.pick_id and card_pick.province_id=20 and 20>=min_card_id and 30<=max_card_id and card_type=11;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 724889292

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                          |     2 |    54 |   664   (2)| 00:00:08 |       |       |
|*  1 |  HASH JOIN                          |                          |     2 |    54 |   664   (2)| 00:00:08 |       |       |
|   2 |   PARTITION LIST SINGLE             |                          |     2 |    36 |     6   (0)| 00:00:01 |    12 |    12 |
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID| CARD_PICK_DETAIL         |     2 |    36 |     6   (0)| 00:00:01 |    12 |    12 |
|*  4 |     INDEX RANGE SCAN                | IDX_CARD_PICK_DETAIL_MIN |     2 |       |     3   (0)| 00:00:01 |    12 |    12 |
|*  5 |   TABLE ACCESS FULL                 | CARD_PICK                | 42063 |   369K|   657   (2)| 00:00:08 |       |       |
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("CARD_PICK"."PICK_ID"="CARD_PICK_DETAIL"."PICK_ID")
   3 - filter("MAX_CARD_ID">=30)
   4 - access("MIN_CARD_ID"<=20)
   5 - filter("CARD_PICK"."PROVINCE_ID"=20)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         12  consistent gets
          0  physical reads
          0  redo size
        462  bytes sent via SQL*Net to client
        481  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed
  • Try a composite local partitioned index on (min_card_id, max_card_id) to avoid the table-level filter seen above; with it, consistent gets drop further, from 12 to 3.
SQL> set autot off
SQL> drop index idx_card_pick_detail_min;

Index dropped.

SQL> create index idx_card_pick_detail_min on card_pick_detail(min_card_id,max_card_id) local;

Index created.

SQL> exec dbms_stats.gather_table_stats(user,'card_pick_detail',cascade=>true);

PL/SQL procedure successfully completed. 

SQL> set autot traceonly
SQL> select card_pick.pick_id,card_pick.province_id,card_pick_detail.min_card_id from card_pick,card_pick_detail where card_pick.pick_id=card_pick_detail.pick_id and card_pick.province_id=20 and 20>=min_card_id and 30<=max_card_id and card_type=11;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 724889292

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                     | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                          |     2 |    54 |   664   (2)| 00:00:08 |       |       |
|*  1 |  HASH JOIN                          |                          |     2 |    54 |   664   (2)| 00:00:08 |       |       |
|   2 |   PARTITION LIST SINGLE             |                          |     2 |    36 |     6   (0)| 00:00:01 |    12 |    12 |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID| CARD_PICK_DETAIL         |     2 |    36 |     6   (0)| 00:00:01 |    12 |    12 |
|*  4 |     INDEX RANGE SCAN                | IDX_CARD_PICK_DETAIL_MIN |     2 |       |     3   (0)| 00:00:01 |    12 |    12 |
|*  5 |   TABLE ACCESS FULL                 | CARD_PICK                | 42063 |   369K|   657   (2)| 00:00:08 |       |       |
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("CARD_PICK"."PICK_ID"="CARD_PICK_DETAIL"."PICK_ID")
   4 - access("MAX_CARD_ID">=30 AND "MIN_CARD_ID"<=20)
       filter("MAX_CARD_ID">=30)
   5 - filter("CARD_PICK"."PROVINCE_ID"=20)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        462  bytes sent via SQL*Net to client
        481  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

SQL> 

Summary

      1. When locating a SQL optimization problem, first identify which parts or modules consume the most resources, i.e. which take the most time or scan the most data, then apply a divide-and-conquer approach and concentrate the analysis on the corresponding sub-SQL.

      2. Once the SQL sub-module is located, analyze the following aspects:
  • Check the row counts of the tables involved. If a table is large, consider partitioning it in light of the SQL WHERE conditions, but choose the partition key very carefully (basic DDL patterns are sketched after this list):
                  the partition key column must appear in the WHERE clause, and ideally both peak-hour and off-peak SQL should be analyzed together so that the key suits all related statements;
                  if the partition key column takes a fixed set of values, consider list partitioning;
                        if the values are not fixed, consider hash partitioning, but weigh its impact on other SQL, because hash partitioning is meant to spread hot blocks under highly concurrent INSERTs and is not suited to range scans;
                        if the column values have a time-range nature, consider range partitioning;
                        if physical and logical reads are still high after partitioning, consider composite partitioning so that each scan touches an even smaller slice of data.

  • Check the column statistics of each table and the indexes built on it, and analyze whether those indexes are actually used; several cases need to be distinguished:
                 if an index is used, is it used sensibly? With a composite index, for example, the selectivity of the leading versus the non-leading columns affects the scan cost differently;
                 if an index exists but is not used, work out why: is the index badly designed, or are other factors involved?
                 also check each table for row migration and row chaining; both waste a large amount of I/O, and eliminating them removes unnecessary physical and logical reads (a quick check is sketched after this list).
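
To make the partitioning options above concrete, here is a minimal sketch of range and composite (range-hash) partitioning DDL; the table and column names are made up purely for illustration (the list-partitioning case is already shown in the test above):

-- Range partitioning on a date column (hypothetical names).
create table sales_range (
  sale_id   int,
  sale_date date,
  cust_id   int
)
partition by range (sale_date)
( partition p2014h1 values less than (to_date('2014-07-01','yyyy-mm-dd')),
  partition p2014h2 values less than (to_date('2015-01-01','yyyy-mm-dd')),
  partition pmax    values less than (maxvalue) );

-- Composite range-hash: each range partition is hash-subpartitioned on
-- cust_id so that every scan touches an even smaller slice of data.
create table sales_comp (
  sale_id   int,
  sale_date date,
  cust_id   int
)
partition by range (sale_date)
subpartition by hash (cust_id) subpartitions 4
( partition p2014h1 values less than (to_date('2014-07-01','yyyy-mm-dd')),
  partition p2014h2 values less than (to_date('2015-01-01','yyyy-mm-dd')) );

And a hedged sketch of how row migration/chaining can be checked, using the standard chained-rows mechanism (this assumes the CHAINED_ROWS table has been created with utlchain.sql):

-- Collect the rowids of migrated/chained rows, then count them.
analyze table card_pick_detail list chained rows into chained_rows;

select count(*) chained_or_migrated
from   chained_rows
where  table_name = 'CARD_PICK_DETAIL';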
          

About the author


   8 years of Oracle experience and a broad Oracle skill set; currently a senior technical consultant at a specialist Oracle services company in Beijing.

   Customers served:
          China Telecom
          China Mobile
          China Unicom
          中国电通
          State Grid
          Dazhou City Commercial Bank (Sichuan)
          Hunan LBX Pharmacy (老百姓大药房)
          Shanxi Provincial Public Security Department
          China Post
          Beijing 302 Hospital
          ENN Group, Langfang, Hebei

 Project experience:
           Database deployment and optimization of the AAA system for China Telecom's 3G project
           CRM database performance optimization for China Unicom
           Database deployment and optimization of China Mobile's 10086 e-commerce platform
           SQL optimization for Hunan LBX Pharmacy's ERP database
           Database model design, RAC deployment, and optimization for the TCBS core business system of Dazhou City Commercial Bank (Sichuan)
           Development and optimization of the back-end batch stored-procedure modules of the same TCBS core business system
           RAC database deployment and optimization for the Beijing high-speed rail signal monitoring system
           Database performance optimization for Yutong Bus (Henan)
           Table model design and optimization of the core procurement module of China Telecom's e-commerce platform
           Database and SQL performance optimization for the China Postal Savings system
           Database migration for Beijing 302 Hospital
           Data Guard deployment and optimization for ENN Group, Langfang, Hebei
           Fault assessment of the ID-card audit database system of the Shanxi Provincial Public Security Department


 Contact:
          Mobile: 18201115468
           QQ: 305076427
           QQ Weibo: wisdomone1
           Sina Weibo: wisdomone9
           QQ group: 275813900
           ITPUB blog: wisdomone1     http://blog.itpub.net/9240380/




Source: ITPUB blog, http://blog.itpub.net/9240380/viewspace-1265261/ (please credit the source when reposting).
