This article discusses how multi-column statistics help the CBO estimate cardinality.
Create a test table:
create table t1(id int,name varchar2(10),other varchar2(20));
Insert test data:
declare
begin
for i in 1..10000 loop
insert into t1 values(1,'A',dbms_random.string('u',20));
insert into t1 values(2,'B',dbms_random.string('u',20));
insert into t1 values(3,'C',dbms_random.string('u',20));
end loop;
commit;
end;
/
insert into t1 values(10,'AA','AAAAAAA');
insert into t1 values(20,'BB','BBBBBBB');
insert into t1 values(30,'CC','CCCCCCC');
commit;
Data distribution:
select count(*) from t1;
COUNT(*)
----------
30003
select id,name,count(*) from t1 group by id,name order by 3;
ID NAME COUNT(*)
---------- ---------- ----------
30 CC 1
20 BB 1
10 AA 1
3 C 10000
2 B 10000
1 A 10000
Gather table statistics, but without histograms:
exec dbms_stats.gather_table_stats(ownname=>'ming',tabname=>'t1',method_opt=>'FOR ALL COLUMNS SIZE 1');
View the statistics:
col COLUMN_NAME for a30
select a.column_name,
b.num_rows,
a.num_distinct Cardinality,
round(a.num_distinct / b.num_rows * 100, 2) selectivity,
a.histogram,
a.num_buckets,
a.last_analyzed
from dba_tab_col_statistics a, dba_tables b
where a.owner = b.owner
and a.table_name = b.table_name
and a.owner = upper('ming')
and a.table_name = upper('t1');
COLUMN_NAME NUM_ROWS CARDINALITY SELECTIVITY HISTOGRAM NUM_BUCKETS LAST_ANALYZE
------------- ---------- ----------- ----------- ----------- ----------- ------------
ID 30003 6 .02 NONE 1 02-SEP-18
NAME 30003 6 .02 NONE 1 02-SEP-18
OTHER 30003 30003 100 NONE 1 02-SEP-18
Sample SQL:
select count(*) from t1 where id=1 and name='A';
COUNT(*)
----------
10000
Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 69 (2)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 6 | | |
|* 2 | TABLE ACCESS FULL| T1 | 833 | 4998 | 69 (2)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("ID"=1 AND "NAME"='A')
The estimate of 833 is far from the 10000 rows actually returned.
Where does this 833 come from?
select 30003*(1/6)*(1/6) from dual;
30003*(1/6)*(1/6)
-----------------
833.416667
That is, total rows * (1 / number of distinct ID values) * (1 / number of distinct NAME values).
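The no-histogram estimate can be reproduced with a short sketch (pure Python; the inputs are the statistics queried above):

```python
# Without a histogram, the CBO assumes every column value is equally
# likely and that predicate columns are independent: the selectivity of
# "col = const" is 1/num_distinct, and AND-ed selectivities multiply.
num_rows = 30003  # NUM_ROWS from dba_tables
ndv_id = 6        # num_distinct for ID
ndv_name = 6      # num_distinct for NAME

estimate = num_rows * (1 / ndv_id) * (1 / ndv_name)
print(round(estimate))  # 833, matching the Rows column in the plan
```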
Next, let's see how a histogram affects the estimate.
Gather histogram statistics:
exec dbms_stats.gather_table_stats(ownname=>'ming',tabname=>'t1',method_opt=>'for all columns size auto');
COLUMN_NAME NUM_ROWS CARDINALITY SELECTIVITY HISTOGRAM NUM_BUCKETS LAST_ANALYZE
------------- ---------- ----------- ----------- ----------- ----------- ------------
ID 30003 6 .02 FREQUENCY 6 02-SEP-18
NAME 30003 6 .02 FREQUENCY 6 02-SEP-18
OTHER 30003 30003 100 NONE 1 02-SEP-18
Run the same SQL again:
select count(*) from t1 where id=1 and name='A';
COUNT(*)
----------
10000
Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 69 (2)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 6 | | |
|* 2 | TABLE ACCESS FULL| T1 | 3333 | 19998 | 69 (2)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("ID"=1 AND "NAME"='A')
The estimate has now become 3333.
How is this 3333 calculated?
select 30003*(10000/30003)*(10000/30003) from dual;
30003*(10000/30003)*(10000/30003)
---------------------------------
3333.00003
That is, total rows * (rows in the bucket for ID=1 / total rows) * (rows in the bucket for NAME='A' / total rows).
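With frequency histograms in place, the selectivity of each equality predicate becomes the matching bucket's row count divided by the total row count, while the columns are still treated as independent. A minimal sketch:

```python
num_rows = 30003
rows_id_1 = 10000    # bucket count for ID = 1 (from the ID histogram)
rows_name_a = 10000  # bucket count for NAME = 'A' (from the NAME histogram)

# Selectivities still multiply because the CBO assumes independence.
estimate = num_rows * (rows_id_1 / num_rows) * (rows_name_a / num_rows)
print(round(estimate))  # 3333
```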
The per-value bucket counts can be derived from dba_tab_histograms:
select owner,table_name,column_name,endpoint_number,endpoint_value from dba_tab_histograms where table_name='T1';
OWNER TABLE_NAME COLUMN_NAME ENDPOINT_NUMBER ENDPOINT_VALUE
-------- ------------- -------------- --------------- --------------
MING T1 ID 10000 1
MING T1 ID 20000 2
MING T1 ID 30000 3
MING T1 ID 30001 10
MING T1 ID 30002 20
MING T1 ID 30003 30
MING T1 NAME 10000 3.3750E+35
MING T1 NAME 10001 3.3882E+35
MING T1 NAME 20001 3.4269E+35
MING T1 NAME 20002 3.4403E+35
MING T1 NAME 30002 3.4788E+35
MING T1 NAME 30003 3.4924E+35
MING T1 OTHER 0 3.3882E+35
MING T1 OTHER 1 4.6914E+35
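Note that ENDPOINT_NUMBER in a frequency histogram is cumulative, so the row count for a given value is the difference between consecutive endpoint numbers. A small sketch using the ID rows above:

```python
# (endpoint_value, endpoint_number) pairs for column ID, as queried above.
id_histogram = [(1, 10000), (2, 20000), (3, 30000),
                (10, 30001), (20, 30002), (30, 30003)]

prev = 0
for value, endpoint in id_histogram:
    count = endpoint - prev  # rows whose ID equals this value
    prev = endpoint
    print(value, count)
# ID values 1, 2 and 3 cover 10000 rows each; 10, 20 and 30 cover 1 each.
```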
The histogram clearly helps the CBO arrive at a more accurate estimate, but 3333 is still well short of the actual value. In a real production system, as data volumes grow the deviation gets even larger, making it harder for the CBO to arrive at an accurate execution plan.
Multi-column statistics, also called column group statistics, are a form of extended statistics: columns with strong mutual correlation are placed into a column group, which can usefully be thought of as a kind of virtual column. Here ID and NAME are clearly highly correlated: ID=1 appears only with NAME='A', and NAME='B' appears only with ID=2. The column names I chose are admittedly not great, but the correlation between them is very strong. If you remember how to compute the correlation coefficient from a probability and statistics course, you can verify this yourself.
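As an aside, that correlation can be checked numerically. The sketch below encodes NAME as a number (my own arbitrary encoding for illustration, nothing Oracle does) and computes the Pearson correlation coefficient over the table's (ID, NAME) pairs:

```python
import math

# Encode NAME numerically (A=1, B=2, C=3; AA/BB/CC arbitrarily as 27/28/29).
# One (id, name_code) pair per row, matching the data distribution above.
pairs = ([(1, 1)] * 10000 + [(2, 2)] * 10000 + [(3, 3)] * 10000
         + [(10, 27), (20, 28), (30, 29)])

xs = [p[0] for p in pairs]
ys = [p[1] for p in pairs]
n = len(pairs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
print(cov / (sx * sy))  # close to 1: ID and NAME move together
```

The result comes out very close to 1, confirming the strong correlation the CBO cannot see from single-column statistics alone.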
There are two ways to gather column group statistics: automatic and manual. In the automatic method, the system detects column correlations from a specific workload, generates recommendations, and from those produces the column group statistics. In the manual method, you identify the correlated columns yourself.
Start workload monitoring as the SYS user:
BEGIN
DBMS_STATS.SEED_COL_USAGE(null,null,300);
END;
/
This monitors the system workload for 300 seconds; any SQL run during that window is detected automatically. For long-running SQL that cannot finish within the 300 seconds, issuing it with explain plan for works too:
explain plan for select count(*) from t1 where id=1 and name='A';
Generate the report:
SET LONG 100000
SET LINES 120
SET PAGES 0
SELECT DBMS_STATS.REPORT_COL_USAGE('MING', 'T1') FROM DUAL;
LEGEND:
.......
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
...............................................................................
###############################################################################
COLUMN USAGE REPORT FOR MING.T1
...............................
1. ID : EQ
2. NAME : EQ
3. (ID, NAME) : FILTER
###############################################################################
Create the column group based on the workload detection results:
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('ming', 'T1') FROM DUAL;
###############################################################################
EXTENSIONS FOR MING.T1
......................
1. (ID, NAME) : SYS_STUIA0V924QODN5R5SCAKM60G# created
###############################################################################
At this point the new virtual column SYS_STUIA0V924QODN5R5SCAKM60G# does not yet appear in the per-column statistics; the statistics must be gathered again before it shows up.
Note: remember to gather statistics again after creating the column group, otherwise it will not take effect.
exec dbms_stats.gather_table_stats(ownname=>'ming',tabname=>'t1',method_opt=>'for all columns size SKEWONLY');
COLUMN_NAME NUM_ROWS CARDINALITY SELECTIVITY HISTOGRAM NUM_BUCKETS LAST_ANALYZE
------------- ---------- ----------- ----------- ----------- ----------- ------------
ID 30003 6 .02 FREQUENCY 6 02-SEP-18
NAME 30003 6 .02 FREQUENCY 6 02-SEP-18
OTHER 30003 30003 100 HYBRID 254 02-SEP-18
SYS_STUIA0V924QODN5R5SCAKM60G# 30003 6 .02 FREQUENCY 6 02-SEP-18
Now run the query again:
select count(*) from t1 where id=1 and name='A';
COUNT(*)
----------
10000
Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 69 (2)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 6 | | |
|* 2 | TABLE ACCESS FULL| T1 | 10000 | 60000 | 69 (2)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("ID"=1 AND "NAME"='A')
The estimate is now exactly the 10000 rows actually returned.
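The estimate is exact because the (ID, NAME) column group now carries its own frequency histogram, so the combined predicate is costed as a single one; as a sketch:

```python
num_rows = 30003
rows_group = 10000  # bucket count for (ID, NAME) = (1, 'A') in the
                    # column group's frequency histogram

estimate = num_rows * (rows_group / num_rows)
print(round(estimate))  # 10000
```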
For manual collection, use the following form, which creates and gathers the column group statistics directly while gathering the table statistics:
BEGIN
DBMS_STATS.GATHER_TABLE_STATS( 'ming','t1',
METHOD_OPT => 'FOR ALL COLUMNS SIZE SKEWONLY ' ||
'FOR COLUMNS SIZE SKEWONLY (id,name)' );
END;
/
Appendix
To drop the multi-column statistics:
BEGIN
DBMS_STATS.DROP_EXTENDED_STATS(OWNNAME => 'MING',
TABNAME => 't1',
EXTENSION => '(id,name)');
END;
/
To summarize:
If the data in a table is heavily skewed, gathering histograms helps the CBO obtain a more accurate cardinality. If the predicates involve several such skewed columns and those columns are strongly correlated, gathering multi-column statistics together with histograms is usually beneficial.
From the ITPUB blog: http://blog.itpub.net/31480688/viewspace-2213411/. Please credit the source when reposting.