I. statspack (available since 8.1.7)
Workflow: create a tablespace, create the user with the supplied script, take snapshots, generate the report, purge the snapshots
--1. Relevant parameters
job_queue_processes=10
timed_statistics=true
--2. Check the data files and free disk space; decide where to place the new tablespace's data file:
SQL> ho df -h
SQL> create tablespace perfstat datafile '/oracle/oradata/ocm/perfstat.dbf' size 500m;
--3. Create the perfstat user with the supplied script
SQL> @?/rdbms/admin/spcreate.sql
The script prompts for:
a password
a default tablespace
a temporary tablespace
SQL> show user -->the script leaves you connected as this user
USER is "PERFSTAT"
--4. Test the statspack installation
SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
SQL> execute statspack.snap; -->take a snapshot
SQL> execute statspack.snap; -->take a second snapshot
SQL> desc stats$snapshot
SQL> select snap_id,snap_time from stats$snapshot;
SNAP_ID SNAP_TIME
---------- -------------------
1 2010-08-23 10:02:46
2 2010-08-23 10:17:08
--5. Generate a report:
SQL> @?/rdbms/admin/spreport.sql
The script prompts for:
begin snap id
end snap id
report file name (report.txt)
--6. Run the script that creates a job to take snapshots automatically:
SQL> @?/rdbms/admin/spauto.sql
SQL> select job, next_date, next_sec from user_jobs; -->view the job
JOB NEXT_DATE NEXT_SEC
------ ------------------- --------
61 2010-08-23 11:00:00 11:00:00
SQL> execute dbms_job.remove('61'); -->remove the job
A 1-hour sampling interval is recommended; for special cases it can be shorter, e.g. half an hour.
Taking a snapshot consumes resources itself, so on a busy production system too short an interval puts a noticeable load on the system.
A single snapshot can occupy tens of MB of tablespace, so in production the usage of the perfstat tablespace should be monitored regularly.
--7. Purging statspack data
Method 1: delete
SQL> delete from stats$snapshot where snap_id<3;
SQL> commit;
When rows are deleted from stats$snapshot, the rows in the other (child) tables are deleted on cascade, with the associated locking.
If a large amount of data has been sampled, a straight delete is very slow; consider truncating instead:
Method 2: truncate the child tables first, then delete from stats$snapshot
Oracle supplies a script for this, sptrunc.sql:
SQL> @?/rdbms/admin/sptrunc.sql
Method 3: export the schema first, then purge:
SQL> ho exp perfstat/perfstat owner=perfstat file=/u01/app/oracle/perfstat.dmp
SQL> @?/rdbms/admin/sppurge.sql
--8. Snapshot collection levels:
Three collection levels are described here; the default is 5.
a. level 0 : general performance statistics, including wait events, system statistics, rollback segment statistics, row cache, SGA, ...
b. level 5 : adds SQL statement collection; the results are recorded in stats$sql_summary
c. level 10: adds child-latch statistics, recorded in stats$latch_children
SQL> execute statspack.snap(i_snap_level=>0,i_modify_parameter=>'true');
With this setting, all later collections will run at level 0.
To change the level for the current snapshot only, omit the i_modify_parameter parameter.
II. AWR and RDA
The AWR tool: new in 10g
Workflow: generate the report directly
--Generate a report (as the sys user):
SQL> @?/rdbms/admin/awrrpt.sql
The script prompts for:
html / text
4 (number of days of snapshots to list)
begin snap
end snap
report.txt
The sampled performance statistics and metrics are kept for analysis and tuning in the sysaux tablespace, owned by sys; to save space, collected data is purged automatically after 7 days.
The new mmon background process takes a snapshot automatically every hour (mman handles automatic memory tuning; rvwr writes the flashback logs).
--View the collection interval and retention:
SQL> select snap_interval,retention from dba_hist_wr_control;
SNAP_INTERVAL RETENTION
-------------------- --------------------
+00000 01:00:00.0 +00007 00:00:00.0 -->collected hourly, retained for 7 days
--To change the settings, e.g. a 30-minute interval and a 20-day retention (both arguments are in minutes):
SQL> begin
2 dbms_workload_repository.modify_snapshot_settings(
3 interval=>30,
4 retention=>20*24*60);
5 end;
6 /
SQL> select snap_interval,retention from dba_hist_wr_control;
SNAP_INTERVAL RETENTION
-------------------- --------------------
+00000 00:30:00.0 +00020 00:00:00.0
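Both arguments to modify_snapshot_settings are expressed in minutes, which is why 20 days is written as 20*24*60 above. A quick sanity check of that arithmetic (plain Python, independent of Oracle):

```python
# dbms_workload_repository.modify_snapshot_settings takes both the
# snapshot interval and the retention period in minutes.
retention_days = 20
retention_minutes = retention_days * 24 * 60  # value passed as retention=>
interval_minutes = 30                         # value passed as interval=>
print(retention_minutes)  # → 28800
```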
Monitor the free space in the sysaux tablespace regularly.
=================================================================================
<The RDA tool> (requires an Oracle support contract): -> run as the oracle user
Besides Oracle information it also collects OS information, network information, and more
$ sh ./rda.sh
Enter an Oracle User ID (userid only) to view DBA_ and V$ tables. If RDA will
be run under the Oracle software owner's ID, enter a '/' here, and select Y at
the SYSDBA prompt to avoid being prompted for the database password at
runtime.
Hit 'Return' to accept the default (system)
> /
Is '/' a sysdba user (will connect as sysdba) (Y/N)?
Hit 'Return' to accept the default (N)
> y
Press Enter through the remaining prompts; a configuration file, setup.cfg, is generated. The next run reuses this configuration and overwrites the previously collected data.
In the output directory:
firefox RDA__index.htm (the rdbms section holds the Oracle information)
rm setup.cfg discards the saved configuration (the previous data is gone); rerun rda.sh to configure from scratch.
III. Sessions, SQL, and tracing
<Scenario 1: find the statement a session is executing, from its sid:>
--1. View the sessions:
SQL> select sid,serial#,machine,username from v$session where username is not null;
SID SERIAL# MACHINE USERNAME
---------- ---------- -------------------- ------------------------------
117 258 www.nick.net SYS
131 131 emrep.uplooking.com SYS
SQL> save logon.sql -->save the statement above; running logon.sql lists the currently connected sessions
SQL> @logon.sql -->run it
--2. Find the statement the session is executing, from its sid:
SQL> select sql_text from v$sqltext where hash_value=(select sql_hash_value from v$session where sid='&sid')
order by piece asc;
Enter value for sid: 120
no rows selected
SQL> save sql.sql -->sql.sql queries the SQL a given session is currently executing
SQL> @sql.sql
Enter value for sid: 131
SQL_TEXT
----------------------------------------------------------------
select SQL_TEXT from v$sqltext where hash_value= (select sql_hash_value
from v$session where sid='131') order by piece asc
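v$sqltext stores each statement in fixed-size pieces, which is why the query above orders by the piece column; reassembly is just sort-and-join. Illustrated below with hypothetical fragments (not real v$sqltext output):

```python
# v$sqltext splits one SQL statement across several rows;
# the PIECE column gives the ordering for reassembly.
pieces = [
    (1, "where hash_value = :hv "),        # hypothetical fragments
    (0, "select sql_text from v$sqltext "),
    (2, "order by piece asc"),
]
# Sorting by piece number and concatenating rebuilds the full text.
sql_text = "".join(text for _, text in sorted(pieces))
print(sql_text)
```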
<Scenario 2: find the statement a session is executing, from its OS pid>
Finding the SQL that consumes excessive resources, starting from the OS:
first use top, topas, glance, ... to see which oracle process is consuming excessive resources (CPU),
then map that process to its session and query the corresponding SQL statement.
--1. Find the sid:
SQL> select distinct sid from v$mystat;
SID
----------
131
--2. Find the spid and pid from the sid:
SQL> select spid,pid from v$process where addr=(select paddr from v$session where sid=131);
SPID PID
------------ ----------
27813 16
SQL> select sql_text from v$sqltext a where a.hash_value=
(select sql_hash_value from v$session b where b.paddr=(select addr from v$process c where c.spid='&spid'))
order by piece asc;
SQL> save process.sql -->process.sql shows the SQL being executed by the session behind a given spid
SQL> @process.sql
Enter value for spid: 27813
SQL_TEXT
----------------------------------------------------------------
select sql_text from v$sqltext a where a.hash_value= (select sql_hash_value
from v$session b where b.paddr=( select addr from v$process c where c.spid='27813') )
order by piece asc
<Scenario 3: trace all operations of one user over a period of time>
e.g. the CPU and I/O usage of a specific application
-------------------------------------
Oracle diagnostic files:
a). alert_<sid>.log ---timestamps, ORA-600, ORA-07445, ORA-01555 errors
b). background trace files SQL> ho ls $ORACLE_BASE/admin/pod/bdump
c). user trace files SQL> ho ls $ORACLE_BASE/admin/pod/udump
-------------------------------------
--1. List the sessions (sid and serial# uniquely identify a session):
SQL> @logon.sql
SID SERIAL# MACHINE USERNAME
---------- ---------- -------------------- ------------------------------
159 346 www.hello.com SCOTT
--2. Trace the session:
SQL> exec dbms_system.set_sql_trace_in_session(159,346,true);
--3. Find the spid from the sid:
select spid,pid from v$process where addr=(select paddr from v$session where sid=159);
SPID PID
------------ ----------
27953 38
--4. Format the trace file with tkprof:
[oracle@emrep ~]$ tkprof $ORACLE_BASE/admin/pod/udump/pod_ora_27953.trc aaa.txt
[oracle@emrep ~]$ more aaa.txt
--5. Stop tracing the session:
SQL> exec dbms_system.set_sql_trace_in_session(159,346,false); --->stop tracing
--6. Trace your own session:
SQL> alter session set sql_trace =true;
SQL> alter session set sql_trace =false;
IV. Shared pool tuning
======================================================================================
<shared_pool_size>
1. Library cache ---> statement text, parsed code, and execution plans; managed by an LRU algorithm
2. Dictionary cache ---> table and column definitions and data dictionary privileges
Note: an oversized shared_pool_size can be a harmful choice; the management overhead can drag your database down.
SQL> select namespace,gets,gethits,gethitratio,pins,reloads,invalidations from v$librarycache where namespace='SQL AREA';
NAMESPACE GETS GETHITS GETHITRATIO PINS RELOADS INVALIDATIONS
---------- ---------- ---------- ----------- ---------- ---------- -------------
SQL AREA 14793 9103 .615358616 126329 34 115
gets: the number of lookups of objects in the namespace
gethitratio: on an OLTP system this hit ratio should exceed 90%
pins: the number of executions of objects in the namespace
reloads: the number of re-parses caused by misses; ideally 0
invalidations: when an object is modified at runtime, every execution plan that depends on it is marked invalid and must be re-parsed on next use
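The GETHITRATIO column is simply GETHITS divided by GETS; the figures in the sample output above reproduce it. A quick check in Python:

```python
# Figures from the v$librarycache sample output above.
gets, gethits = 14793, 9103
gethitratio = gethits / gets
print(round(gethitratio, 6))  # matches the report's .615358616
```

Here the ratio is about 62%, well below the 90% an OLTP system should show.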
Example: simulating a library cache wait
-- session 1: create a table and a procedure that queries it, then run the procedure
sys@ocm> create table t_p(a int);
Table created.
sys@ocm> insert into t_p values(1);
1 row created.
sys@ocm> commit;
Commit complete.
sys@ocm> create or replace procedure pin
2 as
3 b int;
4 begin
5 select count(*) into b from t_p;
6 dbms_lock.sleep(1000); /*-->sleep for 1000 seconds; the call hangs here*/
7 end pin;
8 /
Procedure created.
sys@ocm> exec pin -->no result comes back; the session just hangs here
-- session 2: the procedure shows as VALID; after the table is dropped it becomes INVALID
sys@ocm> select object_name ,status from dba_objects where object_name='PIN';
OBJECT_NAME STATUS
-------------------- -------
PIN VALID --valid
sys@ocm> drop table t_p;
Table dropped.
sys@ocm> select object_name ,status from dba_objects where object_name='PIN';
OBJECT_NAME STATUS
-------------------- -------
PIN INVALID --now invalid
sys@ocm> select event,count(*) from v$session_wait group by event; --list the waiting sessions: 12 rows
EVENT COUNT(*)
---------------------------------------------------------------- ----------
SQL*Net message from client 3
PL/SQL lock timer 1
pmon timer 1
rdbms ipc message 16
smon timer 1
Streams AQ: qmn slave idle wait 1
SQL*Net message to client 1
Space Manager: slave idle wait 1
Streams AQ: qmn coordinator idle wait 1
Streams AQ: waiting for time management or cleanup tasks 1
VKTM Logical Idle Wait 1
DIAG idle wait 2
12 rows selected.
-- session 3: run the procedure
sys@ocm> exec pin --hangs here as well
-- back in session 2, query the waits again: now 13 rows
sys@ocm> select event,count(*) from v$session_wait group by event;
EVENT COUNT(*)
---------------------------------------------------------------- ----------
SQL*Net message from client 3
library cache pin 1 -->the extra row: a library cache wait
PL/SQL lock timer 1
pmon timer 1
rdbms ipc message 16
smon timer 1
Streams AQ: qmn slave idle wait 1
SQL*Net message to client 1
Space Manager: slave idle wait 1
Streams AQ: qmn coordinator idle wait 1
Streams AQ: waiting for time management or cleanup tasks 1
VKTM Logical Idle Wait 1
DIAG idle wait 2
13 rows selected.
-- check the outcomes: session 1's call eventually succeeds
sys@ocm> exec pin
PL/SQL procedure successfully completed.
-- session 4's call fails
sys@ocm> exec pin
BEGIN pin; END;
*
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00905: object SYS.PIN is invalid
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
-----------------------------------------------------------------
During peak business hours, do not run DDL against tables. To kill a session, find its spid from the sid and use kill -9 on the spid:
select spid,pid from v$process where addr=(select paddr from v$session where sid=131);
-----------------------------------------------------------------
==================================================================================
<Keeping large objects in the shared pool reserved area:>
--1. View the shared pool and its reserved area
SQL> select name,value from v$parameter where name like 'shared_pool%';
NAME VALUE
------------------------------ --------------------
shared_pool_size 184549376
shared_pool_reserved_size 9227468
v$shared_pool_reserved --->helps when sizing the shared pool reserved area
--2. Find large objects that are not kept:
SQL> select owner,name,namespace from v$db_object_cache
where SHARABLE_MEM>100000 and TYPE in ('PACKAGE','PROCEDURE','FUNCTION','PACKAGE BODY') AND kept='NO';
--3. Keep a large object:
SQL> EXEC dbms_shared_pool.keep('DBMS_STATS'); -- the dbms_shared_pool package is created by the rdbms/admin/dbmspool.sql script
Use unkeep to unpin an object:
SQL> EXEC dbms_shared_pool.unkeep('DBMS_STATS');
V. Tuning the buffer cache
The Buffer Pool Advisory section of a statspack or AWR report --->
Est
Phys
Size for Size Buffers for Read Estimated
P Est (M) Factor Estimate Factor Physical Reads
--- -------- ------ ---------------- ------ ------------------
D 2,304 .1 284,076 4.5 254,079,546
D 4,608 .2 568,152 3.2 183,064,333
D 6,912 .3 852,228 2.5 139,723,608
D 9,216 .4 1,136,304 2.0 111,705,890
D 11,520 .5 1,420,380 1.6 88,450,100
D 13,824 .5 1,704,456 1.2 70,520,133
D 16,128 .6 1,988,532 1.1 61,517,500
D 18,432 .7 2,272,608 1.0 57,522,140 -->second-to-last column: the best db_cache_size is the one at which the physical-read factor has just dropped to its minimum
D 20,736 .8 2,556,684 1.0 56,626,730
D 23,040 .9 2,840,760 1.0 56,509,810
D 25,344 1.0 3,124,836 1.0 56,422,381
D 27,648 1.1 3,408,912 1.0 56,405,021
D 29,952 1.2 3,692,988 1.0 56,403,348
D 32,256 1.3 3,977,064 1.0 56,402,929
D 34,560 1.4 4,261,140 1.0 56,402,929
D 36,864 1.5 4,545,216 1.0 56,402,929
D 39,168 1.5 4,829,292 1.0 56,402,929
D 41,472 1.6 5,113,368 1.0 56,402,929
D 43,776 1.7 5,397,444 1.0 56,402,929
D 46,080 1.8 5,681,520 1.0 56,402,511
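The rule of thumb flagged by the arrow above (pick the smallest cache size at which the estimated physical-read factor has dropped to its floor) can be applied mechanically. A sketch over the first rows of the advisory output; the data and the 1.0 threshold are taken from this sample, not a general Oracle API:

```python
# Each tuple: (cache size in MB, estimated physical read factor),
# copied from the Buffer Pool Advisory sample above.
advisory = [
    (2304, 4.5), (4608, 3.2), (6912, 2.5), (9216, 2.0),
    (11520, 1.6), (13824, 1.2), (16128, 1.1), (18432, 1.0),
    (20736, 1.0), (23040, 1.0), (25344, 1.0),
]

def recommended_cache_mb(rows, threshold=1.0):
    """Smallest cache size whose read factor has reached the floor."""
    for size_mb, factor in rows:
        if factor <= threshold:
            return size_mb
    return rows[-1][0]

print(recommended_cache_mb(advisory))  # → 18432, the row flagged in the report
```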
SQL> desc v$db_cache_advice
name null? type
----------------------------------------- -------- ----------------------------
id number
name varchar2(20)
block_size number
advice_status varchar2(3)
size_for_estimate number
size_factor number
buffers_for_estimate number
estd_physical_read_factor number
estd_physical_reads number
estd_physical_read_time number
estd_pct_of_db_time_for_reads number
estd_cluster_reads number
estd_cluster_read_time number
SQL> select name ,value from v$sysstat where name in
('session logical reads','physical reads','consistent gets', 'db block gets');
NAME VALUE
------------------------- ----------
session logical reads 241364 -- a (a = b + c)
db block gets 16292 -- b
consistent gets 225072 -- c
physical reads 9834 -- d
physical reads --->physical I/O:
the number of blocks read from disk. Causes:
1. the blocks are not present in the buffer cache
2. full table scans
3. disk sorts
consistent gets: the number of blocks read in consistent-read mode. Main cause:
while your query runs, other sessions modify blocks you need, so the pre-change images must be read back from the rollback (undo) segments to keep the result consistent.
db block gets: the number of blocks fetched from memory in current mode
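The identity noted in the column comments above (session logical reads = db block gets + consistent gets) holds for the sample values. A quick check:

```python
# Values from the v$sysstat sample output above.
db_block_gets = 16292            # current-mode reads (b)
consistent_gets = 225072         # consistent-mode reads (c)
session_logical_reads = 241364   # reported total (a)

# Logical reads are the sum of current-mode and consistent-mode reads.
assert db_block_gets + consistent_gets == session_logical_reads
print("identity holds")
```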
--View the keep pool:
SQL> show parameter db_keep
SQL> alter system set db_keep_cache_size=10m; --> defaults to 0
SQL> select segment_name,buffer_pool from dba_segments where segment_name='EMP' AND OWNER='SCOTT';
SEGMENT_NAME BUFFER_
------------------------------ -------
EMP DEFAULT -->default
--Assign a table to the keep pool:
SQL> alter table scott.emp storage(buffer_pool keep);
SQL> select SEGMENT_NAME,BUFFER_POOL from dba_segments where segment_name='EMP' AND OWNER='SCOTT';
SEGMENT_NAME BUFFER_
------------------------------ -------
EMP KEEP -->keep
--Move it back to the default pool:
SQL> ALTER TABLE scott.emp storage(buffer_pool default);
VI. Estimating a table's row count
SQL> select bytes from dba_segments where segment_name='SOLO';
BYTES
----------
6291456
SQL> select avg_row_len from dba_tables where table_name='SOLO';
AVG_ROW_LEN
-----------
94
SQL> SELECT 6291456/94 FROM DUAL;
6291456/94
----------
66930.383
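The estimate above is just segment size divided by average row length; ignoring block overhead and free space, it gives an order-of-magnitude row count. The same arithmetic in Python:

```python
# Estimating rows = segment bytes / average row length.
segment_bytes = 6291456   # from dba_segments
avg_row_len = 94          # from dba_tables (requires up-to-date statistics)
estimated_rows = segment_bytes // avg_row_len
print(estimated_rows)     # → 66930, matching the SQL division above
```

Note the estimate overstates the count somewhat, since some of the segment's bytes are block headers and free space rather than row data.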
VII. SGA tuning
ASMM (Automatic Shared Memory Management) ---->the mman process
Automatic shared memory management, new in 10g, covers the following components:
buffer cache, shared pool, java pool, large pool, streams pool
SQL> show parameter sga_tar
SQL> alter system set sga_target=400m; -->once sga_target is non-zero, mman starts tuning memory automatically
SQL> show parameter sga_
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 480M -->the only static parameter here; to change it you must use scope=spfile and restart
sga_target big integer 400M
Note:
sga_target <= sga_max_size
Once sga_target is set to 400m, automatic tuning of the components (buffer cache, shared pool, java pool, large pool, streams pool) happens within those 400m, even if sga_max_size is 500m.
VIII. PGA tuning
The program global area: a region of memory private to each server process
SQL> show parameter pga_a
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target big integer 200M
SQL> select name,value from v$statname a,v$mystat b where a.statistic#=b.statistic# and a.name like '%pga%';
NAME VALUE
---------------------------------------------------------------- ----------
session pga memory 2747988
session pga memory max 5107284
Each connection is allocated PGA memory for its session; part of the PGA is used for sorts and other work areas.
In a statspack or AWR report: the PGA Memory Advisory section
* When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0
Estd Extra Estd P Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overallo
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- --------------- ------- --------
25 0.13 605.35 25.98 96.00 9
50 0.25 605.35 25.98 96.00 9
100 0.50 605.35 0.00 100.00 1
150 0.75 605.35 0.00 100.00 0 --->150m is judged the best pga_aggregate_target here
200 1.00 605.35 0.00 100.00 0
240 1.20 605.35 0.00 100.00 0
280 1.40 605.35 0.00 100.00 0
320 1.60 605.35 0.00 100.00 0
360 1.80 605.35 0.00 100.00 0
400 2.00 605.35 0.00 100.00 0
600 3.00 605.35 0.00 100.00 0
800 4.00 605.35 0.00 100.00 0
1,200 6.00 605.35 0.00 100.00 0
1,600 8.00 605.35 0.00 100.00 0
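The note at the top of the advisory says it directly: minimally choose the smallest pga_aggregate_target whose Estd PGA Overalloc Count is 0. Applied mechanically to the first rows of the sample output (the data is copied from this report, not a general API):

```python
# (target MB, estd overalloc count), from the PGA Memory Advisory sample above.
advisory = [(25, 9), (50, 9), (100, 1), (150, 0), (200, 0), (240, 0)]

def minimal_pga_target(rows):
    """Smallest target with zero estimated over-allocations."""
    return min(size for size, overalloc in rows if overalloc == 0)

print(minimal_pga_target(advisory))  # → 150, the row flagged in the report
```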
IX. rowid
@?/rdbms/admin/utlsampl.sql -->recreates the scott schema
SQL> select rowid,substr(rowid,1,6) "object",
substr(rowid,7,3) "file",
substr(rowid,10,6) "block",
substr(rowid,16,3) "row",
t.*
from scott.dept t;
ROWID object file block row DEPTNO DNAME LOC
------------------ ------------ ------ ------------ ------ ------ -------------- -------------
AAAR3qAAEAAAACHAAA AAAR3q AAE AAAACH AAA 10 ACCOUNTING NEW YORK
AAAR3qAAEAAAACHAAB AAAR3q AAE AAAACH AAB 20 RESEARCH DALLAS
AAAR3qAAEAAAACHAAC AAAR3q AAE AAAACH AAC 30 SALES CHICAGO
AAAR3qAAEAAAACHAAD AAAR3q AAE AAAACH AAD 40 OPERATIONS BOSTON
The rowid layout is:
first 6 characters: data object number (object_id)
next 3: relative file number (file_id)
next 6: block number
last 3: row number
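An extended rowid is 18 characters of Oracle's base-64 alphabet (A-Z, a-z, 0-9, +, /), split 6/3/6/3 as listed above. The decoder below is a hypothetical sketch written for illustration; the dbms_rowid package is the supported way to do this:

```python
# Oracle's base-64 alphabet for extended rowids
# (A-Z, a-z, 0-9, +, / -- not the MIME base64 ordering).
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789+/")

def b64_to_int(chars):
    """Interpret a run of rowid characters as a base-64 integer."""
    n = 0
    for ch in chars:
        n = n * 64 + ALPHABET.index(ch)
    return n

def decode_rowid(rowid):
    """Split an 18-char extended rowid into its four numeric components."""
    return {
        "object_id": b64_to_int(rowid[0:6]),
        "file_id":   b64_to_int(rowid[6:9]),
        "block_id":  b64_to_int(rowid[9:15]),
        "row_num":   b64_to_int(rowid[15:18]),
    }

# First rowid from the scott.dept listing above:
print(decode_rowid("AAAR3qAAEAAAACHAAA"))
# → {'object_id': 73194, 'file_id': 4, 'block_id': 135, 'row_num': 0}
```

These values match the dbms_rowid output shown below for the same row.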
The dbms_rowid package extracts these components directly:
SQL> select dbms_rowid.rowid_object(rowid) object_id,
dbms_rowid.rowid_relative_fno(rowid) file_id,
dbms_rowid.rowid_block_number(rowid) block_id,
dbms_rowid.rowid_row_number(rowid) num
from scott.dept;
OBJECT_ID FILE_ID BLOCK_ID NUM
---------- ---------- ---------- ----------
73194 4 135 0
73194 4 135 1
73194 4 135 2
73194 4 135 3
SQL> select OBJECT_NAME from dba_objects where OBJECT_ID=73194;
OBJECT_NAME
--------------------------------------------------------------------------------
DEPT
SQL> select file_name from dba_data_files where file_id=4;
FILE_NAME
--------------------------------------------
F:\APP\ADMINISTRATOR\ORADATA\OCM\USERS01.DBF
X. Row migration
In a statspack or AWR report --> table fetch continued row
If this statistic is high, find the objects that have migrated or chained rows.
(On update, when a row no longer fits and is moved to a new location, the old location keeps a pointer to the new one.)
Find the tables with migrated or chained rows:
SQL> select owner,table_name ,tablespace_name from dba_tables
where chain_cnt>0;
Fixing row migration
Method 1:
exp ... imp
expdp ... impdp
--->affects the application and is slow, but safe
Method 2:
move
--->simple, but needs twice the disk space to do its work
Method 3:
use the supplied script to create the chained_rows table that will hold the migrated rows:
SQL> @?/rdbms/admin/utlchain.sql
SQL> desc chained_rows
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER_NAME VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
CLUSTER_NAME VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
SUBPARTITION_NAME VARCHAR2(30)
HEAD_ROWID ROWID
ANALYZE_TIMESTAMP DATE
Analyze the table:
SQL> analyze table scott.dept list chained rows;
Table analyzed.
Create an interim table to hold the migrated rows:
SQL> create table dept_bak as
select * from scott.dept where rowid in
(select HEAD_ROWID from chained_rows);
Delete all the migrated rows from the original table:
SQL> delete from scott.dept where rowid in (select HEAD_ROWID from chained_rows);
Then insert the deleted rows back from the interim table:
insert into scott.dept select * from dept_bak;
commit;
XI. The high-water mark (HWM)
-- 1. On a large table, run count(*) and note the scan cost
sys@ocm> set autotrace on
sys@ocm> select count(*) from scott.a;
COUNT(*)
----------
131260
Execution Plan
----------------------------------------------------------
Plan hash value: 3918351354
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 520 (1)| 00:00:07 | ----> cost is 520
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| A | 127K| 520 (1)| 00:00:07 |
-------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
1951 consistent gets
1882 physical reads
0 redo size
425 bytes sent via SQL*Net to client
415 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
-- 2. delete all rows from the table, run count(*) again, note the cost
sys@ocm> delete from scott.a;
131260 rows deleted.
sys@ocm> commit;
Commit complete.
sys@ocm> select count(*) from scott.a;
COUNT(*)
----------
0
Execution Plan
----------------------------------------------------------
Plan hash value: 3918351354
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 519 (1)| 00:00:07 | ----> cost is 519: the HWM has not moved, so the scan cost is unchanged
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| A | 1 | 519 (1)| 00:00:07 |
-------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2683 consistent gets
0 physical reads
57356 redo size
422 bytes sent via SQL*Net to client
415 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
-- 3. truncate the table, run count(*) again, note the cost
sys@ocm> truncate table scott.a;
Table truncated.
sys@ocm> select count(*) from scott.a;
COUNT(*)
----------
0
Execution Plan
----------------------------------------------------------
Plan hash value: 3918351354
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 (0)| 00:00:01 | --->cost is 2: truncate reset the HWM, so the scan cost dropped
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS FULL| A | 1 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
20 recursive calls
1 db block gets
10 consistent gets
0 physical reads
96 redo size
422 bytes sent via SQL*Net to client
415 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
delete: after the rows are deleted, most blocks are empty and reusable, but the HWM is not lowered; a full table scan still reads every block below the HWM, so it remains expensive.
After truncate table, the HWM is reset to zero, and the next full scan is cheap.
delete ----> DML: writes redo and undo, does not move the HWM
truncate ---> DDL: writes (minimal) redo, resets the HWM
-- A procedure that reports the HWM:
SQL> create or replace procedure p_unused_space(p_object_name in varchar2,
p_object_type in varchar2 default 'TABLE',
p_owner in varchar2 default user,
p_partition_name in varchar2 default '') is
v_total_blocks number;
v_total_bytes number;
v_unused_blocks number;
v_unused_bytes number;
v_last_used_extent_file_id number;
v_last_used_extent_block_id number;
v_last_used_block number;
begin
dbms_space.unused_space(upper(p_owner), upper(p_object_name), upper(p_object_type),
v_total_blocks,
v_total_bytes, v_unused_blocks, v_unused_bytes, v_last_used_extent_file_id,
v_last_used_extent_block_id, v_last_used_block, upper(p_partition_name));
dbms_output.put_line('total_blocks is ' || v_total_blocks);
dbms_output.put_line('total_bytes is ' || v_total_bytes);
dbms_output.put_line('unused_blocks is ' || v_unused_blocks);
dbms_output.put_line('unused_bytes is ' || v_unused_bytes);
dbms_output.put_line('last_used_extent_file_id is ' || v_last_used_extent_file_id);
dbms_output.put_line('last_used_extent_block_id is ' || v_last_used_extent_block_id);
dbms_output.put_line('last_used_block is ' || v_last_used_block);
end;
/
-- Call it:
SQL> set serveroutput on
SQL> exec p_unused_space('DEPT','TABLE','SCOTT');
total_blocks is 8 <-- ###
total_bytes is 65536
unused_blocks is 4 <-- ###
unused_bytes is 32768
last_used_extent_file_id is 20
last_used_extent_block_id is 9
last_used_block is 4
Result:
HWM = total_blocks - unused_blocks - 1 = 8 - 4 - 1 = 3
SQL> alter system dump datafile 20 block 9;
HWM Flag: HWM Set
Highwater:: 0x0500000d ext#: 0 blk#: 4 ext size: 8
#blocks in seg. hdr's freelists: 0
#blocks below: 4
mapblk 0x00000000 offset: 0
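The HWM arithmetic used in these notes (total blocks minus unused blocks, minus one block for the segment header) is worth capturing as a small helper; the same formula is applied to the p_unused_space output above and again in the shrink demonstrations further down:

```python
def hwm_blocks(total_blocks, unused_blocks):
    """High-water mark per the notes' formula: total minus unused,
    minus one block for the segment header."""
    return total_blocks - unused_blocks - 1

# Values from the p_unused_space output above (8 total, 4 unused):
print(hwm_blocks(8, 4))  # → 3, matching the computed result
```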
-- <shrink>: lowering the high-water mark
Starting with 10g, Oracle provides the shrink command. It requires ASSM: if the tablespace does not use automatic segment space management, shrink cannot be used to compact the segment and lower the HWM.
A non-ASSM tablespace for comparison:
SQL> create tablespace sk datafile '/u01/app/oracle/oradata/pod/sk.dbf' size 20m segment space management manual;
SQL> select TABLESPACE_NAME,EXTENT_MANAGEMENT,SEGMENT_SPACE_MANAGEMENT from dba_tablespaces;
TABLESPACE_NAME EXTENT_MAN SEGMEN
------------------------------ ---------- ------
SYSTEM LOCAL MANUAL
SYSAUX LOCAL AUTO
TEMP LOCAL MANUAL
USERS LOCAL AUTO
UNDOTBS2 LOCAL MANUAL
SK LOCAL MANUAL --> SK uses manual segment space management, so shrink cannot compact it
Example:
-- Create a table and check its HWM:
SQL> show user
USER is "SYS"
SQL> create tablespace sk2 datafile '/u01/app/oracle/oradata/pod/sk2.dbf' size 20m;
SQL> create table t_sk2 tablespace sk2 as select * from dba_objects;
SQL> set serveroutput on
SQL> exec p_unused_space('t_sk2','table','sys');
total_blocks is 768
total_bytes is 6291456
unused_blocks is 69
unused_bytes is 565248
last_used_extent_file_id is 6
last_used_extent_block_id is 649
last_used_block is 59
hwm=768-69-1=698 -->the HWM is 698
-- Create fragmentation, then check the HWM again:
SQL> delete from t_sk2 where object_name like '%R%';
SQL> delete from t_sk2 where object_name like '%L%';
SQL> exec p_unused_space('t_sk2','table','sys');
total_blocks is 768
total_bytes is 6291456
unused_blocks is 69
unused_bytes is 565248
last_used_extent_file_id is 6
last_used_extent_block_id is 649
last_used_block is 59
hwm=768-69-1=698 -->the HWM is unchanged
-- Compact the segment, then check the HWM:
SQL> alter table t_sk2 enable row movement;
SQL> alter table t_sk2 shrink space;
SQL> exec p_unused_space('t_sk2','table','sys');
total_blocks is 272
total_bytes is 2228224
unused_blocks is 1
unused_bytes is 8192
last_used_extent_file_id is 6
last_used_extent_block_id is 265
last_used_block is 15
hwm=272-1-1=270 -->the HWM has dropped
--Mine the redo log to see how shrink works:
SQL> exec dbms_logmnr.add_logfile('/oracle/oradata/pod/redo02.log',dbms_logmnr.new);
SQL> exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog);
LogMiner shows that the entire shrink operation is carried out as deletes and inserts; compared with move, shrink therefore writes far more redo, but move needs twice the disk space.
-- A few questions about shrink:
1. Does an index need rebuilding after shrink?
shrink changes the rowids of the rows, so if the table has an index, what is its state after shrink table?
Conclusion: it stays VALID ---->the index is maintained along the way
shrink variants:
keep the HWM:
alter table tbname shrink space compact;
shrink the table and lower the HWM:
alter table tbname shrink space;
shrink the table and its indexes:
alter table tbname shrink space cascade;
shrink an index:
alter index idxname shrink space; ---->requires ASSM (the index's tablespace must be ASSM too)
XII. Detecting corrupt blocks (dbv, rman)
-- 1. The dbverify utility validates the physical structure of a data file:
dbv checks whether the file is damaged, whether it contains logical corruption, and what kinds of data it holds.
dbv can validate both online and offline data files,
and still works while the database is shut down.
Example:
SQL> ho dbv file=/u01/app/oracle/oradata/pod/tt1_e2.dbf
DBVERIFY - Verification complete
Total Pages Examined : 1280
Total Pages Processed (Data) : 0
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 1
Total Pages Failing (Index): 0
Total Pages Processed (Other): 11
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 1268
Total Pages Marked Corrupt : 0 ----> blocks marked corrupt
Total Pages Influx : 0
Highest block SCN : 2249531 (0.2249531)
The dbv output shows how many data blocks, index blocks, empty blocks, and blocks marked corrupt the file contains.
A newer dbv can read data files of an older release,
e.g. the 11g dbv can check a 9i database, but an older dbv against a newer database fails.
dbv can also validate RMAN image copies:
RMAN> run{
copy datafile 1 to '/u01/app/oracle/Disk2/system.dbf';}
[oracle@emrep Disk2]$ dbv file=system.dbf
-- 2.rman:
RMAN> backup validate database; ----> checks the whole database for corrupt blocks without producing a backup set
RMAN> backup validate archivelog all;
SQL> select FILE# , BLOCK# from v$database_block_corruption;
rman target /
run{
allocate channel c1 device type disk;
blockrecover datafile 12 block 18000;
release channel c1;
}
XIII. Viewing SQL execution plans
Common autotrace options:
SET AUTOTRACE OFF ---------------- no AUTOTRACE report; this is the default
SET AUTOTRACE ON EXPLAIN ------ show only the optimizer execution path
SET AUTOTRACE ON STATISTICS -- show only the execution statistics
SET AUTOTRACE ON ----------------- show both the execution plan and the statistics
SET AUTOTRACE TRACEONLY ------ like set autotrace on, but suppress the query output
sys and system can use autotrace directly; other users can be enabled as follows
-- Method 1:
1.SQL> @?/rdbms/admin/utlxplan.sql -->creates plan_table, which stores the plan information
2.SQL> grant all on plan_table to public; -->grant access
3.SQL> @?/sqlplus/admin/plustrce.sql -->creates the plustrace role and grants select on v_$mystat, v_$sesstat, v_$statname
4.SQL> grant plustrace to public; -->grant the role
Then verify:
SQL> conn scott/tiger
SQL> set autotrace on
SQL> select * from dept;
-- Method 2:
1.SQL> @?/rdbms/admin/utlxplan.sql -->create the table that stores the plan information
2.SQL> explain plan for
select * from dept;
3.SQL> @?/rdbms/admin/utlxpls.sql
-- Method 3:
desc v$sql_plan
-- Method 4:
sql_trace --->tkprof ---udump
==========================================
Case study (method 4): execution difference with and without bind variables
==========================================
scott@ocm> conn / as sysdba
Connected.
Session altered.
-- Create the table
sys@ocm> create table t20 (a int);
Table created.
-- Procedure P1: insert 20,000 rows using a bind variable
sys@ocm> create or replace procedure p1
as
begin
for i in 1..20000 loop
execute immediate
'insert into t20 values(:x)' using i;
end loop;
end;
/
Procedure created.
-- Procedure P2: insert 20,000 rows without bind variables
sys@ocm> create or replace procedure p2
as
begin
for i in 1..20000 loop
execute immediate
'insert into t20 values('||i||')';
end loop;
end;
/
Procedure created.
-- Trace the session and run P1 and P2
sys@ocm> alter session set sql_trace=true;
Session altered.
sys@ocm> exec p1
PL/SQL procedure successfully completed.
sys@ocm> exec p2
PL/SQL procedure successfully completed.
sys@ocm> alter session set sql_trace=false;
-- Find the trace file name:
sys@ocm> select d.value||'\'||lower(rtrim(i.instance, chr(0)))||'_ora_'||p.spid||'.trc' trace_file_name
from ( select p.spid from sys.v$mystat m,sys.v$session s,sys.v$process p where m.statistic# = 1 and s.sid = m.sid and p.addr = s.paddr) p,
(select t.instance from sys.v$thread t,sys.v$parameter v where v.name = 'thread' and (v.value = 0 or t.thread# = to_number(v.value))) i,
( select value from sys.v$parameter where name = 'user_dump_dest') d ;
TRACE_FILE_NAME
--------------------------------------------------------------------------------
f:\app\administrator\diag\rdbms\ocm\ocm\trace\ocm_ora_2932.trc
-- The trace file shows that P1 was parsed once and executed many times, while P2 was parsed 20,000 times and executed 20,000 times; for the shared pool that is a disaster: it wastes a lot of space and executes very inefficiently.
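The same parse-once-execute-many versus parse-per-statement contrast exists in any database API. A sketch using Python's sqlite3 (not Oracle; purely to illustrate why P1 is cheap and P2 is not): the parameterized form lets the engine reuse one prepared statement, while string concatenation produces a distinct statement text on every iteration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t20 (a int)")

# Like P1: one statement text with a placeholder; the engine can cache
# and reuse a single prepared statement for every execution.
conn.executemany("insert into t20 values (?)", ((i,) for i in range(1, 20001)))

# Like P2: every iteration builds a different statement text, forcing a
# fresh parse each time (and, in Oracle, a new shared-pool entry).
for i in range(1, 21):  # only 20 iterations here, to keep the demo quick
    conn.execute(f"insert into t20 values ({i})")

print(conn.execute("select count(*) from t20").fetchone()[0])  # → 20020
```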
XIV. Index techniques