EXPIMP--Rules of exp/imp interoperability, from experiment
HOWTO--Installing 10g on a Linux distribution 10g does not support
HOWTO--Starting Oracle automatically on Linux
SQL--A collection of clever SQL examples from the web (repost)
EXP-IMP--The 8i exp/imp help text explained
STATSPACK Readme (English)
EXPIMP--Rules of exp/imp interoperability, from experiment
Posted by: vongates | Posted: July 15, 2005, 06:41
The test results below show how exp and imp relate to each other across versions:
Source DB   Target DB   exp version   exp result           imp version   imp result
8.1.7.0     9.2.0.1     8.1.7.0       success              8.1.7.0       success
8.1.7.0     9.2.0.1     8.1.7.0       success              9.2.0.1       success
8.1.7.0     9.2.0.1     9.2.0.1       failed (ORA-00942)
9.2.0.1     8.1.7.0     8.1.7.0       success              8.1.7.0       success
9.2.0.1     8.1.7.0     8.1.7.0       success              9.2.0.1       failed (ORA-00942, IMP-00023)
9.2.0.1     8.1.7.0     9.2.0.1       success              8.1.7.0       failed (IMP-00010)
9.2.0.1     8.1.7.0     9.2.0.1       success              9.2.0.1       failed (ORA-00942, IMP-00023)
8.1.7.0     8.1.7.0     8.1.7.0       success              8.1.7.0       success
8.1.7.0     8.1.7.0     8.1.7.0       success              9.2.0.1       failed (ORA-00942, IMP-00023)
8.1.7.0     8.1.7.0     9.2.0.1       failed (ORA-00942)
9.2.0.1     9.2.0.1     8.1.7.0       success              8.1.7.0       success
9.2.0.1     9.2.0.1     8.1.7.0       success              9.2.0.1       success
9.2.0.1     9.2.0.1     9.2.0.1       success              8.1.7.0       failed (IMP-00010)
9.2.0.1     9.2.0.1     9.2.0.1       success              9.2.0.1       success
Rules:
1. The imp version should match the target database version;
2. The exp version should match the lower of the source and target database versions;
Original post: http://vongates.itpub.net/post/2553/35118
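The two rules can be distilled into a small helper. This is only a sketch: pick_exp_imp is a hypothetical name, not an Oracle tool, and it assumes GNU sort -V for version comparison.

```shell
# Given source and target database versions, print which exp and which imp
# client version to use, per the two rules above.
pick_exp_imp() {
  src=$1; dst=$2
  # rule 2: exp must match the lower of the source and target versions
  exp_ver=$(printf '%s\n%s\n' "$src" "$dst" | sort -V | head -n 1)
  # rule 1: imp must match the target database version
  echo "exp=$exp_ver imp=$dst"
}

pick_exp_imp 9.2.0.1 8.1.7.0   # exp=8.1.7.0 imp=8.1.7.0
pick_exp_imp 8.1.7.0 9.2.0.1   # exp=8.1.7.0 imp=9.2.0.1
```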
==============================
HOWTO--Installing 10g on a Linux distribution 10g does not support
Posted by: vongates | Posted: May 13, 2005, 06:18
As 10g becomes more widely known and studied, we find that not every Linux distribution has been certified by Oracle. To install 10g on an uncertified distribution we have to make a few changes, because the installer runs a self-check before installing. The Linux distributions 10g currently supports are: redhat-2.1, redhat-3, SuSE-9, SuSE-8 or UnitedLinux-1.0. If your distribution is not in that list, either of the following methods works.
1). Pass the -ignoreSysPrereqs flag when running runInstaller to skip the check:
$runInstaller -ignoreSysPrereqs &
2). Edit the /etc/redhat-release file (using RHEL3 as the example):
#su - root
#cp /etc/redhat-release /etc/redhat-release.backup
#cat > /etc/redhat-release << EOF
Red Hat Enterprise Linux AS release 3 (Taroon)
EOF
When the installation is finished, restore the original version:
#su - root
#cp /etc/redhat-release.backup /etc/redhat-release
I have since seen another method. I have not used it myself, but it looks feasible; Fenng was the first to mention it. Edit Oracle's install/oraparam.ini file:
[Certified Versions]
Linux=redhat-2.1,UnitedLinux-1.0,redhat-3

Comment that check out, or add new entries:

[Certified Versions]
Linux=redhat-2.1,UnitedLinux-1.0,redhat-3
[Linux-redhat-2.1-optional]
TEMP_SPACE=80
SWAP_SPACE=150
MIN_DISPLAY_COLORS=256

[UnitedLinux-1.0-optional]
TEMP_SPACE=80
SWAP_SPACE=150
MIN_DISPLAY_COLORS=256

[Linux-redhat-3.0-optional]
TEMP_SPACE=80
SWAP_SPACE=150
MIN_DISPLAY_COLORS=256
Customizing this section should be enough.
vongates | 13/05/2005, 07:00
Original post: http://vongates.itpub.net/post/2553/29605
================================
HOWTO--Starting Oracle automatically on Linux
Posted by: vongates | Posted: May 6, 2005, 08:51
Oracle does not start automatically after being installed on Linux; a couple of changes fix that. Here is how.
Edit /etc/oratab and change the trailing N to Y on the entries:
*:/opt/oracle/product/9.2:N
orcl:/opt/oracle/product/9.2:N
Then, as root:
#vi /etc/rc.local
and add:
su - oracle -c 'lsnrctl start'
su - oracle -c 'dbstart'
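The oratab edit can also be scripted. A minimal sketch (flip_oratab is a hypothetical helper; the path below is the example path used above, so adjust it to your ORACLE_HOME):

```shell
# Print an oratab-style file with the trailing restart flag flipped from N to Y
# for every non-comment entry.
flip_oratab() {
  sed 's|^\([^#][^:]*:[^:]*\):N$|\1:Y|' "$1"
}

# Demonstrate on a scratch copy rather than the real /etc/oratab.
printf '%s\n' 'orcl:/opt/oracle/product/9.2:N' > /tmp/oratab.demo
flip_oratab /tmp/oratab.demo   # prints orcl:/opt/oracle/product/9.2:Y
```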
Original post: http://vongates.itpub.net/post/2553/28723
=====================
SQL--A collection of clever SQL examples from the web (repost)
Posted by: vongates | Posted: April 22, 2005, 06:29
A collection of clever SQL examples from the web.
Description: copy a table (structure only; source table: a, new table: b)
SQL: select * into b from a where 1<>1
Description: copy a table's data (source table: a, target table: b)
SQL: insert into b(a, b, c) select d,e,f from a;
Description: show each article, its poster, and the last reply time
SQL: select a.title, a.username, b.adddate from table a, (select title, max(adddate) adddate from table group by title) b where a.title = b.title
Description: outer join query (table 1: a, table 2: b)
SQL: select a.a, a.b, a.c, b.c, b.d, b.f from a LEFT OUTER JOIN b ON a.a = b.c
Description: remind five minutes before a scheduled event
SQL: select * from 日程安排 where datediff('minute',f开始时间,getdate())>5
Description: two related tables; delete rows from the main table that no longer have a match in the secondary table
SQL: delete from info where not exists ( select * from infobz where info.infid=infobz.infid )
Description: --
SQL:
SELECT A.NUM, A.NAME, B.UPD_DATE, B.PREV_UPD_DATE
FROM TABLE1 A,
(SELECT X.NUM, X.UPD_DATE, Y.UPD_DATE PREV_UPD_DATE
FROM (SELECT NUM, UPD_DATE, INBOUND_QTY, STOCK_ONHAND
FROM TABLE2
WHERE TO_CHAR(UPD_DATE, 'YYYY/MM') =
TO_CHAR(SYSDATE, 'YYYY/MM')) X,
(SELECT NUM, UPD_DATE, STOCK_ONHAND
FROM TABLE2
WHERE TO_CHAR(UPD_DATE, 'YYYY/MM') =
TO_CHAR(TO_DATE(TO_CHAR(SYSDATE, 'YYYY/MM') || '/01',
'YYYY/MM/DD') - 1,
'YYYY/MM')) Y
WHERE X.NUM = Y.NUM (+)
AND X.INBOUND_QTY + NVL(Y.STOCK_ONHAND, 0) <> X.STOCK_ONHAND) B
WHERE A.NUM = B.NUM
Description: --
SQL:
select * from studentinfo where not exists(select * from student where studentinfo.id=student.id) and 系名称='"&strdepartmentname&"' and 专业名称='"&strprofessionname&"' order by 性别,生源地,高考总成绩
Description:
pull one year of per-user telephone-fee statistics from the database (sourced from the fee-quota table and the fee-detail table)
SQL:
SELECT a.userper, a.tel, a.standfee, TO_CHAR(a.telfeedate, 'yyyy') AS telyear,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '01', a.factration)) AS JAN,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '02', a.factration)) AS FEB,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '03', a.factration)) AS MAR,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '04', a.factration)) AS APR,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '05', a.factration)) AS MAY,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '06', a.factration)) AS JUN,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '07', a.factration)) AS JUL,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '08', a.factration)) AS AUG,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '09', a.factration)) AS SEP,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '10', a.factration)) AS OCT,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '11', a.factration)) AS NOV,
SUM(decode(TO_CHAR(a.telfeedate, 'mm'), '12', a.factration)) AS "DEC"
FROM (SELECT a.userper, a.tel, a.standfee, b.telfeedate, b.factration
FROM TELFEESTAND a, TELFEE b
WHERE a.tel = b.telfax) a
GROUP BY a.userper, a.tel, a.standfee, TO_CHAR(a.telfeedate, 'yyyy')
Description: joining four tables:
SQL: select * from a left join b on a.a=b.b right join c on a.a=c.c inner join d on a.a=d.d where .....
Description: get the smallest unused ID in a table
SQL:
SELECT (CASE WHEN EXISTS(SELECT * FROM Handle b WHERE b.HandleID = 1) THEN MIN(HandleID) + 1 ELSE 1 END) as HandleID
FROM Handle
WHERE NOT HandleID IN (SELECT a.HandleID - 1 FROM Handle a)
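The logic can be exercised outside the database. A sketch using sqlite3 with toy data (the `NOT HandleID IN` form is written as the equivalent `HandleID NOT IN` here): with IDs 1, 2 and 4 taken, the smallest unused ID is 3.

```shell
# Toy run of the smallest-unused-ID query against an in-memory database:
# IDs 1, 2 and 4 exist, so the smallest unused ID is 3.
sqlite3 <<'EOF'
CREATE TABLE Handle (HandleID INTEGER);
INSERT INTO Handle VALUES (1), (2), (4);
SELECT (CASE WHEN EXISTS(SELECT * FROM Handle b WHERE b.HandleID = 1)
             THEN MIN(HandleID) + 1 ELSE 1 END) AS HandleID
FROM Handle
WHERE HandleID NOT IN (SELECT a.HandleID - 1 FROM Handle a);
EOF
```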
Original post: http://vongates.itpub.net/post/2553/27223
===================================
EXP-IMP--The 8i exp/imp help text explained
Posted by: vongates | Posted: March 20, 2005, 07:24
The meaning of each exp and imp parameter in 8i.
C:\Documents and Settings\Administrator>
exp help=y
Export: Release 8.1.7.0.0 - Production on Sun Mar 20 07:22:48 2005
(c) Copyright 2000 Oracle Corporation. All rights reserved.
You can let Export prompt you for parameters by entering the EXP
command followed by your username/password:
Example: EXP SCOTT/TIGER
Or, you can control how Export runs by entering the EXP command followed
by various arguments. To specify parameters, you use keywords:
Format: EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
or TABLES=(T1: P1,T1: P2), if T1 is a partitioned table
USERID must be the first parameter on the command line.
Keyword      Description (Default)             Keyword       Description (Default)
--------------------------------------------------------------------------
USERID       username/password                 FULL          export entire file (N)
BUFFER       size of data buffer               OWNER         list of owner usernames
FILE         output file (EXPDAT.DMP)          TABLES        list of table names
COMPRESS     import into one extent (Y)        RECORDLENGTH  length of IO record
GRANTS       export grants (Y)                 INCTYPE       incremental export type
INDEXES      export indexes (Y)                RECORD        track incr. export (Y)
ROWS         export data rows (Y)              PARFILE       parameter filename
CONSTRAINTS  export constraints (Y)            CONSISTENT    cross-table consistency
LOG          log file of screen output         STATISTICS    analyze objects (ESTIMATE)
DIRECT       direct path (N)                   TRIGGERS      export triggers (Y)
FEEDBACK     display progress every x rows (0)
FILESIZE     maximum size of each dump file
QUERY        select clause used to export a subset of a table
--------------------------------------------------------------------------
The following keywords only apply to transportable tablespaces:
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TABLESPACES           list of tablespaces to transport
Export terminated successfully without warnings.
C:\Documents and Settings\Administrator>
imp help=y
Import: Release 8.1.7.0.0 - Production on Sun Mar 20 07:21:02 2005
(c) Copyright 2000 Oracle Corporation. All rights reserved.
You can let Import prompt you for parameters by entering the IMP
command followed by your username/password:
Example: IMP SCOTT/TIGER
Or, you can control how Import runs by entering the IMP command followed
by various arguments. To specify parameters, you use keywords:
Format: IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
or TABLES=(T1: P1,T1: P2), if T1 is a partitioned table
USERID must be the first parameter on the command line.
Keyword      Description (Default)             Keyword       Description (Default)
--------------------------------------------------------------------------
USERID       username/password                 FULL          import entire file (N)
BUFFER       size of data buffer               FROMUSER      list of owner usernames
FILE         input file (EXPDAT.DMP)           TOUSER        list of usernames
SHOW         just list file contents (N)       TABLES        list of table names
IGNORE       ignore create errors (N)          RECORDLENGTH  length of IO record
GRANTS       import grants (Y)                 INCTYPE       incremental import type
INDEXES      import indexes (Y)                COMMIT        commit array insert (N)
ROWS         import data rows (Y)              PARFILE       parameter filename
LOG          log file of screen output         CONSTRAINTS   import constraints (Y)
DESTROY      overwrite tablespace data file (N)
INDEXFILE    write table/index info to specified file
SKIP_UNUSABLE_INDEXES   skip maintenance of unusable indexes (N)
ANALYZE      execute ANALYZE statements in dump file (Y)
FEEDBACK     display progress every x rows (0)
TOID_NOVALIDATE   skip validation of specified type ids
FILESIZE     maximum size of each dump file
RECALCULATE_STATISTICS  recalculate statistics (N)
--------------------------------------------------------------------------
The following keywords only apply to transportable tablespaces:
TRANSPORT_TABLESPACE  import transportable tablespace metadata (N)
TABLESPACES           tablespaces to be transported into database
DATAFILES             datafiles to be transported into database
TTS_OWNERS            users that own data in the transportable tablespace set
Import terminated successfully with warnings.
C:\Documents and Settings\Administrator>
Garbled characters after importing from Oracle 8i into 9i
Chinese data comes out garbled after importing an 8i export into Oracle 9i.
The other conditions are essentially identical, and the dump was made with EXP. How can this be solved? Thanks.
roc | 29/03/2005, 10:02
Garbled characters after importing from Oracle 8i into 9i
8i to 9i should be compatible. One other difference: the 8i box ran HP-UX, while the 9i box runs Solaris. Any pointers?
roc | 29/03/2005, 11:38
Check your character sets
Check whether the two databases use the same character set. As long as the character sets match (or the target's is a superset), there will be no problem. Also watch the locale (NLS_LANG) of the client running exp and imp. Articles on character sets can be found at
www.eygle.com
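As a sketch of the client-side step (ZHS16GBK is only an assumed example character set; query your own database first), the exp/imp session could set its locale like this:

```shell
# Find the database character set first (run in sqlplus, not shown here):
#   select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
# Then make the exp/imp client's locale match it before exporting/importing,
# so no unintended character conversion mangles the Chinese data:
export NLS_LANG='SIMPLIFIED CHINESE_CHINA.ZHS16GBK'
echo "$NLS_LANG"
```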
Original post: http://vongates.itpub.net/post/2553/22819
===================================
STATSPACK Readme (English)
Posted by: vongates | Posted: March 21, 2005, 22:24
Statistics Package (STATSPACK) README (spdoc.txt)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TABLE OF CONTENTS
-----------------
0. Introduction and Terminology
1. Configuration
1.1. Database space Requirements
1.2. Installing the Tool
1.3. Errors during Installation
2. Gathering data - taking a snapshot
2.1. Automating Statistics Gathering
2.2. Using dbms_job
3. Running a Performance report
3.1. Running the report
3.2. Gathering optimizer statistics on the PERFSTAT schema
4. Configuring the amount of data captured
4.1. Snapshot Level
4.2. Snapshot SQL thresholds
4.3. Changing the default values
4.4. Snapshot Levels - details
4.5. Specifying a Session Id
4.6. Input Parameters for the SNAP and
MODIFY_STATSPACK_PARAMETERS procedures
5. Time Units used for Wait events
6. Event Timings
7. Managing and Sharing performance data
7.1. Sharing data via export
7.2. Purging/removing unnecessary data
7.3. Removing all data
8. New and Changed Features
9. Upgrading from earlier releases
10. OPS specific considerations
11. Conflicts and differences compared to UTLBSTAT/UTLESTAT
11.1. Running BSTAT/ESTAT in conjunction to Statspack
11.2. Differences between Statspack and BSTAT/ESTAT
12. Removing the package
13. Supplied Scripts Overview
14. Limitations and Modifications
14.1. Limitations
14.2. Modifications
0. Introduction
----------------
To effectively perform reactive tuning, it is vital to have an established
baseline for later comparison when the system is running poorly. Without
a baseline data point, it becomes very difficult to identify what the new
problem is attributable to: Has the volume of transactions on the system
increased? Has the transaction profile or application changed? Has the
number of users increased?
Statspack fundamentally differs from the well known UTLBSTAT/UTLESTAT
tuning scripts by collecting more information, and also by storing the
performance statistics data permanently in Oracle tables, which can later
be used for reporting and analysis. The data collected can be analyzed
using the report provided, which includes an "instance health and load"
summary page, high resource SQL statements, as well as the traditional
wait events and initialization parameters.
Statspack improves on the existing UTLBSTAT/UTLESTAT performance scripts
in the following ways:
- Statspack collects more data, including high resource SQL
- Statspack pre-calculates many ratios useful when performance
tuning, such as cache hit ratios, per transaction and per
second statistics (many of these ratios must be calculated
manually when using BSTAT/ESTAT)
- Permanent tables owned by PERFSTAT store performance statistics;
instead of creating/dropping tables each time, data is inserted
into the pre-existing tables. This makes historical data
comparisons easier
- Statspack separates the data collection from the report generation.
Data is collected when a 'snapshot' is taken; viewing the data
collected is in the hands of the performance engineer when he/she
runs the performance report
- Data collection is easy to automate using either dbms_job or an
OS utility
NOTE: The term 'snapshot' is used to denote a set of statistics gathered
at a single time, identified by a unique Id which includes the
snapshot number (or snap_id). This term should not be confused
with Oracle's Snapshot Replication technology.
How does STATSPACK work?
Statspack is a set of SQL, PL/SQL and SQL*Plus scripts which allow the
collection, automation, storage and viewing of performance data. A user
is automatically created by the installation script - this user, PERFSTAT,
owns all objects needed by this package. This user is granted limited
query-only privileges on the V$views required for performance tuning.
Statspack users will become familiar with the concept of a 'snapshot'.
'snapshot' is the term used to identify a single collection of
performance data. Each snapshot taken is identified by a 'snapshot id'
which is a unique number generated at the time the snapshot is taken;
each time a new collection is taken, a new snap_id is generated.
The snap_id, along with the database identifier (dbid) and instance number
(instance_number) comprise the unique key for a snapshot (using this
unique combination allows storage of multiple instances of an OPS
database in the same tables).
Once snapshots are taken, it is possible to run the performance report.
The performance report will prompt for the two snapshot id's the report
will process. The report produced calculates the activity on the instance
between the two snapshot periods specified, in a similar way to the
BSTAT/ESTAT report; to compare - the first snap_id supplied can be
considered the equivalent of running BSTAT; the second snap_id
specified can be considered the equivalent of ESTAT. Unlike BSTAT/ESTAT
which can by its nature only compare two static data points, the report
can compare any two snapshots specified.
1. Configuration
-----------------
1.1. Database Space Requirements
The amount of database space required by the package will vary considerably
based on the frequency of snapshots, the size of the database and instance,
and the amount of data collected (which is configurable).
It is therefore difficult to provide general storage clauses and space
utilization predictions which will be accurate at each site.
Note: The default initial and next extent size is 1MB or 5MB for all
tables and indexes which contain changeable data. The minimum
default tablespace requirement is approximately 45MB.
Dictionary Managed Tablespaces
If you install the package in a dictionary-managed tablespace, Oracle
suggests you monitor the space used by the objects created, and adjust
the storage clauses of the segments, if required.
Locally Managed Tablespaces
If you install the package in a locally-managed tablespace, storage
clauses are not required, as the storage characteristics are
automatically managed.
1.2 Installing the Tool
Step 1.
This step creates the PERFSTAT user, which will own all PL/SQL code and
database objects created (including the STATSPACK tables, constraints
and the STATSPACK package).
During the installation you will be prompted for the PERFSTAT
user's default and temporary tablespaces.
The default tablespace will be used to create all Statspack
objects (such as tables and indexes). The temporary tablespace
will be used for sort-type activities (for more information on
temporary tablespaces, see the Oracle Concepts Documentation).
NOTE:
o Oracle does not recommend using the SYSTEM tablespace to store
statistics data. A more appropriate tablespace is the TOOLS
tablespace.
Similarly, do not use the SYSTEM tablespace as the Statspack
user's TEMPORARY tablespace.
o During the installation, the dbms_shared_pool and dbms_job
PL/SQL packages are created. dbms_shared_pool is used to
pin the Statspack package in the shared pool; dbms_job
is created on the assumption the DBA will want to schedule
periodic snapshots automatically using dbms_job.
To install the package, either change directory to the ORACLE_HOME
rdbms/admin directory, or fully specify the ORACLE_HOME/rdbms/admin
directory when calling the installation script, spcreate.
To run the installation script, you must use SQL*Plus and connect as
a user with SYSDBA privilege. Do not use Server Manager (svrmgrl)
to install Statspack, as the installation will fail.
e.g. Start SQL*Plus, then:
on Unix:
SQL> connect / as sysdba
SQL> @?/rdbms/admin/spcreate
on NT:
SQL> connect / as sysdba
SQL> @%ORACLE_HOME%\rdbms\admin\spcreate
The spcreate install script runs 3 other scripts - you do not need to
run these - the scripts are called automatically:
1. spcusr -> creates the user and grants privileges
2. spctab -> creates the tables
3. spcpkg -> creates the package
Check each of the three output files produced (spcusr.lis,
spctab.lis, spcpkg.lis) by the installation to ensure no
errors were encountered, before continuing on to the next step.
Note that there are two ways to install Statspack - interactively (as
shown above), or in "batch" mode; batch mode is useful when you do
not wish to be prompted for the PERFSTAT user's default and
temporary tablespaces.
Batch mode installation
~~~~~~~~~~~~~~~~~~~~~~~
To install in batch mode, you must assign values to the SQL*Plus
variables which specify the default and temporary tablespaces before
running spcreate.
The variables are:
default_tablespace -> for the default tablespace
temporary_tablespace -> for the temporary tablespace
e.g.
on Unix:
SQL> connect / as sysdba
SQL> define default_tablespace='tools'
SQL> define temporary_tablespace='temp'
SQL> @?/rdbms/admin/spcreate
spcreate will no longer prompt for the above information.
Step 2.
The setup phase is now complete.
For security, you may wish to change the password of the
PERFSTAT user.
1.3 Errors during installation
A common error made during Statspack installation is running the install
script from Server Manager (svrmgrl) rather than from SQL*Plus. If you
use svrmgrl, the installation will fail. To correctly install Statspack
after such errors, first run the de-install script, then the install
script. Both scripts must be run from SQL*Plus.
e.g. Start SQL*Plus, connect as a user with SYSDBA privilege, then:
SQL> @spdrop
SQL> @spcreate
2. Gathering data - taking a snapshot
---------------------------------------
The simplest interactive way to take a snapshot is to login to SQL*Plus
as the PERFSTAT user, and execute the procedure statspack.snap:
e.g.
SQL> connect perfstat/perfstat
SQL> execute statspack.snap;
Note: In an OPS environment, you must connect to the instance
you wish to collect data for.
This will store the current values for the performance statistics
in the STATSPACK tables, and can be used as a baseline snapshot
for comparison with another snapshot taken at a later time.
For better performance analysis, set the init.ora parameter timed_statistics
to true; this way, Statspack data collected will include important timing
information. The timed_statistics parameter is also dynamically changeable
using the 'alter system' command. Timing data is important and is usually
required by Oracle support to diagnose performance problems.
2.1 Automating statistics gathering
To be able to make comparisons of performance from one day, week or
year to the next, there must be multiple snapshots taken over a period
of time.
The best method to gather snapshots is to automate the collection on
a regular time interval. It is possible to do this:
- within the database, using the Oracle dbms_job procedure to
schedule the snapshots
- using Operating System utilities (such as 'cron' on Unix or 'at' on
NT) to schedule the snapshot
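As a sketch of the cron route (schedule, paths, and the default perfstat password are assumptions; ORACLE_HOME and ORACLE_SID must also be set in the crontab environment), an hourly crontab entry for the oracle user might look like:

```
# take a Statspack snapshot at the top of every hour
0 * * * * echo "execute statspack.snap;" | $ORACLE_HOME/bin/sqlplus -s perfstat/perfstat
```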
2.2. Using dbms_job
To use an Oracle-automated method for collecting statistics, you can use
dbms_job. A sample script on how to do this is supplied in spauto.sql,
which schedules a snapshot every hour, on the hour.
You may wish to schedule snapshots at regular times each day to reflect your
system's OLTP and/or batch peak loads. For example take snapshots at 9am,
10am, 11am, 12 midday and 6pm for the OLTP load, then a snapshot at
12 midnight and another at 6am for the batch window.
In order to use dbms_job to schedule snapshots, the job_queue_processes
initialization parameter must be set to greater than 0 in the init.ora
file for the job to be run automatically.
Example of an init.ora entry:
# Set to enable the job queue process to start. This allows dbms_job
# to schedule automatic statistics collection using STATSPACK
job_queue_processes=1
If using spauto.sql in an OPS environment, the spauto.sql script must
be run once on each instance in the cluster. Similarly, the
job_queue_processes parameter must also be set for each instance.
Changing the interval of statistics collection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To change the interval of statistics collection use the dbms_job.interval
procedure
e.g.
execute dbms_job.interval(1,'SYSDATE+(1/48)');
Where 'SYSDATE+(1/48)' will result in the statistics being gathered each
1/48 hours (i.e. every half hour).
To force the job to run immediately,
execute dbms_job.run(<job number>);
To remove the autocollect job,
execute dbms_job.remove(<job number>);
For more information on dbms_job, see the Supplied Packages Reference
Manual.
3. Running a Performance report
---------------------------------
Once snapshots are taken, it is possible to generate a performance report.
The SQL script which generates the report prompts for the two snapshot id's
to be processed.
The first will be the beginning snapshot id, the second will be the
ending snapshot id. The report will then calculate and print ratios,
increases etc. for all statistics between the two snapshot periods, in
a similar way to the BSTAT/ESTAT report.
Note: It is not correct to specify begin and end snapshots where the
begin snapshot and end snapshot were taken from different
instance startups. In other words, the instance must not have
been shut down between the times that the begin and end snapshots
were taken.
The reason for this requirement is the database's dynamic
performance tables which Statspack queries to gather the data
are memory resident, hence shutting down the database will
reset the values in the performance tables to 0. As Statspack
subtracts the begin-snapshot statistics from the end-snapshot
statistics, the resulting output will be invalid.
If begin and end snapshots which span an instance shutdown
are specified in the report, the report shows an appropriate error
to indicate this.
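To see which snapshots share the same instance startup (and so form a valid begin/end pair), the snapshot list can be queried directly; this sketch assumes the standard stats$snapshot columns:

```sql
-- Snapshots with the same startup_time belong to the same instance lifetime
select snap_id, snap_time, startup_time
  from stats$snapshot
 order by snap_id;
```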
Separating the phase of data gathering from producing a report allows the
flexibility of basing a report on any data points selected. For example,
it may be reasonable for the DBA to use the supplied automation script to
automate data collection every hour on the hour. If at some later point
a performance issue arises which may be better investigated by looking
at a three hour data window rather than an hour's worth of data, the
only thing the DBA need do is specify the required start point and end
point when running the report.
3.1 Running the report
To examine the change in statistics between two time periods, the
spreport.sql file is executed while being connected to the PERFSTAT
user. The spreport.sql command file is located in the rdbms/admin
directory of the Oracle Home.
Note: In an OPS environment you must connect to the instance you
wish to report on.
You will be prompted for:
1. The beginning snapshot Id
2. The ending snapshot Id
3. The name of the report text file to be created
e.g. on Unix
SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/spreport
e.g. on NT
SQL> connect perfstat/perfstat
SQL> @%ORACLE_HOME%\rdbms\admin\spreport
Example output:
SQL> connect perfstat/perfstat
Connected.
SQL> @spreport
DB Id DB Name Inst Num Instance
----------- ------------ -------- ------------
2618106428 PRD1 1 prd1
Completed Snapshots
Snap Snap
Instance DB Name Id Snap Started Level Comment
------------ ------------ ----- ----------------- ----- ----------------------
prd1 PRD1 1 11 May 2000 12:07 5
2 11 May 2000 12:08 5
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1
Enter value for end_snap:
End Snapshot Id specified: 2
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is sp_1_2. To use this name,
press <return> to continue, otherwise enter an alternative.
Enter value for report_name: <press return or enter a new name>
Using the report name sp_1_2
The report will now scroll past, and also be written to the file
specified (e.g. sp_1_2.lis).
Batch mode report generation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To run a report without being prompted, assign values to the
SQL*Plus variables which specify the begin snap id, the end snap id
and the report name before running spreport.
The variables are:
begin_snap -> specifies the begin Snapshot Id
end_snap -> specifies the end Snapshot Id
report_name -> specifies the Report output name
e.g.
on Unix:
SQL> connect perfstat/perfstat
SQL> define begin_snap=1
SQL> define end_snap=2
SQL> define report_name=batch_run
SQL> @?/rdbms/admin/spreport
spreport will no longer prompt for the above information.
3.2. Gathering Optimizer statistics on the PERFSTAT schema
For best performance when running spreport, collect optimizer statistics
for tables and indexes owned by PERFSTAT. This should be performed
whenever there is a significant change in data volumes in PERFSTAT's tables.
The easiest way to do this, is either to use dbms_utility, or dbms_stats,
and specify the PERFSTAT user:
execute dbms_utility.analyze_schema('PERFSTAT','COMPUTE');
or
execute dbms_stats.gather_schema_stats('PERFSTAT');
4. Configuring the amount of data captured
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4.1. Snapshot Level
It is possible to change the amount of information gathered by the package,
by specifying a different snapshot 'level'. In other words, the level
chosen (or defaulted) will decide the amount of data collected.
The higher the snapshot level, the more data is gathered. The default
level set by the installation is level 5. The various levels are
explained in detail in section 4.4 below.
4.2. Snapshot SQL thresholds
There are other parameters which can be configured in addition to the
snapshot level.
These parameters are used as thresholds when collecting data on SQL
statements; data will be captured on any SQL statements that breach
the specified thresholds.
Snapshot level and threshold information used by the package is stored
in the stats$statspack_parameter table.
4.3. Changing the default values for Snapshot Level and SQL Thresholds
If you wish to, you can change the default parameters used for taking
snapshots, so that they are tailored to the instance's workload.
You can do this either by:
o Taking a snapshot, and specifying the new defaults to be saved to the
database (using statspack.snap, and using the i_modify_parameter
input variable).
SQL> execute statspack.snap -
(i_snap_level=>10, i_modify_parameter=>'true');
Setting the i_modify_parameter value to true will save the new
thresholds in the stats$statspack_parameter table; these thresholds
will be used for all subsequent snapshots.
If the i_modify_parameter was set to false or if it were omitted, the
new parameter values would not be saved. Only the snapshot taken at
that point will use the specified values, any subsequent snapshots will
use the preexisting values in the stats$statspack_parameter table.
The full list of parameters which can be passed into snap
are listed in 4.6. below
o Changing the defaults immediately without taking a snapshot, using the
statspack.modify_statspack_parameter procedure. For example to change
the snapshot level to 10, and the SQL thresholds for buffer_gets and
disk_reads, the following statement can be issued:
SQL> execute statspack.modify_statspack_parameter -
(i_snap_level=>10, i_buffer_gets_th=>10000, i_disk_reads_th=>1000);
This procedure changes the values permanently, but does not
take a snapshot.
The full list of parameters which can be passed into the
modify_statspack_parameter procedure are the same as those for
the snap procedure. These are listed in 4.6. below.
4.4 Snapshot Levels - details
Levels >= 0 General performance statistics
Statistics gathered:
This level and any level greater than 0 collects general
performance statistics, such as: wait statistics, system events,
system statistics, rollback segment data, row cache, SGA,
background events, session events, lock statistics,
buffer pool statistics, parent latch statistics.
Levels >= 5 Additional data: SQL Statements
This level includes all statistics gathered in the lower level(s),
and additionally gathers the performance data on high resource
usage SQL statements.
In a level 5 snapshot, note that the time required for the snapshot
to complete is dependent on the shared_pool_size and on the number of
SQL statements in the shared pool at the time the snapshot is taken:
the larger the shared pool, the longer the time taken to complete
the snapshot.
SQL 'Thresholds'
The SQL statements gathered by Statspack are those which exceed one of
six predefined threshold parameters:
- number of executions of the SQL statement (default 100)
- number of disk reads performed by the SQL statement (default 1,000)
- number of parse calls performed by the SQL statement (default 1,000)
- number of buffer gets performed by the SQL statement (default 10,000)
- size of sharable memory used by the SQL statement (default 1m)
- version count for the SQL statement (default 20)
The values of each of these threshold parameters are used when
deciding which SQL statements to collect - if a SQL statement's
resource usage exceeds any one of the above threshold values, it
is captured during the snapshot.
The SQL threshold levels used are either those stored in the table
stats$statspack_parameter, or by the thresholds specified when
the snapshot is taken.
Levels >= 10 Additional statistics: Parent and Child latches
This level includes all statistics gathered in the lower levels, and
additionally gathers Parent and Child Latch information. Data gathered
at this level can sometimes cause the snapshot to take longer to complete
i.e. this level can be resource intensive, and should only be used
when advised by Oracle personnel.
4.5. Specifying a Session Id
If you would like to gather session statistics and wait events for a
particular session (in addition to the instance statistics and wait events),
it is possible to specify the session id in the call to Statspack. The
statistics gathered for the session will include session statistics,
session events and lock activity. The default behaviour is not to
gather session level statistics.
SQL> execute statspack.snap(i_session_id=>3);
4.6. Input Parameters for the SNAP and MODIFY_STATSPACK_PARAMETERS procedures
Parameters able to be passed in to the statspack.snap and
statspack.modify_statspack_parameter procedures are as follows:
Range of Default
Parameter Name Valid Values Value Meaning
------------------ ------------ ------- -----------------------------------
i_snap_level 0, 5, 10 5 Snapshot Level
i_ucomment Text Blank Comment to be stored with Snapshot
i_executions_th Integer >=0 100 SQL Threshold: number of times
the statement was executed
i_disk_reads_th Integer >=0 1,000 SQL Threshold: number of disk reads
the statement made
i_parse_calls_th Integer >=0 1,000 SQL Threshold: number of parse
calls the statement made
i_buffer_gets_th Integer >=0 10,000 SQL Threshold: number of buffer
gets the statement made
i_sharable_mem_th Integer >=0 1048576 SQL Threshold: amount of sharable
memory
i_version_count_th Integer >=0 20 SQL Threshold: number of versions
of a SQL statement
i_session_id Valid sid 0 (no Session Id of the Oracle Session
from session) to capture session granular
v$session statistics for
i_modify_parameter True, False False Save the parameters specified for
future snapshots?
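For example, several of the above parameters can be combined in a single call; the values shown below are illustrative only:

```sql
begin
  statspack.snap(i_snap_level    => 5,
                 i_ucomment      => 'peak OLTP load',  -- stored with the snapshot
                 i_executions_th => 500);              -- raise the executions threshold
end;
/
```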
5. Time Units used for Wait events
-----------------------------------
Although Oracle records wait time in hundredths of a second (i.e.
centiseconds), some timings in the report (especially IO times) are
converted to milliseconds to allow easier comparison with Operating
System monitoring utilities which often report timings in milliseconds.
For clarity, the time units used are specified in the column headings of
each timed column. The convention used is:
(cs) - a centisecond - which is 100th of a second
(ms) - a millisecond - which is 1000th of a second
6. Event Timings
-----------------
If timings are available, the Statspack report will order wait events by time
(in the Top-5 and background and foreground Wait Events sections).
If timed_statistics is false for the instance, but a subset of users or
programs has set timed_statistics to true dynamically, the Statspack report
output may look inconsistent, where some events have timings (those which the
individual programs/users waited for), and the remaining events do not.
The Top-5 section will also look unusual in this situation.
Optimally, timed_statistics should be set to true at the instance level for
ease of diagnosing performance problems.
7. Managing and Sharing performance data
-----------------------------------------
7.1. Sharing data via export
If you wish to share data with other sites (for example if Oracle
Support requires the raw statistics), it is possible to export
the PERFSTAT user.
An export parameter file (spuexp.par) has been supplied for this
purpose. To use this file, supply the export command with the
userid parameter, along with the export parameter file name.
e.g.
exp userid=perfstat/perfstat parfile=spuexp.par
This will create a file called spuexp.dmp and the log file
spuexp.log
If you wish to load the data into another database, use the import
command. For more information on using export and import, please
see the Oracle Utilities manual.
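As a sketch, the corresponding import on the target database might look like the following; the log file name is illustrative, and the options should be verified against the Utilities manual:

```shell
# Load the exported PERFSTAT data into another database
imp userid=perfstat/perfstat file=spuexp.dmp log=spuimp.log full=y
```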
7.2. Purging/removing unnecessary data
It is possible to purge unnecessary data from the PERFSTAT schema using
sppurge.sql. This script deletes snapshots which fall between
the begin and end range of Snapshot Id's specified.
NOTE:
It is recommended you export the schema as a backup before running this
script, either using your own export parameters, or those provided in
spuexp.par
Purging may require the use of a large rollback segment, as all data
relating to each Snapshot Id to be purged will be deleted.
To avoid rollback segment extension errors, explicitly use a large
rollback segment. This can be done by executing the 'set transaction
use rollback segment..' command before running the sppurge.sql script
(for more information on the set transaction command see the SQL reference
manual). Alternatively, to avoid rollback segment extension errors
specify a smaller range of Snapshot Id's to purge.
When sppurge is run, the instance currently connected to, along with
the available snapshots are displayed. The DBA is then prompted for the
low Snap Id and high Snap Id. All snapshots which fall within this
range will be purged.
e.g. Purging data - connect to PERFSTAT using SQL*Plus, then run the
script - an example output appears below.
SQL> connect perfstat/perfstat
SQL> set transaction use rollback segment rbig;
SQL> @sppurge
Database Instance currently connected to
========================================
Instance
DB Id DB Nam