Analyzing Oracle archived logs with LogMiner

Use case:

The database produced a large volume of archived logs during a certain period, and you want to know exactly what operations were performed on the data during that window.

I. Check that archived logs exist for the target time window

set pagesize 900;

set linesize 900;

col name for a55;

ALTER SESSION SET NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS";

select name, first_time, next_time,SEQUENCE# from v$archived_log   where first_time > to_date('2021-11-15 16:00:00', 'yyyy-mm-dd hh24:mi:ss') and first_time < to_date('2021-11-15 16:30:00', 'yyyy-mm-dd hh24:mi:ss');

II. Add the archive log files

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253871_1013686939.dbf',dbms_logmnr.new);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253872_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253873_1013686939.dbf',dbms_logmnr.addfile);
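When the window spans many sequences, typing one ADD_LOGFILE call per file is tedious and error-prone. As a rough sketch (the directory, thread number, and resetlogs id below are taken from the file names above; adjust them for your environment), a small shell function can generate the EXECUTE lines for a whole sequence range, ready to paste into SQL*Plus:

```shell
#!/bin/sh
# Emit one DBMS_LOGMNR.ADD_LOGFILE call per archive sequence in a range.
# Arguments: archive directory, first sequence, last sequence, resetlogs id.
gen_addfile() {
  arch_dir=$1; first=$2; last=$3; reset_id=$4
  opt=dbms_logmnr.new                    # the first file starts a new list
  for seq in $(seq "$first" "$last"); do
    echo "EXECUTE DBMS_LOGMNR.ADD_LOGFILE('${arch_dir}/1_${seq}_${reset_id}.dbf',${opt});"
    opt=dbms_logmnr.addfile              # later files are appended to the list
  done
}

# Example: the three files added by hand above.
gen_addfile /home/u01/arch 253871 253873 1013686939
```

Redirect the output to a script file (`gen_addfile ... > add_logs.sql`) and run it with `@add_logs.sql` in SQL*Plus.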

III. Start LogMiner

execute dbms_logmnr.start_logmnr;

You can also specify additional options, such as a time range:

execute dbms_logmnr.start_logmnr(StartTime => to_date('2021-11-15 16:00:00','YYYY-MM-DD HH24:MI:SS'), EndTime => to_date('2021-11-15 16:30:00','YYYY-MM-DD HH24:MI:SS'));

IV. Output the results

The results can be spooled to a file:

-- Write the output to test.txt in the current directory
spool test.txt

select sql_redo, info from v$logmnr_contents where UPPER(sql_redo) like '%UPDATE%' or UPPER(sql_redo) like '%INSERT%' or UPPER(sql_redo) like '%DELETE%';   -- extract the DML statements

spool off;

Alternatively, save the results into a table. (If the log volume is large, do not create this table: it materializes the entire contents of v$logmnr_contents.)

create table temp_result1 as  select * from v$logmnr_contents;

V. End the LogMiner session

execute dbms_logmnr.end_logmnr;

VI. Analyze the results

Examine the data in test.txt or in the temp_result1 table.
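For a quick overview before reading statement by statement, the spooled file can be summarized with standard text tools. A minimal sketch (assuming test.txt is the spool file produced above, with each SQL_REDO statement starting on its own line) that counts DML by type:

```shell
#!/bin/sh
# Count INSERT/UPDATE/DELETE statements in a spooled LogMiner output file.
count_dml() {
  file=$1
  for op in INSERT UPDATE DELETE; do
    # grep -c counts matching lines; -i makes the match case-insensitive
    n=$(grep -ci "^[[:space:]]*${op}" "$file" || true)
    printf '%s: %s\n' "$op" "$n"
  done
}

if [ -f test.txt ]; then count_dml test.txt; fi
```

This only gives a rough count (a statement wrapped mid-line by SQL*Plus is counted once, by its first line); for exact figures, query the OPERATION column of the temp_result1 table instead.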

If the output shows only an object ID and the object name is unknown, look up the name with:

select owner, object_name from dba_objects where object_id = xxxxx;

VII. Worked example

Suppose you want to find out which DML operations the database performed between 16:00 and 16:30 on 2021-11-15.

1. Check that archived logs exist for the target window

set pagesize 900;

set linesize 900;

col name for a55;

ALTER SESSION SET NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS";

select name, first_time, next_time,SEQUENCE# from v$archived_log   where first_time > to_date('2021-11-15 16:00:00', 'yyyy-mm-dd hh24:mi:ss') and first_time < to_date('2021-11-15 16:30:00', 'yyyy-mm-dd hh24:mi:ss');

The query returns the matching archive logs (sequences 253871 through 253906).

2. Mine the logs

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253871_1013686939.dbf',dbms_logmnr.new);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253872_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253873_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253874_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253875_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253876_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253877_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253878_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253879_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253880_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253881_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253882_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253883_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253884_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253885_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253886_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253887_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253888_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253889_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253890_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253891_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253892_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253893_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253894_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253895_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253896_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253897_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253898_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253899_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253900_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253901_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253902_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253903_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253904_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253905_1013686939.dbf',dbms_logmnr.addfile);

EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/home/u01/arch/1_253906_1013686939.dbf',dbms_logmnr.addfile);

3. Start LogMiner

PROMPT start LogMiner;

execute dbms_logmnr.start_logmnr;

4. Output the results

-- Write the output to test.txt in the current directory
spool test.txt

select sql_redo, info from v$logmnr_contents where UPPER(sql_redo) like '%UPDATE%' or UPPER(sql_redo) like '%INSERT%' or UPPER(sql_redo) like '%DELETE%';   -- extract the DML statements

spool off;

5. End LogMiner

EXECUTE DBMS_LOGMNR.END_LOGMNR();

6. Analyze the results

Work through test.txt to see exactly which operations were performed.
