How to handle "Private Strand Flush Not Complete" messages in the alert log

This is the same Nanjing customer's database again; the alert.log also reported the following messages:

Fri Oct 17 19:59:51 2014
Thread 1 cannot allocate new log, sequence 4722
Private strand flush not complete
  Current log# 1 seq# 4721 mem# 0: /oradata/sgomp5/redo01.log
Thread 1 advanced to log sequence 4722 (LGWR switch)
  Current log# 2 seq# 4722 mem# 0: /oradata/sgomp5/redo02.log

I found an article about this issue in the MOS community:

Historically, every user session wrote its changes to the redo log buffer, and changes from the redo log buffer were flushed to the redo logs on disk by LGWR. As the number of users increased, the race and the need to get latches for redo allocation and redo copy on the public redo buffer increased.
So, starting from 10g, Oracle came up with the concept of private redo (x$kcrfstrand) and in-memory undo (x$ktifp). Every session has a private redo area that it writes to; a (small) batch of changes is then written to the public redo buffer and finally from the public redo log buffer to the redo log files on disk. This mechanism reduces the gets/sleeps on the redo copy and redo allocation latches on the public redo buffer and hence makes the architecture more scalable.

It is also worth noting that Oracle falls back to the old redo mechanism if a transaction is too big (with lots of changes) and the changes made by that transaction cannot fit into the private redo buffers.
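As a rough way to see whether an instance is actually suffering on these latches, the statistics in v$latch can be checked; the x$ views mentioned above can also be counted, but only as SYS. This is a minimal sketch, not a tuning recipe:

SQL> select name, gets, misses, sleeps, immediate_gets, immediate_misses
       from v$latch
      where name in ('redo allocation', 'redo copy');
SQL> select count(*) from x$kcrfstrand;   -- run as SYS; internal, version-dependent view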

When the database switches log files, all private strands must be flushed into the current log before the switch can proceed. This message means that, at the moment we tried to switch, not all of the redo had been written to the log yet. It is somewhat similar to a "checkpoint not complete", except that it only involves the redo that is being written to the log. The log cannot be switched until all of that redo has been written.
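Because the switch has to wait until all of that redo reaches the CURRENT group, it can also be useful to look at the state and size of the online log groups while this is happening. A simple sketch (columns chosen for illustration):

SQL> select group#, thread#, sequence#, bytes/1024/1024 size_mb, status
       from v$log;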

Private strands were introduced in 10gR2 to deal with the redo latches (redo allocation latch). They are a mechanism that allows processes to write redo into the redo buffer more efficiently by using multiple allocation latches, and they are related to the log_parallelism parameter that appeared in 9i. The strand concept was introduced to ensure that the instance's redo generation rate is optimal and that, when some form of redo contention appears, the number of strands can be adjusted dynamically to compensate. The number of strands initially allocated depends on the number of CPUs, with a minimum of two strands, one of which is used for active redo generation.
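To see how this is configured on a particular instance, one hedged option is to dump the log_parallelism-related hidden parameters (run as SYS; x$ksppi/x$ksppcv are undocumented internal views, and the exact parameter names, e.g. _log_parallelism_max, vary by version):

SQL> select a.ksppinm name, b.ksppstvl value
       from x$ksppi a, x$ksppcv b
      where a.indx = b.indx
        and a.ksppinm like '%log_parallelism%';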


For a large OLTP system the redo generation volume is very high, so these strands are activated when foreground processes run into redo contention. Shared strands always coexist with multiple private strands. Oracle 10g made some significant changes to the redo (and undo) mechanism in order to reduce contention. Redo is no longer recorded in real time; it is first recorded in a private area and flushed into the redo log buffer at commit time. With this new mechanism, once a user process has obtained a private strand, the redo is no longer kept in the PGA, so the redo copy latch step is no longer needed.


If a new transaction cannot obtain the redo allocation latch of a private strand, it falls back to the old redo buffer mechanism and requests to write into a shared strand. Under the new mechanism, when redo is written out to the logfile, LGWR must write out the contents of both the shared strands and the private strands. When a redo flush occurs, the redo allocation latches of all public strands must be acquired, the redo copy latches of all public strands must be checked, and all private strands containing active transactions must be held.
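The number of "redo allocation" latch children gives a rough indication of how many strands (public plus private) the instance has allocated; this is only a sketch based on that assumption:

SQL> select count(*) from v$latch_children where name = 'redo allocation';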

In practice this message can simply be ignored, unless there is a noticeable time gap between the "cannot allocate new log" message and the "advanced to log sequence" message.
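Before worrying about the message itself, it is also worth checking how frequently the logs switch, since very frequent switching makes the message show up more often. A quick sketch against v$log_history:

SQL> select to_char(first_time, 'yyyy-mm-dd hh24') hr, count(*) switches
       from v$log_history
      group by to_char(first_time, 'yyyy-mm-dd hh24')
      order by 1;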

If you want to keep the "Private strand flush not complete" messages out of alert.log, you can increase the value of the db_writer_processes parameter, because DBWn triggers LGWR to write redo to the logfile; with several DBWn processes writing together, flushing the redo buffer to the redo logfile can be sped up.


You can change it with the following command:
SQL> alter system set db_writer_processes=4 scope=spfile;  -- this is a static parameter; the database must be restarted for it to take effect

Note that the number of DBWR processes should be roughly in line with the number of logical CPUs. In addition, when Oracle finds that a single DB_WRITER_PROCESS cannot keep up with the work, it will also increase the number automatically, provided the maximum allowed value has been set in the initialization parameters.
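Before changing anything, the current value can be compared with the CPU count using simple SQL*Plus checks:

SQL> show parameter db_writer_processes
SQL> show parameter cpu_count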

Some notes on the DB_WRITER_PROCESSES and DBWR_IO_SLAVES parameters:

DB_WRITER_PROCESSES replaces the Oracle7 parameter DB_WRITERS and specifies the initial number of database writer processes for an instance. If you use DBWR_IO_SLAVES, only one database writer process will be used, regardless of the setting for DB_WRITER_PROCESSES

DB_WRITER_PROCESSES replaces the DB_WRITERS parameter of Oracle 7 and specifies the number of database writer (DBWR) processes for the instance. If DBWR_IO_SLAVES is also set (its default is 0), only one DBWn process is actually used and the others are ignored.
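To confirm how the two parameters are currently set on an instance (the default for dbwr_io_slaves is 0), something like:

SQL> select name, value from v$parameter
      where name in ('db_writer_processes', 'dbwr_io_slaves');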

DBWR_IO_SLAVES
If it is not practical to use multiple DBWR processes, then Oracle provides a facility whereby the I/O load can be distributed over multiple slave processes. The DBWR process is the only process that scans the buffer cache LRU list for blocks to be written out. However, the I/O for those blocks is performed by the I/O slaves. The number of I/O slaves is determined by the parameter DBWR_IO_SLAVES.

When a single DBWR process is used, Oracle provides the option of using multiple I/O slave processes to simulate asynchronous I/O and take over the work that DBWR would otherwise do itself (writing the blocks on the LRU list out to the data files). The number of slaves is specified by the DBWR_IO_SLAVES parameter.

DBWR_IO_SLAVES is intended for scenarios where you cannot use multiple DB_WRITER_PROCESSES (for example, where you have a single CPU). I/O slaves are also useful when asynchronous I/O is not available, because the multiple I/O slaves simulate nonblocking, asynchronous requests by freeing DBWR to continue identifying blocks in the cache to be written. Asynchronous I/O at the operating system level, if you have it, is generally preferred.

DBWR_IO_SLAVES is typically used in single-CPU scenarios, because on a single CPU configuring multiple DBWR processes brings no benefit. I/O slaves are also useful when the operating system does not provide asynchronous I/O: they simulate non-blocking, asynchronous requests and free DBWR to keep identifying blocks in the cache to be written. If asynchronous I/O is available at the operating system level, enabling it is generally the preferred option.

DBWR I/O slaves are allocated immediately following database open when the first I/O request is made. The DBWR continues to perform all of the DBWR-related work, apart from performing I/O. I/O slaves simply perform the I/O on behalf of DBWR. The writing of the batch is parallelized between the I/O slaves.

DBWR I/O slaves are allocated when the first I/O request is made after the database is opened. The DBWR process continues to perform all of its own DBWR-related work apart from the actual I/O, which is handed over to the I/O slaves, and the writes of each batch are parallelized across the I/O slaves.

Choosing Between Multiple DBWR Processes and I/O Slaves
Configuring multiple DBWR processes benefits performance when a single DBWR process is unable to keep up with the required workload. However, before configuring multiple DBWR processes, check whether asynchronous I/O is available and configured on the system. If the system supports asynchronous I/O but it is not currently used, then enable asynchronous I/O to see if this alleviates the problem. If the system does not support asynchronous I/O, or if asynchronous I/O is already configured and there is still a DBWR bottleneck, then configure multiple DBWR processes.

On choosing between multiple DBWR processes and I/O slaves:
Configuring multiple DBWR processes helps when a single DBWR process cannot keep up with the write workload. Before configuring them, first check whether asynchronous I/O is available and configured on the OS: if it is supported but not enabled, enable it first; if the system does not support asynchronous I/O, or asynchronous I/O is already configured and there is still a DBWR bottleneck, then configure multiple DBWR processes.
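A quick way to check how asynchronous I/O is currently configured (a sketch; whether these settings actually give you AIO also depends on the OS and the storage type):

SQL> show parameter disk_asynch_io
SQL> show parameter filesystemio_options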

Using multiple DBWRs parallelizes the gathering and writing of buffers. Therefore, multiple DBWn processes should deliver more throughput than one DBWR process with the same number of I/O slaves. For this reason, the use of I/O slaves has been deprecated in favor of multiple DBWR processes. I/O slaves should only be used if multiple DBWR processes cannot be configured.

Enabling multiple DBWR processes means more dirty buffers can be gathered and written to the data files in parallel, and multiple DBWn processes deliver more throughput than one DBWR process with the same number of I/O slaves. Therefore, once multiple DBWR processes are enabled, DBWR_IO_SLAVES should no longer be configured (if it was non-zero before, set it back to 0).
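If you do switch to multiple DBWR processes, the corresponding change would look roughly like this (both are static parameters, so a restart is required; the value 4 is only an example and should stay at or below the CPU count, as noted above):

SQL> alter system set dbwr_io_slaves=0 scope=spfile;
SQL> alter system set db_writer_processes=4 scope=spfile;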

Summary:

DBWR_IO_SLAVES is mainly used to simulate an asynchronous environment; on an OS that does not support asynchronous I/O it can improve I/O throughput.
Multiple DBWR processes can gather dirty blocks from the data buffer and write them to disk in parallel. With a single DBWR plus multiple I/O slaves, only the one DBWR scans the data buffer for dirty blocks, while the multiple I/O slaves perform the writes in parallel.
If the system supports AIO (disk_asynch_io=true), there is normally no need to configure multiple DBWR processes or I/O slaves.

On a machine with multiple CPUs, DB_WRITER_PROCESSES is recommended, because there is no need to simulate asynchronous I/O there, but the number of processes should not exceed the number of CPUs. On a single-CPU machine, DBWR_IO_SLAVES is recommended to simulate asynchronous I/O and improve database performance.

