Latches and Latch Contention

http://learningoracle.wordpress.com/2008/02/19/latches-and-latch-contention/


Posted in Reading Material, Write-Ups by ealing on February 19, 2008
Tags: concurrency, latches, _spin_count

I’ve taken a short detour into the world of latch contention. I read two papers, Resolving Oracle Latch Contention and Conquering Oracle Latch Contention. The stand-out thing about these two papers is that, modulo a caveat about CPU usage, they recommend exactly opposite ways of reducing latch contention! One of Conquering’s suggestions is to decrease the value of the parameter _spin_count, but Resolving’s main suggestion is to increase the value of _spin_count! Let me try to summarise what I’ve learned about latches before explaining why I think they’ve come to different conclusions about how to deal with latch contention problems.

What is a Latch?

Latches are “lightweight” mechanisms that are used to serialise access to memory structures. It is not possible to queue for a latch – a process either gets the latch or does not. Memory structures protected by latches include things like the buffer cache, the java pool and the library cache.

Behaviour on Latch Get Failure

When a process tries to obtain a latch, it may succeed or fail. On most hardware, the latch get operation is implemented as an atomic “test-and-set” instruction. Unlike failure to obtain a lock (which may result in a process sleeping or failing), failure to obtain a latch does not result in the process giving up the CPU. Exactly what happens when a latch is not obtained depends on the mode in which the latch was requested. If it was requested in “immediate” mode then the requesting process resumes execution immediately; it may then fail or carry out another action. If the latch was requested in “willing to wait” mode, then the process does not resume immediately if the latch request fails. Instead, the process starts the “spin and sleep” routine (as it’s named in Conquering).
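The two outcomes of an immediate-mode get can be sketched in a few lines of Python. This is a toy model, not Oracle’s implementation: Python has no raw test-and-set instruction, so a `threading.Lock` stands in for the atomic hardware primitive.

```python
import threading

class Latch:
    """Toy model of a latch: a single flag acquired via an atomic
    test-and-set. A threading.Lock stands in for the hardware
    instruction; the latch is either obtained or it is not."""

    def __init__(self):
        self._flag = threading.Lock()

    def try_get(self):
        """'Immediate' (no-wait) mode: return True if the latch was
        obtained, False otherwise -- the caller resumes either way,
        without giving up the CPU."""
        return self._flag.acquire(blocking=False)

    def release(self):
        self._flag.release()

latch = Latch()
assert latch.try_get() is True    # first getter succeeds
assert latch.try_get() is False   # holder exists, so this fails at once
latch.release()
assert latch.try_get() is True    # available again after release
```

In immediate mode the caller decides what to do with a `False` result; willing-to-wait mode instead enters the spin-and-sleep routine described next.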

Because of the speed at which latches are obtained and released, the latch-seeking process can remain on the CPU, and is said to be “spin-locked”. Spin-locked processes are consuming CPU, and may be preventing other processes from executing. If a spin-locked process still has not obtained the latch after _spin_count attempts, it will relinquish the CPU and sleep, allowing other processes to resume execution. The sleeping process will post the “latch free” wait.

When the sleeping process is resumed on the CPU, it will again request the latch. If it fails, it will spin again, and request again, until it has once more made _spin_count attempts to acquire the latch, at which point it will sleep again. The first sleep while waiting for a particular latch will be 10ms long, as will the second. Sleep lengths follow an exponential pattern, so the third and fourth are 20ms long, the fifth and sixth are 40ms long, and so on.
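The whole spin-and-sleep routine, including the sleep schedule just described (10ms, 10ms, 20ms, 20ms, 40ms, 40ms, …), can be sketched as follows. The structure of the loop is an illustration of the behaviour described above, not Oracle source code:

```python
import itertools

SPIN_COUNT = 2000  # default value of _spin_count, per the papers

def sleep_ms(nth_sleep):
    """Length of the nth sleep while waiting for one latch:
    10, 10, 20, 20, 40, 40, ... -- doubling every two sleeps."""
    return 10 * 2 ** ((nth_sleep - 1) // 2)

def willing_to_wait_get(try_get, sleep):
    """Sketch of the 'spin and sleep' routine: spin up to SPIN_COUNT
    attempts, then relinquish the CPU and sleep with exponentially
    growing sleeps, repeating until the latch is acquired."""
    for nth in itertools.count(1):
        for _ in range(SPIN_COUNT):
            if try_get():
                return              # got the latch while spinning
        sleep(sleep_ms(nth))        # posts the 'latch free' wait here

# The first six sleeps follow the pattern from the text:
assert [sleep_ms(n) for n in range(1, 7)] == [10, 10, 20, 20, 40, 40]
```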

Parent and Child Latches

Some latches, such as the cache buffers chains latch protecting cached data blocks, have multiple children. In these cases the parent latch is not, as far as I can see, a useful object. It exists to aggregate statistics in dynamic performance views. Where child latches exist, they offer finer granularity of locking than would otherwise be available.
For instance, if there are N child latches protecting blocks in the buffer cache, then there can be up to N processes altering these blocks concurrently. Because each block “belongs” with a particular latch, it is still possible for running processes to block each other, but this becomes less likely as the granularity of the latching increases.
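The block-to-child-latch mapping can be illustrated with a toy hash. The child count and the modulo hash are assumptions for the sketch, not Oracle’s actual values or hash function:

```python
N_CHILDREN = 1024  # illustrative child-latch count, not an Oracle default

def child_latch_for(block_address):
    """Toy model: each buffer-cache block 'belongs' to exactly one
    child latch, chosen by hashing the block address. Two processes
    block each other only when their blocks map to the same child,
    which is roughly a 1-in-N_CHILDREN chance for unrelated blocks,
    so contention falls as the latching gets finer-grained."""
    return hash(block_address) % N_CHILDREN

# The mapping is deterministic: the same block always requires the
# same child latch, so access to any one block is still serialised.
assert child_latch_for(0x4A3F) == child_latch_for(0x4A3F)
```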

Oracle Wait Interface[4] is confusing on the exact behaviour of processes trying to obtain latches. On page 145, it says:

If a process fails to get one of the child latches in the first attempt, it will ask for the next, also with the no-wait mode. The willing-to-wait mode is used only on the last latch and when all no-wait requests against other child latches have failed.

That behaviour wouldn’t make sense if the latch requested was protecting a block in the buffer cache. After all, if any of the N child latches was sufficient, then there could be N processes concurrently changing the same block! It’s been suggested to me that the above is wrong in the general case, but may be true in some specific cases like the redo copy latch. This seems plausible to me, but I can’t verify it.
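For cases where any child latch really is sufficient (such as, perhaps, the redo copy latch), the behaviour the book describes would look like this sketch. The `TryLatch` class is a stand-in invented for the example, and the final busy-wait abbreviates the real spin-and-sleep routine:

```python
class TryLatch:
    """Stand-in latch supporting only a no-wait get."""
    def __init__(self, free=True):
        self.free = free

    def try_get(self):
        if self.free:
            self.free = False
            return True
        return False

def get_any_child(children):
    """Sketch of the behaviour quoted from Oracle Wait Interface:
    request each child in no-wait mode; only after every no-wait
    request has failed, wait on the last child."""
    for child in children[:-1]:
        if child.try_get():            # no-wait attempt
            return child
    while not children[-1].try_get():  # stand-in for willing-to-wait
        pass                           # (real code would spin and sleep)
    return children[-1]

kids = [TryLatch(free=False), TryLatch(free=True), TryLatch(free=True)]
assert get_any_child(kids) is kids[1]  # first free child wins
```

This only makes sense when the children are interchangeable, which is exactly why it cannot apply to child latches that each protect specific buffer-cache blocks.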

Latch-Related System Information

There are a number of latch-related dynamic performance views. These include:

  • v$latch – lists all latches available in the system
  • v$latchname – a straightforward translation from number to name
  • v$latch_parent / v$latch_children – nearly identical columns to v$latch, but de-aggregated
  • v$latch_misses – details on where latch acquisition attempts have been missed (but see Ixora article)
  • v$latchholder – contains a SID and PID for each held latch
  • v$event_histogram – contains a histogram of wait times, grouped by event (including latch waits) and bucketed by duration in milliseconds

Dealing With Contention

Resolving makes the point that latch contention must be considered in the context of the whole system. If latch free waits are only a small proportion of the total waits, then there’s little to be gained by trying to eliminate them. Furthermore, it’s not the latch miss ratio that matters, but the total number of misses.
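A quick arithmetic sketch (with made-up numbers) shows why the total miss count matters more than the ratio:

```python
# Illustrative, invented numbers: a busy system with a "good" miss
# ratio can still rack up vastly more misses than a quiet system
# with a "bad" one -- and it is each miss that burns CPU spinning.
busy  = {"gets": 1_000_000_000, "misses": 10_000_000}  # 1% miss ratio
quiet = {"gets": 10_000,        "misses": 1_000}       # 10% miss ratio

def miss_ratio(stats):
    return stats["misses"] / stats["gets"]

assert miss_ratio(busy) < miss_ratio(quiet)   # busy looks "healthier"...
assert busy["misses"] > quiet["misses"]       # ...yet misses 10,000x more
```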

Conquering takes an understanding-led approach. When dealing with latch contention, it suggests three questions:

  1. What is the use of the memory structure protected by this latch?
  2. Why do processes want to access this latch so often?
  3. Why do they hold the latch so long?

After understanding the cause of the problem, possible solutions should present themselves.

_spin_count: Up or Down?

Resolving points out that _spin_count has been an undocumented parameter since version 8i (released in 1999). Its default value, 2000, has apparently not changed in that period. Since CPUs are much faster today than they were in 1999, the time taken by a spinning process to request a latch 2000 times is much lower today than it was when the default was established. So a process that cannot obtain a latch spends a much greater proportion of its time sleeping on modern hardware than it would have on older hardware. This is demonstrated with the results of an experiment, where the optimum value of _spin_count was found to be approximately 12000.
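A back-of-envelope calculation makes the argument concrete. The per-iteration costs below are assumptions chosen only to illustrate the scaling, not measured values from either paper:

```python
# Assumed costs: ~1us per spin iteration on 1999-era hardware versus
# ~50ns today. 2000 spins then shrink from ~2ms to ~0.1ms, while the
# first sleep is still a fixed 10ms -- so on modern hardware the
# sleep dominates the wait far more than it did when the default
# _spin_count of 2000 was chosen.
SPIN_COUNT = 2000
FIRST_SLEEP_MS = 10.0

def spin_time_ms(ns_per_iteration):
    """Wall-clock time to exhaust SPIN_COUNT spin attempts."""
    return SPIN_COUNT * ns_per_iteration / 1_000_000

old, new = spin_time_ms(1000), spin_time_ms(50)
assert (old, new) == (2.0, 0.1)
# The sleep-to-spin ratio grows 20x under these assumed costs:
assert FIRST_SLEEP_MS / new == 20 * (FIRST_SLEEP_MS / old)
```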

Before advising against raising _spin_count, Conquering notes that by the time the DBA sees a latch free wait, the posting process has already consumed CPU time waiting for the latch. The next step in the author’s reasoning is the one I don’t quite get:

“With this in mind, you may be able to see why it is very common to have latch contention when there is an operating system CPU bottleneck.”

This seems back-to-front to me. I can see that:

more latch contention => more CPU consumption by spinning processes => CPU bottleneck

What I can’t see is that a CPU bottleneck would lead to more latch contention, unless a lot of sleeping CPU-starved processes are holding latches required by processes on the CPU. In any case, I do agree that you probably won’t want to increase _spin_count when there is a CPU bottleneck: this certainly would increase CPU demand, but may not increase the rate at which useful work is done.

Reposted from: http://blog.itpub.net/24559902/viewspace-1037943/
