Reposted from http://blog.csdn.net/qinzhonghello/archive/2008/12/20/3564984.aspx
The BKL (Big Kernel Lock) is a global spinlock that was introduced mainly to ease the transition from Linux's original SMP implementation to fine-grained locking.
Characteristics of the BKL:
A task holding the BKL can still sleep. The lock is automatically dropped when the task is scheduled out and reacquired when the task is scheduled back in. Of course, this does not mean it is always safe to sleep while holding the BKL; it merely means that you can do so without causing a deadlock.
The BKL is a recursive lock. A single process can acquire it multiple times without deadlocking, unlike an ordinary spinlock.
The BKL can be used only in process context.
The BKL is harmful.
Use of the BKL is discouraged in the kernel. A thread of execution may acquire the lock recursively with lock_kernel(), but it must then call unlock_kernel() the same number of times; the lock is actually released only after the final unlock operation.
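For illustration, here is a minimal sketch of how legacy code typically uses these calls; the function legacy_operation() and its body are hypothetical, not taken from the kernel:

#include <linux/smp_lock.h>     /* lock_kernel() / unlock_kernel() */

/* Hypothetical legacy routine serialized by the BKL. */
static void legacy_operation(void)
{
        lock_kernel();          /* first call actually takes the lock         */
        lock_kernel();          /* recursive call only bumps a per-task count */

        /* ... touch legacy structures that the BKL is meant to cover ... */

        unlock_kernel();        /* balances the second lock_kernel()          */
        unlock_kernel();        /* count drops back below zero: lock released */
}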
From lib/kernel_lock.c:
/*
 * These are the BKL spinlocks - we try to be polite about preemption.
 * If SMP is not on (ie UP preemption), this all goes away because the
 * _raw_spin_trylock() will always succeed.
 */
#ifdef CONFIG_PREEMPT
static inline void __lock_kernel(void)
{
        preempt_disable();
        if (unlikely(!_raw_spin_trylock(&kernel_flag))) {
                /*
                 * If preemption was disabled even before this
                 * was called, there's nothing we can be polite
                 * about - just spin.
                 */
                if (preempt_count() > 1) {
                        _raw_spin_lock(&kernel_flag);
                        return;
                }

                /*
                 * Otherwise, let's wait for the kernel lock
                 * with preemption enabled..
                 */
                do {
                        preempt_enable();
                        while (spin_is_locked(&kernel_flag))
                                cpu_relax();
                        preempt_disable();
                } while (!_raw_spin_trylock(&kernel_flag));
        }
}

#else

/*
 * Non-preemption case - just get the spinlock
 */
static inline void __lock_kernel(void)
{
        _raw_spin_lock(&kernel_flag);
}
#endif

static inline void __unlock_kernel(void)
{
        /*
         * the BKL is not covered by lockdep, so we open-code the
         * unlocking sequence (and thus avoid the dep-chain ops):
         */
        _raw_spin_unlock(&kernel_flag);
        preempt_enable();
}

/*
 * Getting the big kernel lock.
 *
 * This cannot happen asynchronously, so we only need to
 * worry about other CPU's.
 */
void __lockfunc lock_kernel(void)
{
        int depth = current->lock_depth+1;

        if (likely(!depth))
                __lock_kernel();
        current->lock_depth = depth;
}

void __lockfunc unlock_kernel(void)
{
        BUG_ON(current->lock_depth < 0);
        if (likely(--current->lock_depth < 0))
                __unlock_kernel();
}
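The recursion is handled through current->lock_depth, which is initialized to -1 for every task: the first lock_kernel() brings the depth to 0 and takes the underlying spinlock, nested calls only increment the counter, and only the unlock_kernel() that drops the depth back below zero releases the spinlock. A rough user-space model of that bookkeeping (hypothetical code, using a pthread mutex in place of kernel_flag) looks like this:

#include <assert.h>
#include <pthread.h>

static pthread_mutex_t flag = PTHREAD_MUTEX_INITIALIZER;  /* stands in for kernel_flag */
static __thread int lock_depth = -1;                      /* mirrors task_struct->lock_depth */

static void mock_lock_kernel(void)
{
        int depth = lock_depth + 1;

        if (depth == 0)                 /* outermost call: take the real lock */
                pthread_mutex_lock(&flag);
        lock_depth = depth;             /* nested calls only bump the counter */
}

static void mock_unlock_kernel(void)
{
        assert(lock_depth >= 0);        /* analogue of the BUG_ON() above */
        if (--lock_depth < 0)           /* last matching unlock: drop the lock */
                pthread_mutex_unlock(&flag);
}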
The BKL likewise disables kernel preemption while it is held. In most cases, the BKL ends up protecting code rather than data.
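When BKL users are converted to fine-grained locking, the usual fix is to give the data its own lock so that the lock-to-data relationship becomes explicit. A minimal sketch of that pattern (foo_lock and foo_count are made-up names, not from any real driver):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_lock);       /* protects foo_count, nothing else */
static unsigned long foo_count;

static void foo_increment(void)
{
        spin_lock(&foo_lock);           /* lock the data, not the code path */
        foo_count++;
        spin_unlock(&foo_lock);
}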