Chinese translated version of Documentation/scheduler/sched-arch.txt
If you have any comment or update to the content, please contact the
original document maintainer directly. However, if you have a problem
communicating in English you can also ask the Chinese maintainer for
help. Contact the Chinese maintainer if this translation is outdated
or if there is a problem with the translation.
Chinese maintainer: 530999000@qq.com
---------------------------------------------------------------------
Chinese maintainer: 王明达 530999000@qq.com
Chinese translator: 王明达 530999000@qq.com
Chinese proofreader: 王明达 530999000@qq.com

The main text follows below.
---------------------------------------------------------------------
CPU Scheduler implementation hints for architecture specific code
Nick Piggin, 2005
Context switch
==============
1. Runqueue locking
By default, the switch_to arch function is called with the runqueue
locked. This is usually not a problem unless switch_to may need to
take the runqueue lock. This is usually due to a wake up operation in
the context switch. See arch/ia64/include/asm/system.h for an example.
To request the scheduler call switch_to with the runqueue unlocked,
you must `#define __ARCH_WANT_UNLOCKED_CTXSW` in a header file
(typically the one where switch_to is defined).
Unlocked context switches introduce only a very minor performance
penalty to the core scheduler implementation in the CONFIG_SMP case.
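As a concrete illustration of the opt-in described above, the define is placed in the architecture's own header before the scheduler core is built. A minimal sketch (the path below is hypothetical, standing in for wherever the architecture defines switch_to):

```c
/* In the arch header that defines switch_to(), e.g.
 * arch/xxx/include/asm/system.h (hypothetical path).
 *
 * Asks the scheduler core to call switch_to() with the
 * runqueue unlocked, at a small cost in the CONFIG_SMP case. */
#define __ARCH_WANT_UNLOCKED_CTXSW
```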
CPU idle
========
Your cpu_idle routines need to obey the following rules:
1. Preempt should now be disabled over idle routines. It should only
be enabled to call schedule(), then disabled again.
2. need_resched/TIF_NEED_RESCHED is only ever set, and will never
be cleared until the running task has called schedule(). Idle
threads need only ever query need_resched, and may never set or
clear it.
3. When cpu_idle finds (need_resched() == 'true'), it should call
schedule(). It should not call schedule() otherwise.
4. The only time interrupts need to be disabled when checking
need_resched is if we are about to sleep the processor until
the next interrupt (this doesn't provide any protection of
need_resched, it prevents losing an interrupt).
4a. Common problem with this type of sleep appears to be:

	local_irq_disable();
	if (!need_resched()) {
		local_irq_enable();
		*** resched interrupt arrives here ***
		__asm__("sleep until next interrupt");
	}
5. TIF_POLLING_NRFLAG can be set by idle routines that do not
need an interrupt to wake them up when need_resched goes high.
In other words, they must be periodically polling need_resched,
although it may be reasonable to do some background work or enter
a low CPU priority.
5a. If TIF_POLLING_NRFLAG is set, and we do decide to enter
an interrupt sleep, it needs to be cleared then a memory
barrier issued (followed by a test of need_resched with
interrupts disabled, as explained in 3).
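The ordering in 5a can be sketched as kernel-style pseudocode. This is a sketch, not a drop-in idle routine: clear_thread_flag()/set_thread_flag() and safe_halt() stand in for whatever the architecture actually provides, and the point is only the clear/barrier/test sequence:

```c
	clear_thread_flag(TIF_POLLING_NRFLAG);	/* stop advertising polling */
	smp_mb();		/* order the clear before the need_resched test */
	local_irq_disable();
	if (!need_resched())
		safe_halt();	/* sleep; a resched interrupt now wakes us */
	else
		local_irq_enable();
	set_thread_flag(TIF_POLLING_NRFLAG);	/* back to polling mode */
```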
arch/x86/kernel/process.c has examples of both polling and
sleeping idle functions.
Possible arch/ problems
=======================
Possible arch problems I found (and either tried to fix or didn't):
h8300 - Is such sleeping racy vs interrupts? (See #4a).
The H8/300 manual I found indicates yes, however disabling IRQs
over the sleep mean only NMIs can wake it up, so can't fix easily
without doing spin waiting.
ia64 - is safe_halt call racy vs interrupts? (does it sleep?) (See #4a)
sh64 - Is sleeping racy vs interrupts? (See #4a)
sparc - IRQs on at this point(?), change local_irq_save to _disable.
- TODO: needs secondary CPUs to disable preempt (See #1)