In Linux, a process switch goes through the call chain:
schedule() –> context_switch() –> switch_to –> __switch_to()
First, switch_to:
#define switch_to(prev,next,last) do {                                  \
        unsigned long esi,edi;                                          \
        asm volatile("pushfl\n\t"                                       \
                     "pushl %%ebp\n\t"                                  \
                     "movl %%esp,%0\n\t"        /* save ESP */          \
                     "movl %5,%%esp\n\t"        /* restore ESP */       \
                     "movl $1f,%1\n\t"          /* save EIP */          \
                     "pushl %6\n\t"             /* restore EIP */       \
                     "jmp __switch_to\n"                                \
                     "1:\t"                                             \
                     "popl %%ebp\n\t"                                   \
                     "popfl"                                            \
                     :"=m" (prev->thread.esp),"=m" (prev->thread.eip),  \
                      "=a" (last),"=S" (esi),"=D" (edi)                 \
                     :"m" (next->thread.esp),"m" (next->thread.eip),    \
                      "2" (prev), "d" (next));                          \
} while (0)
Because switch_to is written as inline assembly, it helps to read it as plain assembly (the syntax differs slightly). Step by step:

pushfl
Push the eflags register onto prev's kernel stack.

pushl %ebp
switch_to is a macro expanded inside context_switch(), so this pushes context_switch()'s frame base onto prev's kernel stack.

movl %esp, [prev->thread.esp]
Save the current stack-top position (ESP) into prev->thread.esp.

movl [next->thread.esp], %esp
Load next->thread.esp into the CPU's esp register; from this instruction on, we are running on next's kernel stack.

movl $1f, [prev->thread.eip]
Record the address of the label 1: into prev->thread.eip:

1:
popl %ebp
popfl

This lays the groundwork for resuming prev later; below we will see the same label doing its job from next's side.

pushl [next->thread.eip]
Push next's saved instruction address (normally the address of just such a 1: label) onto the top of next's kernel stack. At this point the kernel-stack switch is complete; what remains is to set up some per-CPU state for next in __switch_to().

Then execute the jmp __switch_to jump. Because this is a jmp rather than a call, the next->thread.eip value pushed above will act as __switch_to()'s return address.
struct task_struct fastcall *
__switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{
        struct thread_struct *prev = &prev_p->thread,
                             *next = &next_p->thread;
        int cpu = smp_processor_id();
        struct tss_struct *tss = &per_cpu(init_tss, cpu);

        __unlazy_fpu(prev_p);

        load_esp0(tss, next);

        load_TLS(next, cpu);

        asm volatile("movl %%fs,%0":"=m" (*(int *)&prev->fs));
        asm volatile("movl %%gs,%0":"=m" (*(int *)&prev->gs));

        if (unlikely(prev->fs | prev->gs | next->fs | next->gs)) {
                loadsegment(fs, next->fs);
                loadsegment(gs, next->gs);
        }

        if (unlikely(next->debugreg[7])) {
                loaddebug(next, 0);
                loaddebug(next, 1);
                loaddebug(next, 2);
                loaddebug(next, 3);
                loaddebug(next, 6);
                loaddebug(next, 7);
        }

        if (unlikely(prev->io_bitmap_ptr || next->io_bitmap_ptr))
                handle_io_bitmap(next, tss);

        return prev_p;
}
This function runs on next's kernel stack. Since the CPU has already switched to the next process, some per-CPU data must be refreshed for it — for example the TSS and the thread-local segments:
load_esp0(tss, next);
load_TLS(next, cpu);
Finally, return prev_p executes a ret, which pops the value at the top of the stack — next->thread.eip, i.e. the address of the 1: label — into eip, so execution continues at:
1:
popl %ebp
popfl
restoring next's saved %ebp and eflags, back inside next's context_switch().