- Learning the xv6 Scheduler
1.Multiplexing
Xv6 multiplexes by switching each processor from one process to another in two situations.
- First, xv6’s sleep and wakeup mechanism switches when a process waits for device or pipe I/O to complete, or waits for a child to exit, or waits in the sleep system call.
- Second, xv6 periodically forces a switch when a process is executing user instructions.
2.Context switch
More programs than CPUs. How to share the limited number of CPUs among the programs?
Terminology: I’m going to call a switch from one program to another a context switch.
When does a context switch happen?
- The process moves from the running to the blocked state (for example, on an I/O request).
- The process exits.
- The process moves from the running to the ready state.
- A timer interrupt preempts the current process.
- The process voluntarily yields the CPU via yield() (a kernel function in stock xv6; some systems expose a yield system call).
2.1.The program’s view of context switching
For this discussion we’re going to assume that the O/S runs each program in a “process”, which couples an address space with a single thread of control. For the most part each process appears to have a dedicated CPU, but if processes compare notes they can see that the CPU is actually multiplexed, with this process-visible behavior:
- When a process makes a system call that blocks, the kernel arranges to take the CPU away and run a different process. For example, if read() needs to wait for the hard disk to finish reading a block, the kernel will cause a different process to run until the disk has finished.
- The kernel time-slices the CPU among processes that compute a lot.
- Both of the above are largely transparent to the process, in the sense that the kernel resumes a process with the same register and memory state it had when the kernel suspended it. (Except that a system call may explicitly modify state required to express its return value or data read by e.g. read().)
2.2.The kernel’s view of context switching
As you know, the xv6 kernel maintains a kernel stack for each process, on which to run system calls for the process. At any given time, multiple processes may be executing system calls inside the kernel. This is a common kernel arrangement – multiple threads of control in the kernel address space, one per user-level process. A kernel thread must be capable of suspending and giving the CPU away, because the implementation of many system calls involves blocking to wait for I/O. That’s the reason why each xv6 process has its own kernel stack.
Terminology: if a process has entered the kernel, the kernel stack and CPU registers of the resulting thread of control are called the process's kernel thread.
In this arrangement, the kernel doesn’t directly switch CPUs among processes. Instead, the kernel has a number of mechanisms which combine to switch among processes:
- User->kernel transitions, which we’ve seen when we looked at system calls and interrupts. If a process is running and makes a user->kernel transition, we’re then running in the process’s kernel thread. This transition saves the state of the user process, storing its registers in a struct trapframe on the process’s kernel stack. A user->kernel transition initializes fresh register state for the process’s kernel thread.
- Switching between kernel threads. If a kernel thread calls sched() (via yield(), sleep(), or exit()), the kernel saves that kernel thread’s registers, picks another runnable kernel thread, and restores its registers; restoring ESP and EIP causes a stack switch and a transfer of control. scheduler() also switches user segments; this has no immediate effect, it only affects what happens on a future return from kernel to user.
- Kernel->user transitions. A return to user space restores the user registers from the trapframe, and implicitly loses the register state of the process’s kernel thread.
- Timer interrupts. The timer interrupt handler implements time-slicing by calling yield(). At a low level yield() only switches between kernel threads, but the interrupt may have caused a user->kernel transition, so the net effect is often to switch between user processes.
2.3.How is a context created?
- userinit()
- fork()
Both invoke allocproc() to find an UNUSED proc in ptable and set up its kernel stack.
2.4.Interrupt Context Switch
Call flow: alltraps -> trap() -> yield() -> sched() -> swtch()
```c
void
sched(void)
{
  ...
  struct proc *p = myproc();
  ...
  swtch(&p->context, mycpu()->scheduler);
}
```
```asm
.globl swtch
swtch:
  movl 4(%esp), %eax
  movl 8(%esp), %edx

  # Save old callee-saved registers
  pushl %ebp
  pushl %ebx
  pushl %esi
  pushl %edi

  # Switch stacks
  movl %esp, (%eax)
  movl %edx, %esp

  # Load new callee-saved registers
  popl %edi
  popl %esi
  popl %ebx
  popl %ebp
  ret
```
Key analysis: `swtch(&p->context, mycpu()->scheduler);`. The C prototype is `void swtch(struct context **old, struct context *new)`, so under the cdecl calling convention `4(%esp)` (loaded into %eax) is the address of the old context pointer, and `8(%esp)` (loaded into %edx) is the new context to switch to.
(Figure: stack state before the context switch.)
The stack switch itself (swtch.S):

```asm
# Switch stacks
movl %esp, (%eax)   # *old = %esp: record where the old context is saved
movl %edx, %esp     # %esp = new: point at the context to restore;
                    # here, the address of cpu->scheduler's context
```
(Figure: p's context saved on its kernel stack.)
(Figure: ESP now points to the new context.)
Summary: context switch flow (process -> scheduler -> process)
- Interrupt the current process, then switch to cpu->scheduler.
- The scheduler finds a RUNNABLE proc in ptable and context switches from the scheduler to that process.
The scheduler is the bridge between the two processes.
2.5.How does execution return to user space?
The calling convention! swtch’s final ret pops the saved EIP from the new stack, and that EIP is the return address where the new kernel thread resumes.
Return path:
swtch.S -> sched() (proc.c) -> yield() (proc.c) -> trap() (trap.c) -> trapret (trapasm.S, but set up in allocproc()) -> iret (trapasm.S)
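For reference, trapret itself is short; this is the trapasm.S sequence as I recall it (treat it as a sketch): it pops the trapframe back into the general and segment registers, discards trapno and the error code, and lets iret restore EIP, CS, EFLAGS, ESP, and SS from the remainder of the trapframe.

```asm
.globl trapret
trapret:
  popal              # restore general-purpose registers
  popl %gs
  popl %fs
  popl %es
  popl %ds
  addl $0x8, %esp    # skip trapno and errcode
  iret               # pop EIP, CS, EFLAGS, ESP, SS: back to user mode
```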
(Figure: stack state before returning to user level.)
(Figure: stack after trap() returns.)
3. Overheads
References:
http://pages.cs.wisc.edu/~gerald/cs537/Fall17/projects/p2b.html