6.S081 Lab3 page tables (incomplete)
Note: I am following the 2020 version of the course, so from here on all the labs I do are the 2020 labs (link).
Before you start coding, read Chapter 3 of the xv6 book, and related files:
- kernel/memlayout.h, which captures the layout of memory.
- kernel/vm.c, which contains most virtual memory (VM) code.
- kernel/kalloc.c, which contains code for allocating and freeing physical memory.
Within this big lab, apart from the first exercise, the other two exercises are very hard... it is easy to hit a panic (because the page table is set up incorrectly).
I only finished exercise 1; exercises 2 and 3 both failed. I am still posting this write-up so the lab numbering stays complete.
1. Print a page table (easy)
To help you learn about RISC-V page tables, and perhaps to aid future debugging, your first task is to write a function that prints the contents of a page table.
Define a function called vmprint(). It should take a pagetable_t argument, and print that pagetable in the format described below. Insert if(p->pid==1) vmprint(p->pagetable) in exec.c just before the return argc, to print the first process's page table. You receive full credit for this assignment if you pass the pte printout test of make grade.
Now when you start xv6 it should print output like this, describing the page table of the first process at the point when it has just finished exec()ing init:
page table 0x0000000087f6e000
..0: pte 0x0000000021fda801 pa 0x0000000087f6a000
.. ..0: pte 0x0000000021fda401 pa 0x0000000087f69000
.. .. ..0: pte 0x0000000021fdac1f pa 0x0000000087f6b000
.. .. ..1: pte 0x0000000021fda00f pa 0x0000000087f68000
.. .. ..2: pte 0x0000000021fd9c1f pa 0x0000000087f67000
..255: pte 0x0000000021fdb401 pa 0x0000000087f6d000
.. ..511: pte 0x0000000021fdb001 pa 0x0000000087f6c000
.. .. ..510: pte 0x0000000021fdd807 pa 0x0000000087f76000
.. .. ..511: pte 0x0000000020001c0b pa 0x0000000080007000
The first line displays the argument to vmprint. After that there is a line for each PTE, including PTEs that refer to page-table pages deeper in the tree. Each PTE line is indented by a number of " .." that indicates its depth in the tree. Each PTE line shows the PTE index in its page-table page, the pte bits, and the physical address extracted from the PTE. Don't print PTEs that are not valid. In the above example, the top-level page-table page has mappings for entries 0 and 255. The next level down for entry 0 has only index 0 mapped, and the bottom-level for that index 0 has entries 0, 1, and 2 mapped.
Your code might emit different physical addresses than those shown above. The number of entries and the virtual addresses should be the same.
Some hints:
- You can put vmprint() in kernel/vm.c.
- Use the macros at the end of the file kernel/riscv.h.
- The function freewalk may be inspirational.
- Define the prototype for vmprint in kernel/defs.h so that you can call it from exec.c.
- Use %p in your printf calls to print out full 64-bit hex PTEs and addresses as shown in the example.
Explain the output of vmprint in terms of Fig 3-4 from the text. What does page 0 contain? What is in page 2? When running in user mode, could the process read/write the memory mapped by page 1?
This is an easy one. Following the hints, first insert the following code before return argc; in exec.c:
// added by levi
if (p->pid == 1)
  vmprint(p->pagetable);
return argc; // this ends up in a0, the first argument to main(argc, argv)
Then go to vm.c and write the code below. Note the two comment blocks: 1. useful predefines and 2. the output should be like:
// added by levi
// 1. useful predefines:
// typedef uint64 pte_t;
// typedef uint64 *pagetable_t; // 512 PTEs
// #define PTE_V (1L << 0) // valid
// #define PTE_R (1L << 1)
// #define PTE_W (1L << 2)
// #define PTE_X (1L << 3)
// #define PTE_U (1L << 4) // 1 -> user can access
// shift a PTE to the right place for a physical address.
// #define PTE2PA(pte) (((pte) >> 10) << 12)
// 2. the output should be like:
// page table 0x0000000087f6e000
// ..0: pte 0x0000000021fda801 pa 0x0000000087f6a000
// .. ..0: pte 0x0000000021fda401 pa 0x0000000087f69000
// .. .. ..0: pte 0x0000000021fdac1f pa 0x0000000087f6b000
// .. .. ..1: pte 0x0000000021fda00f pa 0x0000000087f68000
void recursive_vmprint(pagetable_t pg_tb, int pg_level) {
  // 512 PTEs per page-table page (page_size / PTE_size = 4096B / 8B = 512)
  for (int i = 0; i < 512; ++i) {
    pte_t cur_pte = pg_tb[i];
    if (cur_pte & PTE_V) {
      // 1. print the ".." indentation for this level
      if (pg_level == 2) {
        printf("..");
      } else if (pg_level == 1) {
        printf(".. ..");
      } else if (pg_level == 0) {
        printf(".. .. ..");
      }
      // 2. output i, pte, pa
      // ref to the xv6 boot output
      uint64 cur_PA = PTE2PA(cur_pte);
      printf("%d: pte %p pa %p\n", i, cur_pte, cur_PA);
      // if none of R/W/X is set, this PTE points to a lower-level
      // page-table page, so recurse into it
      if ((cur_pte & (PTE_R|PTE_W|PTE_X)) == 0) {
        recursive_vmprint((pagetable_t)cur_PA, pg_level - 1);
      }
    }
    // invalid PTEs are skipped (not printed), and the scan continues
  }
}

void vmprint(pagetable_t pg_tb) {
  printf("page table %p\n", pg_tb);
  // walk the three levels: L2 - L1 - L0
  recursive_vmprint(pg_tb, 2);
}
Code notes: a pte is really just a uint64, and pagetable_t is just a uint64*. The function first prints the address of the root page table, then recursively prints the three-level page table. When a PTE is valid but none of the read/write/execute bits is set, i.e. (cur_pte & (PTE_R|PTE_W|PTE_X)) == 0, the PTE points to a lower-level page-table page whose entries are themselves PTEs (rather than leaf data), so we recurse into it.
Add the function declaration in kernel/defs.h (find the vm.c section and add it there):
// vm.c
void vmprint(pagetable_t);
Running xv6 then gives output like the following:
page table 0x0000000087f6e000
..0: pte 0x0000000021fda801 pa 0x0000000087f6a000
.. ..0: pte 0x0000000021fda401 pa 0x0000000087f69000
.. .. ..0: pte 0x0000000021fdac1f pa 0x0000000087f6b000
.. .. ..1: pte 0x0000000021fda00f pa 0x0000000087f68000
.. .. ..2: pte 0x0000000021fd9c1f pa 0x0000000087f67000
2. A kernel page table per process (hard) (incomplete)
Xv6 has a single kernel page table that's used whenever it executes in the kernel. The kernel page table is a direct mapping to physical addresses, so that kernel virtual address x maps to physical address x. Xv6 also has a separate page table for each process's user address space, containing only mappings for that process's user memory, starting at virtual address zero. Because the kernel page table doesn't contain these mappings, user addresses are not valid in the kernel. Thus, when the kernel needs to use a user pointer passed in a system call (e.g., the buffer pointer passed to write()), the kernel must first translate the pointer to a physical address. The goal of this section and the next is to allow the kernel to directly dereference user pointers.
Your first job is to modify the kernel so that every process uses its own copy of the kernel page table when executing in the kernel. Modify struct proc to maintain a kernel page table for each process, and modify the scheduler to switch kernel page tables when switching processes. For this step, each per-process kernel page table should be identical to the existing global kernel page table. You pass this part of the lab if usertests runs correctly.
Read the book chapter and code mentioned at the start of this assignment; it will be easier to modify the virtual memory code correctly with an understanding of how it works. Bugs in page table setup can cause traps due to missing mappings, can cause loads and stores to affect unexpected pages of physical memory, and can cause execution of instructions from incorrect pages of memory.
Some hints:
- Add a field to struct proc for the process's kernel page table.
- A reasonable way to produce a kernel page table for a new process is to implement a modified version of kvminit that makes a new page table instead of modifying kernel_pagetable. You'll want to call this function from allocproc.
- Make sure that each process's kernel page table has a mapping for that process's kernel stack. In unmodified xv6, all the kernel stacks are set up in procinit. You will need to move some or all of this functionality to allocproc.
- Modify scheduler() to load the process's kernel page table into the core's satp register (see kvminithart for inspiration). Don't forget to call sfence_vma() after calling w_satp().
- scheduler() should use kernel_pagetable when no process is running.
- Free a process's kernel page table in freeproc.
- You'll need a way to free a page table without also freeing the leaf physical memory pages.
- vmprint may come in handy to debug page tables.
- It's OK to modify xv6 functions or add new functions; you'll probably need to do this in at least kernel/vm.c and kernel/proc.c. (But, don't modify kernel/vmcopyin.c, kernel/stats.c, user/usertests.c, and user/stats.c.)
- A missing page table mapping will likely cause the kernel to encounter a page fault. It will print an error that includes sepc=0x00000000XXXXXXXX. You can find out where the fault occurred by searching for XXXXXXXX in kernel/kernel.asm.
(0) Understanding the task
In user space every process has its own pagetable, but in the kernel all processes share a single kernel_pagetable. The goal of this exercise is to replace that shared kernel_pagetable with a kernel page table that is private to each process. The concrete approach is to follow the hints step by step; don't rush.
(1) Add a kernel page table to each process
Following hint 1: add a kernel page table field to struct proc, so every process carries its own kernel page table.
// proc.h
// added by levi
pagetable_t levi_kernel_pagetable;
(2) Modify kvminit
Following hint 2: create a new page table instead of modifying kernel_pagetable. (I didn't dare change kvminit itself, so I wrote a new function levi_kvminit.) The idea is simply to stop using the global kernel_pagetable and build the mappings in a local table instead, so the function now returns the table, and every helper that used the global kernel_pagetable has to take the table as a parameter. If you think of it as your own code: the global variable is gone, so work out how to pass the local one around.
/*
 * created by levi (replaces the global kernel_pagetable with a local levi_kernel_pagetable)
 * create a page table for the kernel process and
 * turn on paging. called early, in supervisor mode.
 * the page allocator is already initialized.
 */
pagetable_t
levi_kvminit()
{
  pagetable_t levi_kernel_pagetable = (pagetable_t) kalloc();
  memset(levi_kernel_pagetable, 0, PGSIZE);

  // uart registers
  levi_kvmmap(levi_kernel_pagetable, UART0, UART0, PGSIZE, PTE_R | PTE_W);

  // virtio mmio disk interface 0
  levi_kvmmap(levi_kernel_pagetable, VIRTION(0), VIRTION(0), PGSIZE, PTE_R | PTE_W);

  // virtio mmio disk interface 1
  levi_kvmmap(levi_kernel_pagetable, VIRTION(1), VIRTION(1), PGSIZE, PTE_R | PTE_W);

  // CLINT
  levi_kvmmap(levi_kernel_pagetable, CLINT, CLINT, 0x10000, PTE_R | PTE_W);

  // PLIC
  levi_kvmmap(levi_kernel_pagetable, PLIC, PLIC, 0x400000, PTE_R | PTE_W);

  // map kernel text executable and read-only.
  levi_kvmmap(levi_kernel_pagetable, KERNBASE, KERNBASE, (uint64)etext-KERNBASE, PTE_R | PTE_X);

  // map kernel data and the physical RAM we'll make use of.
  levi_kvmmap(levi_kernel_pagetable, (uint64)etext, (uint64)etext, PHYSTOP-(uint64)etext, PTE_R | PTE_W);

  // map the trampoline for trap entry/exit to
  // the highest virtual address in the kernel.
  levi_kvmmap(levi_kernel_pagetable, TRAMPOLINE, (uint64)trampoline, PGSIZE, PTE_R | PTE_X);

  return levi_kernel_pagetable;
}
Modify the corresponding kvmmap() function (add a pagetable_t kernel_pgtb parameter instead of implicitly using the global kernel_pagetable):
// added by levi for levi_kvminit()
void
levi_kvmmap(pagetable_t kernel_pgtb, uint64 va, uint64 pa, uint64 sz, int perm)
{
  if(mappages(kernel_pgtb, va, sz, pa, perm) != 0)
    panic("kvmmap");
}
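Hint 2 also says the new function should be called from allocproc. A minimal sketch of that call site, assuming the struct proc field is named levi_kernel_pagetable as above (I never got this exercise to pass usertests, so treat this as untested):

// kernel/proc.c, inside allocproc(), after the trapframe is allocated.
// Untested sketch: build this process's private kernel page table.
p->levi_kernel_pagetable = levi_kvminit();
if(p->levi_kernel_pagetable == 0){
  freeproc(p);
  release(&p->lock);
  return 0;
}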
(3) Move the kernel stack setup from procinit to allocproc
Following hint 3:
This is where I got completely stuck...
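A sketch of what hint 3 seems to ask for (again untested, and using my levi_ naming from above): in allocproc, right after building the per-process kernel page table, allocate the kernel stack page and map it into that table, and remove the corresponding code from procinit:

// kernel/proc.c, inside allocproc() -- untested sketch.
// Allocate a page for the process's kernel stack and map it into
// the process's own kernel page table (this code is moved out of
// procinit, where it mapped the stack into the global table).
char *pa = kalloc();
if(pa == 0)
  panic("kalloc");
uint64 va = KSTACK((int) (p - proc));
levi_kvmmap(p->levi_kernel_pagetable, va, (uint64)pa, PGSIZE, PTE_R | PTE_W);
p->kstack = va;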
(4) Modify scheduler with kvminithart as a reference
Following hint 4:
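The change suggested by hint 4, again only as an untested sketch: just before scheduler() switches to the chosen process, load that process's kernel page table into satp (the same way kvminithart installs the global table), and switch back to kernel_pagetable once the process gives up the CPU:

// kernel/proc.c, inside scheduler(), around the call to swtch() -- untested sketch.
p->state = RUNNING;
c->proc = p;

// install this process's kernel page table, as kvminithart does
// for the global table; sfence_vma() flushes stale TLB entries.
w_satp(MAKE_SATP(p->levi_kernel_pagetable));
sfence_vma();

swtch(&c->context, &p->context);

// no process is running on this CPU now, so go back to the
// global kernel_pagetable.
kvminithart();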
(5) Free the kernel page table in freeproc
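Following hints 5 and 6, freeproc would free the per-process kernel page table, but only the page-table pages themselves: the physical pages they map (kernel text, RAM, devices) are shared with the global mapping and must not be freed. An untested sketch, with a hypothetical helper name levi_free_kernel_pagetable (the kernel stack page allocated in allocproc would need to be freed separately before calling it):

// kernel/vm.c -- untested sketch, modeled on freewalk but without
// panicking on leaves.
// Recursively free the page-table pages of a per-process kernel
// page table without freeing the leaf physical pages they map.
void
levi_free_kernel_pagetable(pagetable_t pagetable)
{
  for(int i = 0; i < 512; i++){
    pte_t pte = pagetable[i];
    if((pte & PTE_V) && (pte & (PTE_R|PTE_W|PTE_X)) == 0){
      // this PTE points to a lower-level page-table page; recurse.
      levi_free_kernel_pagetable((pagetable_t)PTE2PA(pte));
    }
    pagetable[i] = 0;
  }
  kfree((void*)pagetable);
}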
3. Incomplete
Exercise 2 was extremely painful, and exercise 3 looks even harder. Since exercise 2 was never successfully implemented and exercise 3 builds on it, I am skipping 3 for now. If I get the chance, I will redo exercise 2.