A Brief Look at Memory Regions in Linux Kernel Memory Management
Author: 李万鹏
The basics of memory regions were covered in the post 《Linux进程地址空间》 (the Linux process address space). This post focuses on how memory regions are handled: how they are allocated and released.
We start with a few low-level helper functions:
Finding the region closest to a given address:
struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr)
{
    struct vm_area_struct *vma = NULL;

    if (mm) {
        /* Check the cache first. */
        /* (Cache hit rate is typically around 35%.) */
        vma = mm->mmap_cache;
        if (!(vma && vma->vm_end > addr && vma->vm_start <= addr)) {
            struct rb_node * rb_node;

            rb_node = mm->mm_rb.rb_node;
            vma = NULL;

            while (rb_node) {
                struct vm_area_struct * vma_tmp;

                vma_tmp = rb_entry(rb_node,
                        struct vm_area_struct, vm_rb);

                if (vma_tmp->vm_end > addr) {
                    vma = vma_tmp;
                    if (vma_tmp->vm_start <= addr)
                        break;
                    rb_node = rb_node->rb_left;
                } else
                    rb_node = rb_node->rb_right;
            }
            if (vma)
                mm->mmap_cache = vma;
        }
    }
    return vma;
}
The mmap_cache field of mm_struct points to the last region object that was used. Because of locality of reference, that region is likely to be accessed again soon, so vma is first set from the mmap_cache field. If vma && vma->vm_end > addr && vma->vm_start <= addr holds, the pointer to that region is returned directly: a region has been found and the given address lies inside it. Otherwise the red-black tree is traversed. Note that rb_entry() is defined as:
#define rb_entry(ptr, type, member) container_of(ptr, type, member)

The traversal starts from the root node and looks for the first region satisfying vma_tmp->vm_end > addr. Note that the region found does not necessarily contain the linear address, i.e. vma->vm_end > addr && vma->vm_start <= addr need not both hold; what is guaranteed is that it is the first region ending to the right of addr.
Finding a region that overlaps a given address interval:
static inline struct vm_area_struct * find_vma_intersection(
        struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr)
{
    struct vm_area_struct * vma = find_vma(mm, start_addr);

    if (vma && end_addr <= vma->vm_start)
        vma = NULL;
    return vma;
}

Note that this searches for a region overlapping a given address interval, start_addr~end_addr. Careless readers sometimes take it to mean two regions overlapping, but two regions never overlap (in the special case where a new mapping would overlap an existing one, the old region is deleted and the new one takes over that address range). So the condition for overlap is start_addr < vma->vm_end and end_addr > vma->vm_start; when it holds, an overlapping region has been found.
Finding a free address interval:
unsigned long get_unmapped_area(struct file *file, unsigned long addr,
        unsigned long len, unsigned long pgoff, unsigned long flags)
{
    if (flags & MAP_FIXED) {
        ........
        return addr;
    }

    return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
}

As you can see, if the MAP_FIXED flag is set the address is returned right away. Otherwise, depending on the type of the linear address interval, get_unmapped_area() is implemented by either arch_get_unmapped_area() or arch_get_unmapped_area_topdown(). Let us analyze how arch_get_unmapped_area() works:
unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
        unsigned long len, unsigned long pgoff, unsigned long flags)
{
    struct mm_struct *mm = current->mm;
    struct vm_area_struct *vma;
    unsigned long start_addr;

    if (len > TASK_SIZE)
        return -ENOMEM;

    if (addr) {
        addr = PAGE_ALIGN(addr);
        vma = find_vma(mm, addr);
        if (TASK_SIZE - len >= addr &&
            (!vma || addr + len <= vma->vm_start))
            return addr;
    }
    start_addr = addr = mm->free_area_cache;

full_search:
    for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
        /* At this point:  (!vma || addr < vma->vm_end). */
        if (TASK_SIZE - len < addr) {
            /*
             * Start a new search - just in case we missed
             * some holes.
             */
            if (start_addr != TASK_UNMAPPED_BASE) {
                start_addr = addr = TASK_UNMAPPED_BASE;
                goto full_search;
            }
            return -ENOMEM;
        }
        if (!vma || addr + len <= vma->vm_start) {
            /*
             * Remember the place where we stopped the search:
             */
            mm->free_area_cache = addr + len;
            return addr;
        }
        addr = vma->vm_end;
    }
}
First, len must not exceed TASK_SIZE. If addr is non-zero, it is aligned to a 4 KB boundary; if the interval starting there does not overlap any existing region, addr is returned directly. Otherwise the search moves on to the region list, starting from mm->free_area_cache, which is initialized to one third of the user address space (usually 1 GB); that first third is reserved for regions with predefined starting linear addresses. When the search reaches the point where TASK_SIZE - len < addr, it has run out of room; it then restarts once with start_addr = addr = TASK_UNMAPPED_BASE, searching again from the one-third boundary, and if it runs out of room a second time it returns -ENOMEM. If vma == NULL or addr + len <= vma->vm_start, i.e. the interval addr ~ addr+len is not contained in any region, addr is returned.
Inserting a region into the memory descriptor's region list:
int insert_vm_struct(struct mm_struct * mm, struct vm_area_struct * vma)
{
    struct vm_area_struct * __vma, * prev;
    struct rb_node ** rb_link, * rb_parent;

    if (!vma->vm_file) {
        BUG_ON(vma->anon_vma);
        vma->vm_pgoff = vma->vm_start >> PAGE_SHIFT;
    }
    __vma = find_vma_prepare(mm, vma->vm_start, &prev, &rb_link, &rb_parent);
    if (__vma && __vma->vm_start < vma->vm_end)
        return -ENOMEM;
    vma_link(mm, vma, prev, rb_link, rb_parent);
    return 0;
}
find_vma_prepare() locates the first existing region __vma with __vma->vm_end > vma->vm_start. If that region also satisfies __vma->vm_start < vma->vm_end, it would overlap the new region and -ENOMEM is returned; otherwise vma_link() is called to add the new region to the process's region list and red-black tree.
The split_vma() function:
int split_vma(struct mm_struct * mm, struct vm_area_struct * vma,
          unsigned long addr, int new_below)
{
    struct mempolicy *pol;
    struct vm_area_struct *new;

    if (is_vm_hugetlb_page(vma) && (addr & ~HPAGE_MASK))
        return -EINVAL;

    if (mm->map_count >= sysctl_max_map_count)
        return -ENOMEM;

    new = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
    if (!new)
        return -ENOMEM;

    /* most fields are the same, copy all, and then fixup */
    *new = *vma;

    if (new_below)
        new->vm_end = addr;
    else {
        new->vm_start = addr;
        new->vm_pgoff += ((addr - vma->vm_start) >> PAGE_SHIFT);
    }

    pol = mpol_copy(vma_policy(vma));
    if (IS_ERR(pol)) {
        kmem_cache_free(vm_area_cachep, new);
        return PTR_ERR(pol);
    }
    vma_set_policy(new, pol);

    if (new->vm_file)
        get_file(new->vm_file);

    if (new->vm_ops && new->vm_ops->open)
        new->vm_ops->open(new);

    if (new_below)
        vma_adjust(vma, addr, vma->vm_end, vma->vm_pgoff +
            ((addr - new->vm_start) >> PAGE_SHIFT), new);
    else
        vma_adjust(vma, vma->vm_start, addr, vma->vm_pgoff, new);

    return 0;
}
The main job of split_vma() is to split a region that intersects a linear address interval into two smaller regions. It first checks mm->map_count: the number of regions must not exceed the system-imposed maximum. A descriptor is then allocated for the new region; since most of its fields match the original region's, new is initialized by copying vma. If new_below is 1, the end of the linear address interval falls inside the region, so new->vm_end and vma->vm_start are both set to addr; if new_below is 0, the start of the interval falls inside the region, so new->vm_start and vma->vm_end are both set to addr. vma_adjust() is then called to link the new region descriptor into the region list mm->mmap and the red-black tree mm->mm_rb; it also rebalances the red-black tree to reflect vma's new size.
Now for the allocation and release of memory regions:
Allocating a linear address interval:
do_mmap() is just a front end; the task of allocating the region is delegated to do_mmap_pgoff():
unsigned long do_mmap_pgoff(struct file * file, unsigned long addr,
            unsigned long len, unsigned long prot,
            unsigned long flags, unsigned long pgoff)
{
    ........
    addr = get_unmapped_area(file, addr, len, pgoff, flags);
    ........
munmap_back:
    vma = find_vma_prepare(mm, addr, &prev, &rb_link, &rb_parent);
    if (vma && vma->vm_start < addr + len) {
        if (do_munmap(mm, addr, len))
            return -ENOMEM;
        goto munmap_back;
    }

    /* Check against address space limit. */
    if ((mm->total_vm << PAGE_SHIFT) + len
        > current->signal->rlim[RLIMIT_AS].rlim_cur)
        return -ENOMEM;
    ........
    if (!file && !(vm_flags & VM_SHARED) &&
        vma_merge(mm, prev, addr, addr + len, vm_flags,
                  NULL, NULL, pgoff, NULL))
        goto out;

    /*
     * Determine the object being mapped and call the appropriate
     * specific mapper. the address has already been validated, but
     * not unmapped, but the maps are removed from the list.
     */
    vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
    if (!vma) {
        error = -ENOMEM;
        goto unacct_error;
    }
    memset(vma, 0, sizeof(*vma));

    vma->vm_mm = mm;
    vma->vm_start = addr;
    vma->vm_end = addr + len;
    vma->vm_flags = vm_flags;
    vma->vm_page_prot = protection_map[vm_flags & 0x0f];
    vma->vm_pgoff = pgoff;
    ........
    /* Can addr have changed??
     *
     * Answer: Yes, several device drivers can do it in their
     *         f_op->mmap method. -DaveM
     */
    addr = vma->vm_start;
    pgoff = vma->vm_pgoff;
    vm_flags = vma->vm_flags;

    if (!file || !vma_merge(mm, prev, addr, vma->vm_end,
            vma->vm_flags, NULL, file, pgoff, vma_policy(vma))) {
        file = vma->vm_file;
        vma_link(mm, vma, prev, rb_link, rb_parent);
        if (correct_wcount)
            atomic_inc(&inode->i_writecount);
    } else {
        if (file) {
            if (correct_wcount)
                atomic_inc(&inode->i_writecount);
            fput(file);
        }
        mpol_free(vma_policy(vma));
        kmem_cache_free(vm_area_cachep, vma);
    }
out:
    mm->total_vm += len >> PAGE_SHIFT;
    __vm_stat_account(mm, vm_flags, file, len >> PAGE_SHIFT);
    if (vm_flags & VM_LOCKED) {
        mm->locked_vm += len >> PAGE_SHIFT;
        make_pages_present(addr, addr + len);
    }
    if (flags & MAP_POPULATE) {
        up_write(&mm->mmap_sem);
        sys_remap_file_pages(addr, len, 0, pgoff,
                    flags & MAP_NONBLOCK);
        down_write(&mm->mmap_sem);
    }
    acct_update_integrals();
    update_mem_hiwater();
    return addr;

unmap_and_free_vma:
    if (correct_wcount)
        atomic_inc(&inode->i_writecount);
    vma->vm_file = NULL;
    fput(file);

    /* Undo any partial mapping done by a device driver. */
    zap_page_range(vma, vma->vm_start, vma->vm_end - vma->vm_start, NULL);
free_vma:
    kmem_cache_free(vm_area_cachep, vma);
unacct_error:
    if (charged)
        vm_unacct_memory(charged);
    return error;
}
First, get_unmapped_area() is called to find a free interval (not necessarily free: as the analysis of get_unmapped_area() above showed, if MAP_FIXED is set the address is returned unchecked). At the munmap_back label the code checks whether this "free interval" overlaps an existing region. If it does, which can happen precisely because MAP_FIXED caused an unchecked return, do_munmap() is called to release the old region; on success, execution jumps back to munmap_back, and once no overlapping region remains the code proceeds. If the new interval is private (VM_SHARED not set) and does not map a file on disk, vma_merge() is called to check whether the previous region can be extended to cover the new interval; if so, the code jumps to the out label, where the total region size is increased. Otherwise a vma descriptor is allocated, its fields are set, and the descriptor is added to the region list and the red-black tree. Finally the code checks VM_LOCKED: if it is set, the physical pages backing this region must be allocated now rather than on first access. This is a special case worth remembering: the point of VM_LOCKED is to keep the page frames in memory so accesses are fast, avoiding the slower path of allocating frames through page faults. So make_pages_present() is called, which in turn calls get_user_pages(); for each page frame not yet in memory, a page fault allocates the frame and sets up the page table entry.
Releasing a linear address interval:
int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
{
    unsigned long end;
    struct vm_area_struct *mpnt, *prev, *last;

    if ((start & ~PAGE_MASK) || start > TASK_SIZE || len > TASK_SIZE-start)
        return -EINVAL;

    if ((len = PAGE_ALIGN(len)) == 0)
        return -EINVAL;

    /* Find the first overlapping VMA */
    mpnt = find_vma_prev(mm, start, &prev);
    if (!mpnt)
        return 0;
    /* we have  start < mpnt->vm_end  */

    /* if it doesn't overlap, we have nothing.. */
    end = start + len;
    if (mpnt->vm_start >= end)
        return 0;

    /*
     * If we need to split any vma, do it now to save pain later.
     *
     * Note: mremap's move_vma VM_ACCOUNT handling assumes a partially
     * unmapped vm_area_struct will remain in use: so lower split_vma
     * places tmp vma above, and higher split_vma places tmp vma below.
     */
    if (start > mpnt->vm_start) {
        int error = split_vma(mm, mpnt, start, 0);
        if (error)
            return error;
        prev = mpnt;
    }

    /* Does it split the last one? */
    last = find_vma(mm, end);
    if (last && end > last->vm_start) {
        int error = split_vma(mm, last, end, 1);
        if (error)
            return error;
    }
    mpnt = prev? prev->vm_next: mm->mmap;

    /*
     * Remove the vma's, and unmap the actual pages
     */
    detach_vmas_to_be_unmapped(mm, mpnt, prev, end);
    spin_lock(&mm->page_table_lock);
    unmap_region(mm, mpnt, prev, start, end);
    spin_unlock(&mm->page_table_lock);

    /* Fix up all other VM information */
    unmap_vma_list(mm, mpnt);

    return 0;
}
mpnt is set to the first region that ends after the start of the interval to be released. If the front of the address interval falls inside the region mpnt points to, that region is split in two, and the lower part survives as a region in the address space. If the tail of the interval falls inside a region, that region is likewise split in two, and the upper part survives. Three cases are therefore possible:
- The front of the interval overlaps a region, but its tail does not
- The tail of the interval overlaps a region, but its front does not
- The interval lies in the middle of a single region, splitting it into three pieces, the middle one being the part to release
detach_vmas_to_be_unmapped() is then called to remove from the process's address space the regions lying inside the interval, and unmap_region() clears the page table entries covering the interval and frees the corresponding page frames. Finally, unmap_vma_list() releases the detached region descriptors.