CMU Computer Systems: Virtual Memory (Concepts)

Address Spaces
  • Linear address space: Ordered set of contiguous non-negative integer addresses
  • Virtual address space: Set of N = 2^n virtual addresses
  • Physical address space: Set of M = 2^m physical addresses
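  • Example: with n = 32, the virtual address space contains N = 2^32 (about 4 billion) addresses; with m = 30, the physical address space contains M = 2^30 addresses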
Why Virtual Memory
  • Use main memory efficiently
    • Use DRAM as a cache for parts of a virtual address space
  • Simplifies memory management
    • Each process gets the same uniform linear address space
  • Isolates address spaces
    • One process can’t interfere with another’s memory
    • User program cannot access privileged kernel information and code
VM as a Tool for Caching
  • Conceptually, virtual memory is an array of N contiguous bytes stored on disk
  • The contents of the array on disk are cached in physical memory (DRAM cache)
    • These cache blocks are called pages (size is P = 2^p bytes)
DRAM Cache Organization
  • DRAM cache organization driven by the enormous miss penalty
    • DRAM is about 10x slower than SRAM
    • Disk is about 10,000x slower than DRAM
  • Consequences
    • Large page (block) size: typically 4 KB, sometimes 4 MB
    • Fully associative
      • Any VP can be placed in any PP
      • Requires a “large” mapping function - different from cache memories
    • Highly sophisticated, expensive replacement algorithms
      • Too complicated and open-ended to be implemented in hardware
    • Write-back rather than write-through
Enabling Data Structure: Page Table
  • A page table is an array of page table entries (PTEs) that maps virtual pages to physical pages
    • Per-process kernel data structure in DRAM
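  As a concrete sketch, a single-level page table can be modeled in C as an array of
  PTEs indexed by virtual page number. The field names and sizes below are illustrative
  simplifications, not the PTE layout of any particular architecture.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_VPAGES (1u << 8)   /* assume 8 bits of VPN, i.e. 2^8 virtual pages */

    /* One page table entry: a valid bit plus the physical page number.
     * Real PTEs also carry permission, dirty, and reference bits. */
    typedef struct {
        bool     valid;            /* is this virtual page resident in DRAM? */
        uint32_t ppn;              /* physical page number, meaningful only if valid */
    } pte_t;

    /* Per-process page table: one PTE per virtual page, indexed by VPN. */
    static pte_t page_table[NUM_VPAGES];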
Page Hit and Page Fault
  • Page hit: reference to VM word that is in physical memory (DRAM cache hit)
  • Page fault: reference to VM word that is not in physical memory (DRAM cache miss)
Locality to the Rescue Again
  • Virtual memory seems terribly inefficient, but it works because of locality
  • At any point in time, programs tend to access a set of active virtual pages called the working set
    • Programs with better temporal locality will have smaller working sets
  • If (working set size < main memory size)
    • Good performance for one process after compulsory misses
  • If (SUM(working set sizes) > main memory size)
    • Thrashing: Performance meltdown where pages are swapped in and out continuously
VM as a Tool for Memory Management
  • Key idea: each process has its own virtual address space
    • It can view memory as a simple linear array
    • Mapping function scatters addresses through physical memory
      • Well-chosen mappings can improve locality
  • Simplifying memory allocation
    • Each virtual page can be mapped to any physical page
    • A virtual page can be stored in different physical pages at different times
  • Sharing code and data among processes
    • Map virtual pages to the same physical page
Simplifying Linking and Loading
  • Linking
    • Each program has a similar virtual address space
    • Code, data, and heap always start at the same addresses
  • Loading
    • execve allocates virtual pages for .text and .data sections & creates PTEs marked as invalid
    • The .text and .data sections are copied, page by page, on demand by the virtual memory system
VM as a Tool for Memory Protection
  • Extend PTEs with permission bits
  • MMU checks these bits on each access
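  A minimal sketch of the permission check the MMU performs on each access; the bit
  names (READ/WRITE/EXEC/SUP) and their positions are assumptions for illustration only.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative permission bits stored in each PTE. */
    #define PTE_READ  (1u << 0)
    #define PTE_WRITE (1u << 1)
    #define PTE_EXEC  (1u << 2)
    #define PTE_SUP   (1u << 3)    /* page accessible only in supervisor (kernel) mode */

    /* If any required permission is missing, the access faults
     * (the kernel typically delivers SIGSEGV to the process). */
    bool access_allowed(uint32_t pte_perms, uint32_t requested, bool user_mode)
    {
        if (user_mode && (pte_perms & PTE_SUP))
            return false;                       /* user code touching a kernel page */
        return (pte_perms & requested) == requested;
    }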
VM Address Translation
  • Virtual Address Space
    • V = {0, 1, …, N-1}
  • Physical Address Space
    • P = {0, 1, …, M-1}
  • Address Translation
    • MAP: V → P ∪ {∅}
    • For virtual address α
      • MAP(α) = α′ if data at virtual address α is at physical address α′ in P
      • MAP(α) = ∅ if data at virtual address α is not in physical memory
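  A sketch of MAP in C, assuming a single-level page table indexed by VPN and 4 KB pages
  (P = 2^12); returning false here corresponds to MAP(α) = ∅, i.e., a page fault.

    #include <stdbool.h>
    #include <stdint.h>

    #define P_BITS     12u                   /* assume 4 KB pages: P = 2^12 */
    #define PAGE_SIZE  (1u << P_BITS)
    #define NUM_VPAGES (1u << 20)            /* assume n = 32, so 2^(32-12) virtual pages */

    typedef struct { bool valid; uint32_t ppn; } pte_t;
    static pte_t page_table[NUM_VPAGES];

    /* MAP: fill *pa and return true if the data at va is resident in DRAM;
     * return false (MAP(va) = ∅) if it is not in physical memory. */
    bool map(uint32_t va, uint32_t *pa)
    {
        uint32_t vpn = va >> P_BITS;             /* virtual page number */
        uint32_t vpo = va & (PAGE_SIZE - 1);     /* virtual page offset (= PPO) */

        pte_t pte = page_table[vpn];
        if (!pte.valid)
            return false;                        /* page fault */

        *pa = (pte.ppn << P_BITS) | vpo;         /* PA = PPN concatenated with PPO */
        return true;
    }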
Summary of Address Translation Symbols
  • Basic Parameters
    • N = 2^n: Number of addresses in virtual address space
    • M = 2^m: Number of addresses in physical address space
    • P = 2^p: Page size (bytes)
  • Components of the virtual address (VA)
    • TLBI: TLB index
    • TLBT: TLB tag
    • VPO: Virtual page offset
    • VPN: Virtual page number
  • Components of the physical address (PA)
    • PPO: Physical page offset (same as VPO)
    • PPN: Physical page number
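  • Worked example (illustrative parameters): suppose n = 14, m = 12, and P = 64 bytes (p = 6)
    • VPN is the upper n - p = 8 bits of the VA; VPO is the lower p = 6 bits
    • PPN is the upper m - p = 6 bits of the PA; PPO is the lower 6 bits and is identical to the VPO
    • With a 16-entry, 4-way set-associative TLB there are 4 sets, so TLBI is the low 2 bits of the VPN and TLBT is the remaining 6 bits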
Address Translation: Page Fault

  1) Processor sends virtual address to MMU

  2-3) MMU fetches PTE from page table in memory

  4) Valid bit is zero, so MMU triggers page fault exception

  5) Handler identifies victim (and, if dirty, pages it out to disk)

  6) Handler pages in new page and updates PTE in memory

  7) Handler returns to original process, restarting faulting instruction
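
  The sequence above can be made concrete with a toy, self-contained simulation of the
  handler's core work. The data structures, the FIFO victim choice, and the memcpy-based
  "disk I/O" are deliberate simplifications, not how a real kernel implements paging.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE  64          /* tiny pages, just for the simulation */
    #define NUM_VPAGES 16          /* virtual pages */
    #define NUM_PPAGES 4           /* physical pages (DRAM frames) */

    typedef struct { bool valid, dirty; uint32_t ppn; } pte_t;

    static pte_t   page_table[NUM_VPAGES];
    static uint8_t dram[NUM_PPAGES][PAGE_SIZE];                /* physical memory */
    static uint8_t disk[NUM_VPAGES][PAGE_SIZE];                /* backing store */
    static int     frame_owner[NUM_PPAGES] = {-1, -1, -1, -1}; /* VPN in each frame */
    static int     next_victim;                                /* FIFO replacement pointer */

    /* Handle a fault on virtual page vpn: evict a victim (writing it back if
     * dirty), page in the requested page, and update both PTEs (steps 5-6). */
    void handle_page_fault(int vpn)
    {
        int frame = next_victim;                           /* step 5: choose a victim frame */
        next_victim = (next_victim + 1) % NUM_PPAGES;

        int old_vpn = frame_owner[frame];
        if (old_vpn >= 0) {
            if (page_table[old_vpn].dirty)                 /* step 5: write back if dirty */
                memcpy(disk[old_vpn], dram[frame], PAGE_SIZE);
            page_table[old_vpn].valid = false;             /* victim no longer resident */
        }

        memcpy(dram[frame], disk[vpn], PAGE_SIZE);         /* step 6: page in from disk */
        page_table[vpn] = (pte_t){ .valid = true, .dirty = false, .ppn = (uint32_t)frame };
        frame_owner[frame] = vpn;
        /* step 7: the faulting instruction is restarted and now takes a page hit */
    }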

Speeding up Translation with a TLB
  • Page table entries (PTEs) are cached in L1 like any other memory word
    • PTEs may be evicted by other data references
    • PTE hit still requires a small L1 delay
  • Solution: Translation Lookaside Buffer (TLB)
    • Small set-associative hardware cache in MMU
    • Maps virtual page numbers to physical page numbers
    • Contains complete page table entries for a small number of pages
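  A sketch of a set-associative TLB lookup; the geometry (16 entries, 4-way, so 4 sets)
  and the field names are assumed for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_SETS  4            /* assume 16 entries, 4-way: 4 sets */
    #define TLB_WAYS  4
    #define TLBI_BITS 2            /* log2(TLB_SETS) */

    typedef struct {
        bool     valid;
        uint32_t tag;              /* TLBT: the upper bits of the VPN */
        uint32_t ppn;              /* the cached translation */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_SETS][TLB_WAYS];

    /* Look up a VPN in the TLB; on a hit, return true and the PPN.
     * On a miss, the MMU would instead fetch the PTE from memory. */
    bool tlb_lookup(uint32_t vpn, uint32_t *ppn)
    {
        uint32_t index = vpn & (TLB_SETS - 1);   /* TLBI: low bits of the VPN */
        uint32_t tag   = vpn >> TLBI_BITS;       /* TLBT: remaining VPN bits */

        for (int way = 0; way < TLB_WAYS; way++) {
            if (tlb[index][way].valid && tlb[index][way].tag == tag) {
                *ppn = tlb[index][way].ppn;      /* TLB hit */
                return true;
            }
        }
        return false;                            /* TLB miss */
    }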
Multi-Level Page Tables
  • Reduce the memory needed for the page table itself: a single flat table for a large, sparsely used address space would mostly hold unused entries
  • Example: 2-level page table
    • Level 1 table: each PTE points to a page table (always memory resident)
    • Level 2 table: each PTE points to a page (paged in and out like any other data)
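  A sketch of a 2-level walk for a 32-bit virtual address, using a 10/10/12 bit split
  (two 10-bit table indices plus a 12-bit page offset); the split is assumed here for
  illustration, though it matches the classic 32-bit x86 layout.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define P_BITS    12u                      /* 4 KB pages */
    #define L_BITS    10u                      /* 10 index bits per level */
    #define L_ENTRIES (1u << L_BITS)

    typedef struct { bool valid; uint32_t ppn; } pte_t;
    typedef struct { pte_t entries[L_ENTRIES]; } l2_table_t;

    /* The level 1 table is always resident; each non-NULL entry points to a
     * level 2 table covering 2^10 pages. Unused regions need no level 2 table,
     * which is what saves memory compared with one flat table. */
    static l2_table_t *l1_table[L_ENTRIES];

    /* Walk both levels; false means the translation is not resident
     * (a page fault in a real system). */
    bool walk(uint32_t va, uint32_t *pa)
    {
        uint32_t l1_idx = va >> (P_BITS + L_BITS);           /* bits 31..22 */
        uint32_t l2_idx = (va >> P_BITS) & (L_ENTRIES - 1);  /* bits 21..12 */
        uint32_t vpo    = va & ((1u << P_BITS) - 1);         /* bits 11..0  */

        l2_table_t *l2 = l1_table[l1_idx];
        if (l2 == NULL || !l2->entries[l2_idx].valid)
            return false;

        *pa = (l2->entries[l2_idx].ppn << P_BITS) | vpo;
        return true;
    }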
Summary
  • Programmer’s view of virtual memory
    • Each process has its own private linear address space
    • Cannot be corrupted by other processes
  • System view of virtual memory
    • Uses memory efficiently by caching virtual memory pages
      • Efficient only because of locality
    • Simplifies memory management and programming
    • Simplifies protection by providing a convenient interpositioning point to check permissions