Computer Organization Chapter 5

Memory Hierarchy


This chapter discusses how to build the illusion of a fast memory with essentially unlimited capacity.

Questions to ask
  1. What do I already know?
    • I know that memory is organized as a hierarchy, and that this hierarchy reduces the slowdown caused by the speed gap between the disk and the CPU.
  2. What don't I know?
    • Why a hierarchy improves speed, and what the concrete levels of the memory hierarchy are.
  3. What can I learn?
    • How memory is organized into levels; why the hierarchy improves speed; how memory speed is actually calculated.
  4. What can't I learn?
    • How to design a memory system systematically; the detailed hardware implementation of memories such as DRAM.
Abstract
  • Temporal locality, spatial locality, block, line, hit rate, miss rate
  • SRAM, DRAM, address interleaving, EEPROM, tracks, sectors
  • Direct mapped, valid bit, cache-related calculations, cache miss handling, write-through, write buffer, write-back
  • Cache performance evaluation, calculating cache performance, set associative
  • Virtual memory, page table, TLB (translation-lookaside buffer)
Terms
  1. Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon.
  2. Spatial locality (locality in space): if an item is referenced, items whose addresses are close by will tend to be referenced soon (the locality sketch in the worked examples after this list illustrates both).
  3. Block, line: the minimum unit of information that can be either present or not present in the two-level hierarchy is called a block or a line.
  4. The hit rate, or hit ratio, is the fraction of memory accesses found in the upper level (see the AMAT sketch in the worked examples below).
  5. Direct mapped: A cache structure in which each memory location is mapped to exactly one location in the cache (see the address-breakdown sketch in the worked examples below).
  6. write-through: A scheme in which writes always update both the cache and the next lower level of the memory hierarchy, ensuring that data is always consistent between the two.
  7. write buffer: A queue that holds data while the data is waiting to be written to memory.
  8. write-back: A scheme that handles writes by updating values only to the block in the cache, then writing the modified block to the lower level of the hierarchy when the block is replaced (the write-policy sketch in the worked examples below contrasts this with write-through).
  9. Handling Cache Misses:
    1. Send the original PC value (current PC – 4) to the memory.
    2. Instruct main memory to perform a read and wait for the memory to complete its access.
    3. Write the cache entry, putting the data from memory in the data portion of the entry, writing the upper bits of the address (from the ALU) into the tag field, and turning the valid bit on.
    4. Restart the instruction execution at the first step, which will refetch the instruction, this time finding it in the cache.
  10. fully associative cache: A cache structure in which a block can be placed in any location in the cache.
  11. set-associative cache: A cache that has a fixed number of locations (at least two) where each block can be placed.
  12. virtual memory: A technique that uses main memory as a “cache” for secondary storage.
  13. physical address: An address in main memory.
  14. protection: A set of mechanisms for ensuring that multiple processes sharing the processor, memory, or I/O devices cannot interfere, intentionally or unintentionally, with one another by reading or writing each other's data. These mechanisms also isolate the operating system from a user process.
  15. page fault: An event that occurs when an accessed page is not present in main memory.
  16. virtual address: An address that corresponds to a location in virtual space and is translated by address mapping to a physical address when memory is accessed.
  17. address translation: Also called address mapping. The process by which a virtual address is mapped to an address used to access memory.
  18. segmentation: A variable-size address mapping scheme in which an address consists of two parts: a segment number, which is mapped to a physical address, and a segment offset.
  19. page table: The table containing the virtual-to-physical address translations in a virtual memory system. The table, which is stored in memory, is typically indexed by the virtual page number; each entry in the table contains the physical page number for that virtual page if the page is currently in memory.
  20. swap space: The space on the disk reserved for the full virtual memory space of a process.
  21. reference bit: Also called use bit. A field that is set whenever a page is accessed and that is used to implement LRU or other replacement schemes.
  22. translation-lookaside buffer (TLB): A cache that keeps track of recently used address mappings to try to avoid an access to the page table (see the translation sketch in the worked examples below).
  23. virtually addressed cache: A cache that is accessed with a virtual address rather than a physical address.
  24. aliasing: A situation in which the same object is accessed by two addresses; can occur in virtual memory when there are two virtual addresses for the same physical page.
  25. physically addressed cache: A cache that is addressed by a physical address.
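Worked examples

The two kinds of locality in terms 1 and 2 show up in almost any loop. A minimal C sketch (the array and variable names are hypothetical, not from the text):

```c
#include <stdio.h>

#define N 1000

/* Sums an array. `sum` is touched on every iteration (temporal locality);
   a[i] walks through consecutive addresses (spatial locality). */
int sum_array(const int a[], int n) {
    int sum = 0;                 /* reused again and again: temporal locality */
    for (int i = 0; i < n; i++)
        sum += a[i];             /* neighboring elements in order: spatial locality */
    return sum;
}

int main(void) {
    int a[N];
    for (int i = 0; i < N; i++) a[i] = i;
    printf("%d\n", sum_array(a, N));
    return 0;
}
```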
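For term 4, the usual way hit and miss rates enter a speed calculation is through the average memory access time, AMAT = hit time + miss rate × miss penalty. A sketch with made-up numbers (not values from the chapter):

```c
#include <stdio.h>

int main(void) {
    double hit_time     = 1.0;    /* cycles to access the cache on a hit (assumed)   */
    double miss_rate    = 0.05;   /* fraction of accesses that miss, 5% (assumed)    */
    double miss_penalty = 100.0;  /* cycles to fetch the block from memory (assumed) */

    /* AMAT = hit time + miss rate * miss penalty = 1 + 0.05 * 100 = 6 cycles */
    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.1f cycles\n", amat);
    return 0;
}
```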
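For terms 5 and 11, a direct-mapped cache picks the block location straight from the address bits: a byte offset within the block, then an index that selects the cache block, with the remaining high bits stored as the tag. A sketch assuming a hypothetical cache of 64 blocks × 16 bytes; for a 2-way set-associative cache of the same total size there would be 32 sets, so the index field shrinks by one bit and the tag grows by one:

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCK_BYTES  16u   /* assumed block size */
#define NUM_BLOCKS   64u   /* assumed cache size */
#define OFFSET_BITS   4u   /* log2(BLOCK_BYTES)  */
#define INDEX_BITS    6u   /* log2(NUM_BLOCKS)   */

int main(void) {
    uint32_t addr = 0x12345678u;   /* arbitrary byte address */

    uint32_t offset = addr & (BLOCK_BYTES - 1u);                 /* byte within the block */
    uint32_t index  = (addr >> OFFSET_BITS) & (NUM_BLOCKS - 1u); /* which cache block     */
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);        /* stored and compared on each lookup */

    printf("addr=0x%08x -> tag=0x%x index=%u offset=%u\n", addr, tag, index, offset);
    return 0;
}
```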
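For terms 6-8, the practical difference between the write policies is how much traffic reaches the next lower level. A toy model of a single cache block (counts only, no real cache):

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  data;
    bool dirty;   /* only meaningful for write-back */
} Block;

static int memory_writes = 0;   /* traffic to the next lower level */

/* Write-through: every store updates the cache AND memory. */
void store_write_through(Block *b, int value) {
    b->data = value;
    memory_writes++;
}

/* Write-back: the store only marks the block dirty... */
void store_write_back(Block *b, int value) {
    b->data  = value;
    b->dirty = true;
}

/* ...memory is updated once, when the block is replaced. */
void evict_write_back(Block *b) {
    if (b->dirty) {
        memory_writes++;
        b->dirty = false;
    }
}

int main(void) {
    Block b = {0, false};

    for (int i = 0; i < 10; i++) store_write_through(&b, i);
    printf("write-through: %d memory writes\n", memory_writes);  /* 10 */

    memory_writes = 0;
    for (int i = 0; i < 10; i++) store_write_back(&b, i);
    evict_write_back(&b);
    printf("write-back:    %d memory writes\n", memory_writes);  /* 1 */
    return 0;
}
```

A write buffer (term 7) is typically paired with write-through so the processor does not have to wait for each of those memory writes to complete.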
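For terms 12, 16, 17, 19, and 22, a memory access first translates its virtual address: the TLB is checked, the page table is consulted on a TLB miss, and a page fault is raised if the page is not in memory. A sketch assuming hypothetical sizes (4 KiB pages, a 16-entry page table, a 4-entry TLB) and omitting the dirty/reference/protection bits a real system keeps:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS  12u   /* 4 KiB pages (assumed)     */
#define NUM_PAGES  16u   /* toy virtual address space */
#define TLB_SIZE    4u

typedef struct { int valid; uint32_t vpn, ppn; } TlbEntry;
typedef struct { int valid; uint32_t ppn;      } Pte;

static TlbEntry tlb[TLB_SIZE];
static Pte      page_table[NUM_PAGES];   /* indexed by virtual page number */

/* Returns the physical address, or -1 to signal a page fault. */
long translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1u);

    for (unsigned i = 0; i < TLB_SIZE; i++)              /* 1. try the TLB first */
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return ((long)tlb[i].ppn << PAGE_BITS) | offset;

    if (!page_table[vpn].valid)                          /* 2. TLB miss: use the page table */
        return -1;                                       /* 3. not in memory: page fault    */

    tlb[0] = (TlbEntry){1, vpn, page_table[vpn].ppn};    /* refill one TLB slot (no replacement policy) */
    return ((long)page_table[vpn].ppn << PAGE_BITS) | offset;
}

int main(void) {
    page_table[3] = (Pte){1, 42};                               /* virtual page 3 -> physical page 42 */
    printf("phys = 0x%lx\n", (unsigned long)translate(0x3ABC)); /* 0x2aabc        */
    printf("fault -> %ld\n", translate(0x5000));                /* -1: page fault */
    return 0;
}
```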