System Memory: Memory Access and Access Time

Each read or write of memory is called a memory access. A specific procedure controls every access: the memory controller generates the signals that specify which memory location is to be accessed, and the data then appears on the data bus, where it can be read by the processor or whatever other device requested it.

In order to understand how memory is accessed, it is first necessary to have a basic understanding of how memory chips are addressed. Let's take as an example a common 16Mbit chip, configured as 4Mx4. This means that there are 4M (4,194,304) addresses of 4 bits each; so there are 4,194,304 different memory locations--sometimes called cells--each of which contains 4 bits of data. 4,194,304 is equal to 2^22, which means 22 bits are required to uniquely address that many memory locations. Thus, in theory, 22 address lines are required.
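As a quick check of that arithmetic, here is a minimal Python sketch (an illustration added here, not from the original article):

```python
# Minimal sketch: how many address bits does a 4M x 4 chip need?
locations = 4 * 1024 * 1024                 # 4M = 4,194,304 cells of 4 bits each
address_bits = locations.bit_length() - 1   # exact log2, since 4M is a power of two

print(locations)       # 4194304
print(address_bits)    # 22 -> in theory, 22 address lines
```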

However, in practice, memory chips do not have this many address lines. They are instead logically organized as a "square" of rows and columns. The low-order 11 bits are considered the "row" and the high-order 11 bits the "column". First the row address is sent to the chip, and then the column address. For example, let's suppose that we want to access memory location 2,871,405 in this chip. This corresponds to a binary address of "10101111010 00001101101". First, "00001101101" would be sent to select the "row", and then "10101111010" would be sent to select the column. This combination selects the unique location of memory address 2,871,405. This is analogous to how you might select a particular cell on a spreadsheet: go to row #34, say, and then look at column "J" to find cell "J34".
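To make the split concrete, here is a small Python sketch (purely illustrative) that divides the 22-bit example address into its 11-bit row and column halves:

```python
# Sketch: split a 22-bit address into an 11-bit row (low half) and column (high half)
ROW_BITS = 11

address = 2_871_405                          # the example location from the text
row     = address & ((1 << ROW_BITS) - 1)    # low-order 11 bits
column  = address >> ROW_BITS                # high-order 11 bits

print(f"{address:022b}")                     # 1010111101000001101101
print(f"row    = {row:011b} ({row})")        # 00001101101 (109)
print(f"column = {column:011b} ({column})")  # 10101111010 (1402)

# Reassembling the halves gives back the original address
assert (column << ROW_BITS) | row == address
```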

Intuitively, it would seem that designing memory chips in this manner is both more complex and slower than simply providing one address pin for each address line required to uniquely address the chip--why not just put 22 address pins on the chip? It may not surprise you to learn that the answer is cost. The row/column method greatly reduces the number of pins on the DRAM chip: here, 11 address pins are required instead of 22 (though a small part of that 22-11=11 savings is lost to the additional control signals needed to manage the row/column timing). You also save some of the buffers and other circuitry required for each address line. Sending the address in two "chunks" certainly slows down the addressing process, but keeping the chip smaller, with fewer inputs, lets it use less power, which makes it possible to run the chip faster and partially offsets the loss in access speed.

Of course, a PC doesn't have a single memory chip; most have dozens, depending on total memory capacity and the size of DRAMs being used. The chips are arranged into modules, and then into banks, and the memory controller manages which sets of chips are read from or written to. Since a modern PC reads or writes 64 bits at a time, each read or write involves simultaneous accesses to as many as 64 different DRAM chips.
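As a rough illustration (the chip widths below are assumed examples, not taken from the original article), the number of chips accessed together depends on how wide each chip's data path is:

```python
# Sketch: how many DRAM chips must be accessed at once to fill a 64-bit bus?
BUS_WIDTH = 64                               # bits read or written per access

for chip_width in (1, 4, 8, 16):             # common "xN" chip organizations
    chips = BUS_WIDTH // chip_width
    print(f"x{chip_width} chips: {chips} accessed per 64-bit transfer")
# x1 chips give the "as many as 64" case; 4Mx4 chips would need only 16 per bank.
```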

Here is a simplified walkthrough of how a basic read access is performed. This describes a conventional asynchronous read, in which the timing signals are not tied to the main system clock; synchronous DRAM uses different timing signals. (A short code sketch of these steps follows the list.)

  1. The address for the memory location to be read is placed on the address bus.
  2. The memory controller decodes the memory address and determines which chips are to be accessed.
  3. The lower half of the address ("row") is sent to the chips to be read.
  4. After allowing sufficient time for the row address signals to stabilize, the memory controller sets the row address strobe (sometimes called row address select) signal to zero. (This line is abbreviated as "RAS" with a horizontal line over it. The horizontal line is shorthand that tells engineers working with the circuit that the signal is "active low", meaning the chip treats a zero level as the cue to "do something". Since there is no reliable way to show that notation in HTML, I will write it as "/RAS".)
  5. When the /RAS signal has settled at zero, the entire row selected (all 2^11 columns in the example above, or 2048 different cells of 4 bits each) is read by the circuits in the chip. Note that this action refreshes all the cells in that row; refreshing is done one row at a time.
  6. The higher half of the address ("column") is sent to the chips to be read.
  7. After allowing sufficient time for the column address signals to stabilize, the memory controller sets the column address strobe (or column address select) signal to zero. This line is abbreviated as "CAS" with a horizontal line over it, or "/CAS".
  8. When the /CAS signal has settled at zero, the selected column is fed to the output buffers of the chip.
  9. The output buffers of all the accessed memory chips feed the data out onto the data bus, where the processor or other device that requested the data can read it.
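The following Python sketch is a loose model of the sequence above; the class and method names are invented for illustration and do not correspond to any real controller interface:

```python
# Illustrative model of an asynchronous DRAM read: row address + /RAS,
# then column address + /CAS, then data appears in the output buffer.
class SimpleDram:
    ROW_BITS = 11
    COL_BITS = 11

    def __init__(self):
        # 2^11 rows x 2^11 columns; each entry models one 4-bit cell
        self.cells = [bytearray(1 << self.COL_BITS) for _ in range(1 << self.ROW_BITS)]
        self.row_buffer = None    # entire row captured when /RAS goes low
        self.output = None        # data driven onto the data bus after /CAS

    def assert_ras(self, row_address):
        """/RAS low: latch the row address and read the whole row (steps 3-5)."""
        self.row_buffer = self.cells[row_address]   # in a real chip this also refreshes the row

    def assert_cas(self, column_address):
        """/CAS low: select one column from the buffered row (steps 6-8)."""
        self.output = self.row_buffer[column_address]
        return self.output


# Read location 2,871,405 using the row/column split shown earlier
address = 2_871_405
row, col = address & 0x7FF, address >> 11

dram = SimpleDram()
dram.assert_ras(row)            # row address, then /RAS
data = dram.assert_cas(col)     # column address, then /CAS
print(data)                     # step 9: data available on the data bus (0 here)
```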

Note that this is a very simplified example: it omits the various other timing signals and ignores common performance enhancements such as multiple-banked modules, burst mode, and so on. A write is performed similarly, except that the data flows into the chips instead of being sent out by them. A special signal called "R/W" (actually written with a horizontal line over the "W") controls whether a read or a write is being performed during the access.
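Continuing the SimpleDram sketch above (again purely illustrative, with invented names), the write direction would simply store data into the selected cell instead of driving it out:

```python
# Sketch of the write direction: the R/W line selects a write, so data flows into the chip
def write_cell(dram, address, value):
    row, col = address & 0x7FF, address >> 11
    dram.assert_ras(row)             # row address + /RAS, exactly as for a read
    dram.row_buffer[col] = value     # data is stored instead of sent to the output buffer

write_cell(dram, 2_871_405, 0b1010)  # store a 4-bit value at the example location
print(dram.assert_cas(1402))         # 10 -> reading it back returns the new value
```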

The amount of time that it takes for the memory to produce the requested data, from the start of the access until the valid data is available for use, is called the memory's access time, sometimes abbreviated tAC. It is normally measured in nanoseconds (ns). Today's memory normally has access times ranging from 5 to 70 nanoseconds. This is the speed of the DRAM memory itself, which is not necessarily the same as the true speed of the overall memory system. Note that much of the difference in access times between DRAM technologies has to do with how the memory chips are arranged and controlled, not with anything different in the core DRAM chips themselves.
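To relate access time to the rest of the system, here is a short sketch; the 66 MHz bus clock is an assumed, era-typical figure rather than something stated in the article:

```python
import math

# Sketch: how many bus clock cycles does a given DRAM access time span?
BUS_CLOCK_MHZ = 66                      # assumed memory bus clock
cycle_ns = 1000 / BUS_CLOCK_MHZ         # ~15.2 ns per bus clock cycle

for t_ac in (5, 50, 60, 70):            # access times (ns) within the range above
    cycles = math.ceil(t_ac / cycle_ns)
    print(f"tAC = {t_ac:2d} ns -> at least {cycles} bus cycle(s)")
```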

Next: Asynchronous and Synchronous DRAM

 