COMP 206: Computer Architecture and Implementation


Presentation Transcript


  1. COMP 206: Computer Architecture and Implementation Montek Singh Wed, Nov 16, 2005 Topic: Main Memory (DRAM) Organization

  2. Outline • Introduction • DRAM Organization • Challenges • Bandwidth • Granularity • Performance Reading: HP3 5.8 and 5.9

  3. Basics of DRAM Technology • DRAM (Dynamic RAM) • Used mostly in main memory • 1 capacitor + 1 transistor per bit • Needs refresh every 4-8 ms (about 5% of total time) • Read is destructive (needs write-back) • Access time < cycle time (because of the write-back) • Density (25-50):1 relative to SRAM • Address lines multiplexed (pins are scarce!) • SRAM (Static RAM) • Used mostly in caches (I, D, TLB, BTB) • 1 flip-flop (4-6 transistors) per bit • Read is not destructive • Access time = cycle time • Speed (8-16):1 relative to DRAM • Address lines not multiplexed (high decoding speed is important)

  4. DRAM Organization: Fig. 5.29

  5. Chip Organization • Chip capacity (= number of data bits) • tends to quadruple • 1K, 4K, 16K, 64K, 256K, 1M, 4M, … • In early designs, each data bit belonged to a different address (x1 organization) • Starting with 1Mbit chips, wider chips (4, 8, 16, 32 bits wide) began to appear • Advantage: Higher bandwidth • Disadvantage: More pins, hence more expensive packaging

  6. Chip Organization Example: 64Mb DRAM

  7. DRAM Access Several steps in DRAM access: • Half of the address bits select a row of the square array • The whole row of bits is brought out of the memory array into a buffer register (slow, 60-80% of access time) • The other half of the address bits select one bit of the buffer register (with the help of a multiplexer), which is read or written • The whole row is written back to the memory array Notes: • This organization is demanded by the needs of refresh • It also has advantages: e.g., nibble, page, and static column mode operation
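
A minimal sketch of the row/column address split described above, using a toy 16×16 array; the array size, function name, and data layout are illustrative assumptions, not part of the slides:

```python
# Toy model of a DRAM access: the high half of the address selects a row,
# the whole row is copied into a buffer, the low half selects one bit,
# and the row is written back (reads are destructive in real DRAM).

ROW_BITS = 4   # assumed: a 16 x 16 array, purely for illustration
COL_BITS = 4

def dram_read(address, array, row_buffer):
    row = (address >> COL_BITS) & ((1 << ROW_BITS) - 1)  # high half of address
    col = address & ((1 << COL_BITS) - 1)                # low half of address
    row_buffer[:] = array[row]       # slow step: bring the whole row out
    bit = row_buffer[col]            # fast step: column select via multiplexer
    array[row] = list(row_buffer)    # write-back restores the row
    return bit

array = [[0] * (1 << COL_BITS) for _ in range(1 << ROW_BITS)]
array[3][5] = 1
buffer = [0] * (1 << COL_BITS)
print(dram_read((3 << COL_BITS) | 5, array, buffer))  # -> 1
```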

  8. DRAM Refresh • Refresh complicates the operation of memory • Refresh control competes with the CPU for access to DRAM • Each row is refreshed once every 4-8 ms, irrespective of the use of that row • Refreshes are performed one row at a time • Consider a 1M×1 DRAM chip (2^10 × 2^10 array) with a 190 ns cycle time • Time to refresh all rows, one row at a time: 190×10^-9 × 10^3 ≈ 0.19 ms < 4-8 ms • 64M×1 (2^13 × 2^13): 190×10^-9 × 8×10^3 ≈ 1.52 ms < 4 ms • 256M×1 (2^14 × 2^14): 190×10^-9 × 16×10^3 ≈ 3.04 ms < 4 ms • 1G×1 (2^15 × 2^15): 190×10^-9 × 32×10^3 ≈ 6 ms, which no longer fits in a 4 ms interval, so reducing the cycle time becomes critical • Want to keep refresh overhead small (< 5-10% of total time)
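
The arithmetic above is easy to reproduce; a small sketch using the slide's numbers (interpreting refresh cost as a fraction of a 4 ms interval is my reading, and the variable names are mine):

```python
# Time to refresh every row of a square (rows x rows) DRAM array, assuming
# one row is refreshed per 190 ns cycle (numbers from the slide).

CYCLE_TIME = 190e-9        # seconds per row refresh
REFRESH_INTERVAL = 4e-3    # every row must be refreshed within ~4 ms

for name, rows in [("1Mx1", 2**10), ("64Mx1", 2**13),
                   ("256Mx1", 2**14), ("1Gx1", 2**15)]:
    total = rows * CYCLE_TIME              # time to walk all rows once
    fraction = total / REFRESH_INTERVAL    # share of the refresh interval used
    print(f"{name}: {total * 1e3:.2f} ms to refresh all rows "
          f"({fraction:.0%} of a 4 ms interval)")
```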

  9. Memory Performance Characteristics • Latency (access time) • The time interval between the instant at which the data is called for (READ) or requested to be stored (WRITE), and the instant at which it is delivered or completely stored • Cycle time • The time between the instant the memory is accessed, and the instant at which it may be validly accessed again • Bandwidth (throughput) • The rate at which data can be transferred to or from memory • Reciprocal of cycle time • “Burst mode” bandwidth is of greatest interest • Cycle time > access time for conventional DRAM • Cycle time < access time in “burst mode” when a sequence of consecutive locations is read or written
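
As a concrete illustration of the cycle-time/bandwidth relationship, a short sketch; the bus width and timings below are assumptions chosen for round numbers, not figures from the slides:

```python
# Bandwidth is the reciprocal of cycle time, scaled by the transfer width.

width_bits = 8          # assumed data path width
random_cycle = 400e-9   # assumed cycle time for an isolated access
burst_cycle = 100e-9    # assumed per-word time in burst mode (row already open)

print("random-access bandwidth:", width_bits / 8 / random_cycle / 1e6, "MB/s")  # 2.5
print("burst-mode bandwidth:   ", width_bits / 8 / burst_cycle / 1e6, "MB/s")   # 10.0
```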

  10. Improving Performance • Latency can be reduced by • Reducing access time of chips • Using a cache (“cache trades latency for bandwidth”) • Bandwidth can be increased by using • Wider memory (more chips) • More data pins per DRAM chip • Increased bandwidth per data pin

  11. Two Recent Problems • DRAM chip sizes quadrupling every three years • Main memory sizes doubling every three years • Thus, the main memory of the same kind of computer is being constructed from fewer and fewer DRAM chips • This results in two serious problems • Diminishing main memory bandwidth • Increasing granularity of memory systems

  12. Increasing Granularity of Memory Systems • Granularity of memory system is the minimum memory size, and also the minimum increment in the amount of memory permitted by the memory system • Too large a granularity is undesirable • Increases cost of system • Restricts its competitiveness • Granularity can be decreased by • Widening the DRAM chips • Increasing the per-pin bandwidth of the DRAM chips

  13. Granularity Example We are using 16K×1 DRAM parts running at 2.5 MHz (400 ns cycle time). Eight such DRAM parts provide 16KB of memory with 2.5 MB/s bandwidth. Industry switches to 64Kb (64K×1) DRAM parts. Two such DRAM parts provide the desired 16KB of memory, but such a system would have only a 2-bit-wide bus. To maintain the 2.5 MB/s bandwidth, the parts would need to run at 10 MHz; they run at only 3.7 MHz. What are the options?
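
Reproducing the slide's arithmetic as a quick check (variable names are mine):

```python
# How fast must 64Kx1 parts run to sustain 2.5 MB/s on a 2-bit-wide bus?

target_bw_bits = 2.5e6 * 8   # 2.5 MB/s expressed in bits per second
bus_width_bits = 2           # two 64Kx1 chips give a 2-bit bus
required_mhz = target_bw_bits / bus_width_bits / 1e6
print(required_mhz, "MHz required; the parts run at only 3.7 MHz")  # -> 10.0
```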

  14. Granularity Example (2) Solution 1: Use eight 64K×1 DRAM parts (six would suffice for the required bandwidth). Problem: we now have 64KB of memory rather than 16KB. Solution 2: Use two 16K×4 DRAM parts (same capacity per chip, different organization). This provides 16KB of memory at the required bandwidth.
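
A small comparison of the two solutions, assuming the 64Kb-generation parts run at the 3.7 MHz quoted on the previous slide (the helper function and its name are mine):

```python
# Capacity and peak bandwidth of a memory built from identical DRAM chips.

def system(chips, bits_per_chip, words_per_chip, clock_mhz):
    capacity_kb = chips * bits_per_chip * words_per_chip / 8 / 1024
    bandwidth_mb_s = chips * bits_per_chip * clock_mhz / 8
    return capacity_kb, bandwidth_mb_s

print("Solution 1 (8 x 64Kx1):", system(8, 1, 64 * 1024, 3.7))  # (64.0 KB, 3.7 MB/s)
print("Solution 2 (2 x 16Kx4):", system(2, 4, 16 * 1024, 3.7))  # (16.0 KB, 3.7 MB/s)
```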

  15. Improving Memory Chip Performance Several techniques to get more bits/sec from a DRAM chip: • Allow repeated accesses to the row buffer without another row access time • burst mode, fast page mode, EDO mode, … • Simplify the DRAM-CPU interface • add a clock to reduce overhead of synchronizing with the controller • = synchronous DRAM (SDRAM) • Transfer data on both rising and falling clock edges • double data rate (DDR) • Each of the above adds a small amount of logic to exploit the high internal DRAM bandwidth
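
For the DDR point, the effect on peak bandwidth is simply a factor of two at the same clock; the interface clock and module width below are assumed example values, not figures from the slides:

```python
# Peak transfer rate: data moves once per cycle (SDR) or on both clock
# edges (DDR). Interface clock and width are assumed example values.

bus_clock_mhz = 133       # assumed interface clock
bus_width_bytes = 8       # assumed 64-bit module

sdr_peak_mb_s = bus_clock_mhz * bus_width_bytes      # one transfer per cycle
ddr_peak_mb_s = 2 * sdr_peak_mb_s                    # both clock edges
print(sdr_peak_mb_s, "MB/s (SDR) vs", ddr_peak_mb_s, "MB/s (DDR)")  # 1064 vs 2128
```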

  16. Achieving Higher Memory Bandwidth (Fig. 5.27, HP3)

  17. Memory Interleaving • Goal: Take advantage of the bandwidth of multiple DRAMs in the memory system • Memory address A is converted into a (b,w) pair, where • b = bank index • w = word index within bank • Logically a wide memory • Accesses to B banks are staged over time to share internal resources such as the memory bus • Interleaving can be on • Low-order bits of the address (cyclic, or low-order, interleaving) • b = A mod B, w = A div B • High-order bits of the address (block, or high-order, interleaving) • A combination of the two (block-cyclic)
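
A sketch of the two basic mappings; the bank count and bank size in the demo loop are arbitrary example values:

```python
# Address -> (bank, word) mapping for low-order and high-order interleaving.

def low_order(addr, banks):                 # cyclic: consecutive addresses
    return addr % banks, addr // banks      # fall in consecutive banks

def high_order(addr, banks, words_per_bank):            # block: each bank holds
    return addr // words_per_bank, addr % words_per_bank  # a contiguous block

for a in range(8):
    print(a, low_order(a, 4), high_order(a, 4, 16))
```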

  18. Mixed Interleaving • Memory address register is 6 bits wide • Most significant 2 bits give bank address • Next 3 bits give word address within bank • LSB gives (parity of) module within bank • 6 = 000110₂ = (00, 011, 0) = (0, 3, 0) • 41 = 101001₂ = (10, 100, 1) = (2, 4, 1)
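
Decoding the two example addresses with bit operations (the function name is mine):

```python
# 6-bit address: bits [5:4] = bank, bits [3:1] = word within bank,
# bit [0] = module within bank.

def decode(addr):
    bank = (addr >> 4) & 0b11
    word = (addr >> 1) & 0b111
    module = addr & 0b1
    return bank, word, module

print(decode(6))    # (0, 3, 0)
print(decode(41))   # (2, 4, 1)
```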

  19. Other types of Memory • ROM = Read-only Memory • Flash = ROM which can be written once in a while • Used in embedded systems, small microcontrollers • Offer IP protection, security • Other?
