
Computer Architecture




  1. Computer Architecture: Memory Organization

  2. Types of Memory • Cache Memory • Serves as a buffer for frequently accessed data • Small → high cost • RAM (Main Memory) • Stores the programs and data that the computer needs when executing a program • Dynamic RAM (DRAM) • Uses tiny capacitors • Must be refreshed every few milliseconds to keep the stored data • Static RAM (SRAM) • Holds its data as long as the power is on • Built from D flip-flops

  3. Types of Memory (Cont.) • ROM • Stores critical information necessary to operate the system • Hardwired → cannot be reprogrammed • Programmable Read-Only Memory (PROM) • Can be programmed once using appropriate equipment • Erasable PROM (EPROM) • Can be erased with a special tool (ultraviolet light) • Has to be totally erased before it can be reprogrammed • Electrically Erasable PROM (EEPROM) • No special tools required • Can erase and rewrite a portion at a time

  4. Memory Hierarchy • The idea • Hide the slower memory behind the fast memory • Cost and performance play major roles in selecting the memory technology at each level.

  5. Hit vs. Miss • Hit • The requested data resides in a given level of memory. • Miss • The requested data is not found in the given level of memory. • Hit rate • The percentage of memory accesses found in a given level of memory. • Miss rate • The percentage of memory accesses not found in a given level of memory.

  6. Hit vs. Miss (Cont.) • Hit time • The time required to access the requested information in a given level of memory. • Miss penalty • The time required to process a miss: • Replacing a block in an upper level of memory, plus • The additional time to deliver the requested data to the processor.

  7. Miss Scenario • The processor sends a request to the cache for location X • If found → cache hit • If not → try the next level • When the location is found → load the whole block into the cache • Hoping that the processor will access one of the neighboring locations next • One miss may lead to multiple hits → locality • Can we compute the average access time based on this memory hierarchy?

  8. Average Access Time • Assume a memory hierarchy with three levels (L1, L2, and L3) • What is the average memory access time?
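The slide leaves the computation as a question; one simple model weights each level's access time by the fraction of references it ends up serving. The level access times and hit rates below are hypothetical figures for illustration, not from the slides:

```python
def avg_access_time(times, hit_rates):
    """Average access time for a hierarchy: each level is tried in
    order, and a miss at one level falls through to the next
    (the last level should always hit)."""
    t_avg = 0.0
    p_reach = 1.0  # probability that the access reaches this level
    for t, h in zip(times, hit_rates):
        t_avg += p_reach * h * t
        p_reach *= (1 - h)
    return t_avg

# Assumed: L1 = 1 ns with 90% hit rate, L2 = 10 ns with 95% hit rate,
# L3 (main memory) = 100 ns, always hits.
print(avg_access_time([1, 10, 100], [0.90, 0.95, 1.00]))  # 2.35
```

With these numbers the average is dominated by L1, which is the whole point of the hierarchy.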

  9. Locality of Reference • One miss may lead to multiple hits → locality • Temporal locality • Recently accessed items tend to be accessed again in the near future. • Spatial locality • When a given address has been referenced, it is most likely that addresses near it will be referenced within a short period of time (for example, as in arrays or loops). • Sequential locality → a special case of spatial locality • Instructions tend to be accessed sequentially.
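Spatial locality can be made concrete by counting how many distinct memory blocks a traversal touches. This is a sketch assuming 4-word blocks and word addresses; a sequential scan keeps reusing the block it just loaded, while a large stride touches a new block on every access:

```python
BLOCK = 4  # assumed block size in words

def blocks_touched(addresses):
    """Number of distinct memory blocks referenced by an access stream."""
    return len({a // BLOCK for a in addresses})

sequential = list(range(0, 32))          # a[0], a[1], ..., a[31]
strided    = list(range(0, 32 * 8, 8))   # a[0], a[8], a[16], ...

print(blocks_touched(sequential))  # 8 blocks for 32 accesses
print(blocks_touched(strided))     # 32 blocks for 32 accesses
```

The sequential stream gets three "free" hits per block loaded; the strided stream gets none, even though both make 32 accesses.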

  10. Cache Memory • Cache • Stores recently used data closer to the CPU • Your home is the cache and the main memory is the grocery store • Buy what is most probably needed in the coming week • How can a processor know which block(s) to bring into the cache? • There is no way to know in advance, but it can benefit from the locality concept

  11. Impact of Temporal Locality • Assume that: • A loop instruction is executed n times • The requested data caused a cache miss that requires tm to load the requested block from the main memory to the cache • tc is the cache access time • What is the average access time? What does it mean? • [Plot: tavg as a function of n]

  12. Impact of Spatial Locality • Assume that: • m elements are requested due to spatial locality • The requested data caused a cache miss that requires tm to load the requested block from the main memory to the cache • tc is the cache access time • What is the average access time? What does it mean? • [Plot: tavg as a function of m]
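Both slides leave the formula as a question. Under the stated assumptions (one miss costing an extra tm, with every access also paying the cache access time tc), a common way to work it out is tavg = (n·tc + tm)/n for the temporal case and tavg = (m·tc + tm)/m for the spatial case, which both approach tc as n or m grows. A small sketch:

```python
def t_avg_temporal(n, tc, tm):
    """Loop body executed n times: the first access also pays the
    miss penalty tm; all n accesses pay the cache access time tc."""
    return (n * tc + tm) / n

def t_avg_spatial(m, tc, tm):
    """One miss loads the whole block; the remaining m-1 of the
    m requested elements hit in the cache."""
    return (m * tc + tm) / m

# Assumed numbers: tc = 1, tm = 50.
print(t_avg_temporal(100, 1, 50))  # 1.5 -> approaches tc as n grows
print(t_avg_spatial(16, 1, 50))    # 4.125
```

The "what does it mean" part: the miss cost tm is amortized over the n repetitions (temporal) or the m block elements (spatial).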

  13. Cache Mapping Schemes

  14. Cache Mapping Schemes • Cache memory is smaller than the main memory • Only a few blocks can be loaded into the cache at a time • The cache does not use the same memory addresses • Which block in the cache is equivalent to which block in memory? • The processor uses the Memory Management Unit (MMU) to convert the requested memory address to a cache address

  15. Direct Mapping • Assigns cache mappings using a modular approach: j = i mod n • j → cache block number • i → memory block number • n → number of cache blocks • [Figure: mapping of main memory blocks to cache blocks]
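The mapping rule above is just a modulus, so it can be sketched in a couple of lines; the block numbers below are arbitrary examples:

```python
def direct_map(i, n):
    """Cache block j that memory block i maps to, with n cache blocks."""
    return i % n

# With 10 cache blocks, memory blocks 0, 10, 20, ... all compete
# for cache block 0; blocks 5, 15, 25, ... for cache block 5.
print([direct_map(i, 10) for i in [0, 5, 10, 15, 25]])  # [0, 5, 0, 5, 5]
```

This collision pattern is exactly what causes the poor utilization discussed on a later slide: many memory blocks fight over one cache block while others sit empty.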

  16. Example • Given M memory blocks to be mapped to 10 cache blocks, show the direct mapping scheme. How do you know which block is currently in the cache?

  17. Direct Mapping (Cont.) • Bits in the main memory address are divided into three fields: • Word → identifies a specific word in the block • Block → identifies a unique block in the cache • Tag → identifies which block from the main memory is currently in the cache

  18. Example • Consider, for example, the case of a main memory consisting of 4K blocks, a cache memory consisting of 128 blocks, and a block size of 16 words. Show the direct mapping and the main memory address format.
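The field widths for this example follow directly from taking base-2 logarithms of the given sizes; a minimal sketch of the arithmetic:

```python
from math import log2

mem_blocks, cache_blocks, block_size = 4 * 1024, 128, 16

word  = int(log2(block_size))          # 4 bits: selects a word in the block
block = int(log2(cache_blocks))        # 7 bits: selects a cache block
tag   = int(log2(mem_blocks)) - block  # 12 - 7 = 5 bits: disambiguates
total = tag + block + word             # 16-bit main memory address

print(tag, block, word, total)  # 5 7 4 16
```

So the main memory address format is Tag (5) | Block (7) | Word (4), and each cache block's 5-bit tag tells which of the 2^5 = 32 memory blocks that map to it is currently resident.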

  19. Example (Cont.)

  20. Direct Mapping • Advantages • Easy • Does not require any search technique to find a block in the cache • Replacement is straightforward • Disadvantages • Many blocks in main memory are mapped to the same cache block • A block may be evicted while other cache blocks remain empty • Poor cache utilization

  21. Group Activity • Consider the case of a main memory consisting of 4K blocks, a cache memory consisting of 8 blocks, and a block size of 4 words. Show the direct mapping and the main memory address format.

  22. Group Activity • Given the following direct mapping chart, what are the cache and memory locations required by the following addresses:

  23. Fully Associative Mapping • Allows any memory block to be placed anywhere in the cache • A search technique is required, matching on the tag field, to find a block in the cache

  24. Example • We have a main memory of 2^14 words, a cache with 16 blocks, and a block size of 8 words. How many tag and word field bits? • Word field requires 3 bits (2^3 = 8 words per block) • Tag field requires 11 bits (2^14 / 8 = 2^11 = 2048 memory blocks)
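The same arithmetic, checked in code. Note that in fully associative mapping there is no block/set field: the address splits into tag and word only:

```python
from math import log2

mem_words, block_size = 2**14, 8

word = int(log2(block_size))               # 3 bits
tag  = int(log2(mem_words // block_size))  # log2(2048) = 11 bits

print(word, tag)  # 3 11
```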

  25. Which MM Block Is in the Cache? • Naïve method: • A tag field is associated with each cache block • Compare the tag field with the tag entry in the cache to check for a hit • CAM (Content-Addressable Memory) • Words can be fetched on the basis of their contents, rather than on the basis of their addresses or locations • For example: • Find the addresses of all “Smiths” in Dallas.

  26. Fully Associative Mapping • Advantages • Flexibility • Better cache utilization • Disadvantages • Requires a tag search • Associative search → parallel search • Might require an extra hardware unit to do the search • Requires a replacement strategy when the cache is full • Expensive

  27. N-way Set Associative Mapping • Combines direct and fully associative mapping • The cache is divided into sets of blocks • All sets are the same size • Main memory blocks are mapped to a specific set based on: s = i mod S • s → the set to which memory block i is mapped • S → total number of sets • Within its set, an incoming block may be placed in any cache block
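The set-selection rule is the same modulus idea as direct mapping, applied to sets instead of individual blocks. A sketch using the 128-block, 4-way configuration from a later group activity as the assumed sizes:

```python
def set_for_block(i, num_sets):
    """Set s that memory block i maps to: s = i mod S."""
    return i % num_sets

# Assumed: a 128-block cache organized 4-way -> 128 / 4 = 32 sets.
num_sets = 128 // 4

# Memory blocks 0, 32, 64 all map to set 0 but can coexist there,
# occupying up to 4 of its ways; block 33 goes to set 1.
print([set_for_block(i, num_sets) for i in [0, 32, 64, 33]])  # [0, 0, 0, 1]
```

Direct mapping is the special case of one block per set; fully associative is the special case of a single set.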

  28. N-way Set Associative Mapping • Tag field → uniquely identifies the targeted block within the determined set • Word field → identifies the element (word) within the block that is requested by the processor • Set field → identifies the set

  29. N-way Set Associative Mapping

  30. Group Activity • Compute the three parameters (Word, Set, and Tag) for a memory system having the following specification: • Size of the main memory is 4K blocks, • Size of the cache is 128 blocks, • The block size is 16 words. • Assume that the system uses 4-way set-associative mapping.

  31. Answer
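The content of the answer slide did not survive extraction, but the three parameters follow from the given sizes the same way as in the direct-mapping example; a hedged reconstruction of the computation:

```python
from math import log2

mem_blocks, cache_blocks, block_size, ways = 4 * 1024, 128, 16, 4

num_sets = cache_blocks // ways        # 128 / 4 = 32 sets
word = int(log2(block_size))           # 4 bits
s    = int(log2(num_sets))             # 5 bits (Set field)
tag  = int(log2(mem_blocks)) - s       # 12 - 5 = 7 bits

print(tag, s, word)  # 7 5 4 -> a 16-bit main memory address
```

Compared with the direct-mapped layout of the same memory (Tag 5 | Block 7 | Word 4), two bits move from the index into the tag.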

  32. N-way Set Associative Mapping • Advantages • Moderate cache utilization • Disadvantages • Still needs a tag search inside the set

  33. If the cache is full and a block must be replaced, which one should be replaced?

  34. Cache Replacement Policies • Random • Simple • Requires a random generator • First In First Out (FIFO) • Replace the block that has been in the cache the longest • Requires keeping track of each block's lifetime • Least Recently Used (LRU) • Replace the block that has gone unused for the longest time • Requires keeping track of each block's access history

  35. Cache Replacement Policies (Cont.) • Most Recently Used (MRU) • Replace the block that was used most recently • Requires keeping track of each block's access history • Optimal • Hypothetical • Replaces the block that will not be needed for the longest time • Must know the future
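LRU is the policy the next example relies on, so here is a minimal software sketch of it (not a hardware model): an ordered map whose entry order doubles as the recency order.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache of memory block numbers."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # least recently used entry first

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return "hit"
        if len(self.blocks) == self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = True
        return "miss"

c = LRUCache(2)
print([c.access(b) for b in [1, 2, 1, 3, 2]])
# ['miss', 'miss', 'hit', 'miss', 'miss']
```

After the hit on block 1, block 2 becomes the least recently used, so the access to block 3 evicts 2, and the final access to 2 misses again.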

  36. Example • Consider the case of a 4×8 two-dimensional array of numbers, A. Assume that each number in the array occupies one word and that the array elements are stored in column-major order in the main memory from location 1000 to location 1031. The cache consists of eight blocks, each consisting of just two words. Assume also that, whenever needed, the LRU replacement policy is used. We would like to examine the changes in the cache if direct mapping is used as the following sequence of requests for the array elements is made by the processor:

  37. Array elements in the main memory

  38. Conclusion • 16 cache misses • Not a single hit • 12 replacements • Only 4 cache blocks are used
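The actual request sequence was shown in a figure that did not survive extraction. Assuming the processor reads rows 0 and 1 of A element by element (row-wise access of a column-major array, a hypothetical but consistent reading), a direct-mapped simulation reproduces the slide's counts:

```python
# Assumed request sequence: A[0][0..7] then A[1][0..7] (row-wise).
N_CACHE_BLOCKS, BLOCK_SIZE, BASE, ROWS = 8, 2, 1000, 4

def address(r, c):
    """Column-major storage: A[r][c] lives at BASE + c*ROWS + r."""
    return BASE + c * ROWS + r

cache = {}  # cache block number -> memory block currently held
misses = hits = replacements = 0
for r in (0, 1):
    for c in range(8):
        mem_block = address(r, c) // BLOCK_SIZE
        cb = mem_block % N_CACHE_BLOCKS      # direct mapping
        if cache.get(cb) == mem_block:
            hits += 1
        else:
            if cb in cache:
                replacements += 1
            cache[cb] = mem_block
            misses += 1

print(misses, hits, replacements, len(cache))  # 16 0 12 4
```

Row-wise accesses stride by 4 words, so only even memory blocks are touched, they collide on 4 of the 8 cache blocks, and every block is evicted before it is reused.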

  39. Group Activity • Do the same for fully associative and 4-way set associative mappings.
