
MIMD Shared Memory



Presentation Transcript


  1. MIMD Shared Memory Multiprocessors

  2. MIMD -- Shared Memory
  • Each processor has a full CPU
  • Each processor runs its own code
    • can be the same program as the other processors, or a different one
  • All processors access the same memory
    • same address space for all processors (see the sketch below)
  • UMA -- Uniform Memory Access
    • all memory is accessible in the same time from every processor
  • NUMA -- Non-Uniform Memory Access
    • memory is localized
    • each processor can access some memory faster than the rest
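A minimal sketch of the shared-address-space idea, using POSIX threads to stand in for the processors (my illustration, not from the slides; compile with -pthread). Each thread writes its own slot of one global array, and the main thread reads everything back, because all of them see the same memory.

```c
#include <pthread.h>
#include <stdio.h>

#define NPROC 4

static int shared[NPROC];          /* one shared array, visible to every thread */

static void *worker(void *arg)
{
    long id = (long)arg;
    shared[id] = (int)id * 10;     /* each "processor" writes its own slot */
    return NULL;
}

int main(void)
{
    pthread_t t[NPROC];
    for (long i = 0; i < NPROC; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NPROC; i++)
        pthread_join(t[i], NULL);
    for (int i = 0; i < NPROC; i++)  /* main thread reads what the others wrote */
        printf("shared[%d] = %d\n", i, shared[i]);
    return 0;
}
```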

  3. MIMD - SM - UMA
  [Diagram: processors connected through an interconnection network to memory modules]

  4. Options for Connection -- UMA
  • Bus
    • sequential, can be used for one message at a time
  • Switching Network
    • can send many messages at once
    • depends on the connection scheme
  • Crossbar
    • maximal connections
    • expensive
  • Omega (also called Butterfly, Banyan)
    • several permutations of processor-memory connections possible

  5. Bus
  • Needs smart local cache schemes to reduce bus traffic
  • Works for a low number of processors
    • depending on the technology, 20-50 processors overload the bus and performance degrades
  • Common on 4- and 8-processor SMP servers

  6. Bus
  [Diagram: processors, each with a local cache, attached to a single memory bus]

  7. Crossbar switch
  • Every permutation of processor-to-memory connections can work
  • Expensive: N*M switches, where N = number of processors and M = number of memory modules

  8. Crossbar switch
  [Diagram: grid of switches connecting each processor row to each memory-module column]

  9. Omega Network
  • Every processor connects to every memory module
  • Many, but not all, permutations possible
    • an extra stage adds redundancy and more permutations
  • For N processors and N memory modules (worked example below):
    • number of switches = (N/2) log N
    • number of stages = log N (determines latency)
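A small worked example of the cost formulas above (my own illustration): it compares the N*M crosspoints of a full crossbar against the (N/2) log N switches and log N stages of an omega network, assuming N memory modules for N processors and N a power of two. Compile with -lm.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    for (int n = 8; n <= 1024; n *= 4) {
        int stages    = (int)log2(n);       /* log N stages -> latency */
        int omega     = (n / 2) * stages;   /* (N/2) log N  2x2 switches */
        long crossbar = (long)n * n;        /* N*M crosspoints, M = N here */
        printf("N=%4d  crossbar=%8ld  omega=%6d  stages=%d\n",
               n, crossbar, omega, stages);
    }
    return 0;
}
```

The gap grows quickly: at N = 1024 the crossbar needs about a million crosspoints, while the omega network needs 5120 switches at the price of 10 stages of latency.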

  10. Omega Network
  [Diagram: processors on the left, memory modules on the right, connected by stages of 2x2 switches]

  11. Omega Network
  [Diagram: 8x8 omega network with inputs and outputs labeled 000-111; a route to destination 101 is highlighted]
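The slide shows the route to destination 101 but not the rule behind it; the usual scheme for an omega network is destination-tag (self-routing), so the sketch below assumes that: at each stage the switch looks at the next destination bit, most significant first, and sends the message to its upper output for 0 and lower output for 1.

```c
#include <stdio.h>

/* Print the switch outputs taken on the way to `dst` in a network with
 * `stages` stages (2^stages inputs).  The source only determines where
 * the message enters; the outputs taken depend on the destination alone. */
static void route(unsigned src, unsigned dst, int stages)
{
    printf("route %u -> %u:", src, dst);
    for (int i = stages - 1; i >= 0; i--) {
        int bit = (dst >> i) & 1;                /* next destination bit */
        printf(" %s", bit ? "lower" : "upper");  /* switch output taken  */
    }
    printf("\n");
}

int main(void)
{
    route(0, 5, 3);   /* destination 101 -> lower, upper, lower */
    route(3, 5, 3);   /* a different source, same destination, same bits */
    return 0;
}
```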

  12. Omega Network -- A Permutation
  [Diagram: the same 8x8 omega network routing one full processor-to-memory permutation at once]

  13. Omega Network with combining
  • Smart switches
    • combine two requests with the same destination
    • make the memory accesses equivalent to some serial sequence
    • split the return values appropriately
  • Time trade-off
  • Used in the NYU Ultracomputer
    • also in the IBM RP3 experimental machine
  • Example: Fetch and Increment (see the sketch below)
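A sketch of the semantics combining has to preserve (my illustration, not the Ultracomputer hardware): two fetch-and-increment requests to the same location can be merged in a switch into a single fetch-and-add of 2; the old value v comes back, one requester receives v and the other v+1, which is exactly what some serial ordering of the two requests would have produced.

```c
#include <stdatomic.h>
#include <stdio.h>

static atomic_int counter = 0;

/* What each processor issues. */
static int fetch_and_increment(atomic_int *p)
{
    return atomic_fetch_add(p, 1);         /* returns the old value */
}

/* What a combining switch could do for two requests to the same address. */
static void combined_pair(atomic_int *p, int *r1, int *r2)
{
    int v = atomic_fetch_add(p, 2);         /* one combined memory access */
    *r1 = v;                                /* split the result on the way back */
    *r2 = v + 1;
}

int main(void)
{
    int a = fetch_and_increment(&counter);  /* serial: 0 */
    int b = fetch_and_increment(&counter);  /* serial: 1 */
    int c, d;
    combined_pair(&counter, &c, &d);        /* combined: 2 and 3 */
    printf("%d %d %d %d\n", a, b, c, d);
    return 0;
}
```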

  14. Omega Network
  [Diagram: the 8x8 omega network again with destination 101 highlighted, illustrating the combining example from the previous slide]

  15. Options for Connection -- NUMA
  • Each processor has a segment of memory closer to it than the others
    • could be several different levels of access
  • All processors still use the same address space
  • Omega network with wrap-around
    • BBN Butterfly
  • Hierarchy of rings (or other switches)
    • Kendall Square Research KSR-1
    • SGI Origin series
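The slides name older machines; as a modern illustration of the same idea, here is a minimal sketch assuming a Linux system with libnuma installed (compile with -lnuma). Memory is allocated on a specific node, so it is near that node's processors and farther from the others, but it stays in the single shared address space.

```c
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "not a NUMA system\n");
        return 1;
    }
    printf("%d NUMA node(s)\n", numa_max_node() + 1);

    /* Place one buffer on node 0: fast for node 0's processors, slower for
     * processors on other nodes, yet addressable by all of them. */
    size_t size = 4096;
    int *buf = numa_alloc_onnode(size, 0);
    if (buf) {
        buf[0] = 42;               /* any processor could read this location */
        numa_free(buf, size);
    }
    return 0;
}
```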

  16. Hierarchical Rings
  [Diagram: compute nodes on a local ring, with directory nodes linking it to a higher-level ring]

  17. Issues for MIMD Shared Memory
  • Memory access
    • can reads be simultaneous?
    • how are multiple writes controlled?
  • Synchronization mechanism needed (see the sketch below)
    • semaphores
    • monitors
  • Local caches need to be coordinated
    • cache coherency protocols
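A small sketch of why synchronization is needed, again using POSIX threads as the processors (my illustration; compile with -pthread). Without the lock, concurrent increments of the shared counter could be lost; the mutex serializes the writes, much as a semaphore or a monitor's implicit lock would.

```c
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITER    100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITER; i++) {
        pthread_mutex_lock(&lock);     /* only one writer at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITER);
    return 0;
}
```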
