
2. Multiprocessors Main Structures



  1. 2. Multiprocessors Main Structures • 2.1 Shared Memory x Distributed Memory
  • Shared-Memory (Global-Memory) Multiprocessor: All processors can access all memory locations. All the necessary variables are shared by all processors, and any processor may read or write any shared variable.
  • Distributed-Memory Multiprocessor (Multicomputer): Each processor has its own local memory and can read or write only that local memory. Any synchronization has to be done using explicit message passing between processors.
  [Figure: a shared-memory multiprocessor (processors and memories connected through a memory-access interconnection network) shown beside a distributed-memory multicomputer (processor-memory nodes connected through a message-passing interconnection network).]
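The shared-memory model on this slide can be made concrete with a short program. The following is a minimal sketch (not from the original slides), assuming POSIX threads on a Unix-like system: several threads read and write one shared global variable, and a mutex provides the synchronization; the thread count and variable names are illustrative.

    /* Minimal shared-memory sketch: several threads update one shared counter.
       Assumes POSIX threads (compile with cc -pthread); names are illustrative. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    static long shared_counter = 0;                 /* visible to every thread */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);              /* synchronize access to the shared variable */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        printf("counter = %ld\n", shared_counter);  /* expected: 400000 */
        return 0;
    }

Every thread addresses the same shared_counter because all of them see the same global memory; on a distributed-memory machine the same update would have to be expressed as explicit messages, as sketched after slide 6.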

  2. 2. Multiprocessors Main Structures • 2.1 Shared Memory x Distributed Memory
  • Shared-memory multiprocessors always have local cache memories private to each processor, which reduces global memory conflicts. As the cache size grows, the role of global memory shrinks to buffering interprocess data (much like a message-passing interconnection network).
  • Concurrent processes reside on various processors. When some processors fail, the remaining processors can continue the work, though at reduced throughput, thus ensuring high availability.
  • Recovery and process redistribution after a failure are more difficult in distributed-memory multiprocessors, because recovery requires that other processors read the failed processor's local (cache) memory, which they cannot access directly.

  3. 2. Multiprocessors Main Structures • 2.2 Fine Grain x Coarse Grain
  • Grain refers to the number of instructions executed in a processor before synchronizing or communicating some data with another processor.
  • Fine-grain parallel processing involves synchronizing the processors after a few instructions.
  • Coarse-grain parallel processing involves synchronizing the processors after tens of thousands of instructions.
  • Medium-grain parallel processing falls somewhere in the middle, for example several hundred instructions between synchronizations.
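How grain size shows up in code can be sketched with two workers, again assuming POSIX threads with barrier support; the array size, thread count, and doubling operation are illustrative, not from the slides. The coarse-grain worker processes a whole block and synchronizes once; the fine-grain worker hits a barrier after every element, so synchronization cost dominates.

    /* Sketch contrasting grain size: how much work each thread does between
       synchronizations. Assumes POSIX threads with barriers; sizes are illustrative. */
    #include <pthread.h>
    #include <stdio.h>

    #define N        100000
    #define NTHREADS 4

    static double a[N];
    static pthread_barrier_t bar;

    /* Coarse grain: each thread processes a whole block, then synchronizes once. */
    void *coarse(void *arg)
    {
        long id = (long)arg, chunk = N / NTHREADS;
        for (long i = id * chunk; i < (id + 1) * chunk; i++)
            a[i] *= 2.0;                  /* many instructions between barriers */
        pthread_barrier_wait(&bar);       /* single synchronization at the end  */
        return NULL;
    }

    /* Fine grain: the same work, but with a barrier after every element. */
    void *fine(void *arg)
    {
        long id = (long)arg;
        for (long i = id; i < N; i += NTHREADS) {
            a[i] *= 2.0;                  /* a few instructions...              */
            pthread_barrier_wait(&bar);   /* ...then a barrier: high overhead   */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        pthread_barrier_init(&bar, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, coarse, (void *)i); /* swap in `fine` to compare */
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        pthread_barrier_destroy(&bar);
        printf("a[0] = %f\n", a[0]);
        return 0;
    }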

  4. 2. Multiprocessors Main Structures • 2.3 Moderate Parallel x Massively Parallel
  • Moderate Parallel Processing: architecture using 10 to 100 processors.
  • Massively Parallel Processing: architecture using hundreds of processors.

  5. 2. Multiprocessors Main Structures • 2.4 SIMD x MIMD
  • SIMD: Single Instruction Multiple Data Processor.
  - A central controller broadcasts the same instruction to all processors; each processor then executes the instruction on its own data.
  - Individual processors can be masked off from the instruction by appropriately setting a mask register at each processor.
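The masked broadcast described above can be emulated in plain C. This is an illustrative sketch of the execution model, not real SIMD hardware or vendor intrinsics: one operation is broadcast over all lanes, and a per-lane mask bit decides which lanes actually apply it.

    /* Sketch of the SIMD model: one broadcast operation applied by every
       processing element to its own data, gated by a per-element mask bit.
       Plain C emulation; lane count and operation are illustrative. */
    #include <stdio.h>

    #define LANES 8

    int main(void)
    {
        int data[LANES] = {1, 2, 3, 4, 5, 6, 7, 8};
        int mask[LANES] = {1, 1, 0, 1, 0, 1, 1, 0};  /* 0 = lane masked off */

        /* The "central controller" broadcasts a single operation (add 10);
           each lane executes it only if its mask register is set. */
        for (int lane = 0; lane < LANES; lane++)
            if (mask[lane])
                data[lane] += 10;

        for (int lane = 0; lane < LANES; lane++)
            printf("%d ", data[lane]);               /* masked lanes are unchanged */
        printf("\n");
        return 0;
    }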

  6. 2. Multiprocessors Main Structures • 2.4 SIMD x MIMD
  • MIMD: Multiple Instruction Multiple Data Processor.
  - Processors execute different instructions on different data.
  - Shared-memory MIMD multiprocessors are programmed assuming that all the necessary variables are shared by all processors, and that they may read or write any shared variable.
  - Distributed-memory MIMD multiprocessors are programmed assuming that each processor can read or write only its own local memory. Any synchronization has to be done using explicit message passing between processors.
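For the distributed-memory MIMD case, explicit message passing is commonly written with a library such as MPI. The sketch below assumes an MPI installation (compile with mpicc, run with mpirun -np 2); the transferred value and the ranks used are illustrative.

    /* Sketch of the distributed-memory MIMD model: each process owns its
       local data and coordinates only through explicit messages. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                                   /* exists only in rank 0's local memory */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);        /* synchronization happens via the message */
        }

        MPI_Finalize();
        return 0;
    }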

  7. 2. Multiprocessors Main Structures • 2.5 Topology of Interconnect
  Typical interconnection networks include:
  • BUS
  • CROSSBAR
  • MULTISTAGE
  • MESHES
  • TREES
  • HYPERCUBES
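As one concrete example from this list, a d-dimensional hypercube connects node n to every node whose binary label differs from n in exactly one bit. The short sketch below (illustrative only, not from the slides) enumerates each node's neighbors by flipping one bit at a time.

    /* Hypercube neighbors: in a d-dimensional hypercube, node n is linked to
       every node n XOR (1 << k) for k = 0 .. d-1. Illustrative sketch. */
    #include <stdio.h>

    static void print_hypercube_neighbors(unsigned node, unsigned dims)
    {
        printf("node %u:", node);
        for (unsigned k = 0; k < dims; k++)
            printf(" %u", node ^ (1u << k));   /* flip bit k to get a neighbor */
        printf("\n");
    }

    int main(void)
    {
        unsigned dims = 3;                     /* 2^3 = 8-node hypercube */
        for (unsigned node = 0; node < (1u << dims); node++)
            print_hypercube_neighbors(node, dims);
        return 0;
    }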
