Parallel computer architectures



Parallel Computer Architectures

Chapter 8



Parallel Computer Architectures

(a) On-chip parallelism. (b) A coprocessor. (c) A multiprocessor.

(d) A multicomputer. (e) A grid.



Parallelism

  • Introduced at various levels

  • Within CPU chip (multiple instructions per cycle)

    • Instruction level: VLIW (Very Long Instruction Word)

    • Superscalar

    • On Chip Multithreading

    • Single chip multiprocessors

  • Extra CPU boards (coprocessors)

  • Multiprocessor/Multicomputer

  • Computer grids

  • Tightly Coupled – computationally intimate

  • Loosely Coupled – computationally remote



Instruction-Level Parallelism

(a) A CPU pipeline. (b) A sequence of VLIW instructions.

(c) An instruction stream with bundles marked.



The TriMedia VLIW CPU (1)

A typical TriMedia instruction, showing five possible operations.



The TriMedia VLIW CPU (2)

The TM3260 functional units, their quantity, latency,

and which instruction slots they can use.



The TriMedia VLIW CPU (3)

The major groups of TriMedia custom operations.



The TriMedia VLIW CPU (4)

(a) An array of 8-bit elements. (b) The transposed array.

(c) The original array fetched into four registers.

(d) The transposed array in four registers.



Multithreading

  • Fine-grained multithreading

    • Run multiple threads, issuing one instruction from each per cycle

    • Never stalls if there are enough active threads

    • Requires hardware to track which instruction belongs to which thread

  • Coarse-grained multithreading

    • Run a thread until it stalls, then switch (one cycle wasted per switch)

  • Simultaneous multithreading

    • Coarse-grained with no wasted cycles

  • Hyperthreading

    • A 5% increase in chip area gives roughly a 25% performance gain

    • Resource sharing

      • Partitioned vs. full resource sharing

      • Threshold sharing
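The scheduling difference between the first two styles can be sketched in a few lines. This is a toy model, not hardware: the instruction names and the "S" stall marker are illustrative, and each list stands in for one thread's instruction stream feeding a single issue slot.

```python
# Toy sketch of fine- vs coarse-grained multithreading on one issue slot.
def fine_grained(threads):
    """Round-robin: issue one instruction from each thread per cycle."""
    streams = [list(t) for t in threads]
    schedule, i = [], 0
    while any(streams):
        s = streams[i % len(streams)]
        if s:
            schedule.append(s.pop(0))
        i += 1
    return schedule

def coarse_grained(threads, stall="S"):
    """Run one thread until it hits a stall marker, then switch."""
    streams = [list(t) for t in threads]
    schedule, i = [], 0
    while any(streams):
        s = streams[i % len(streams)]
        while s and s[0] != stall:
            schedule.append(s.pop(0))
        if s:
            s.pop(0)  # the stall itself costs the one switch cycle
        i += 1
    return schedule
```

Fine-grained interleaves `["A1", "B1", "A2", "B2", ...]`, while coarse-grained drains one thread until its stall before switching, matching the two timing diagrams on the next slides.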



On-Chip Multithreading (1)

(a) – (c) Three threads. The empty boxes indicate that the thread

has stalled waiting for memory. (d) Fine-grained multithreading.

(e) Coarse-grained multithreading.



On-Chip Multithreading (2)

Multithreading with a dual-issue superscalar CPU.

(a) Fine-grained multithreading.

(b) Coarse-grained multithreading.

(c) Simultaneous multithreading.



Hyperthreading on the Pentium 4

Sharing between two threads (white and gray).

Resource sharing between threads in the

Pentium 4 NetBurst microarchitecture.



Single-Chip Multiprocessor

  • Two areas of interest: servers and consumer electronics

  • Homogeneous chips

    • Two pipelines, one CPU

    • Two CPUs (same design)

  • Heterogeneous chips

    • CPUs for DVD players or cell phones

    • More in software => slower but cheaper

    • Many different cores (essentially libraries)


Sample Chip

  • Cores on a chip for DVD player:

    • Control

    • MPEG video

    • Audio decoder

    • Video encoder

    • Disk controller

    • Cache

  • Cores require an interconnect, e.g.:

    • IBM CoreConnect

    • AMBA (Advanced Microcontroller Bus Architecture)

    • VCI (Virtual Component Interconnect)

    • OCP-IP (Open Core Protocol)



Homogeneous Multiprocessors on a Chip

Single-chip multiprocessors.

(a) A dual-pipeline chip. (b) A chip with two cores.



Heterogeneous Multiprocessors on a Chip (1)

The logical structure of a simple DVD player contains a heterogeneous

multiprocessor containing multiple cores for different functions.



Heterogeneous Multiprocessors on a Chip (2)

An example of the IBM CoreConnect architecture.



Coprocessors

  • Come in a variety of sizes

    • Separate cabinets for mainframes

    • Separate boards

    • Separate chips

  • Primary purpose: to offload work from and assist the main processor

  • Different types

    • I/O

    • DMA

    • Floating point

    • Network

    • Graphics

    • Encryption



Introduction to Networking (1)

How users are connected to servers on the Internet.



Networks

  • LAN – local area network

  • WAN – Wide area network

  • Packet – chunk of data on a network, 64–1500 bytes

  • Store-and-forward packet switching – what a router does

  • Internet – series of WANs linked by routers

  • ISP – Internet service provider

  • Firewall – specialized computer that filters traffic

  • Protocols – set of formats, exchange sequences, and rules

  • HTTP – HyperText Transfer Protocol

  • TCP – Transmission Control Protocol

  • IP – Internet protocol



Networks

  • CRC – Cyclic Redundancy Check

  • TCP Header – information about data for TCP level

  • IP header – routing information: source, destination, hop count

  • Ethernet header – next-hop address, CRC

  • ASIC – Application Specific Integrated Circuit

  • FPGA – Field-Programmable Gate Array

  • Network processor – programmable device that handles incoming and outgoing packets at wire speed

  • PPE – Protocol/Programmable/Packet Processing Engine
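The CRC mentioned above lets a receiver detect corrupted frames. A sketch using Python's built-in CRC-32 (real Ethernet also computes a CRC-32, over the whole frame; the framing here is simplified):

```python
import zlib

# Sketch of CRC-based error detection: append a 4-byte CRC-32 to the
# data, then verify it on receipt.
def with_crc(data: bytes) -> bytes:
    return data + zlib.crc32(data).to_bytes(4, "big")

def crc_ok(frame: bytes) -> bool:
    data, crc = frame[:-4], frame[-4:]
    return zlib.crc32(data).to_bytes(4, "big") == crc

frame = with_crc(b"hello, world")
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip one bit
```

A single flipped bit changes the recomputed CRC, so `crc_ok` rejects the corrupted frame.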



Introduction to Networking (2)

A packet as it appears on the Ethernet.



Introduction to Network Processors

A typical network processor board and chip.



Packet Processing

  • Checksum verification

  • Field Extraction

  • Packet Classification

  • Path Selection

  • Destination network determination

  • Route Lookup

  • Fragmentation and reassembly

  • Computation (compression/ encryption)

  • Header Management

  • Queue management

  • Checksum generation

  • Accounting

  • Statistics gathering
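The first and last items on this list are two sides of the same computation. As a concrete example (one of several checksums a network processor handles), the Internet ones'-complement checksum of RFC 1071 can be sketched as:

```python
# Sketch of checksum generation/verification using the Internet
# (RFC 1071 ones'-complement) checksum over 16-bit words.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:          # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

packet = b"\x45\x00\x00\x1c\x00\x01"   # illustrative header bytes
cksum = internet_checksum(packet)
```

Verification uses the classic property: recomputing the checksum over the data with the checksum appended yields 0.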



Improving Performance

  • Performance is the name of the game.

  • How to measure it:

    • Packets per second

    • Bytes per second

  • Ways to speed up:

    • Performance is not linear with clock speed

    • Introduce more PPEs

    • Specialized processors

    • More internal buses

    • Widen existing buses

    • Replace SDRAM with SRAM



The Nexperia Media Processor

The Nexperia heterogeneous multiprocessor on a chip.



Multiprocessors

(a) A multiprocessor with 16 CPUs sharing a common memory.

(b) An image partitioned into 16 sections, each being analyzed by a different CPU.
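The figure's partitioning strategy — give each of the 16 CPUs one contiguous band of the image — is easy to sketch. The image size and CPU count below are illustrative:

```python
# Sketch of splitting an image's rows across CPUs: each CPU analyzes
# one contiguous horizontal band, with any remainder rows spread
# across the first few CPUs.
def partition_rows(n_rows: int, n_cpus: int):
    base, extra = divmod(n_rows, n_cpus)
    bands, start = [], 0
    for i in range(n_cpus):
        size = base + (1 if i < extra else 0)
        bands.append((start, start + size))   # half-open row range
        start += size
    return bands

bands = partition_rows(1024, 16)   # 16 bands of 64 rows each
```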



Shared-Memory Multiprocessors

  • Multiprocessor – has shared memory

  • SMP (Symmetric Multiprocessor) – every CPU can access any I/O device

  • Multicomputer (distributed memory system) – each computer has its own memory

  • Multiprocessor – has one address space

  • Multicomputer – has one address space per computer

  • Multicomputers pass messages to communicate

  • Ease of programming vs. ease of construction

  • DSM – Distributed Shared Memory: page-fault-based shared memory for multicomputers
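The programming-model contrast in this list can be shown with threads, which share an address space the way CPUs in a multiprocessor do, versus an explicit queue standing in for message passing (a toy illustration, not a real multicomputer):

```python
import threading
import queue

# Two communication styles side by side.
shared = {"x": 0}            # "shared memory": both threads see this dict
mailbox = queue.Queue()      # "message passing": explicit send/receive

def worker():
    shared["x"] = 42         # multiprocessor style: write shared memory
    mailbox.put("done: 42")  # multicomputer style: send a message

t = threading.Thread(target=worker)
t.start()
t.join()                     # wait so the result is visible deterministically
message = mailbox.get()
```

The shared-memory write needs no explicit transfer but (on real hardware) raises all the coherence and consistency questions of the following slides; the message makes the communication visible and explicit.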



Multicomputers (1)

(a) A multicomputer with 16 CPUs, each with its own private memory.

(b) The bit-map image of Fig. 8-17 split up among the 16 memories.



Multicomputers (2)

Various layers where shared memory can be implemented. (a) The

hardware. (b) The operating system. (c) The language runtime system.



Taxonomy of Parallel Computers (1)

Flynn’s taxonomy of parallel computers.



Taxonomy of Parallel Computers (2)

A taxonomy of parallel computers.


MIMD Categories

  • UMA – Uniform Memory Access

  • NUMA – NonUniform Memory Access

  • COMA – Cache-Only Memory Access

  • Multicomputers are NORMA (NO Remote Memory Access)

    • MPP – Massively Parallel Processor

Tanenbaum, Structured Computer Organization, Fifth Edition, (c) 2006 Pearson Education, Inc. All rights reserved. 0-13-148521-0



Consistency Models

  • A contract for how hardware and software will work with memory

  • Strict consistency – any read of location X returns the most recent value written to location X

  • Sequential consistency – all CPUs see all memory operations in some single sequential order (not necessarily the true time order)

  • Processor consistency –

    • Writes by any one CPU are seen by all CPUs in the order they were issued

    • For every memory word, all CPUs see all writes to it in the same order

  • Weak consistency – no guarantees unless synchronization is used

  • Release consistency – writes must complete before the critical section can be reentered
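Sequential consistency can be made concrete by enumerating exactly which schedules it permits: every total order of the memory operations that preserves each CPU's own program order. A brute-force sketch (operation labels are illustrative; this assumes labels are unique across threads):

```python
from itertools import permutations

# All total orders of the threads' operations that preserve each
# thread's program order -- the schedules sequential consistency allows.
def legal_interleavings(threads):
    events = [e for t in threads for e in t]
    legal = set()
    for p in permutations(events):
        if all(tuple(e for e in p if e in t) == t for t in threads):
            legal.add(p)
    return legal

# Two CPUs, two operations each.
orders = legal_interleavings([("W1", "W2"), ("R1", "R2")])
```

With two operations per CPU there are 4! = 24 total orders but only C(4, 2) = 6 legal ones; an order like `("W2", "W1", ...)` is excluded because it reverses CPU 0's program order. This is the space of interleavings the next slide's figure samples from.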




Sequential Consistency

(a) Two CPUs writing and two CPUs reading a common memory

word. (b) - (d) Three possible ways the two writes and four

reads might be interleaved in time.



Weak Consistency

Weakly consistent memory uses synchronization operations to

divide time into sequential epochs.



UMA Symmetric Multiprocessor Architectures

Three bus-based multiprocessors. (a) Without caching. (b) With

caching. (c) With caching and private memories.


Cache as Cache Can

  • Cache coherence protocol – keeps a modifiable copy of a block in at most one cache (e.g., write through)

  • Snooping cache – monitors the bus for accesses to memory it has cached

  • Choose between an update strategy and an invalidate strategy

  • MESI protocol – named after its four states:

    • Invalid

    • Shared

    • Exclusive

    • Modified
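The MESI state machine for a single cache line can be sketched as a transition table. This is a simplified model as seen from one cache (the event names are mine, and write-backs and bus signalling are omitted), not the full protocol of the later figure:

```python
# Minimal sketch of per-line MESI state transitions for one cache.
MESI = {
    ("I", "read_miss_shared"): "S",   # another cache holds the line
    ("I", "read_miss_alone"): "E",    # no other cache holds it
    ("I", "write"): "M",
    ("S", "write"): "M",              # must invalidate other copies first
    ("E", "write"): "M",              # silent upgrade, no bus traffic
    ("E", "remote_read"): "S",
    ("M", "remote_read"): "S",        # supply the data, then share it
    ("S", "remote_write"): "I",
    ("E", "remote_write"): "I",
    ("M", "remote_write"): "I",
}

def next_state(state: str, event: str) -> str:
    """Apply one event; events not in the table leave the state unchanged."""
    return MESI.get((state, event), state)
```

For example, a write to an Exclusive line upgrades it to Modified with no bus traffic, while a remote write invalidates the local copy — the invalidate strategy from the bullet above.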




Snooping Caches

The write-through cache coherence protocol.

The empty boxes indicate that no action is taken.



The MESI Cache Coherence Protocol

The MESI cache coherence protocol.



UMA Multiprocessors Using Crossbar Switches

(a) An 8 × 8 crossbar switch.

(b) An open crosspoint.

(c) A closed crosspoint.



UMA Multiprocessors Using Multistage Switching Networks (1)

(a) A 2 × 2 switch.

(b) A message format.



UMA Multiprocessors Using Multistage Switching Networks (2)

An omega switching network.
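An omega network is self-routing: at each stage, the 2×2 switch examines one bit of the destination address, most significant bit first, and sends the message out the upper output for a 0 bit or the lower output for a 1. A sketch of that decision sequence:

```python
# Self-routing in an omega network: stage k of log2(n) stages uses
# bit k (MSB first) of the destination address to set the 2x2 switch.
def omega_route(dst: int, stages: int):
    bits = format(dst, f"0{stages}b")   # destination as a bit string
    return ["upper" if b == "0" else "lower" for b in bits]

# Routing to CPU 5 (binary 101) through a 3-stage (8-CPU) network.
path = omega_route(0b101, 3)
```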



NUMA Multiprocessors

A NUMA machine based on two levels of buses. The Cm* was

the first multiprocessor to use this design.



Cache Coherent NUMA Multiprocessors

(a) A 256-node directory-based multiprocessor. (b) Division of a 32-bit

memory address into fields. (c) The directory at node 36.
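The address split in part (b) of the figure can be sketched as bit-field extraction. The field widths below — 8 bits of node, 18 bits of block, 6 bits of byte offset (64-byte blocks) — are my reading of the 256-node example and should be checked against the figure:

```python
# Sketch of splitting a 32-bit address for a 256-node directory-based
# machine: | node (8) | block (18) | offset (6) |  (widths assumed).
def split_address(addr: int):
    node = (addr >> 24) & 0xFF       # which node's directory to ask
    block = (addr >> 6) & 0x3FFFF    # cache block within that node
    offset = addr & 0x3F             # byte within the 64-byte block
    return node, block, offset
```

So a reference whose top byte is 36 is looked up in the directory at node 36, as in part (c).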



The Sun Fire E25K NUMA Multiprocessor (1)

The Sun Microsystems E25K multiprocessor.



The Sun Fire E25K NUMA Multiprocessor (2)

The Sun Fire E25K uses a four-level interconnect. Dashed lines

are address paths. Solid lines are data paths.



Message-Passing Multicomputers

A generic multicomputer.



Topology

Various topologies. The heavy dots represent switches. The CPUs

and memories are not shown. (a) A star. (b) A complete interconnect.

(c) A tree. (d) A ring. (e) A grid. (f) A double torus.

(g) A cube. (h) A 4D hypercube.
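The hypercube in part (h) has a particularly clean addressing scheme: in a d-dimensional hypercube, two nodes are connected exactly when their binary labels differ in one bit, so a node's neighbors are found by flipping each bit in turn:

```python
# Hypercube addressing: neighbors of a node differ in exactly one bit
# of the d-bit node label.
def hypercube_neighbors(node: int, dim: int):
    return [node ^ (1 << i) for i in range(dim)]

# In the figure's 4D hypercube, node 0000 links to 0001, 0010, 0100, 1000.
neighbors = hypercube_neighbors(0b0000, 4)
```

Each node therefore has d links, and any two of the 2^d nodes are at most d hops apart.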



BlueGene (1)

The BlueGene/L custom processor chip.



BlueGene (2)

The BlueGene/L. (a) Chip. (b) Card. (c) Board.

(d) Cabinet. (e) System.



Red Storm (1)

Packaging of the Red Storm components.



Red Storm (2)

The Red Storm system as viewed from above.



A Comparison of BlueGene/L and Red Storm

A comparison of

BlueGene/L and Red Storm.



Google (1)

Processing of a Google query.



Google (2)

A typical Google

cluster.



Scheduling

Scheduling a cluster. (a) FIFO. (b) Without head-of-line blocking.

(c) Tiling. The shaded areas indicate idle CPUs.
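The difference between (a) and (b) can be sketched with jobs that each request some number of CPUs from a fixed pool (the job sizes below are illustrative, not from the figure):

```python
# Sketch of two cluster-scheduling policies for jobs that each need
# some number of CPUs out of a free pool.
def fifo(jobs, cpus_free):
    """Strict FIFO: stop at the first job that does not fit."""
    started = []
    for need in jobs:
        if need > cpus_free:
            break                    # head-of-line blocking
        started.append(need)
        cpus_free -= need
    return started

def no_holb(jobs, cpus_free):
    """Skip jobs that do not fit; start any later job that does."""
    started = []
    for need in jobs:
        if need <= cpus_free:
            started.append(need)
            cpus_free -= need
    return started
```

With jobs needing [8, 4, 2] CPUs and only 6 free, FIFO starts nothing because the 8-CPU job blocks the head of the queue, while the second policy starts the 4- and 2-CPU jobs and leaves no CPU idle.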



Distributed Shared Memory (1)

A virtual address space consisting of 16 pages

spread over four nodes of a multicomputer.

(a) The initial situation. ….



Distributed Shared Memory (2)

A virtual address space consisting of 16 pages

spread over four nodes of a multicomputer. …

(b) After CPU 0 references page 10. …



Distributed Shared Memory (3)

A virtual address space consisting of 16 pages

spread over four nodes of a multicomputer. …

(c) After CPU 1 references page 10, here assumed to be a read-only page.



Linda

Three Linda tuples.
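Linda's programming model is a shared tuple space with three basic operations: `out` inserts a tuple, `rd` reads a matching tuple, and `in` reads and removes one, where a template may use wildcards. A toy in-process sketch (Linda's `in` is renamed `in_` because `in` is a Python keyword; the blocking behavior of the real operations is omitted):

```python
# Toy Linda-style tuple space; None acts as a wildcard in templates.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, t):
        """Insert a tuple into the space."""
        self._tuples.append(t)

    def _matches(self, template, t):
        return len(template) == len(t) and all(
            want is None or want == got
            for want, got in zip(template, t))

    def rd(self, template):
        """Return a matching tuple without removing it (None if absent)."""
        for t in self._tuples:
            if self._matches(template, t):
                return t
        return None

    def in_(self, template):
        """Return a matching tuple and remove it (None if absent)."""
        t = self.rd(template)
        if t is not None:
            self._tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("abc", 2, 5))   # one of the book's example tuples
```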



Orca

A simplified Orca stack object, with internal data and two operations.



Software Metrics (1)

Real programs achieve less than the perfect speedup

indicated by the dotted line.



Software Metrics (2)

(a) A program has a sequential part and a parallelizable part.

(b) Effect of running part of the program in parallel.
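The limit this figure illustrates is Amdahl's law: if a fraction f of the program is inherently sequential, then n CPUs give a speedup of 1 / (f + (1 − f)/n), which approaches 1/f no matter how many CPUs are added.

```python
# Amdahl's law: f = inherently sequential fraction, n = number of CPUs.
def amdahl_speedup(f: float, n: int) -> float:
    return 1.0 / (f + (1.0 - f) / n)
```

Even a program that is only 10% sequential tops out near 10x speedup: with 10 CPUs it reaches about 5.26x, far below the "perfect" dotted line of the previous figure.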



Achieving High Performance

(a) A 4-CPU bus-based system. (b) A 16-CPU bus-based system.

(c) A 4-CPU grid-based system. (d) A 16-CPU grid-based system.



Grid Computing

The grid layers.

