
Presentation Transcript


  1. www-inst.eecs.berkeley.edu/~cs152/ CS 152 Computer Architecture and Engineering Lecture 19 -- Dynamic Scheduling II 2014-4-3 John Lazzaro (not a prof - “John” is always OK) TA: Eric Love

  2. Case studies of dynamic execution DEC Alpha 21264: High performance from a relatively simple implementation of a modern instruction set. Short Break Simultaneous Multi-threading: Adapting multi-threading to dynamic scheduling. IBM Power: Evolving dynamic designs over many generations.

  3. DEC Alpha 21164: 4-issue in-order design. 21264: 4-issue out-of-order design. 21264 was 50% to 200% faster in real-world applications.

  4. 500 MHz 0.5µ parts for the in-order 21164 and out-of-order 21264. Similarly-sized on-chip caches (116K vs 128K). The 21264 has a 1.7x advantage on integer code and a 2.7x advantage on floating-point code. The in-order 21164 has a larger off-chip cache. The 21264 has 55% more transistors than the 21164, and its die is 44% larger. The 21264 consumes 46% more power than the 21164.

  5. The Real Difference: Speculation If the ability to recover from mis-speculation is built into an implementation ... it offers the option to add speculative features to all parts of the design.

  6. 21264 die photo, with labels: separate OoO control blocks for integer and floating point (RISC decode happens in the OoO blocks), two integer pipes and an FP pipe, fetch and predict, I-cache and data cache. Unlabeled areas are devoted to memory system control.

  7. 21264 pipeline diagram. The Slot stage absorbs the delay of the long path shown on the last slide. The Rename and Issue stages are the primary locations of dynamic scheduling logic. Load/store disambiguation support resides in the Memory stage.

  8. Fetch stage close-up: speculative. Each cache line stores a prediction of the next line and the cache way to be fetched next. If the predictions are correct, the fetcher maintains the required 4 instructions/cycle pace.
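
A minimal Python sketch of the line-and-way prediction idea; the class and field names are invented for illustration, not the 21264's actual tables. Each fetched I-cache line carries a guess of the next line index and way, so the next access can start immediately.

# Illustrative model of next-line/way prediction stored with each I-cache line.
class ICacheLine:
    def __init__(self, instructions, next_line_pred, next_way_pred):
        self.instructions = instructions      # the 4-instruction fetch block
        self.next_line_pred = next_line_pred  # predicted index of the next line
        self.next_way_pred = next_way_pred    # predicted way of the next line

def fetch_stream(icache, line_idx, way, n_cycles):
    """Chase the embedded predictions, one 4-instruction block per cycle."""
    fetched = []
    for _ in range(n_cycles):
        line = icache[line_idx][way]
        fetched.extend(line.instructions)
        # Speculate: follow the prediction stored in the line itself.
        line_idx, way = line.next_line_pred, line.next_way_pred
    return fetched

If a prediction turns out wrong, the speculatively fetched instructions are squashed and the stored prediction is retrained, which is exactly the kind of recovery slide 5 says the design must already support.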

  9. Rename stage close-up: Input: 4 instructions specifying architected registers. The stage (1) allocates new physical registers for destinations, (2) looks up physical register numbers for sources, and (3) handles rename dependences among the 4 issuing instructions, all in one clock cycle! Output: 12 physical register numbers: 1 destination and 2 sources for each of the 4 instructions to be issued. Rename state is time-stamped for mis-speculation recovery.

  10. Recall: malloc() -- free() in hardware. The record-keeping shown in this diagram occurs in the rename stage.
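
The malloc()/free() analogy can be made concrete with a small Python sketch; the class and the sizes below (32 architected, 80 physical registers) are illustrative assumptions, not the 21264's exact bookkeeping. A destination register is "malloc"ed from a free list at rename, and the previous mapping of that architected register is "free"d when the new write retires.

from collections import deque

# Illustrative rename-stage record keeping: map table + free list.
class RenameStage:
    def __init__(self, n_arch=32, n_phys=80):
        self.map_table = {r: r for r in range(n_arch)}   # architected -> physical
        self.free_list = deque(range(n_arch, n_phys))    # physical regs not in use
        self.retire_log = deque()                        # (arch dest, previous physical)

    def rename_group(self, instrs):
        """Rename up to 4 instructions 'per cycle'.  Processing them in
        order handles dependences inside the group: a later instruction
        sees an earlier instruction's new destination mapping."""
        renamed = []
        for dest, src1, src2 in instrs:                  # architected reg numbers
            p_src1 = self.map_table[src1]
            p_src2 = self.map_table[src2]
            p_dest = self.free_list.popleft()            # malloc()
            self.retire_log.append((dest, self.map_table[dest]))
            self.map_table[dest] = p_dest
            renamed.append((p_dest, p_src1, p_src2))     # 3 physical regs per instr
        return renamed

    def retire_oldest(self):
        """Once an instruction retires, the *previous* mapping of its
        destination can never be needed again, so it is recycled."""
        _, old_phys = self.retire_log.popleft()
        self.free_list.append(old_phys)                  # free()

On a mis-speculation, the hardware instead restores a saved (time-stamped) copy of the map and returns the newly allocated registers to the free list.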

  11. Issue stage close-up: Input: 4 just-issued instructions, renamed to use physical registers. A scoreboard tracks writes to physical registers. (1) Newly issued instructions are placed in the top of the queue. (2) Instructions check the scoreboard: are their 2 sources ready? (3) An arbiter selects the 4 oldest “ready” instructions. (4) An update step removes these 4 from the queue. Output: the 4 oldest instructions whose 2 source registers are ready for use.
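
A companion Python sketch of the issue discipline described above; the scoreboard here is just a ready bit per physical register, and the names are invented. The arbiter walks the queue oldest-first and pulls out up to 4 instructions whose sources are both ready.

# Illustrative issue queue with a ready-bit scoreboard over physical registers.
class IssueQueue:
    def __init__(self, n_phys=80):
        self.ready = [True] * n_phys          # has this physical register been written?
        self.queue = []                       # oldest first: (dest, src1, src2)

    def insert(self, renamed_instrs):
        for dest, src1, src2 in renamed_instrs:
            self.ready[dest] = False          # result doesn't exist yet
            self.queue.append((dest, src1, src2))

    def select(self, width=4):
        """Arbiter: pick up to `width` of the oldest instructions whose
        two source registers are both marked ready, and dequeue them."""
        picked, still_waiting = [], []
        for instr in self.queue:
            dest, src1, src2 = instr
            if len(picked) < width and self.ready[src1] and self.ready[src2]:
                picked.append(instr)
            else:
                still_waiting.append(instr)
        self.queue = still_waiting
        return picked

    def writeback(self, dest):
        self.ready[dest] = True               # wake up instructions waiting on dest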

  12. Execution close-up: (1) Two copies of the register file, to reduce port pressure. (2) Forwarding buses are low-latency paths through the CPU, and their use relies on speculation.

  13. Latencies, from issue to retirement: short latencies keep buffers to a reasonable size. 8 retirements per cycle can be sustained over short time periods; the peak rate is 11 retirements in a single cycle.

  14. Execution unit close-up: (1) Two arbiters: one for the top pipes, one for the bottom pipes. (2) Instructions are statically assigned to top or bottom. (3) The arbiter dynamically selects left or right. Thus, 2 dual-issue dynamic machines, not a 4-issue machine. Why? It simplifies the arbiter. Performance penalty? A few %.
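
A tiny Python sketch of the split just described, with invented names: the pipe pair (top or bottom) is a static property of the instruction, while the choice of left or right cluster is made dynamically by that pair's arbiter.

def issue_to_pipes(instrs, busy):
    """instrs: list of (name, pair) where pair is 'top' or 'bottom'.
    busy: set of (pair, side) slots already claimed this cycle, out of
    {('top','L'), ('top','R'), ('bottom','L'), ('bottom','R')}."""
    issued = []
    for name, pair in instrs:
        for side in ('L', 'R'):               # dynamic left/right choice
            if (pair, side) not in busy:
                busy.add((pair, side))
                issued.append((name, pair, side))
                break                         # no free slot: wait a cycle
    return issued

# Two independent dual-issue arbiters: at most 2 'top' and 2 'bottom'
# instructions issue per cycle, never 4 of one kind.
print(issue_to_pipes([('add', 'top'), ('mul', 'top'), ('shl', 'top')], set()))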

  15. Memory stages close-up: loads and stores from the execution unit appear as the “Cluster 0/1 memory unit” in the diagram. 1st stop: the TLB, to convert virtual memory addresses. 2nd stop: the Load Queue (LDQ) and Store Queue (STQ), which each hold 32 instructions until retirement, so mis-speculated memory operations can be rolled back. 3rd stop: the STQ is flushed to the “double pumped” (effectively 1 GHz) data cache; on a miss, the request is placed in the Miss Address File (MAF == MSHR).

  16. LDQ/STQ close-up: hazards we are trying to prevent. To do so, the LDQ and STQ hold lists of up to 32 loads and stores, in issued order. When a new load or store arrives, addresses are compared to detect and fix hazards.
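
A minimal Python sketch of the two comparisons, under illustrative assumptions about what the queues store (sequence number, address, and a done flag); it is not the 21264's exact CAM logic. A load checks older stores for forwarding; a store checks younger, already-executed loads to the same address, which indicates a mis-speculation.

def load_checks_stq(load, stq):
    """load: (seq, addr).  stq entries: (seq, addr, data, addr_known).
    Return data forwarded from the youngest older matching store, or
    None to read the data cache instead."""
    seq, addr = load
    for s_seq, s_addr, s_data, s_known in sorted(stq, reverse=True):
        if s_seq < seq and s_known and s_addr == addr:
            return s_data                    # store-to-load forwarding
    return None

def store_checks_ldq(store, ldq):
    """store: (seq, addr).  ldq entries: (seq, addr, executed).
    Any younger load to the same address that already executed read
    stale data: return the oldest such load so the pipeline can squash
    and replay from it."""
    seq, addr = store
    bad = [l_seq for l_seq, l_addr, done in ldq
           if l_seq > seq and done and l_addr == addr]
    return min(bad) if bad else None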

  17. LDQ/STQ speculation (first execution vs. subsequent executions): when the hardware detects that a load was mis-speculated past a conflicting store, it recovers, and it also marks the load instruction in a predictor, so that future invocations are not speculatively executed.
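
The predictor on this slide can be sketched in Python as a set of load PCs; the names are invented, and the real hardware keeps a small table of per-load "wait" bits rather than a software set.

store_wait_pcs = set()       # loads that have mis-speculated past a store before

def load_may_issue(pc, older_store_addrs_unknown):
    """First execution: issue speculatively ahead of unresolved stores.
    Subsequent executions of a marked load: wait until older store
    addresses are known."""
    return not (pc in store_wait_pcs and older_store_addrs_unknown)

def on_load_order_trap(pc):
    store_wait_pcs.add(pc)   # future invocations are not speculatively executed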

  18. Designing a microprocessor is a team sport. Below are the author and acknowledgement lists (architects, micro-architects, circuit designers) for the papers whose figures I use. There is no “i” in T-E-A-M ...

  19. Break

  20. Multi-Threading (Dynamic Scheduling)

  21. Power 4 (predates the Power 5 shown earlier): the single-threaded predecessor to the Power 5. 8 execution units in the out-of-order engine, each of which may issue an instruction each cycle.

  22. Observation: Most hardware in an out-of-order CPU concerns physical registers. Could several instruction threads share this hardware? For most apps, most execution units of an 8-way superscalar lie idle. From: Tullsen, Eggers, and Levy, “Simultaneous Multithreading: Maximizing On-chip Parallelism,” ISCA 1995.

  23. Simultaneous Multi-threading ... The figure compares issue-slot use per cycle across 8 units (M M FX FX FP FP BR CC) for one thread vs. two threads. M = Load/Store, FX = Fixed Point, FP = Floating Point, BR = Branch, CC = Condition Codes.
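
A small Python sketch of what the two diagrams depict; the slot list and the per-thread counts of ready instructions are illustrative. With one thread, slots its instruction mix cannot use stay empty; with SMT, a second thread's ready instructions can fill them in the same cycle.

SLOTS = ['M', 'M', 'FX', 'FX', 'FP', 'FP', 'BR', 'CC']

def fill_slots(threads):
    """threads: list of dicts mapping unit type -> ready instructions this
    cycle.  Returns, per slot, the thread id that used it (None = wasted)."""
    assignment = []
    for unit in SLOTS:
        winner = None
        for tid, ready in enumerate(threads):
            if ready.get(unit, 0) > 0:
                ready[unit] -= 1
                winner = tid
                break
        assignment.append(winner)
    return assignment

print(fill_slots([{'M': 2, 'FX': 2}]))                               # FP/BR/CC slots idle
print(fill_slots([{'M': 2, 'FX': 2}, {'FP': 2, 'BR': 1, 'CC': 1}]))  # second thread fills them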

  24. Power 4 vs. Power 5: the Power 5 adds 2 fetch (PC) paths and 2 initial decodes at the front end, and 2 commits (architected register sets) at the back end.

  25. Power 5 data flow ... Why only 2 threads? With 4, one of the shared resources (physical registers, cache, memory bandwidth) would likely become a bottleneck.

  26. Power 5 thread performance ... For balanced operation, both threads run slower than if they “owned” the machine. The relative priority of each thread is controllable in hardware.

  27. Multi-Core

  28. Recall: superscalar utilization by a single thread of an 8-way superscalar. Observation: in many cases, the on-chip cache and DRAM I/O bandwidth are also underutilized by one CPU. So, let 2 cores share them.

  29. Most of the Power 5 die is shared hardware. Shared components: L2 cache, L3 cache control, and DRAM controller; the rest is Core #1 and Core #2.

  30. Core-to-core interactions stay on chip: (1) Threads on two cores that use shared libraries conserve L2 memory. (2) Threads on two cores share memory via L2 cache operations. Much faster than 2 CPUs on 2 chips.

  31. Sun Niagara

  32. The case for Sun’s Niagara ... Observation: some apps struggle to reach a CPI of 1, even on an 8-way superscalar. For throughput on these apps, a large number of single-issue cores is better than a few superscalars.

  33. Niagara (original): 32 threads on one chip. 8 cores: single-issue, 1.2 GHz, 6-stage pipeline, 4-way multi-threaded, fast crypto support. Die size: 340 mm² in 90 nm. Power: 50-60 W. Shared resources: 3MB on-chip cache, 4 DDR2 interfaces (32G DRAM, 20 Gb/s), 1 shared FP unit, GB Ethernet ports. Sources: Hot Chips, via EE Times, Infoworld; J. Schwartz weblog (Sun COO).

  34. The board that booted Niagara first-silicon. Source: J. Schwartz weblog (then Sun COO, now CEO).

  35. Used in the Sun Fire T2000: “Coolthreads”. Web server benchmarks were used to position the T2000 in the market. Claim: the server uses 1/3 the power of competing servers.

  36. IBM RISC chips, since the Power 4 (2001), up through 2014 (timeline).

  37. Recap: Dynamic Scheduling. Three big ideas: register renaming, data-driven detection of RAW resolution, and a bus-based architecture. Dynamic scheduling has saved architectures that have a small number of registers: the IBM 360 floating-point ISA and the Intel x86 ISA. It is very complex, but it enables many things: out-of-order execution, multiple issue, loop unrolling, etc.

  38. On Tuesday: Epilogue ... Have a good weekend!
