Combining Memory and a Controller with Photonics through 3D-Stacking to Enable Scalable and Energy-Efficient Systems


Presentation Transcript


  1. Combining Memory and a Controller with Photonics through 3D-Stacking to Enable Scalable and Energy-Efficient Systems • Aniruddha N. Udipi, Naveen Muralimanohar*, Rajeev Balasubramonian, Al Davis, Norm Jouppi* • University of Utah and *HP Labs

  2. Memory Trends - I • Multi-socket, multi-core, multi-thread • High bandwidth requirement • 1 TB/s by 2017 • Edge-bandwidth bottleneck • Pin count, per-pin bandwidth • Signal integrity and off-chip power • Limited number of DIMMs • Without melting the system • Or setting up in the tundra! (Image sources: Tom's Hardware, ZDNet)

  3. Memory Trends - II • The job of the memory controller is hard • 18+ timing parameters for DRAM! • Maintenance operations • Refresh, scrub, power down, etc. • Several DIMM and controller variants • Hard to provide interoperability • Need processor-side support for new memory features • Now throw in heterogeneity • Memristors, PCM, STT-RAM, etc.
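
To make the "18+ timing parameters" point concrete, here is a minimal sketch (Python; the parameter names are standard JEDEC-style DRAM constraints, but the cycle counts are illustrative placeholders, not values from the talk) of the per-bank state a controller must track before it can legally issue a command.

```python
# Illustrative subset of the DRAM timing constraints a memory controller
# must honor. Cycle counts below are placeholder DDR3-class numbers.
DRAM_TIMING = {
    "tRCD": 10,    # ACTIVATE to READ/WRITE delay
    "tRP":  10,    # PRECHARGE to ACTIVATE delay
    "tCL":  10,    # READ to first data (CAS latency)
    "tRAS": 24,    # minimum row-open time (ACTIVATE to PRECHARGE)
    "tRC":  34,    # ACTIVATE to ACTIVATE, same bank
    "tRRD":  4,    # ACTIVATE to ACTIVATE, different banks
    "tFAW": 20,    # rolling window for four ACTIVATEs per rank
    "tWR":  10,    # write recovery before PRECHARGE
    "tWTR":  5,    # write-to-read turnaround
    "tRTP":  5,    # READ to PRECHARGE
    "tRFC": 74,    # refresh cycle time
    "tREFI": 5200, # average interval between refresh commands
}

def earliest_activate(bank, now):
    """Earliest cycle an ACTIVATE may issue to `bank`, a dict holding the
    cycle of the bank's last PRECHARGE and last ACTIVATE."""
    return max(now,
               bank["last_precharge"] + DRAM_TIMING["tRP"],
               bank["last_activate"]  + DRAM_TIMING["tRC"])

bank = {"last_precharge": 100, "last_activate": 80}
print(earliest_activate(bank, now=105))   # max(105, 110, 114) = 114
```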

  4. Improving the interface • The memory interface is under severe pressure • Part 1 – Memory Interconnect: efficient application of silicon photonics, without modifying DRAM dies • Part 2 – Communication Protocol: a streamlined slot-based interface [Diagram: CPU with memory controller (MC) connected to DIMMs]

  5. PART 1 – Memory Interconnect

  6. Silicon Photonic Interconnects • We need something that can break the edge-bandwidth bottleneck • Ring-modulator-based photonics • Off-chip light source • Indirect modulation using resonant rings • Relatively cheap coupling on- and off-chip • DWDM for high bandwidth density • As many as 67 wavelengths possible • Limited by free spectral range, and coupling losses between rings • DWDM: 64 λ × 10 Gbps/λ = 80 GB/s per waveguide (Source: Xu et al., Optics Express 16(6), 2008)
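
As a quick sanity check of the bandwidth figure above (using nothing beyond the slide's own numbers), the DWDM arithmetic works out as follows:

```python
# DWDM bandwidth per waveguide: 64 wavelengths at 10 Gb/s each.
wavelengths = 64
gbps_per_wavelength = 10

total_gbps = wavelengths * gbps_per_wavelength  # 640 Gb/s
total_gBps = total_gbps / 8                     # 80 GB/s per waveguide
print(f"{total_gbps} Gb/s per waveguide = {total_gBps:.0f} GB/s")
```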

  7. Static Photonic Energy • Photonic interconnects • Large static power dissipation: ring tuning • Much lower dynamic energy consumption – relatively independent of distance • Electrical interconnects • Relatively small static power dissipation • Large dynamic energy consumption • Should not over-provision photonic bandwidth, use only where necessary
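
A back-of-the-envelope model of the trade-off this slide describes may help: photonics pays a roughly constant ring-tuning power whether or not bits flow, while electrical signaling pays mostly per-bit dynamic energy. The sketch below (Python) uses purely illustrative constants, not numbers from the paper, just to show why photonic bandwidth should be reserved for links that stay busy.

```python
# Illustrative comparison of photonic vs. electrical link power.
# All constants are placeholders for the shape of the trade-off,
# not measured values from the paper.

def photonic_power(bits_per_sec,
                   tuning_power_w=0.5,      # static ring-tuning power
                   dyn_j_per_bit=0.1e-12):  # low, ~distance-independent
    return tuning_power_w + bits_per_sec * dyn_j_per_bit

def electrical_power(bits_per_sec,
                     static_power_w=0.05,
                     dyn_j_per_bit=2e-12):  # dominated by dynamic energy
    return static_power_w + bits_per_sec * dyn_j_per_bit

for utilization in (0.01, 0.1, 0.5, 1.0):
    bps = utilization * 640e9   # fraction of an 80 GB/s channel
    p = photonic_power(bps)
    e = electrical_power(bps)
    print(f"util={utilization:4.0%}  photonic={p:6.2f} W  electrical={e:6.2f} W")

# At low utilization the fixed tuning power dominates and electrical wins;
# at high utilization the low per-bit photonic energy wins, hence
# "use photonics only where necessary" (the busy shared channel).
```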

  8. The Questions We’re Trying to Answer • What should the role of electrical signaling be? • How do we make photonics less invasive to memory die design? • Should we replace all interconnects with photonics? On-chip too? • What should the role of 3D be in an optically connected memory? • Should we be designing photonic DRAM dies? Stacks? Channels?

  9. Contributions Beyond Prior Work • Beamer et al. (ISCA 2010) • First paper on fully integrated optical memory • Studied electrical-optical balance point • Focus on losses, proposed photonic power guiding • We build upon this • Focus on tuning power constraints • Effect of low-swing wires • Effect of 3D and daisy-chaining

  10. Energy Balance Within a DRAM Chip [Figure: photonic energy vs. electrical energy within a DRAM chip]

  11. Single Die Design • 1 photonic DRAM die, comparing full-swing vs. low-swing on-chip wires • 46% energy reduction going from the best full-swing config (4 stops) to the best low-swing config (1 stop) • The full-swing design is similar to the state-of-the-art, based on prior work • Argues for a specially designed photonic DRAM • More efficient on-chip electrical communication provides the added benefit of allowing fewer photonic resources
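
For context on why low-swing on-chip wires help so much: first-order dynamic wire energy scales with Vdd × Vswing instead of Vdd². The sketch below (Python) uses assumed representative capacitance and voltages, not the paper's values.

```python
# First-order dynamic energy per wire transition (illustrative values).
#   full swing: E ~ C * Vdd^2
#   low  swing: E ~ C * Vdd * Vswing  (driver still referenced to Vdd)
C_wire = 200e-15   # 200 fF of wire capacitance (assumed)
Vdd    = 1.0       # supply voltage, volts (assumed)
Vswing = 0.2       # reduced signal swing, volts (assumed)

e_full = C_wire * Vdd * Vdd
e_low  = C_wire * Vdd * Vswing
print(f"full-swing: {e_full * 1e15:.0f} fJ/transition, "
      f"low-swing: {e_low * 1e15:.0f} fJ/transition "
      f"(~{e_full / e_low:.0f}x lower)")
```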

  12. 3D Stacking Imminent for Capacity • Simply stack photonic dies? • Vertical coupling and hierarchical power guiding suggested by prior work • This is our baseline design • But, more photonic rings in the channel • Exactly the same number active as before • Energy-optimal point shifts towards fewer “stops” • A single set of rings becomes optimal • 2.4x energy consumption, for 8x capacity [Figure: stack of 8 optimally designed photonic DRAM dies]
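
A sketch of why the optimal number of "stops" shrinks with stacking: every die in the channel carries rings for every stop and must keep them thermally tuned, even though only one die's rings are active at a time, while the electrical-energy benefit of extra stops is paid per access and does not grow with die count. The Python model below uses illustrative constants, not paper data, only to show the trend.

```python
# Illustrative model: energy per access vs. number of photonic "stops"
# per die, for a channel with one die and with an 8-die stack.
def energy_per_access(stops, dies,
                      tune_w_per_stop=0.05,   # static tuning power per stop's rings
                      access_rate=100e6,      # channel accesses per second
                      wire_j_one_stop=2e-9):  # on-die wire energy with a single stop
    # Tuning is paid by every stop on every die, amortized over accesses;
    # on-die electrical energy shrinks roughly in proportion to stop count.
    static  = (tune_w_per_stop * stops * dies) / access_rate
    dynamic = wire_j_one_stop / stops
    return static + dynamic

for dies in (1, 8):
    best = min(range(1, 9), key=lambda s: energy_per_access(s, dies))
    print(f"{dies} die(s) in the channel: energy-optimal stops = {best}")
```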

  13. Key Idea – Exploiting TSVs • Move all photonic components to a separate interface die, shared by several memory dies • Photonics off-chip only • TSVs for inter-die communication • Best of both worlds: high BW and low static energy • Efficient low-swing wires on-die [Figure: 8 optimally designed photonic DRAM dies vs. 8 commodity DRAM dies plus a single photonic interface die]

  14. Proposed Design • ADVANTAGE 1: Increased activity factor, more efficient use of photonics • ADVANTAGE 2: Rings are co-located; easier to isolate or tune thermally • ADVANTAGE 3: Not disruptive to the design of commodity memory dies [Figure: processor and memory controller connected by a waveguide to a DIMM of DRAM chips stacked on a photonic interface die]

  15. Energy Characteristics • Static energy trumps distance-independent dynamic energy [Figures: single die on the channel vs. four 8-die stacks on the channel]

  16. Final System • 23% reduced energy consumption • 4X capacity per channel • Potential for performance improvements due to increased bank count • Less disruptive to memory die design • But: makes the job of the memory controller difficult! [Figure: processor, memory controller, waveguide, DIMM of DRAM chips on a photonic interface die]

  17. PART 2 – Communication Protocol

  18. The Scalability Problem • Large capacity, high bandwidth, and evolving technology trends will increase pressure on the memory interface • Processor-side support required for every memory innovation • Current micro-management requires several signals • Heavy pressure on address/command bus • Worse with several independent banks, large amounts of state

  19. Proposed Solution • Release MC’s tight control, make the memory stack more autonomous • Move mundane tasks to the interface die • Maintenance operations (refresh, scrub, etc.) • Routine operations (DRAM precharge, NVM wear leveling) • Timing control (18+ constraints for DRAM alone) • Coding and any other special requirements

  20. What would it take to do this? • “Back-pressure” from the memory • But, “Free-for-all” would be inefficient • Needs explicit arbitration • Novel slot-based interface • Memory controller retains control over data bus • Memory module only needs address, returns data

  21. Memory Access Operation [Timeline figure: arrival, issue, start looking for a slot ML later, first free slot S1, backup slot S2; previously reserved slots marked X] • Slot: cache-line data-bus occupancy • X: reserved slot • ML: memory latency = address latency + bank access + data-bus latency
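
A minimal sketch of the slot reservation the timeline describes, in Python: the controller knows only the module's advertised memory latency ML, reserves the first free data-bus slot at or after issue + ML, and also holds a backup slot. Class and parameter names are hypothetical, and this simplifies the mechanism to a single channel; it is an illustration of the idea, not the paper's exact implementation.

```python
# Simplified slot-based data-bus reservation (illustrative, single channel).
# ML = address latency + bank access + data-bus latency, advertised once
# by the memory module's interface die.
class SlotScheduler:
    def __init__(self, ml_cycles, slot_cycles):
        self.ml = ml_cycles       # fixed memory latency, in bus cycles
        self.slot = slot_cycles   # data-bus occupancy of one cache line
        self.reserved = set()     # indices of slots already reserved (X)

    def issue(self, issue_cycle):
        """Reserve the first free slot no earlier than issue + ML,
        plus a backup slot in case the primary slot is missed."""
        s1 = (issue_cycle + self.ml) // self.slot
        while s1 in self.reserved:          # skip slots already reserved
            s1 += 1
        self.reserved.add(s1)               # primary slot S1
        s2 = s1 + 1
        while s2 in self.reserved:
            s2 += 1
        self.reserved.add(s2)               # backup slot S2
        return s1, s2

sched = SlotScheduler(ml_cycles=40, slot_cycles=4)
print(sched.issue(issue_cycle=0))   # first request  -> slots (10, 11)
print(sched.issue(issue_cycle=0))   # second request -> slots (12, 13)
```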

  22. Advantages • Plug and play • Everything is interchangeable and interoperable • Only interface-die support required (communicate ML) • Better support for heterogeneous systems • Easier DRAM-NVM data movement on the same channel • More innovation in the memory system • Without processor-side support constraints • Fewer commands between processor and memory • Energy, performance advantages

  23. Target System and Methodology • Terascale memory node in an Exascale system • 1 TB of memory, 1 TB/s of bandwidth • Assuming 80 GB/s per channel, we need 16 channels, with 64 GB per channel • 2 GB dies x 8 dies per stack x 4 stacks per channel • Focus on the design of a single channel • In-house DRAM simulator + SIMICS • PARSEC, STREAM, synthetic random traffic • Max. traffic load used, just below channel saturation
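
The channel math above can be re-derived directly from the stated assumptions (80 GB/s per channel, 2 GB dies, 8 dies per stack, 4 stacks per channel); the quick check below uses nothing beyond the slide's own numbers.

```python
# Terascale memory node: 1 TB of capacity at ~1 TB/s of bandwidth.
target_cap_GB   = 1024   # 1 TB
die_cap_GB      = 2
dies_per_stack  = 8
stacks_per_chan = 4
channel_bw_GBps = 80     # one 64-wavelength waveguide per channel

cap_per_channel = die_cap_GB * dies_per_stack * stacks_per_chan   # 64 GB
channels        = target_cap_GB // cap_per_channel                # 16
aggregate_bw    = channels * channel_bw_GBps                      # 1280 GB/s

print(f"{channels} channels x {cap_per_channel} GB = "
      f"{channels * cap_per_channel} GB total, {aggregate_bw} GB/s aggregate")
```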

  24. Performance Impact – Synthetic Traffic • Less than 9% latency impact, even at maximum load • Virtually no impact on achieved bandwidth

  25. Performance Impact – PARSEC/STREAM • Apps have very low BW requirements • Scaled-down system, similar trends

  26. Tying it together – The Interface Die

  27. Summary of Design • Proposed 3D-stacked interface die with 2 major functions • Holds photonic devices for Electrical-Optical-Electrical conversion • Photonics only on the busy shared bus between this die and the processor • Intra-memory communication all-electrical exploiting TSVs and low-swing wires • Holds device controller logic • Handles all mundane/routine tasks for the memory devices • Refresh, scrub, coding, timing constraints, sleep modes, etc. • Processor-side controller deals with more important functions such as scheduling, channel arbitration, etc. • Simple speculative slot based interface

  28. Key Contributions • Efficient application of photonics • 23% lower energy • 4X capacity, potential for performance improvements • Minimally disruptive to memory die design • Single memory die design for photonics and electronics • Streamlined memory interface • More interoperability and flexibility • Innovation without processor-side changes • Support for heterogeneous memory
