
Debunking then Duplicating Ultracomputer Performance Claims by Debugging the Combining Switches


Presentation Transcript


  1. Debunking then Duplicating Ultracomputer Performance Claims by Debugging the Combining Switches Eric Freudenthal and Allan Gottlieb {freudenthal, gottlieb}@nyu.edu

  2. Talk Summary
  • Review of combining networks
    • MIMD architecture expected to provide high performance for hot spot traffic & centralized coordination
  • Duplicating & debunking
    • High hot spot latency, slow centralized coordination, and why
  • Improvements to architecture
    • Larger buffers, adaptive queue capacity
    • Improved hot spot & coordination performance

  3. PRAM and Fetch & Add
  • Fetch & Add: atomic primitive equivalent to

        int FAA(int *v, int addend) {
            int r = *v;
            *v = r + addend;
            return r;
        }

  • Serialization-free FAA-based centralized algorithms
    • Queues, barriers
    • Shared locks: counting semaphores, readers/writers (not binary semaphores)
  • Single memory reference if uncontended
  • These algorithms generate hot spot memory traffic, which may serialize in the memory system.
  [Figure: PRAM, an idealized multi-port shared memory serving PE0 … PEn. The NYU Ultracomputer approximates this model.]
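
  To make "serialization free" concrete, here is a minimal C sketch of an FAA-based shared queue, using the FAA primitive defined above (an illustrative simplification, not the talk's measured algorithm: full/empty detection, per-cell synchronization, and counter wraparound are omitted):

      #define QSIZE 1024
      int buf[QSIZE];
      int head = 0, tail = 0;          /* shared hot spot variables */

      /* Each enqueuer claims a distinct slot with a single FAA;
         no PE ever waits on another PE, so inserts do not serialize. */
      void enqueue(int item) {
          int slot = FAA(&tail, 1) % QSIZE;
          buf[slot] = item;
      }

      /* Symmetrically, each dequeuer claims the next occupied slot. */
      int dequeue(void) {
          int slot = FAA(&head, 1) % QSIZE;
          return buf[slot];
      }

  Note that head and tail are exactly the hot spot variables the slide warns about: every PE references the same two words, so throughput depends on the memory system combining the concurrent FAAs rather than serializing them.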

  4. 2³ PE computer with omega network
  [Figure: eight processing elements (PE0 … PE7) connected through three stages of 2×2 switches (SW) to eight memory modules (MM0 … MM7), with NUMA connections between co-resident PEs and MMs.]
  • Routing: successive stages route on bits 2⁰, 2¹, 2² of the destination address
  • "Dance hall": all processors equally distant from all memory modules
  • "Boudoir": processors & memory modules can be co-resident
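
  A small C sketch of the bit-per-stage routing rule (my illustration of the 2⁰ 2¹ 2² labels above; the assumption that stage k examines bit k of the destination module number is mine):

      /* Destination-tag routing in a 2x2 omega-network switch:
         stage k (k = 0, 1, 2) selects its output port from bit k
         of the destination memory module number. */
      int output_port(int dest_mm, int stage) {
          return (dest_mm >> stage) & 1;   /* 0 = one port, 1 = the other */
      }

  With 2×2 switches, three such decisions suffice to reach any of the 2³ memory modules.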

  5. Network congestion due to polling of hot spot in MM3
  [Figure: the same omega network, with the switch queues in the "funnel" of paths converging on MM3 shown in red.]
  • Each PE has a single outstanding reference to the same variable
    • Low offered load
  • If switches simply route messages
    • Polling requests serialize at MM3
    • Switch queues in the "funnel" near MM3 fill
  • If switches combine references to the same variable
    • A single MM operation satisfies multiple requests
    • Lower network congestion & access latency

  6. Combining of Fetch & Add
  [Figure: requests FAA(X,1) and FAA(X,2) combine into FAA(X,3), and FAA(X,4) and FAA(X,8) combine into FAA(X,12) (upper port first, so its addend 4 is saved in the wait buffer); FAA(X,3) and FAA(X,12) then combine into FAA(X,15), which reaches the MM holding X. Start: X=0; End: X=15. The decombined replies deliver the values 12, 13, 0, and 4 to the four original requests.]
  • Semantics equivalent to some serialization
  • Addend for decombine is held in the wait buffer
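
  A C sketch of the combine/decombine mechanism the figure depicts (my illustration; the names and structure are hypothetical, not the switch's actual implementation):

      typedef struct { int addr, addend; } Req;

      /* Combining: two FAAs to the same address merge into one request.
         The first-arriving addend is saved for later decombining. */
      Req combine(Req first, Req second, int *wait_buf) {
          *wait_buf = first.addend;
          Req merged = { first.addr, first.addend + second.addend };
          return merged;
      }

      /* Decombining: the single reply (the old value of the variable)
         is split into the two replies the original requests expect. */
      void decombine(int reply, int wait_buf, int *r1, int *r2) {
          *r1 = reply;              /* first request sees the old value */
          *r2 = reply + wait_buf;   /* second sees it after first's add */
      }

  For example, combining FAA(X,4) and FAA(X,8) from the figure stores 4 in the wait buffer and forwards FAA(X,12); if the reply is 0, decombining yields 0 and 4, exactly as if the two FAAs had executed one after the other.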

  7. NYU Combining Queue Design
  [Figure: a Guibas & Liang systolic FIFO (in/out ports) beside the Ultracomputer combining queue, which adds per-cell ALUs and "chutes".]
  • No associative memory required
  • Background: Guibas & Liang systolic FIFO

  8. Prior Results
  • Architecture reasonable and motivated
    • Switches not prohibitively expensive
    • Serialization-free coordination algorithms
  • Queues in switches permit high bandwidth
    • Low latency for random & mixed hot spot traffic
  • Hot spot congestion remains problematic
    • Queues in switches near hot memory fill
    • High latency for all overlapping traffic
    • Ultra III flow control believed helpful

  9. Rest of this talk
  • Duplication of old results
    • Low average latency for low hot spot fraction
    • High latency for hot spot polling
  • New results
    • Debunking: high latency despite Ultra III flow control; distributed synchronization algorithms superior to centralized
    • Deconstructing: understanding the high latency (reduced combining due to wait buffer exhaustion; queuing delays in the network, where reduced queue capacity helps)
    • Debugging: improvements to combining switches (larger wait buffer needed; adaptive reduction of queue capacity when combining occurs)
    • Duplication: centralized algorithms competitive, and much superior for concurrent-access locks

  10. Ultra III "baseline" switches: memory latency, one request / PE
  [Plot: memory latency vs. % hot spot, one outstanding request per PE. With combining, latency is ~4x ideal at 100% hot spot and ~2x at 40%, and near ideal at 0-10%; a 100% hot spot, no-combining curve is also shown.]

  11. Two "Fixes" to Ultra III Switch Design
  • Problem: full wait buffers reduce combining
    • Ultra III flow control: full wait buffer → block to-MM input ports
    • Switches in the funnel are starved of "combinable" traffic
    • "Sufficient" capacity → 45% latency reduction
  • Problem: congestion in the "combining funnel"
    • Combined messages fill queues near MMs
    • Closed system: |PEs| messages, most near MMs; few messages in other switches, so combining unlikely
    • Shortened queues → backpressure: lower per-stage queuing delays, more combining
    • Reduces latency another 30%; centralized algorithms competitive

  12. Design tension I: "Best" queue length
  • Problem
    • Non-hot spot latency benefits from large queues
    • Hot spot combining benefits from small queues
  • Solution (see the sketch below)
    • Detect switches engaged in combining: multiple combined messages awaiting transmission
    • Adaptively reduce capacity of these switches; other switches unaffected
  • Results
    • Reduced polling latency, good non-polling latency
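
  A minimal C sketch of the adaptive capacity rule (the names and threshold values are hypothetical illustrations; the real mechanism is switch hardware):

      #define FULL_CAPACITY      8   /* normal queue capacity (assumed) */
      #define COMBINING_CAPACITY 2   /* reduced capacity while combining */

      /* A switch counts as "engaged in combining" when multiple combined
         messages await transmission; it then advertises a smaller
         capacity, creating backpressure that keeps messages spread
         across earlier stages, where they can still combine. */
      int effective_capacity(int combined_awaiting) {
          return (combined_awaiting >= 2) ? COMBINING_CAPACITY
                                          : FULL_CAPACITY;
      }

      int can_accept(int queue_length, int combined_awaiting) {
          return queue_length < effective_capacity(combined_awaiting);
      }

  Switches not engaged in combining keep the full capacity, so non-hot spot traffic still sees large queues.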

  13. Other Combining Switch Design Tensions
  • Single-input queues are simpler, but make it hard to double the clock rate
  • A dual-input combining queue can be built from two single-input queues, but messages from different ports are ineligible for combining
  • Decoupled ALUs
    • Decoupling allows faster clock speeds: cycle time is max(transmission, ALU) rather than their sum
    • But the head item cannot combine, so three enqueued messages are required for combining
  [Figure: two single-input queues with decoupled ALUs feeding a mux.]

  14. Memory latency, 1024 PE systems, over a range of accepted load
  • Baseline: Ultra III switch (limited wait buffer, fixed queue size)
  • Waitbuf100: baseline + sufficient wait buffer
  • Improved: Waitbuf100 + adaptive queue length
  • Aggressive: Improved + combines from both ports & on first slice, assuming no reduction of clock rate (optimistic)
  [Plots: latency vs. accepted load for 100% hot spot, 20% hot spot, and uniform traffic.]

  15. Mellor-Crummey & Scott (MCS): local-spin coordination
  • No hot spot polling: each PE spins on a distinct shared variable in a co-located MM
    • Other parts of the algorithm may generate hot spot traffic
  • Serialization-free barriers
    • Barrier satisfaction "disseminated" without generating hot spot traffic
    • Each processor has log2(N) rendezvous
  • Locks: global state in hot spot variables
    • Heads of linked lists (blocked requestors)
    • Count of readers
    • Hot spot accesses benefit from combining
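
  For contrast with MCS local-spin barriers, a minimal C sketch of the kind of centralized FAA barrier the talk measures (my reconstruction; the variable names are mine and memory-ordering details are ignored):

      int count = 0;         /* hot spot variables, both in one MM */
      int generation = 0;

      void barrier(int n_pe) {
          int gen = generation;
          if (FAA(&count, 1) == n_pe - 1) {  /* last PE to arrive */
              count = 0;                     /* reset for next use */
              generation = gen + 1;          /* release the spinners */
          } else {
              while (generation == gen)      /* hot spot polling */
                  ;
          }
      }

  Both the FAA on count and the polling reads of generation target single words, so this barrier's performance hinges on the combining behavior analyzed in the preceding slides.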

  16. Synchronization: Barriers (MCS also serialization-free)
  • IntenseLoop: barrier only
  • RealisticLoop: reference 15 or 30 shared variables, then barrier
  [Plots: performance of both loops; the arrow marks the faster direction.]

  17. Reader-Writer Experiment
  • Loop: determine if reader or writer; sleep for 100 cycles; lock; reference 10 shared variables; unlock
  • Reader-writer mix
    • All reader, all writer
    • 1 or 10 expected writers (e.g., P(writer) = 1/N gives one expected writer)
  • Plots on next slides
    • Rate at which reader and writer locks are granted (unit: rate per kilocycle)
    • Greater values indicate greater progress
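
  For reference, a minimal C sketch of a centralized FAA-style reader-writer lock of the general kind measured here (my reconstruction, assuming the classic encoding of readers and a writer in one counter; fairness and starvation handling are omitted, and the measured algorithm may differ):

      #define W (1 << 20)   /* writer weight, larger than any reader count */
      int rw = 0;           /* one hot spot variable: readers + W per writer */

      void reader_lock(void) {
          while (FAA(&rw, 1) >= W) {   /* a writer is present */
              FAA(&rw, -1);            /* undo, then wait and retry */
              while (rw >= W)
                  ;                    /* hot spot polling */
          }
      }
      void reader_unlock(void) { FAA(&rw, -1); }

      void writer_lock(void) {
          while (FAA(&rw, W) != 0) {   /* readers or a writer present */
              FAA(&rw, -W);
              while (rw != 0)
                  ;
          }
      }
      void writer_unlock(void) { FAA(&rw, -W); }

  Concurrent reader FAAs on rw can combine in the network, which is why the all-readers case on the next slide benefits so much from combining switches.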

  18. All Readers / All Writers
  • All readers
    • Combining helps MCS
    • Serialization-free (FAA algorithm) faster
  • All writers
    • Essentially a semaphore; little network traffic
    • MCS fastest under contention
  [Plots: lock-grant rates; the arrow marks the faster direction.]

  19. 1 Expected Writer
  • Rate at which readers proceed
    • FAA faster
    • MCS benefits from combining
  • Rate at which writers proceed
    • FAA faster on the best architecture
    • MCS benefits from combining
  [Plots: grant rates; the arrow marks the faster direction.]

  20. Conclusions
  • Architecture
    • Large wait buffers decrease hot spot latency
    • Adaptive queue capacity decreases latency; a general technique?
  • Performance of centralized algorithms
    • Centralized R/W competitive with the MCS alternative, and much superior when readers dominate; requires combining
    • Centralized barrier: almost as fast as distributed with "improved Ultra III" switches, faster than distributed with the "new" switch design; benefits diminish as superstep size increases

  21. Relevance & Future Work
  • Large shared memory systems are manufactured
  • Combining possible on all topologies: return messages must be routed to combine sites
  • Combining demonstrated as useful for inter-process coordination
  • Application of adaptive queue capacity modulation to other domains, such as responding to flash-flood & DoS traffic
  • Analytic model of queuing delays for hot spot combining under development
