
A Perspective on the Limits of Computation


Presentation Transcript


  1. A Perspective on the Limits of Computation Oskar Mencer May 2012

  2. Limits of Computation Objective: Maximum Performance Computing (MPC) What is the fastest we can compute desired results? Conjecture: Data movement is the real limit on computation.

  3. Less Data Movement = Less Data + Less Movement. Maximum Performance Computing (MPC): the journey will take us through: Information Theory: Kolmogorov Complexity; Optimised Arithmetic: Winograd Bounds; Optimisation via Kahneman and Von Neumann; Real World Dataflow Implications and Results.

  4. Kolmogorov Complexity (K) Definition (Kolmogorov): “If a description of string s, d(s), is of minimal length, […] it is called a minimal description of s. Then the length of d(s), […] is the Kolmogorov complexity of s, written K(s), where K(s) = |d(s)|” Of course K(s) depends heavily on the language L used to write descriptions (e.g. Java, Esperanto, an executable file, etc.). Kolmogorov, A.N. (1965). "Three Approaches to the Quantitative Definition of Information". Problems Inform. Transmission 1 (1): 1–7.
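
Since K is not computable (see slide 7 below), practice falls back on computable upper bounds such as compressed length; the choice of compressor plays the role of the language L. A minimal Python sketch of this idea (illustrative, not from the slides):

    import os
    import zlib

    def k_upper_bound(s: bytes) -> int:
        # Length of a zlib description of s: a computable upper bound on K(s).
        # The bound depends on the "language" (here, zlib's format).
        return len(zlib.compress(s, 9))

    regular = b"ab" * 500       # 1000 bytes with a very short description
    noise = os.urandom(1000)    # 1000 bytes with (almost surely) no short one

    print(k_upper_bound(regular))  # small: the regularity is captured
    print(k_upper_bound(noise))    # close to 1000: essentially incompressible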

  5. A Maximum Performance Computing Theorem For a computational task f computing the result r given inputs i, i.e. task f: r = f(i), and assuming infinite capacity to compute and remember inside the box f, the time T to compute task f depends on moving the data in and out of the box. Thus, for a machine f with infinite memory and infinitely fast arithmetic, the Kolmogorov complexity K(i + r) defines the fastest way to compute task f.
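
A back-of-the-envelope sketch of the theorem's practical content (the bandwidth and data-size figures are hypothetical): with infinitely fast arithmetic inside the box, time is bounded below by moving i and r across its boundary, and only a shorter representation, approaching K(i + r), can lower that bound.

    def min_time_seconds(input_bytes: float, result_bytes: float,
                         bandwidth_bytes_per_s: float) -> float:
        # Lower bound on T for an infinitely fast box f: even with zero
        # compute time, i and r must still cross the box boundary.
        return (input_bytes + result_bytes) / bandwidth_bytes_per_s

    # e.g. 1 GB in, 1 GB out over a 10 GB/s link: at least 0.2 s,
    # no matter how fast the arithmetic inside the box is.
    print(min_time_seconds(1e9, 1e9, 10e9))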

  6. SABR model: we integrate in time (Euler in the log-forward, Milstein in the volatility), carrying the state (σ, F). The representation K(σ, F) of the state (σ, F) is critical!
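
A minimal sketch of one such time step, assuming the standard SABR dynamics dF = σ F^β dW₁, dσ = α σ dW₂ with correlation ρ between the two Brownian motions (the slide names only the scheme; parameter names here are illustrative):

    import math
    import random

    def sabr_step(F, sigma, alpha, beta, rho, dt):
        # One SABR step: Euler in log-forward, Milstein in volatility.
        z1 = random.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        dW1, dW2 = math.sqrt(dt) * z1, math.sqrt(dt) * z2

        # Euler on log F, with the Ito correction -0.5 * sigma^2 * F^(2beta-2) * dt
        logF = math.log(F) \
            + sigma * F ** (beta - 1.0) * dW1 \
            - 0.5 * sigma * sigma * F ** (2.0 * beta - 2.0) * dt

        # Milstein on sigma: the extra 0.5 * alpha^2 * sigma * (dW2^2 - dt) term
        sigma_next = sigma + alpha * sigma * dW2 \
            + 0.5 * alpha * alpha * sigma * (dW2 * dW2 - dt)
        return math.exp(logF), sigma_next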

  7. MPC – Bad News: 1. Real computers have neither infinite memory nor infinitely fast arithmetic units. 2. Kolmogorov's theorem: K is not a computable function. MPC – Good News: today's arithmetic units are fast enough. So in practice, Kolmogorov Complexity => Discretisation & Compression => MPC depends on the Representation of the Problem.

  8. Euclid's Elements, representing a² + b² = c²

  9. 17 × 24 = ?

  10. Thinking, Fast and Slow Daniel Kahneman, Nobel Prize in Economics, 2002. Back to 17 × 24. Kahneman splits thinking into: System 1: fast, hard to control ... the instant ballpark guess, around 400. System 2: slow, easier to control ... the exact answer, 408 (17 × 24 = 17 × 20 + 17 × 4 = 340 + 68 = 408).

  11. Remembering Fast and Slow John von Neumann, 1946: “We are forced to recognize the possibility of constructing a hierarchy of memories, each of which has greater capacity than the preceding, but which is less quickly accessible.”

  12. Consider Computation and Memory Together Computing f(x) in the range [a, b] with |E| ≤ 2⁻ⁿ. The design space runs from a pure Table, through Table + Arithmetic, to pure Arithmetic (+, -, ×, ÷): • polynomial or rational approximation • continued fractions • multi-partite tables • uniform vs non-uniform sampling • number of table entries • how many coefficients. The underlying hardware/technology changes the optimum.
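
A sketch of the table-versus-arithmetic tradeoff for f(x) = sin(x) on [0, π/2] (the sizing formulas are standard approximation-error bounds, not from the slides): a pure table needs one entry per error-width of x, while spending a multiply and an add on linear interpolation shrinks the table quadratically.

    import math

    N = 12                      # target accuracy |E| <= 2^-N
    EPS = 2.0 ** -N
    A, B = 0.0, math.pi / 2

    # Pure table: nearest-entry error <= max|f'| * h/2, and |cos| <= 1,
    # so we need about (B - A) / (2 * EPS) entries.
    m_table = math.ceil((B - A) / (2 * EPS))
    table = [math.sin(A + i * (B - A) / m_table) for i in range(m_table + 1)]

    def f_table(x):
        return table[round((x - A) / (B - A) * m_table)]

    # Table + arithmetic: linear interpolation error <= max|f''| * h^2 / 8,
    # so about (B - A) / sqrt(8 * EPS) entries suffice.
    m_interp = math.ceil((B - A) / math.sqrt(8 * EPS))
    grid = [math.sin(A + i * (B - A) / m_interp) for i in range(m_interp + 1)]

    def f_interp(x):
        t = (x - A) / (B - A) * m_interp
        i = min(int(t), m_interp - 1)
        return grid[i] + (t - i) * (grid[i + 1] - grid[i])

    print(m_table, m_interp)                           # thousands vs dozens
    print(f_table(1.0), f_interp(1.0), math.sin(1.0))  # all agree to ~2^-12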

  13. MPC in Practice: Tradeoff Representation, Memory and Arithmetic

  14. Limits on Computing + and ×, Shmuel Winograd, 1965. Bounds on Addition • Binary: O(log n) • Residue Number System: O(log₂ log α(N)) • Redundant Number System: O(1) Bounds on Multiplication • Binary: O(log n) • Residue Number System: O(log₂ log β(N)) • Using Tables: O(2^⌈log n/2⌉ + 2^⌈log₂ n/2⌉) • Logarithmic Number System: O(Addition) However, binary and log numbers are easy to compare; the others are not! Lesson: if you optimise only a little piece of the computation, the result is useless in practice => need to optimise ENTIRE programs. Or in other words: abstraction kills performance.
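
A toy sketch of the last row: in a logarithmic number system, x is stored as log₂(x), so × collapses to one addition and comparison stays a plain compare (log₂ is monotonic), while + becomes the operation that needs a table or approximation (illustrative code, not a hardware design):

    import math

    def lns_mul(a, b):
        # (log2 x) + (log2 y) = log2(x * y): multiplication is one add
        return a + b

    def lns_add(a, b):
        # log2(2^a + 2^b) = max + log2(1 + 2^(min - max)): the hard case,
        # done in hardware with a lookup table or piecewise approximation
        hi, lo = max(a, b), min(a, b)
        return hi + math.log2(1.0 + 2.0 ** (lo - hi))

    x, y = math.log2(17.0), math.log2(24.0)
    print(2.0 ** lns_mul(x, y))   # ~408: the multiply cost a single add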

  15. Addition in O(1) Redundant representation: 2 bits represent 1 binary digit => use counters to reduce the input. (3,2) counters reduce three numbers (a, b, c) to two numbers (out1, out2) so that a + b + c = out1 + out2.
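
A bit-level sketch of that reduction (standard carry-save addition; the word width is arbitrary): each bit position independently maps (aᵢ, bᵢ, cᵢ) to a sum bit and a carry bit, so there is no carry chain and the step is O(1) in the word length.

    def counter_3_2(a: int, b: int, c: int) -> tuple[int, int]:
        # Per-bit full-adder logic applied to whole words at once:
        out1 = a ^ b ^ c                                # sum bits
        out2 = ((a & b) | (a & c) | (b & c)) << 1       # carry bits, weight 2
        return out1, out2

    a, b, c = 17, 24, 42
    out1, out2 = counter_3_2(a, b, c)
    assert a + b + c == out1 + out2   # one slow binary add deferred to the end
    print(out1, out2)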

  16. From Theory to Practice: Optimise Whole Programs, Customise Architecture, Customise Numerics

  17. Mission Impossible?

  18. Less Data Movement = Less Data + Less Movement. Maximum Performance Computing (MPC): the journey will take us through: Information Theory: Kolmogorov Complexity; Optimised Arithmetic: Winograd Bounds; Optimisation via Kahneman and Von Neumann; Real World Dataflow Implications and Results.

  19. Optimise Whole Programs with Finite Resources SYSTEM 1: x86 cores with Low Latency Memory. SYSTEM 2: flexible memory + logic with High Throughput Memory. Balance Computation and Memory.

  20. The Ideal System 2 is a Production Line SYSTEM 1: x86 cores with Low Latency Memory. SYSTEM 2: flexible memory + logic with High Throughput Memory. Balance Computation and Memory.

  21. 8 Maxeler DFEs replacing 1,900 Intel CPU cores, presented by ENI at the Annual SEG Conference, 2010. Compared to 32 3GHz x86 cores parallelised using MPI: 100 kW of Intel cores => 1 kW of Maxeler Dataflow Engines.

  22. Example: Sparse Matrix Computations O. Lindtjorn et al., Hot Chips 2010. Given matrix A and vector b, find vector x in Ax = b. The conventional implementation does not scale beyond six x86 CPU cores; the Maxeler solution achieves 20-40x in 1U via domain-specific address and data encoding (patent pending).
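
For contrast, a plain CSR sparse matrix-vector multiply, the kernel inside iterative Ax = b solvers; this is a textbook baseline, not Maxeler's patent-pending encoding. It shows why the kernel is movement-bound: each multiply-add drags in a matrix value, a column index, and an irregularly gathered element of x.

    def spmv_csr(values, col_idx, row_ptr, x):
        # y = A @ x with A in compressed sparse row form
        y = [0.0] * (len(row_ptr) - 1)
        for row in range(len(y)):
            for k in range(row_ptr[row], row_ptr[row + 1]):
                y[row] += values[k] * x[col_idx[k]]   # irregular gather on x
        return y

    # 2x2 example: A = [[4, 1], [0, 3]], x = [1, 2]  ->  y = [6, 6]
    print(spmv_csr([4.0, 1.0, 3.0], [0, 1, 1], [0, 2, 3], [1.0, 2.0]))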

  23. Example: JP Morgan Derivatives Pricing O. Mencer, S. Weston, Journal on Concurrency and Computation, July 2011. • Compute value and risk of complex credit derivatives • Moving the overnight run to real-time intra-day • Reported speedup: 220-270x (8 hours => 2 minutes) • Power consumption drops from 250W to 235W per node. See the JP Morgan talk at Stanford on YouTube; search “weston maxeler”.

  24. Maxeler Loop Flow Graphs for JP Morgan Credit Derivatives: Whole Program Transformation Options

  25. Maxeler Data Flow Graph for JP Morgan Interest Rates Monte Carlo Acceleration

  26. Example: a data flow graph generated by MaxCompiler, with 4,866 static dataflow cores in one chip

  27. Maxeler Dataflow Engines (DFEs) • High Density DFEs: Intel Xeon CPU cores and up to 6 DFEs with 288GB of RAM • The Dataflow Appliance: dense compute with 8 DFEs, 384GB of RAM and dynamic allocation of DFEs to CPU servers with zero-copy RDMA access • The Low Latency Appliance: Intel Xeon CPUs and 1-2 DFEs with direct links to up to six 10Gbit Ethernet connections • Dataflow Engines: 48GB DDR3, high-speed connectivity and dense configurable logic • MaxRack: 10, 20 or 40 node rack systems integrating compute, networking & storage • MaxWorkstation: desktop dataflow development system • MaxCloud: hosted, on-demand, scalable accelerated compute
