Efficient Parallel and Incremental Computation Model for High Performance

This presentation describes a model for parallel and incremental computation based on self-adjusting computation. The model distinguishes deterministic, potentially parallel compute tasks from non-deterministic mutate tasks that may perform I/O. Under self-adjusting computation, a small change to the input requires only a small amount of recomputation, so results can be reused across runs. The Morph application serves as a running example, highlighting data decomposition and the identification of independent subtasks. A tasks-and-tiles methodology provides a natural decomposition for both parallelism and incrementalism, yielding 12x to 37x speedups over a sequential baseline on an 8-core machine.

Presentation Transcript


  1. Two for the Price of One: A Model for Parallel and Incremental Computation
     Thomas Ball, Sebastian Burckhardt, Daan Leijen (Microsoft Research, Redmond)
     Caitlin Sadowski, Jaeheon Yi (UC Santa Cruz)

  2. Expensive, Repeated Tasks
     • compute: deterministic, potentially parallel, no I/O
     • mutate: non-deterministic, may perform I/O
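The distinction between the two task kinds can be sketched as follows. This is an illustrative model only, not the paper's actual API; the names `ComputeTask` and `MutateTask` are assumptions.

```python
# Illustrative sketch of the two task kinds on the slide (not the paper's API).
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ComputeTask:
    """Deterministic, no I/O: safe to run in parallel and to cache across runs."""
    fn: Callable[..., Any]
    args: tuple

    def run(self):
        # Same args always produce the same result, so the result is reusable.
        return self.fn(*self.args)

@dataclass
class MutateTask:
    """Non-deterministic, may perform I/O: must not be cached or reordered freely."""
    action: Callable[[], Any]

    def run(self):
        return self.action()

double = ComputeTask(lambda x: 2 * x, (21,))
print(double.run())  # deterministic: always 42
```

The point of the split is that only compute tasks are eligible for the record/repeat reuse described on the following slides.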

  3. Self-Adjusting Computation: record and repeat (equivalent results)
     Goal of SAC: incremental computation yields higher performance

  4. Demo: Morph Application

  5. Self-Adjusting Computation: Small Change to Input

  6. Small Changes in Input Produce Small Changes in Output
     [Figure: a record run followed by a repeat run]
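The record/repeat behavior can be sketched with a per-tile cache. This is a minimal assumed model, not the paper's algorithm: the first run records each tile's input and output, and later runs reuse cached outputs for tiles whose input did not change.

```python
# Minimal record/repeat sketch (assumed structure, not the paper's code).
cache = {}      # tile index -> (recorded input, recorded output)
executed = []   # log of tiles actually recomputed on this run

def repeat(inputs, f):
    out = []
    for i, x in enumerate(inputs):
        hit = cache.get(i)
        if hit is not None and hit[0] == x:
            out.append(hit[1])       # input unchanged: reuse cached result
        else:
            y = f(x)                 # input changed (or first run): recompute
            cache[i] = (x, y)        # re-record for the next run
            executed.append(i)
            out.append(y)
    return out

repeat([1, 2, 3, 4], lambda x: x * x)   # record run: all 4 tiles execute
executed.clear()
print(repeat([1, 2, 9, 4], lambda x: x * x), executed)
# small change to input -> small recomputation: only tile 2 re-ran
```

Running this prints `[1, 4, 81, 16] [2]`: after changing one tile of the input, only that tile's task re-executes.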

  7. Key Observations
     • The same abstraction (tasks) can be used for both parallelism and self-adjusting computation
     • Leverage the work a programmer has already done to decompose a large task into parallel subtasks:
       • data decomposition (partitioning)
       • identifying independent and dependent subtasks

  8. Tasks and Tiles: Natural Decomposition for Parallelism and Incrementalism
     [Figure: arrays B, E, and Out, each split into tiles 0–3; Out[3] = F(B[1–3], E[1–3])]
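The tile dependency shown on the slide can be sketched as follows: each output tile is computed from a window of tiles of the input arrays B and E (tile 3 reads tiles 1–3). The window width and the combining function `F` here are illustrative assumptions.

```python
# Sketch of the slide's tile dependency: Out[i] = F(B[i-2..i], E[i-2..i]).
# Window width and F are illustrative; the slide only shows Out[3] = F(B[1-3], E[1-3]).
def out_tile(i, B, E, F):
    lo = max(0, i - 2)
    return F(B[lo:i + 1], E[lo:i + 1])

B = [1, 2, 3, 4]
E = [10, 20, 30, 40]
F = lambda bs, es: sum(bs) + sum(es)   # stand-in combining function

print(out_tile(3, B, E, F))  # reads B[1..3] and E[1..3]
```

Because each `out_tile(i, ...)` touches only a bounded window of input tiles, the output tiles are independent tasks that can run in parallel, and a change to one input tile dirties only the few output tiles whose windows contain it.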

  9. Tasks and Tiles
     [Figure: task dependency graph connecting input tiles B[1–3] and E[1–3] to output tiles Out[0–3]]

  10. Results
      • A simple set of primitives for simultaneously expressing potential parallelism and incremental computation
      • An algorithm to
        • record the control/data dependencies of a deterministic parallel computation and cache the input/output effect of each task
        • repeat the computation while re-executing only tasks whose dependencies have changed, using cached results for unaffected tasks
      • Evaluation: 12x to 37x speedup compared to a sequential baseline on an 8-core machine
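The repeat step can be sketched as dirtiness propagation over the recorded dependency graph. This is an assumed simplification, not the paper's algorithm: tasks are visited in topological order, and a task re-executes only if something it depends on changed or re-executed.

```python
# Sketch of the repeat step (assumed data model, not the paper's algorithm):
# a task re-executes only if one of its recorded dependencies is dirty.
def repeat(tasks, changed):
    """tasks: list of (name, deps) in topological order.
    changed: set of input names whose recorded value changed.
    Returns the list of tasks that must re-execute."""
    dirty = set(changed)
    rerun = []
    for name, deps in tasks:
        if any(d in dirty for d in deps):
            rerun.append(name)   # a dependency changed: re-execute this task
            dirty.add(name)      # its output may differ, dirtying dependents
        # else: the cached result from the record run is reused
    return rerun

graph = [("t1", ["in_a"]), ("t2", ["in_b"]), ("t3", ["t1", "t2"])]
print(repeat(graph, {"in_b"}))  # only t2 and t3 re-run; t1 reuses its cache
```

A change to `in_b` re-executes `t2` and, transitively, `t3`, while `t1`'s cached result is reused unchanged, which is the source of the reported speedups on repeated runs.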
