Lecture 2 The Art of Concurrency
张奇, 复旦大学 (Zhang Qi, Fudan University)
COMP630030 Data Intensive Computing
并行让程序运行的更快 (Parallelism makes programs run faster)


Why Do I Need to Know This? What’s in It for Me?
  • There’s no way to avoid this topic
    • Multicore processors are here now and here to stay
    • “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software” (Dr. Dobb’s Journal, March 2005)
Isn’t Concurrent Programming Hard?
  • Concurrent programming is no walk in the park.
  • With a serial program
    • execution of your code takes a predictable path through the application.
  • Concurrent algorithms
    • require you to think about multiple execution streams running at the same time
Primer on Concurrent Programming
  • Concurrent programming is all about independent computations that the machine can execute in any order.
  • Not everything within an application will be independent, so you will still need to deal with serial execution amongst the concurrency.
Four Steps of a Threading Methodology
  • Step 1. Analysis: Identify Possible Concurrency
    • Find the parts of the application that contain independent computations.
    • Identify hotspots that might yield independent computations.
Four Steps of a Threading Methodology
  • Step 2. Design and Implementation: Threading the Algorithm
  • This step is what this book is all about
Four Steps of a Threading Methodology
  • Step 3. Test for Correctness: Detecting and Fixing Threading Errors
  • Step 4. Tune for Performance: Removing Performance Bottlenecks
Design Models for Concurrent Algorithms
  • The way you approach your serial code will influence how you reorganize the computations into a concurrent equivalent.
    • Task decomposition
      • Independent tasks that are assigned to threads
    • Data decomposition
      • Compute every element of the data independently.
Task Decomposition
  • The first step of any concurrent transformation is to identify computations that are completely independent.
  • Satisfy or remove dependencies
Example: numerical integration
  • What are the independent tasks in this simple application?
  • Are there any dependencies between these tasks and, if so, how can we satisfy them?
  • How should you assign tasks to threads?
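This task structure can be sketched in Python (an illustration, not from the slides; the function names are mine, and Python's GIL limits real CPU-bound speedup, so this shows the structure rather than the performance). Each thread integrates one subinterval of 4/(1+x²) over [0, 1], whose integral is π, and the joins satisfy the dependency before the partial results are combined:

```python
import threading

def integrate_chunk(f, a, b, steps, results, idx):
    """Midpoint-rule integral of f over [a, b] -- one independent task."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        total += f(a + (i + 0.5) * h) * h
    results[idx] = total  # each thread writes only its own slot: no data conflict

def parallel_integrate(f, a, b, steps=100_000, num_threads=4):
    results = [0.0] * num_threads
    width = (b - a) / num_threads
    threads = [threading.Thread(
                   target=integrate_chunk,
                   args=(f, a + i * width, a + (i + 1) * width,
                         steps // num_threads, results, i))
               for i in range(num_threads)]
    for t in threads: t.start()
    for t in threads: t.join()   # the final sum depends on every task finishing
    return sum(results)

pi_estimate = parallel_integrate(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
```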
Three key elements for any task decomposition design
  • What are the tasks and how are they defined?
  • What are the dependencies between tasks and how can they be satisfied?
  • How are the tasks assigned to threads?
Two criteria for the actual decomposition into tasks
  • There should be at least as many tasks as there will be threads (or cores).
  • The amount of computation within each task (granularity) must be large enough to offset the overhead that will be needed to manage the tasks and the threads.
What are the dependencies between tasks and how can they be satisfied?
  • Order dependency
    • some task relies on the completed results of the computations from another task
    • schedule tasks that have an order dependency onto the same thread
    • insert some form of synchronization to ensure correct execution order
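The synchronization option can be sketched with `threading.Event` in Python (an illustration; the names and the value 42 are arbitrary). The consumer task blocks until the producer task signals that its result is complete:

```python
import threading

produced = threading.Event()
shared = {}

def producer():
    shared["value"] = 42      # the upstream task computes its result first
    produced.set()            # then signals that the order dependency is satisfied

def consumer(out):
    produced.wait()           # block until the producer's result exists
    out.append(shared["value"] + 1)

out = []
c = threading.Thread(target=consumer, args=(out,))
p = threading.Thread(target=producer)
c.start(); p.start()
p.join(); c.join()
```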
What are the dependencies between tasks and how can they be satisfied?
  • Data dependency
    • assignment of values to the same variable that might be done concurrently
    • updates to a variable that could be read concurrently
    • create variables that are accessible only to a given thread.
    • Atomic Operation
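A minimal Python sketch (my own, not from the slides) of protecting a concurrent update: the lock makes the read-modify-write effectively atomic, standing in for a hardware atomic operation, so no increment is lost:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:            # serializes the read-modify-write on the shared variable
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```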
Data Decomposition
  • execution is dominated by a sequence of update operations on all elements of one or more large data structures.
  • these update computations are independent of each other
  • dividing up the data structure(s) and assigning those portions to threads, along with the corresponding update computations (tasks)
Three key elements for data decomposition design
  • How should you divide the data into chunks?
  • How can you ensure that the tasks for each chunk have access to all data required for updates?
  • How are the data chunks assigned to threads?
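These three elements can be illustrated in Python (a sketch with names of my own): the array is divided into near-equal contiguous chunks, each thread's task updates only its own slice, and chunks are assigned one per thread:

```python
import threading

def update_chunk(data, start, stop):
    # the update task for one chunk: it touches only its own slice
    for i in range(start, stop):
        data[i] = data[i] * data[i]

def parallel_update(data, num_threads=4):
    n = len(data)
    # divide the data into near-equal contiguous chunks, one per thread
    bounds = [(i * n // num_threads, (i + 1) * n // num_threads)
              for i in range(num_threads)]
    threads = [threading.Thread(target=update_chunk, args=(data, a, b))
               for a, b in bounds]
    for t in threads: t.start()
    for t in threads: t.join()
    return data

squares = parallel_update(list(range(10)))
```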
How should you divide the data into chunks?
  • Granularity of chunk
  • Shape of chunk
    • the shape determines which chunks are neighbors and how any data exchange between them is handled
  • Be more vigilant with chunks of irregular shapes
Example: Game of Life on a finite grid
  • What is the large data structure in this application and how can you divide it into chunks?
  • What is the best way to perform the division?
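One plausible division (a sketch with assumed details: a toroidal grid stored as a list of lists, row-block chunks, one thread per chunk) reads only the old generation, so the chunk updates stay independent even at chunk borders:

```python
import threading

def step_rows(old, new, r0, r1):
    """Next generation for rows r0..r1-1. Reads cross chunk borders,
    but all reads go to `old`, so chunks never conflict."""
    rows, cols = len(old), len(old[0])
    for r in range(r0, r1):
        for c in range(cols):
            live = sum(old[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            new[r][c] = 1 if live == 3 or (old[r][c] and live == 2) else 0

def parallel_life_step(old, num_threads=2):
    rows = len(old)
    new = [[0] * len(old[0]) for _ in range(rows)]
    bounds = [(i * rows // num_threads, (i + 1) * rows // num_threads)
              for i in range(num_threads)]
    threads = [threading.Thread(target=step_rows, args=(old, new, a, b))
               for a, b in bounds]
    for t in threads: t.start()
    for t in threads: t.join()
    return new

grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1      # a horizontal "blinker"
next_gen = parallel_life_step(grid)           # flips to a vertical blinker
```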
What’s Not Parallel
  • Algorithms with State
    • something kept around from one execution to the next.
    • For example, the seed to a random number generator or the file pointer for I/O would be considered state.
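For the RNG case, the usual fix is to give each thread its own generator object so the hidden seed state is no longer shared (a Python sketch; the seeds and names are assumptions of mine):

```python
import random
import threading

def sample_mean(seed, n, results, idx):
    rng = random.Random(seed)   # a private generator: its state belongs to this task
    results[idx] = sum(rng.random() for _ in range(n)) / n

results = [0.0] * 4
threads = [threading.Thread(target=sample_mean,
                            args=(1000 + i, 50_000, results, i))
           for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```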
What’s Not Parallel
  • Induction Variables
    • variables incremented on each pass through a loop, so each iteration's value depends on the previous one
What’s Not Parallel
  • Reduction
    • Reductions take a collection (typically an array) of data and reduce it to a single scalar value through some combining operation.
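Reductions parallelize by letting each thread combine its own chunk into a private partial result, then combining the few partials serially. A Python sketch (names mine) using max as the combining operation:

```python
import threading

def local_max(data, start, stop, partials, idx):
    m = data[start]
    for i in range(start + 1, stop):
        if data[i] > m:
            m = data[i]
    partials[idx] = m            # thread-private partial result

def parallel_max(data, num_threads=4):
    n = len(data)
    partials = [None] * num_threads
    bounds = [(i * n // num_threads, (i + 1) * n // num_threads)
              for i in range(num_threads)]
    threads = [threading.Thread(target=local_max,
                                args=(data, a, b, partials, i))
               for i, (a, b) in enumerate(bounds)]
    for t in threads: t.start()
    for t in threads: t.join()
    return max(partials)         # the final combine is a tiny serial step

biggest = parallel_max([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8])
```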
Rule 1: Identify Truly Independent Computations
  • It’s the crux of the whole matter!
Rule 2: Implement Concurrency at the Highest Level Possible
  • Two directions: bottom-up and top-down
  • bottom-up
    • consider threading the hotspots directly
    • If this is not possible, search up the call stack
  • top-down
    • first consider the whole application and what the computation is coded to accomplish
    • If there is no obvious concurrency, distill the parts of the computation until independent work emerges

Video encoding application: individual pixels → frames → videos

Rule 3: Plan Early for Scalability to Take Advantage of Increasing Numbers of Cores
  • Quad-core processors are becoming the default multicore chip.
  • Write flexible code that can take advantage of different numbers of cores.
  • C. Northcote Parkinson, “Data expands to fill the processing power available.”
Rule 4: Make Use of Thread-Safe Libraries Wherever Possible
  • Intel Math Kernel Library (MKL)
  • Intel Integrated Performance Primitives (IPP)
Rule 5: Use the Right Threading Model
  • Don’t use explicit threads if an implicit threading model (e.g., OpenMP or Intel Threading Building Blocks) has all the functionality you need.
Rule 7: Use Thread-Local Storage Whenever Possible or Associate Locks to Specific Data
  • Synchronization is overhead that does not contribute to the furtherance of the computation
  • You should actively seek to keep the amount of synchronization to a minimum.
  • Using storage that is local to threads or using exclusive memory locations
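Python exposes thread-local storage directly as `threading.local` (a sketch with names of my own; the one brief lock here exists only to publish each thread's final result, not on the hot path):

```python
import threading

tls = threading.local()          # each thread sees its own attributes on this object
totals = {}
publish = threading.Lock()

def work(name, items):
    tls.count = 0                # thread-local: no synchronization while accumulating
    for x in items:
        tls.count += x
    with publish:                # one short lock only to publish the final value
        totals[name] = tls.count

threads = [threading.Thread(target=work, args=(f"t{i}", range(i, i + 3)))
           for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```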
Rule 8: Dare to Change the Algorithm for a Better Chance of Concurrency
  • When choosing between two or more algorithms, programmers may rely on the asymptotic order of execution
  • An O(n log n) algorithm is expected to run faster than an O(n²) algorithm
  • If you cannot easily turn a hotspot into threaded code, consider transforming a suboptimal serial algorithm instead of the algorithm currently in the code
PRAM Algorithm

Can we use the PRAM algorithm for parallel sum in a threaded code?

A More Practical Algorithm
  • Divide the data array into chunks equal to the number of threads to be used.
  • Assign each thread a unique chunk and sum the values within the assigned subarray into a private variable.
  • Add these local partial sums to compute the total sum of the array elements.
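The three steps above can be sketched in Python (names mine; Python threads illustrate the structure rather than the speedup):

```python
import threading

def partial_sum(data, start, stop, partials, idx):
    s = 0                        # private accumulator: no sharing inside the loop
    for i in range(start, stop):
        s += data[i]
    partials[idx] = s

def parallel_sum(data, num_threads=4):
    n = len(data)
    partials = [0] * num_threads
    # step 1: divide the array into one chunk per thread
    bounds = [(i * n // num_threads, (i + 1) * n // num_threads)
              for i in range(num_threads)]
    # step 2: each thread sums its chunk into its own private slot
    threads = [threading.Thread(target=partial_sum,
                                args=(data, a, b, partials, i))
               for i, (a, b) in enumerate(bounds)]
    for t in threads: t.start()
    for t in threads: t.join()
    # step 3: combine the local partial sums serially
    return sum(partials)

total = parallel_sum(list(range(1, 101)))
```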
Prefix Scan
  • PRAM computation for prefix scan
Prefix Scan
  • A More Practical Algorithm
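The slide gives no details, but one common practical scheme (an assumption on my part, not necessarily the slide's) runs in three phases: each thread scans its own chunk, a short serial scan over the chunk totals produces per-chunk offsets, and each thread then adds its offset. A Python sketch for an inclusive scan:

```python
import threading

def scan_chunk(data, out, start, stop):
    run = 0
    for i in range(start, stop):
        run += data[i]
        out[i] = run                       # local inclusive scan within the chunk

def add_offset(out, start, stop, offset):
    for i in range(start, stop):
        out[i] += offset                   # shift the chunk by everything before it

def parallel_prefix_scan(data, num_threads=4):
    n = len(data)
    out = [0] * n
    bounds = [(i * n // num_threads, (i + 1) * n // num_threads)
              for i in range(num_threads)]
    # phase 1: independent local scans
    ts = [threading.Thread(target=scan_chunk, args=(data, out, a, b))
          for a, b in bounds]
    for t in ts: t.start()
    for t in ts: t.join()
    # phase 2: serial scan over the few chunk totals gives each chunk's offset
    offsets = [0] * num_threads
    for i in range(1, num_threads):
        offsets[i] = offsets[i - 1] + out[bounds[i - 1][1] - 1]
    # phase 3: independent offset additions
    ts = [threading.Thread(target=add_offset, args=(out, a, b, offsets[i]))
          for i, (a, b) in enumerate(bounds)]
    for t in ts: t.start()
    for t in ts: t.join()
    return out

scan = parallel_prefix_scan([1] * 8, num_threads=2)
```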