Parallel Compilers - PowerPoint PPT Presentation


Parallel Compilers

Ian Lehmann

Jordan Wright

What is it?
  • Find parallelism in sequential code and exploit it to reduce execution time, without requiring the programmer to modify the source code.
Why do we want it?
  • The goal of automatic parallelization is to relieve programmers from the tedious and error-prone manual parallelization process.
How does it work?
  • Two main methods:
    • Cyclic multi-threading: distribute a loop's iterations across worker threads
    • Pipelined multi-threading: split a loop's body into stages, with each stage running on its own thread
“Embarrassingly Parallel”
  • Search for loops with no dependences between iterations.
  • No synchronization needed between workers
  • Speedup proportional to the number of workers
“Embarrassingly Parallel”

Before:

    void foo()
    {
        for (int i = 0; i < N; i++)
            array[i] = work(array[i]);
    }

After:

    void task(int k, int M)
    {
        for (int i = k; i < N; i += M)
            array[i] = work(array[i]);
    }

    void foo()
    {
        start(task(0, 4));
        start(task(1, 4));
        start(task(2, 4));
        start(task(3, 4));
        wait();
    }

Difficulties
  • Dependence analysis is hard for code that uses indirect addressing, pointers, recursion, or indirect function calls.

    int factorial(int n)
    {
        if (n == 0)
            return 1;
        else
            return n * factorial(n - 1);
    }

Difficulties
  • Loops may have an unknown number of iterations, so the work cannot be divided among threads up front.

    for (int i = 0; i < N; i++)
    {
        if (array[i] > 5)
            break;
        array[i] += 5;
    }

Difficulties
  • Accesses to global resources such as memory allocation, I/O, and shared variables are difficult to coordinate.

    for (int i = 1; i < N; i++)
    {
        array[i] = work(array[i - 1]);
    }

Real World Projects
  • Intel Compiler
  • Harvard HELIX Research Project
Demo

    #include <cmath>

    int main()
    {
        const int N = 200000000;
        double *array = new double[N];
        for (int i = 0; i < N; i++) {
            array[i] = double(i) * double(i) + 1.0;
            array[i] = sqrt(array[i]);
        }
        delete[] array;
        return 0;
    }

Sources
  • http://en.wikipedia.org/wiki/Automatic_parallelization
  • http://helix.eecs.harvard.edu/index.php/Main_Page
  • http://software.intel.com/en-us/articles/automatic-parallelization-with-intel-compilers
  • http://en.wikipedia.org/wiki/Automatic_parallelization_tool