
CENG 532 - Distributed Computing Systems




  1. CENG 532 - Distributed Computing Systems: Measures of Performance

  2. Grosch’s Law (1960s)
• “To sell a computer for twice as much, it must be four times as fast”
• This held at the time, but it soon became meaningless
• After 1970, it became possible to build faster computers and sell them even cheaper
• Ultimately, switching speeds reach a physical limit: the speed of light on an integrated circuit

  3. Von Neumann’s Bottleneck
• Serial single-processor computer architectures based on John von Neumann’s design of the 1940s–1950s have one processor, a single control unit, and a single memory
• This model is no longer dominant: low-cost parallel computers can easily deliver the performance of the fastest single-processor computer

  4. Amdahl’s Law (1967) — Amdahl’s law is still valid!
• Let the speedup S be the ratio of serial time (one processor) to parallel time (N processors):
S = T1/TN < 1/f
where f is the serial fraction of the problem, 1-f is the parallel fraction, T1 is the one-processor (sequential) time, and TN is the N-processor (parallel) time.
• Proof of Amdahl’s law:
TN = T1·f + T1·(1-f)/N
S = T1/TN = 1/(f + (1-f)/N), thus S < 1/f
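The formula on this slide is easy to check numerically. A minimal sketch (the function name is illustrative, not from the slides):

```python
def amdahl_speedup(f, n):
    """Speedup for serial fraction f on n processors.

    From the slide: TN = T1*f + T1*(1-f)/N, so
    S = T1/TN = 1/(f + (1-f)/N).
    """
    return 1.0 / (f + (1.0 - f) / n)

# With f = 0.10 the speedup approaches, but never reaches, 1/f = 10:
print(amdahl_speedup(0.10, 10))       # ~5.26
print(amdahl_speedup(0.10, 1_000_000))  # just under 10
```

Note that no matter how large n grows, the result stays strictly below 1/f, which is exactly the bound S < 1/f stated above.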

  5. Amdahl’s Law (1967)
• At f = 0.10, Amdahl’s Law predicts at best a tenfold speedup, which is very pessimistic
• This bound was soon broken in practice, encouraged by the Gordon Bell Prize*
*Gordon Bell is a computer scientist who contributed to parallel computing while at DEC

  6. Gustafson-Barsis Law (1988)
• A team of researchers at Sandia Labs (John Gustafson and Ed Barsis), using a 1024-processor nCUBE/10, overturned Amdahl’s Law by achieving a 1000-fold speedup with f = 0.004 to 0.008
• According to Amdahl’s Law, the speedup should have been only 125 to 250
• The key point was that 1-f is not independent of N; the relationship between N and 1-f may not be linear. Parallel algorithms may perform better than their sequential counterparts

  7. Gustafson-Barsis Law (1988)
• They reinterpreted the speedup formula by scaling the problem up to fit the parallel machine:
T1 = f + (1-f)·N
• After redefining TN as TN = f + (1-f) = 1, the speedup can be computed as
S = T1/TN = (f + (1-f)·N)/1 = f + N - N·f = N - (N-1)·f

  8. Extreme-case analysis
• Assuming Amdahl’s Law, an upper and a lower bound can be given for the speedup:
N/log2 N <= S <= N
where the log2 N factor comes from divide-and-conquer algorithms
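A quick sketch of these two bounds, evaluated for a few machine sizes (function name is illustrative):

```python
import math

def speedup_bounds(n):
    """Bounds from the slide: N/log2(N) <= S <= N.

    The lower bound reflects divide-and-conquer behavior;
    the upper bound is the ideal linear speedup."""
    return n / math.log2(n), float(n)

for n in (16, 256, 1024):
    lo, hi = speedup_bounds(n)
    print(f"N={n}: {lo:.1f} <= S <= {hi:.0f}")
```

For N = 1024, log2(N) = 10, so the speedup is bounded between 102.4 and 1024.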

  9. Inclusion of the communication time
• Some researchers (e.g., Gelenbe) suggest approximating the speedup by S = 1/C(N), where C(N) is some function of N
• For example, C(N) can be estimated as C(N) = A + B·log2 N, where A and B are constants determined by the communication mechanisms
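A minimal sketch of this communication-aware model; the constants A and B below are made-up illustrative values, since the slide leaves them machine-dependent:

```python
import math

def speedup_with_comm(n, a, b):
    """S = 1/C(N) with C(N) = A + B*log2(N), per the slide.

    a, b: constants set by the machine's communication mechanisms
    (the values used below are purely illustrative)."""
    return 1.0 / (a + b * math.log2(n))

# As N grows, log-scaling communication cost erodes the speedup:
for n in (64, 1024, 65536):
    print(n, speedup_with_comm(n, a=0.001, b=0.0001))
```

The model captures the qualitative point of the slide: adding processors increases C(N) logarithmically, so the achievable speedup degrades rather than scaling linearly.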

  10. Benchmark Performance
• A benchmark is a program whose purpose is to measure a performance characteristic of a computer system, such as floating-point speed, I/O speed, or performance on a restricted class of problems
• Benchmarks are typically either
• kernels of real applications, such as Linpack or the Livermore Loops, or
• synthetic programs approximating the behavior of real problems, such as Whetstone and Wichmann. These synthetic benchmarks consist of artificial kernels intended to represent the computationally intensive parts of certain scientific codes, and have been in use since 1972
