
# CMPT 128: Introduction to Computing Science for Engineering Students - PowerPoint PPT Presentation

CMPT 128: Introduction to Computing Science for Engineering Students. Running Time, Big O Notation. How fast is my algorithm? We have seen that there are many algorithms to solve any given problem. How do we choose the most efficient? What is efficient?



Presentation Transcript

### CMPT 128: Introduction to Computing Science for Engineering Students

Running Time

Big O Notation

• We have seen that there are many algorithms to solve any given problem.

• How do we choose the most efficient?

• What is efficient?

• One measure is how fast our algorithm can determine the solution

• This is not the only measure, nor is it always the best measure

• How do we measure ‘how fast’?

• What contributes to how fast a program runs?

• The speed at which the CPU can process operations

• The efficiency of your code (the number of operations needed to complete your calculations)

• This depends on the algorithm used

• This depends on the particular implementation of the algorithm

• This may depend on the size of the data set being analyzed

• How many other things your computer is doing at the same time

• Two approaches

• determine an upper limit on the number of operations needed

• Know the speed of your CPU

• Calculate an upper limit on running time for an input data set of a particular size

• Implement your algorithm and make measurements of how long it takes to run for data sets of varying sizes

• Create a common baseline, run tests on same machine with same background load

• Disadvantage: you have already spent the time coding and testing; if the algorithm turns out to be impractical, that effort may have been wasted

• Consider the operations used in your code

• +, -, *, /, %, <, <=, >, >=, ==, =, !=, &, !, &&, || …

• Make the simplifying assumption that each of these operations takes the same length of time to execute

• Now we just need to count the operations in your program to get an estimate of ‘how fast’ it will run

• This estimate is independent of the machine on which the code runs.

• Once we know the time taken by an ‘operation’ on our machine, we immediately know how long our code will take

• Simple linear or branching code

```cpp
if (neighborcount > 3 || neighborcount < 2)
  { nextGenBoard[indexrow][indexcol] = '.'; }
else if (neighborcount == 3)
  { nextGenBoard[indexrow][indexcol] = organism; }
else
  { nextGenBoard[indexrow][indexcol] = lifeBoard[indexrow][indexcol]; }
```

• The first if executes three operations, >, ||, and <

• If the first if is true, then the block of code above executes 6 operations (the 3 in the test plus [ ], [ ], and =)

• If the first if is false, then the else if test adds one more operation, == (giving a total of 3 + 1 = 4 operations)

```cpp
if (neighborcount > 3 || neighborcount < 2)
  { nextGenBoard[indexrow][indexcol] = '.'; }
else if (neighborcount == 3)
  { nextGenBoard[indexrow][indexcol] = organism; }
else
  { nextGenBoard[indexrow][indexcol] = lifeBoard[indexrow][indexcol]; }
```

• If the first if and the else if are both false, the last line of code executes 5 operations ([ ], [ ], =, [ ], [ ]). In this case the block of code above executes a total of 4 + 5 = 9 operations

• If the else if is true, then the block of code above executes a total of 4 + 3 = 7 operations (3 operations: [ ], [ ], =)

• The worst-case number of operations is therefore 9
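The path-by-path counts above can be checked with a small harness. This is an illustrative sketch, not the course's code: `branchOpCount` is a hypothetical helper that simply adds up the operation counts the analysis assigns to whichever path the branch takes.

```cpp
#include <cassert>

// Hypothetical harness: add up the operation counts the analysis assigns
// to each path through the if / else if / else block above.
int branchOpCount(int neighborcount)
{
    int ops = 3;                  // the first test always runs: >, ||, <
    if (neighborcount > 3 || neighborcount < 2)
    {
        ops += 3;                 // [ ], [ ], = in the '.' assignment
    }
    else
    {
        ops += 1;                 // the else if test: ==
        if (neighborcount == 3)
        {
            ops += 3;             // [ ], [ ], =
        }
        else
        {
            ops += 5;             // [ ], [ ], =, [ ], [ ]
        }
    }
    return ops;
}
```

Calling it with a cell that has 5, 3, or 2 neighbours exercises the three paths and reproduces the counts 6, 7, and 9 derived above.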

• While loop

```cpp
count = 0;
while (count < n)
{
    localSum = dataArray[count] + 2 * localSum;
    count++;
}
```

• n determines the number of times through the while loop.

• Each time through the while loop, the body of the loop executes 5 operations (=, [ ], +, *, ++)

• Each time through the while loop the test is also executed, which adds another operation (total operations per pass through the loop: 5 + 1 = 6)

• While loop

```cpp
count = 0;
while (count < n)
{
    localSum = dataArray[count] + 2 * localSum;
    count++;
}
```

• Total operations each time through loop is 6

• The initialization of count takes one operation before the loop begins executing

• The loop is executed n times

• The number of operations is 6*n + 1

• While loop

```cpp
count = 0;
while (count < n)
{
    localSum = dataArray[count] + 2 * localSum;
    count++;
}
```

• The number of operations is 6*n + 1

• The test in the while is executed one additional time at the end of the loop

• The number of operations is 6*n + 2
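As a sanity check on the closed form, a counter can be threaded through the loop itself. The sketch below (with the made-up name `whileLoopOpCount`) charges one operation for the initialization, one for every execution of the test, and five for each pass through the body, so it should return 6*n + 2 for any n ≥ 0.

```cpp
#include <cassert>

// Illustrative sketch (hypothetical helper, not course code): tally the
// operations the analysis assigns to the while loop, then compare the
// result with the closed form 6*n + 2.
long whileLoopOpCount(long n)
{
    long ops = 1;          // count = 0 before the loop: one operation
    long count = 0;
    while (true)
    {
        ops += 1;          // the test count < n runs on every entry, including the final failing one
        if (!(count < n)) break;
        ops += 5;          // =, [ ], +, *, ++ in the loop body
        count++;
    }
    return ops;
}
```

Note that even for n = 0 the test runs once, which is why the formula is 6*n + 2 rather than 6*n + 1.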


• Nested Loops

```cpp
for (k = 0; k < n; k++)
{
    for (j = 0; j < n; j++)
    {
        matOut[k] = matIn1[k] * matIn2[j] + matIn1[j] * matIn2[k];
    }
}
```

• The value of n (the number of rows and columns) determines the number of operations.

• A count of the number of operations will be a function of n

• The body of the inner loop contains 9 operations, so n passes through the inner loop body take 9*n operations

• In order, the operations are [ ], =, [ ], *, [ ], +, [ ], *, [ ]

• Nested loops

```cpp
for (k = 0; k < n; k++)
{
    for (j = 0; j < n; j++)
    {
        matOut[k] = matIn1[k] * matIn2[j] + matIn1[j] * matIn2[k];
    }
}
```

• The value of n (the number of rows and columns) determines the number of operations.

• Each time the inner for statement is executed, 2 more operations occur (<, ++), so one complete pass through the inner loop takes 11*n operations

• Before starting the inner loop, the initialization j = 0 takes place: one more operation

• Therefore, each pass through the body of the outer loop takes 11*n + 1 operations

• Nested loops

```cpp
for (k = 0; k < n; k++)
{
    for (j = 0; j < n; j++)
    {
        matOut[k] = matIn1[k] * matIn2[j] + matIn1[j] * matIn2[k];
    }
}
```

• The value of n (the number of rows and columns) determines the number of operations.

• Each time the outer for statement is executed, 2 more operations occur (<, ++), so each pass through the outer loop takes 11*n + 3 operations, for a total of (11*n + 3) * n operations

• Before starting the outer loop, the initialization k = 0 takes place: one more operation

• Total number of operations: 1 + (11*n + 3) * n = 11n² + 3n + 1

• Nested loops

```cpp
for (k = 0; k < n; k++)
{
    for (j = 0; j < n; j++)
    {
        matOut[k] = matIn1[k] * matIn2[j] + matIn1[j] * matIn2[k];
    }
}
```

• The value of n (the number of rows and columns) determines the number of operations.

• Counting more carefully: each time the outer for statement is executed, 3 more operations occur (<, ++, and the one extra inner-loop test that runs when the inner loop exits), so each pass through the outer loop takes 11*n + 4 operations, for a total of (11*n + 4) * n operations

• Before the outer loop starts, the initialization k = 0 takes place (one more operation), and the outer-loop test executes one additional time when the loop exits (one more)

• Total number of operations: 2 + (11*n + 4) * n = 11n² + 4n + 2
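The same bookkeeping can be applied mechanically to the nested loops. In this sketch (`nestedLoopOpCount` is a hypothetical helper), every loop test, initialization, increment, and body operation is tallied exactly as in the bullets above, so the result should equal 11n² + 4n + 2.

```cpp
#include <cassert>

// Illustrative sketch (hypothetical helper): tally every operation the
// analysis charges to the nested loops; compare with 11*n*n + 4*n + 2.
long nestedLoopOpCount(long n)
{
    long ops = 1;                  // k = 0: one operation
    for (long k = 0; ; k++)
    {
        ops += 1;                  // outer test k < n (runs n + 1 times)
        if (!(k < n)) break;
        ops += 1;                  // j = 0
        for (long j = 0; ; j++)
        {
            ops += 1;              // inner test j < n (runs n + 1 times per outer pass)
            if (!(j < n)) break;
            ops += 9;              // [ ], =, [ ], *, [ ], +, [ ], *, [ ]
            ops += 1;              // j++
        }
        ops += 1;                  // k++
    }
    return ops;
}
```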

• Estimate the order of the number of calculations needed

• Order is the largest power of n in the estimated upper limit of the number of operations

• For most n (amounts of data) it is generally true that an order n^k algorithm is significantly faster than an order n^(k+1) algorithm

• An algorithm with order n operations is said to run in linear time

• An algorithm with order n² operations is said to run in quadratic time

• Looking for a ‘good’ upper limit

• Just consider the Order.

• The order is the largest power of n

• First example: 9 operations

• O(9) = O(1): order 0 (the count is not a function of n)

• Second example: 6*n + 1 operations

• O(6*n + 1) = O(n): order 1 (largest power of n is 1)

• Third example: 11n² + 3n + 1 operations

• O(11n² + 3n + 1) = O(n²): order 2 (largest power of n is 2)
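One way to see that only the largest power of n matters is to compare counts as n doubles: an order 1 count roughly doubles, while an order 2 count roughly quadruples. The sketch below recovers the order from that ratio; `estimateOrder` and the two count functions are illustrative names, using the closed forms derived earlier.

```cpp
#include <cmath>

// Closed forms from the earlier examples (hypothetical helper names).
long linearCount(long n)    { return 6 * n + 2; }              // while-loop example
long quadraticCount(long n) { return 11 * n * n + 4 * n + 2; } // nested-loop example

// If count(2n) / count(n) is about 2^k for large n, the order is k.
int estimateOrder(long (*countAtN)(long), long n)
{
    double ratio = static_cast<double>(countAtN(2 * n)) /
                   static_cast<double>(countAtN(n));
    return static_cast<int>(std::lround(std::log2(ratio)));
}
```

For n = 1000, `estimateOrder(linearCount, 1000)` gives 1 and `estimateOrder(quadraticCount, 1000)` gives 2: the lower-order terms have become negligible.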

• How good are our estimates?

• The estimates we have made are worst case estimates.

• In some cases algorithms will finish much faster if input data has particular properties

• Be careful: the estimate is only as good as the assumptions behind it

• We can directly measure ‘how fast’ for particular types of data sets of particular sizes

• You are doing this in your lab

• This is still only an approximation of performance in the general case, over a wider variety of sizes

• Measure for some sizes

• Fit the results with a curve, then use the curve to predict the expected performance at other sizes
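As a minimal sketch of that last step, assuming the algorithm is already known to be quadratic, one can fit T(n) ≈ c·n² by averaging T(n)/n² over the measured points and then extrapolate with the fitted coefficient. The function names and any sample data are made up for illustration.

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical helper: fit T(n) ≈ c * n^2 to measured (n, time) pairs by
// averaging time / n^2 across the samples (valid only if the algorithm is
// believed to be quadratic).
double fitQuadraticCoeff(const std::vector<std::pair<long, double>>& samples)
{
    double sum = 0.0;
    for (const auto& s : samples)
    {
        sum += s.second / (static_cast<double>(s.first) * static_cast<double>(s.first));
    }
    return sum / static_cast<double>(samples.size());
}

// Predicted running time at size n using the fitted coefficient.
double predictQuadratic(double c, long n)
{
    return c * static_cast<double>(n) * static_cast<double>(n);
}
```

With made-up measurements like (100, 1.0 s), (200, 4.0 s), (400, 16.0 s), the fit gives c = 10⁻⁴ and predicts about 100 s at n = 1000.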