CS203 Lecture 6

John Hurley

Cal State LA


Execution Time

Suppose two algorithms perform the same task, such as search (linear search vs. binary search) or sorting (selection sort vs. insertion sort). Which one is better? One possible approach to answering this question is to implement these algorithms in Java and run the programs to obtain their execution times. But there are two problems with this approach:

  • First, there are many tasks running concurrently on a computer. The execution time of a particular program is dependent on the system load.

  • Second, the execution time is dependent on specific input. Consider linear search and binary search for example. If an element to be searched happens to be the first in the list, linear search will find the element quicker than binary search.



Growth Rate

It is very difficult to compare algorithms by measuring their execution time. To overcome these problems, a theoretical approach was developed to analyze algorithms independently of computers and of specific input.

This approach approximates the effect that a change in input size has on execution time. In this way, you can see how fast an algorithm's execution time increases as the input size increases, so you can compare two algorithms by examining their growth rates.



Big O Notation

Consider linear search. The linear search algorithm compares the key with the elements in the array sequentially until the key is found or the array is exhausted. If the key is not in the array, it requires n comparisons for an array of size n. If the key is in the array, it requires n/2 comparisons on average.

The algorithm’s execution time is proportional to the size of the array. If you double the size of the array, you expect the number of comparisons to double. The algorithm grows at a linear rate; the growth rate has an order of magnitude of n. Computer scientists use the Big O notation as an abbreviation for “order of magnitude.” Using this notation, the complexity of the linear search algorithm is O(n), pronounced “order of n.”
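For reference (a minimal sketch, not taken from the slides), linear search in Java:

    /** Returns the index of key in list, or -1 if key is not present.
        Worst case: n comparisons; average case when key is present: about n/2. */
    public static int linearSearch(int[] list, int key) {
        for (int i = 0; i < list.length; i++) {
            if (list[i] == key)
                return i;         // found after i + 1 comparisons
        }
        return -1;                // not found: n comparisons were made
    }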



Best, Worst, and Average

For the same input size, an algorithm’s execution time may vary, depending on the input. An input that results in the shortest execution time is called the best-case input and an input that results in the longest execution time is called the worst-case input.

Best-case and worst-case are not representative, but worst-case analysis is very useful. You can show that the algorithm will never be slower than the worst-case. An average-case analysis attempts to determine the average amount of time among all possible input of the same size.

Average-case analysis is ideal but difficult to perform, because it is hard to determine the relative probabilities and distributions of the various input instances for many problems. Worst-case analysis is easier to perform, so the analysis is generally conducted for the worst case.



Ignore Multiplicative Constants

The linear search algorithm requires n comparisons in the worst case and n/2 comparisons in the average case. Using the Big O notation, both cases require O(n) time. The multiplicative constant (1/2) can be omitted. Algorithm analysis is focused on growth rate, and multiplicative constants have no impact on growth rates. The growth rate for n/2 or 100n is the same as for n, i.e., O(n) = O(n/2) = O(100n).



Ignore Non-Dominating Terms

Consider the algorithm for finding the maximum number in an array of n elements. If n is 2, it takes one comparison to find the maximum number. If n is 3, it takes two comparisons. In general, it takes n - 1 comparisons to find the maximum number in a list of n elements.

Algorithm analysis is concerned with large input sizes. If the input size is small, there is little significance in estimating an algorithm’s efficiency. As n grows larger, the n part in the expression n - 1 dominates the complexity. The Big O notation allows you to ignore the non-dominating part (e.g., the -1 in n - 1) and highlight the important part (e.g., the n in n - 1). So, the complexity of this algorithm is O(n).
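As an illustration (a sketch, not from the slides), the maximum-finding algorithm described above:

    /** Returns the largest element; makes exactly list.length - 1 comparisons. */
    public static int max(int[] list) {
        int max = list[0];
        for (int i = 1; i < list.length; i++) {   // n - 1 iterations
            if (list[i] > max)
                max = list[i];
        }
        return max;
    }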



Useful Mathematic Summations
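The formulas themselves are not reproduced in this transcript. The summations most commonly used in this kind of analysis (the first of which appears in the nested-loop example below) are:

    1 + 2 + 3 + … + n = n(n + 1)/2
    a^0 + a^1 + a^2 + … + a^n = (a^(n+1) - 1)/(a - 1), for a ≠ 1
    2^0 + 2^1 + 2^2 + … + 2^n = 2^(n+1) - 1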


Repetition: Simple Loops

    for (i = 1; i <= n; i++) {   // executed n times
        k = k + 5;               // constant time
    }

Time Complexity

T(n) = (a constant c) * n = cn = O(n)

Ignore multiplicative constants (e.g., “c”).


Repetition: Nested Loops

    for (i = 1; i <= n; i++) {       // outer loop executed n times
        for (j = 1; j <= n; j++) {   // inner loop executed n times
            k = k + i + j;           // constant time
        }
    }

Time Complexity

T(n) = (a constant c) * n * n = cn² = O(n²)

Ignore multiplicative constants (e.g., “c”).


Repetition: Nested Loops

    for (i = 1; i <= n; i++) {       // outer loop executed n times
        for (j = 1; j <= i; j++) {   // inner loop executed i times
            k = k + i + j;           // constant time
        }
    }

Time Complexity

T(n) = c + 2c + 3c + 4c + … + nc = cn(n+1)/2 = (c/2)n² + (c/2)n = O(n²)

Ignore non-dominating terms.

Ignore multiplicative constants.


Repetition: Nested Loops

    for (i = 1; i <= n; i++) {        // outer loop executed n times
        for (j = 1; j <= 20; j++) {   // inner loop executed 20 times
            k = k + i + j;            // constant time
        }
    }

Time Complexity

T(n) = 20 * c * n = O(n)

Ignore multiplicative constants (e.g., 20 * c).


Sequence

    for (j = 1; j <= 10; j++) {       // first loop executed 10 times
        k = k + 4;
    }

    for (i = 1; i <= n; i++) {        // outer loop executed n times
        for (j = 1; j <= 20; j++) {   // inner loop executed 20 times
            k = k + i + j;
        }
    }

Time Complexity

T(n) = c * 10 + 20 * c * n = O(n)


Selection

    if (list.contains(e)) {           // the test list.contains(e) is O(n)
        System.out.println(e);
    }
    else {
        for (Object t: list) {        // executed n times, where n is list.size()
            System.out.println(t);
        }
    }

Time Complexity

T(n) = test time + worst-case (if, else)
     = O(n) + O(n)
     = O(n)


Constant Time

The Big O notation estimates the execution time of an algorithm in relation to the input size. If the time is not related to the input size, the algorithm is said to take constant time, with the notation O(1). For example, a method that retrieves an element at a given index in an array takes constant time, because the retrieval time does not grow as the size of the array increases.



Implementation of Data Structures

In the last couple of weeks, we have covered various data structures that are implemented in the Java Collections Framework. As a working programmer, you may often use these, and most of the rest of the time, you will use various libraries that supply alternatives. You will not often have to implement the data structures yourself.

However, in order to understand how these structures work at a lower level, we will cover implementation this week.



Lists

A list stores data in sequential order. For example, a list of students, available rooms, cities, or books can be stored using a list. The common operations on a list are usually the following:

  • Retrieve an element from the list.
  • Insert a new element into the list.
  • Delete an element from the list.
  • Find how many elements are in the list.
  • Find whether an element is in the list.
  • Find whether the list is empty.



Two Ways to Implement Lists

There are two common ways to implement a list in Java.

  • Using an array. Store the elements in a dynamically created array. If the capacity of the array is exceeded, create a new, larger array and copy all the elements from the current array to the new array.

  • Using a linked structure. A linked structure consists of nodes; each node is dynamically created to hold an element, and all the nodes are linked together to form a list.



Two Ways to Implement Lists

Sample code is linked from the course page.


ArrayList and LinkedList

For convenience, let’s name these two classes MyArrayList and MyLinkedList. The two classes have common operations but different data fields. The common operations can be generalized in an interface or an abstract class. A good strategy is to combine the virtues of interfaces and abstract classes by providing both in the design, so the user can use whichever is convenient. Such an abstract class is known as a convenience class.



MyList Interface and MyAbstractList Class

(class diagram of the MyList interface and the MyAbstractList abstract class)
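The diagram is not reproduced in this transcript. As a rough sketch (the exact method set shown on the slide is an assumption), the MyList interface captures the operations listed earlier:

    public interface MyList<E> {
        public void add(E e);              // append e to the end of the list
        public void add(int index, E e);   // insert e at the given index
        public boolean contains(E e);      // true if e is in the list
        public E get(int index);           // return the element at index
        public E remove(int index);        // remove and return the element at index
        public int size();                 // number of elements in the list
        public boolean isEmpty();          // true if the list has no elements
    }

MyAbstractList would then implement MyList and supply the methods that do not depend on the storage strategy, leaving the rest to MyArrayList and MyLinkedList.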


Array List

Once an array is created, its size cannot be changed. Nevertheless, you can still use an array to implement dynamic data structures. The trick is to create a new, larger array to replace the current array whenever the current array cannot hold a new element.

Initially, an array, say data of type Object[], is created with a default size. When inserting a new element into the array, first ensure there is enough room. If not, create a new array twice the size of the current one, copy the elements from the current array to the new array, and let the new array become the current array.



Insertion

To insert a new element at a specified index, shift all the elements after the index one position to the right, place the new element at the index, and increase the list size by 1.



Deletion

To remove an element at a specified index, shift all the elements after the index to the left by one position and decrease the list size by 1.



Implementing MyArrayList
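The sample code itself is not included in this transcript. A minimal sketch consistent with the preceding slides (the class name, field names, and initial capacity are assumptions, not the course's actual MyArrayList):

    import java.util.Arrays;

    public class SimpleArrayList<E> {
        private Object[] data = new Object[16];   // default capacity (assumed)
        private int size = 0;                      // number of elements actually stored

        /** Create a new, larger array if the current one is full. */
        private void ensureCapacity() {
            if (size >= data.length)
                data = Arrays.copyOf(data, data.length * 2 + 1);
        }

        /** Insert e at index: shift the elements after index one position to the right. */
        public void add(int index, E e) {
            ensureCapacity();
            System.arraycopy(data, index, data, index + 1, size - index);
            data[index] = e;
            size++;
        }

        /** Remove and return the element at index: shift later elements to the left. */
        @SuppressWarnings("unchecked")
        public E remove(int index) {
            E removed = (E) data[index];
            System.arraycopy(data, index + 1, data, index, size - index - 1);
            size--;
            data[size] = null;                     // drop the stale reference
            return removed;
        }

        @SuppressWarnings("unchecked")
        public E get(int index) {                  // O(1): direct array indexing
            return (E) data[index];
        }

        public int size() {
            return size;
        }
    }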


Linked Lists

Since MyArrayList is implemented using an array, the methods get(int index) and set(int index, Object o) for accessing and modifying an element through an index, and add(Object o) for adding an element at the end of the list, are efficient.

However, the methods add(int index, Object o) and remove(int index) are inefficient because they require shifting a potentially large number of elements. You can use a linked structure to implement a list to improve efficiency for adding and removing an element anywhere in the list.



Nodes in Linked Lists

A linked list consists of nodes. Each node contains an element, and each node is linked to its next neighbor. Thus a node can be defined as a class, as follows:

    class Node<E> {
        E element;       // the data stored in this node
        Node<E> next;    // reference to the next node; null for the last node

        public Node(E o) {
            element = o;
        }
    }



Adding Three Nodes

The variable head refers to the first node in the list, and the variable tail refers to the last node in the list. If the list is empty, both are null. For example, you can create three nodes to store three strings in a list, as follows:

Step 1: Declare head and tail:



Adding Three Nodes, cont.

Step 2: Create the first node and insert it into the list:



Adding Three Nodes, cont.

Step 3: Create the second node and insert it into the list:



Adding Three Nodes, cont.

Step 4: Create the third node and insert it into the list:
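The code and diagrams for these steps are not included in the transcript. A sketch using the Node class defined above (the string values are only examples):

    Node<String> head = null;                  // Step 1: declare head and tail; the list is empty
    Node<String> tail = null;

    head = new Node<String>("first");          // Step 2: the first node
    tail = head;                               // head and tail both refer to the only node

    tail.next = new Node<String>("second");    // Step 3: link the second node after the tail
    tail = tail.next;                          // the second node becomes the tail

    tail.next = new Node<String>("third");     // Step 4: link the third node after the tail
    tail = tail.next;                          // the third node becomes the tail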



Traversing All Elements in the List

Each node contains the element and a data field named next that points to the next element. If the node is the last in the list, its pointer data field next contains the value null. You can use this property to detect the last node. For example, you may write the following loop to traverse all the nodes in the list.

    Node<E> current = head;
    while (current != null) {
        System.out.println(current.element);
        current = current.next;    // advance to the next node
    }



MyLinkedList

(Sample code: MyLinkedList, TestMyLinkedList.)


Implementing addFirst(E o)

    public void addFirst(E o) {
        Node<E> newNode = new Node<E>(o);   // create the new node
        newNode.next = head;                // link the new node to the old head
        head = newNode;                     // the new node becomes the head
        size++;
        if (tail == null)                   // the list was empty
            tail = head;
    }



Implementing addLast(E o)

    public void addLast(E o) {
        if (tail == null) {                 // the list is empty
            head = tail = new Node<E>(o);
        }
        else {
            tail.next = new Node<E>(o);     // link the new node after the tail
            tail = tail.next;               // the new node becomes the tail
        }
        size++;
    }



Implementing add(int index, E o)

    public void add(int index, E o) {
        if (index == 0) addFirst(o);
        else if (index >= size) addLast(o);
        else {
            Node<E> current = head;
            for (int i = 1; i < index; i++)    // stop at the node before index
                current = current.next;
            Node<E> temp = current.next;
            current.next = new Node<E>(o);     // splice the new node in
            (current.next).next = temp;
            size++;
        }
    }



Implementing removeFirst()

    public E removeFirst() {
        if (size == 0) return null;           // nothing to remove
        else {
            Node<E> temp = head;
            head = head.next;                 // the second node becomes the head
            size--;
            if (head == null) tail = null;    // the list is now empty
            return temp.element;
        }
    }



Implementing removeLast()

    public E removeLast() {
        if (size == 0) return null;               // nothing to remove
        else if (size == 1) {                     // only one node in the list
            Node<E> temp = head;
            head = tail = null;
            size = 0;
            return temp.element;
        }
        else {
            Node<E> current = head;
            for (int i = 0; i < size - 2; i++)    // stop at the second-to-last node
                current = current.next;
            Node<E> temp = tail;
            tail = current;                       // the second-to-last node becomes the tail
            tail.next = null;
            size--;
            return temp.element;
        }
    }



Implementing remove(int index)

    public E remove(int index) {
        if (index < 0 || index >= size) return null;
        else if (index == 0) return removeFirst();
        else if (index == size - 1) return removeLast();
        else {
            Node<E> previous = head;
            for (int i = 1; i < index; i++) {     // stop at the node before index
                previous = previous.next;
            }
            Node<E> current = previous.next;      // the node to remove
            previous.next = current.next;         // bypass it
            size--;
            return current.element;
        }
    }



Time Complexity for ArrayList and LinkedList
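The comparison table on this slide is not included in the transcript. Based on the implementations shown above, the usual summary is (worst case unless noted):

    Operation                     Array list         Linked list
    get(index), set(index, o)     O(1)               O(n)
    add(o) at the end             O(1) amortized     O(1)
    addFirst(o), removeFirst()    O(n)               O(1)
    add(index, o)                 O(n)               O(n)
    remove(index)                 O(n)               O(n)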


Circular Linked Lists

  • A circular, singly linked list is like a singly linked list, except that the pointer of the last node points back to the first node.



Doubly Linked Lists

  • A doubly linked list contains nodes with two pointers: one points to the next node and the other points to the previous node. These two pointers are conveniently called the forward pointer and the backward pointer, so a doubly linked list can be traversed forward and backward.
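A minimal sketch of such a node, extending the Node class shown earlier (the field name previous is an assumption):

    class Node<E> {
        E element;
        Node<E> next;        // forward pointer: the next node in the list
        Node<E> previous;    // backward pointer: the previous node in the list

        public Node(E o) {
            element = o;
        }
    }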



Circular Doubly Linked Lists

  • A circular, doubly linked list is like a doubly linked list, except that the forward pointer of the last node points to the first node and the backward pointer of the first node points to the last node.



Stacks

A stack can be viewed as a special type of list, where the elements are accessed, inserted, and deleted only from the end, called the top, of the stack.



Queues

A queue represents a waiting list. A queue can be viewed as a special type of list, where the elements are inserted into the end (tail) of the queue, and are accessed and deleted from the beginning (head) of the queue.



Implementing Stacks and Queues

  • Use an array list to implement Stack

  • Use a linked list to implement Queue

    Since the insertion and deletion operations on a stack are made only at the end of the stack, using an array list to implement a stack is more efficient than a linked list. Since deletions are made at the beginning of the list, it is more efficient to implement a queue using a linked list than an array list. This section implements a stack class using an array list and a queue using a linked list.



Design of the Stack and Queue Classes

There are two ways to design the stack and queue classes:

  • Using inheritance: You can define the stack class by extending the array list class, and the queue class by extending the linked list class.

  • Using composition: You can define an array list as a data field in the stack class, and a linked list as a data field in the queue class.



Composition is Better

Both designs are fine, but using composition is better because it enables you to define completely new stack and queue classes without inheriting unnecessary and inappropriate methods from the array list and linked list.
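As a sketch of the composition approach for the stack (the class name GenericStack appears on the next slide; the method names and the use of java.util.ArrayList here are assumptions):

    import java.util.ArrayList;

    public class GenericStack<E> {
        private ArrayList<E> list = new ArrayList<E>();   // composition: the stack has a list

        public void push(E o) {
            list.add(o);                          // insert at the end, i.e. the top of the stack
        }

        public E pop() {
            return list.remove(list.size() - 1);  // remove from the end: no shifting needed
        }

        public E peek() {
            return list.get(list.size() - 1);
        }

        public int getSize() {
            return list.size();
        }

        public boolean isEmpty() {
            return list.isEmpty();
        }
    }

A queue class could be built the same way around a linked list, removing from the head and appending at the tail.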



MyStack and MyQueue

(Sample code: GenericStack, GenericQueue.)


Priority Queue

A regular queue is a first-in, first-out data structure: elements are appended to the end of the queue and removed from the beginning. In a priority queue, elements are assigned priorities, and when accessing elements, the element with the highest priority is removed first. A priority queue has largest-in, first-out behavior. For example, the emergency room in a hospital assigns patients priority numbers; the patient with the highest priority is treated first.



Heap

The term "Heap" is used in several different ways. The Priority Queue example code uses a heap that adheres to this definition:

A heap is a binary tree (one in which each node has at most two children) with these two properties:

Order property:

If A is a parent node of B, then the key of node A is ordered with respect to the key of node B, with the same ordering applying across the heap: either the keys of parent nodes are always greater than or equal to those of the children and the highest key is in the root node (max heap), or the keys of parent nodes are less than or equal to those of the children and the lowest key is in the root node (min heap).

Shape property:

1. All leaves are at depth d or d-1 (for some value d).

2. All of the leaves at depth d-1 are to the right of the leaves at depth d.

3. There is at most one node with just one child; any such child is the left child of its parent, and it is the rightmost leaf at depth d.





Representing a Heap

A heap can be stored in an array or array list. For a node at position i, its left child is at position 2i+1, its right child is at position 2i+2, and its parent is at position (i-1)/2 (using integer division). For example, the node for element 39 is at position 4, so its left child (element 14) is at 9 (2*4+1), its right child (element 33) is at 10 (2*4+2), and its parent (element 42) is at 1 ((4-1)/2).
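Assuming the heap is stored this way, the index arithmetic is just (helper names are mine, not from the slides):

    int leftChild(int i)  { return 2 * i + 1; }
    int rightChild(int i) { return 2 * i + 2; }
    int parent(int i)     { return (i - 1) / 2; }   // integer division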



Adding Elements to the Heap

Adding 3, 5, 1, 19, 11, and 22 to a heap, initially empty



Rebuild the Heap after Adding a New Node

Adding 88 to the heap
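The slides show this step with diagrams. As a minimal sketch, assuming the heap is kept in a java.util.ArrayList<E> called list, with E extends Comparable<E> and larger keys having higher priority (a max-heap):

    // After adding the new element at the end of list, swap it with its parent
    // while it is larger than its parent ("sift up").
    private void siftUp(int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (list.get(i).compareTo(list.get(parent)) > 0) {
                E temp = list.get(i);                // swap the node with its parent
                list.set(i, list.get(parent));
                list.set(parent, temp);
                i = parent;                          // continue from the parent's position
            }
            else
                break;                               // heap order restored
        }
    }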



Removing the Root and Rebuild the Tree

Removing root 62 from the heap



Removing the Root and Rebuild the Tree, cont.

Move 9 to root



Removing the Root and Rebuild the Tree, cont.

Swap 9 with 59



Removing the Root and Rebuild the Tree, cont.

Swap 9 with 44



Removing the Root and Rebuild the Tree, cont.

Swap 9 with 30
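As with adding, the slides show this with diagrams. A sketch of the rebuild step after the last element is moved to the root, under the same assumptions as the sift-up sketch above:

    // Move the element at position i down, swapping it with its larger child,
    // until it is not smaller than either child ("sift down").
    private void siftDown(int i) {
        int size = list.size();
        while (2 * i + 1 < size) {                   // while the node has a left child
            int maxChild = 2 * i + 1;                // assume the left child is larger
            int right = 2 * i + 2;
            if (right < size && list.get(right).compareTo(list.get(maxChild)) > 0)
                maxChild = right;                    // the right child is larger
            if (list.get(i).compareTo(list.get(maxChild)) < 0) {
                E temp = list.get(i);                // swap with the larger child
                list.set(i, list.get(maxChild));
                list.set(maxChild, temp);
                i = maxChild;
            }
            else
                break;                               // heap order restored
        }
    }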


