
CSC 211 Data Structures Lecture 30



  1. CSC 211 Data Structures Lecture 30 Dr. Iftikhar Azim Niaz ianiaz@comsats.edu.pk

  2. Last Lecture Summary • Shortest Path Problem • Dijkstra's Algorithm • Bellman-Ford Algorithm • All Pairs Shortest Path • Spanning Tree • Minimum Spanning Tree • Kruskal's Algorithm • Prim's Algorithm

  3. Objectives Overview • Dictionaries • Concept and Implementation • Table • Concept, Operations and Implementation • Array based, Linked List, AVL, Hash table • Hash Table • Concept • Hashing and Hash Function • Hash Table Implementation • Chaining, Open addressing, Overflow Area • Application of Hash Tables

  4. Dictionaries • Collection of pairs. • (key, element) • Pairs have different keys. • Operations. • get(Key) • put(Key, Element) • remove(Key)

  5. Dictionaries - Application • Collection of student records in this class. • (key, element) = (student name, linear list of assignment and exam scores) • All keys are distinct. • Get the element whose key is Ahmed Hassan • Update the element whose key is Rahim Khan • put() acts as an update when there is already a pair with the given key • equivalently, an update is a remove() followed by a put().

  6. Dictionary With Duplicates • Keys are not required to be distinct. • Word dictionary. • Pairs are of the form (word, meaning). • May have two or more entries for the same word. • (bolt, a threaded pin) • (bolt, a crash of thunder) • (bolt, to shoot forth suddenly) • (bolt, a gulp) • (bolt, a standard roll of cloth) • etc.

  7. Dictionary – Represent as a Linear List • L = (e0, e1, e2, e3, …, en-1) • Each ei is a pair (key, element). • 5-pair dictionary D = (a, b, c, d, e). • a = (aKey, aElement), b = (bKey, bElement), etc. • Array or linked representation.

  8. Dictionary – Array Representation [figure: unsorted array holding pairs a b c d e] • Unsorted array • Get(Key) • O(size) time • Put(Key, Element) • O(size) time to verify duplicate, O(1) to add at end • Remove(Key) • O(size) time
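A compact C sketch of this unsorted-array dictionary (integer keys, integer elements, a fixed capacity, and all names here are illustrative assumptions, not from the lecture):

    #include <stdio.h>

    #define CAPACITY 100

    typedef struct { int key; int element; } Pair;

    typedef struct {
        Pair data[CAPACITY];
        int  size;
    } ArrayDict;

    /* Get: O(size) - scan every pair until the key matches. */
    int dict_get(const ArrayDict *d, int key, int *out) {
        for (int i = 0; i < d->size; i++)
            if (d->data[i].key == key) { *out = d->data[i].element; return 1; }
        return 0;   /* not found */
    }

    /* Put: O(size) to check for a duplicate, then O(1) to append at the end. */
    void dict_put(ArrayDict *d, int key, int element) {
        for (int i = 0; i < d->size; i++)
            if (d->data[i].key == key) { d->data[i].element = element; return; }
        if (d->size == CAPACITY) return;   /* table full */
        d->data[d->size].key = key;
        d->data[d->size].element = element;
        d->size++;
    }

    /* Remove: O(size) to find the pair, then O(1) to move the last
       pair into the vacated slot. */
    void dict_remove(ArrayDict *d, int key) {
        for (int i = 0; i < d->size; i++)
            if (d->data[i].key == key) { d->data[i] = d->data[--d->size]; return; }
    }

Moving the last pair into the hole is what keeps Remove at O(size) for the search but O(1) for the deletion itself.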

  9. Dictionary – Array Representation [figure: sorted array holding pairs A B C D E] • Sorted array • Elements are in ascending order of Key • Get(Key) • O(log size) time • Put(Key, Element) • O(log size) time to verify duplicate, O(size) to shift larger keys and insert in sorted position • Remove(Key) • O(size) time

  10. Dictionary – List Representation [figure: unsorted chain firstNode → a → b → c → d → e → null] • Unsorted Chain • Get(Key) • O(size) time • Put(Key, Element) • O(size) time to verify duplicate, O(1) to add at end • Remove(Key) • O(size) time

  11. Dictionary – List Representation [figure: sorted chain firstNode → A → B → C → D → E → null] • Sorted Chain • Elements are in ascending order of Key • Get(Key) • O(size) time • Put(Key, Element) • O(size) time to verify duplicate and find the position, O(1) to link the new node in • Remove(Key) • O(size) time

  12. Dictionary - Applications • Many applications require a dynamic set that supports dictionary operations • Example: a compiler maintaining a symbol table where keys correspond to identifiers

  13. Table • Table is an abstract storage device that contains dictionary entries • Each table entry contains a unique key k. • Each table entry may also contain some information, I, associated with its key. • A table entry is an ordered pair (k, I) • Operations: • insert: given a key and an entry, inserts the entry into the table • find: given a key, finds the entry associated with the key • remove: given a key, finds the entry associated with the key, and removes it

  14. How Should We Implement a Table? • Our choice of representation for the Table ADT depends on the answers to the following questions: • How often are entries inserted and removed? • How many of the possible key values are likely to be used? • What is the likely pattern of searching for keys? e.g. Will most of the accesses be to just one or two key values? • Is the table small enough to fit into memory? • How long will the table exist?

  15. Implementation 1: Unsorted Sequential Array • An array in which TableNodes are stored consecutively in any order • insert: add to back of array; O(1) • find: search through the keys one at a time, potentially all of the keys; O(n) • remove: find + replace removed node with last node; O(n)

  16. Implementation 2: Sorted Sequential Array • An array in which TableNodes are stored consecutively, sorted by key • insert: add in sorted order; O(n) • find: binary search; O(log n) • remove: find, remove node and shuffle down; O(n) • We can use binary search because the array elements are sorted
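A sketch of the find operation as a binary search, reusing the integer-keyed Pair type from the earlier sketch (the array is assumed sorted ascending by key):

    /* Find: O(log n) - halve the search range on every comparison. */
    int sorted_find(const Pair *data, int size, int key, int *out) {
        int lo = 0, hi = size - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
            if (data[mid].key == key) { *out = data[mid].element; return 1; }
            if (data[mid].key < key)  lo = mid + 1;
            else                      hi = mid - 1;
        }
        return 0;   /* not found */
    }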

  17. Implementation 3: Linked List • TableNodes are stored in a linked list (unsorted or sorted) • insert: add to front; • O(1), or O(n) for a sorted list • find: search through potentially all the keys, one at a time; • O(n) for unsorted or for a sorted list • remove: find, remove using pointer alterations; O(n)

  18. Implementation 4: AVL Tree [figure: AVL tree of (key, entry) nodes ordered by key] • An AVL tree, ordered by key • insert: a standard insert; O(log n) • find: a standard find (without removing, of course); O(log n) • remove: a standard remove; O(log n)

  19. Implementation 5: Direct Addressing • Suppose the range of keys is 0..m-1 and keys are distinct • Idea is to set up an array T[0..m-1] in which • T[i] = x if x ∈ T and key[x] = i • T[i] = NULL otherwise • This is called a direct-address table • Operations take O(1) time, the most efficient way to access the data • Works well when the universe U of keys is reasonably small • When the universe U is very large • Storing a table T of size |U| may be impractical, given the memory available on a typical computer • The set K of the keys actually stored may be so small relative to U that most of the space allocated for T would be wasted
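A direct-address table reduces each operation to a single array access. A minimal sketch, assuming integer keys in 0..M-1 and caller-managed entries (the names are illustrative):

    #include <stddef.h>

    #define M 1000                 /* keys are assumed to lie in 0..M-1 */

    typedef struct { int key; int info; } Entry;

    /* T[i] holds a pointer to the entry with key i, or NULL if absent.
       A file-scope array is zero-initialized, so every slot starts NULL. */
    static Entry *T[M];

    void   da_insert(Entry *x) { T[x->key] = x; }   /* O(1) */
    Entry *da_find(int k)      { return T[k]; }     /* O(1) */
    void   da_remove(int k)    { T[k] = NULL; }     /* O(1) */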

  20. Direct Addressing

  21. An Example • A table for 50 students in a class • Key is the 9-digit SSN that identifies each student • Number of different 9-digit numbers = 10^9 • Fraction of possible keys actually used: 50/10^9 = 0.000005% • Percentage of the memory allocated for the table that is wasted: 99.999995% • An ideal table is needed • Table should be of small fixed size • Any key in the universe should map to a slot in the table, using some mapping function

  22. Implementation 6: Hashing • An array in which TableNodes are not stored consecutively • Their place of storage is calculated using the key and a hash function • Keys and entries are scattered throughout the array [figure: key 123 → hash function → array index]

  23. Hashing Idea: • Use a function h to compute the slot for each key • Store the element in slot h(k) • A hash function h transforms a key into an index in a hash table T[0…m-1]: h : U → {0, 1, . . . , m - 1} • e.g. h(k) = k mod m • We say that k hashes to slot h(k)

  24. Hash Table • All search structures so far • Relied on a comparison operation • Performance O(n) or O(log n) • Assume we have a function • f(key) → integer • i.e. one that maps a key to an integer • What performance might we expect now?

  25. Hash Table - Structure • Simplest Case • Assume items have integer keys in the range 1 .. m • Use the value of the key itself to select a slot in a direct access table in which to store the item • To search for an item with key, k, just look in slot k • If there's an item there, you've found it • If the slot is empty, the item is not in the table • Constant time, O(1)

  26. Hash Table - Constraints • Keys must be unique • Keys must lie in a small range • For storage efficiency, keys must be dense in the range • If they're sparse (lots of gaps between values), a lot of space is used to obtain speed • Space for speed trade-off

  27. Hash Tables – Relaxing the Constraints • Keys must be unique • Construct a linked list of duplicates "attached" to each slot • If a search can be satisfied by any item with key, k, performance is still O(1) • but • If the item has some other distinguishing feature which must be matched, we get O(nmax), where nmax is the largest number of duplicates - or length of the longest chain

  28. Hash Tables – Relaxing the Constraints • Keys are integers • Need a hash function h( key ) → integer • i.e. one that maps a key to an integer • Applying this function to the key produces an address • If h maps each key to a unique integer in the range 0 .. m-1 then search is O(1)

  29. Hash Tables – Hash Functions • Form of the hash function • Example - using an n-character key:

    int hash( char *s, int n ) {
        int sum = 0;
        while ( n-- )
            sum = sum + *s++;
        return sum % 256;     /* returns a value in 0 .. 255 */
    }

• The xor function is also commonly used: sum = sum ^ *s++; • But any function that generates integers in 0..m-1 for some suitable (not too large) m will do • As long as the hash function itself is O(1)!

  30. Hash Tables - Collisions • Hash function • With this hash function:

    int hash( char *s, int n ) {
        int sum = 0;
        while ( n-- )
            sum = sum + *s++;
        return sum % 256;
    }

• hash( "AB", 2 ) and hash( "BA", 2 ) return the same value, because summing the character codes is commutative • This is called a collision • A variety of techniques are used for resolving collisions

  31. Hash Tables - Collisions • h : U → {0, 1, . . . , m - 1} • Hash table size: m • Collisions occur when h(ki) = h(kj) for i ≠ j [figure: universe U of keys with actual key set K = {k1, …, k5}; k2 and k5 collide since h(k2) = h(k5)]

  32. Hash Tables – Collision Handling • Collisions occur when the hash function maps two different keys to the same address • The table must be able to recognize and resolve this • Recognize • Store the actual key with the item in the hash table • Compute the address k = h( key ) • Check for a hit • if ( table[k].key == key ) then hit, else try next entry • Resolution • Variety of techniques • We'll look at various "try next entry" schemes

  33. Hash Tables – Implementation • Chaining • Open addressing (Closed Hashing) • Overflow Area • Bucket

  34. Hash Tables – Chaining • Collisions - Resolution • Linked list attached to each primary table slot • h(i) == h(i1) • h(k) == h(k1) == h(k2) • Searching for i1 • Calculate h(i1) • Item in table, i, doesn't match • Follow linked list to i1 • If NULL found, key isn't in table

  35. Chaining • Idea: Put all elements that hash to the same slot into a linked list • Slot j contains a pointer to the head of the list of all elements that hash to j

  36. Chaining • How to choose the size of the hash table m? • Small enough to avoid wasting space. • Large enough to avoid many collisions and keep linked-lists short. • Typically 1/5 or 1/10 of the total number of elements. • Should we use sorted or unsorted linked lists? • Unsorted • Insert is fast • Can easily remove the most recently inserted elements

  37. Hash Table Operations - Chaining • CHAINED-HASH-SEARCH(T, k) • search for an element with key k in list T[h(k)] • Running time depends on the length of the list of elements in slot h(k) • CHAINED-HASH-INSERT(T, x) • insert x at the head of list T[h(key[x])] • T[h(key[x])] takes O(1) time; insert will take O(1) time overall since lists are unsorted. • CHAINED-HASH-DELETE(T, x) • delete x from the list T[h(key[x])] • T[h(key[x])] takes O(1) time • Finding the item depends on the length of the list of elements in slot h(key[x])
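A minimal C sketch of these chained operations (the additive string hash from slide 29 is reused; the fixed slot count and the requirement that the caller keep key strings alive are assumptions of this sketch):

    #include <stdlib.h>
    #include <string.h>

    #define M 256                      /* number of slots */

    typedef struct Node {
        const char *key;               /* caller's string must outlive the node */
        int value;
        struct Node *next;             /* chain of keys hashing to the same slot */
    } Node;

    static Node *table[M];             /* each slot heads a linked list */

    static unsigned hash(const char *s) {
        unsigned sum = 0;
        while (*s) sum += (unsigned char)*s++;
        return sum % M;
    }

    /* CHAINED-HASH-INSERT: O(1) - link the new node at the head of its list. */
    void chained_insert(const char *key, int value) {
        Node *x = malloc(sizeof *x);
        x->key = key;
        x->value = value;
        unsigned j = hash(key);
        x->next = table[j];
        table[j] = x;
    }

    /* CHAINED-HASH-SEARCH: walk the list in slot h(k);
       cost is the length of that chain. */
    Node *chained_search(const char *key) {
        for (Node *x = table[hash(key)]; x != NULL; x = x->next)
            if (strcmp(x->key, key) == 0) return x;
        return NULL;
    }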

  38. Analysis of Chaining – Worst Case [figure: table T with all n keys chained from a single slot] • How long does it take to search for an element with a given key? • Worst case: • All n keys hash to the same slot • then O(n) + time to compute the hash function

  39. Analysis of Chaining – Average Case [figure: table slots with chain lengths n0, …, nm-1] • It depends on how well the hash function distributes the n keys among the m slots • Under the following assumptions: (1) n = O(m) (2) any given element is equally likely to hash into any of the m slots (i.e., simple uniform hashing) • the average search cost is O(1 + α), where α = n/m is the load factor • since n = O(m) gives α = O(1), search takes O(1) time plus the time to compute the hash function

  40. Open Addressing – (Closed Hashing) • So far we have studied hashing with chaining, using a linked list to store keys that hash to the same location. • Maintaining linked lists involves using pointers • which is complex and inefficient in both storage and time requirements • Another option is to store all the keys directly in the table. This is known as open addressing • where collisions are resolved by systematically examining other table indexes i0, i1, i2, … until an empty slot is located

  41. Open Addressing • Another approach for collision resolution. • All elements are stored in the hash table itself • so no pointers are involved as in chaining • To insert: if the slot is full, try another slot, and another, until an open slot is found (probing) • To search, follow the same sequence of probes as would be used when inserting the element

  42. Open Addressing • Idea: store the keys in the table itself • No need to use linked lists anymore • Basic idea: • Insertion: if a slot is full, try another one, until you find an empty one. • Search: follow the same probe sequence. • Deletion: need to be careful! • Search time depends on the length of probe sequences! • e.g., insert 14; probe sequence: <1, 5, 9>

  43. Open Addressing – Hash Function • The hash function now takes two arguments: (i) the key value, and (ii) the probe number: h(k, p), p = 0, 1, ..., m-1 • Probe sequence: <h(k,0), h(k,1), h(k,2), …> • The probe sequence must be a permutation of <0, 1, ..., m-1> • There are m! possible permutations • Example: insert 14, probe sequence: <h(14,0), h(14,1), h(14,2)> = <1, 5, 9>

  44. Common Open Addressing Methods • Linear Probing • Quadratic probing • Double hashing • None of these methods can generate more than m^2 different probe sequences!

  45. Linear Probing • The re-hash function • Many variations • Linear probing • h'(x) is +1 • Go to the next slot until you find one empty • Can lead to bad clustering • Re-hash keys fill in gaps between other keys and exacerbate the collision problem

  46. Linear Probing • The key is first mapped to a slot: i0 = h(k, 0) • If there is a collision, subsequent probes are performed at a fixed offset c: ip = (i0 + p·c) mod m, p = 1, 2, … • If the offset constant c and m are not relatively prime, we will not examine all the cells. Ex.: • Consider m = 4 and c = 2; then only every other slot is checked • When c = 1 the collision resolution is done as a linear search

  47. Insertion in Hash Table

    HASH-INSERT(T, k)
    1  i ← 0
    2  repeat j ← h(k, i)
    3         if T[j] = NIL
    4            then T[j] ← k
    5                 return j
    6            else i ← i + 1
    7  until i = m
    8  error "hash table overflow"

• Worst case for inserting a key is O(n)

  48. Search from Hash Table

    HASH-SEARCH(T, k)
    1  i ← 0
    2  repeat j ← h(k, i)
    3         if T[j] = k
    4            then return j
    5         i ← i + 1
    6  until T[j] = NIL or i = m
    7  return NIL

• Worst case for searching a key is O(n) • Running time depends on the length of probe sequences • Need to keep probe sequences short to ensure fast search
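The same two procedures in C, with linear probing as the concrete probe sequence (h(k, i) = (k + i) mod m; integer keys and the EMPTY marker -1 are assumptions of this sketch):

    #define M 101                         /* table size */
    #define EMPTY (-1)

    static int T[M];

    /* All slots must be set to EMPTY before use. */
    void hash_init(void) {
        for (int i = 0; i < M; i++) T[i] = EMPTY;
    }

    /* HASH-INSERT: probe h(k,0), h(k,1), ... until an empty slot is found. */
    int hash_insert(int k) {
        for (int i = 0; i < M; i++) {
            int j = (k + i) % M;          /* h(k, i) = (k + i) mod m */
            if (T[j] == EMPTY) { T[j] = k; return j; }
        }
        return -1;                        /* hash table overflow */
    }

    /* HASH-SEARCH: follow the same probe sequence; reaching an EMPTY
       slot proves the key was never inserted. */
    int hash_search(int k) {
        for (int i = 0; i < M; i++) {
            int j = (k + i) % M;
            if (T[j] == k) return j;
            if (T[j] == EMPTY) return -1;
        }
        return -1;                        /* not found */
    }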

  49. Delete from Hash Table • First, find the slot containing the key to be deleted • Can we just mark the slot as empty? • No - it would then be impossible to retrieve keys inserted after that slot was occupied, because searching stops at an empty slot • Solution • "Mark" the slot with a sentinel value DELETED (this introduces a third slot state: full, empty, and deleted) • The deleted slot can later be reused for insertion. e.g., delete 98
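A sketch of deletion with a DELETED sentinel, extending the linear-probing table above (the sentinel value -2 is an assumption):

    #define DELETED (-2)

    /* Mark the slot DELETED rather than EMPTY, so that searches probing
       past this slot keep going instead of stopping early. */
    void hash_delete(int k) {
        for (int i = 0; i < M; i++) {
            int j = (k + i) % M;
            if (T[j] == k) { T[j] = DELETED; return; }
            if (T[j] == EMPTY) return;    /* key was never in the table */
        }
    }

For the reuse mentioned above, hash_insert from the previous sketch would also need to treat DELETED slots as free, i.e. test T[j] == EMPTY || T[j] == DELETED; hash_search needs no change, since DELETED matches neither the key nor EMPTY, so the probe simply continues.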

  50. Open addressing - Disadvantages • The position i0 of the initial probe of key k is called the home position of k. • When several insertions map to the same home position, they end up placed contiguously in the table. This collection of keys with the same home position is called a cluster. • As clusters grow, the probability that a key will map to the middle of a cluster increases, increasing the rate of the cluster's growth. This tendency of linear probing to place items together is known as primary clustering. • As these clusters grow, they merge with other clusters, forming even bigger clusters which grow even faster
