
Searching




1. Searching
Searching is the process of finding the location of a given element in a linear array. The search is said to be successful if the given element is found, i.e. the element exists in the array; otherwise it is unsuccessful. There are two approaches to the search operation:
• Linear search
• Binary search

2. Linear Search
• This method, which traverses the array sequentially to locate the item, is called linear search or sequential search.
• The algorithm one chooses generally depends on the organization of the array elements; if the elements are in random order, one has to use the linear search technique.

3. Algorithm
Linearsearch(a, n, item, loc)
Here a is a linear array of size n. This algorithm finds the location of the element item in the linear array a. If the search ends in success, it sets loc to the index of the element; otherwise it sets loc to -1.
Begin
  for i = 0 to (n-1) by 1 do
    if (a[i] = item) then
      set loc = i
      exit
    endif
  endfor
  set loc = -1
End

4. C implementation of algorithm
int linearsearch(int *a, int n, int item)
{
  int k;
  for (k = 0; k < n; k++)
  {
    if (a[k] == item)
      return k;
  }
  return -1;
}
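The following is a small test driver (the array values and the output messages in main are my own illustration, not part of the original slides); the function is repeated from above so the example compiles on its own.

#include <stdio.h>

int linearsearch(int *a, int n, int item)
{
    int k;
    for (k = 0; k < n; k++) {
        if (a[k] == item)
            return k;            /* found: return its index */
    }
    return -1;                   /* not found */
}

int main(void)
{
    int a[] = {7, 3, 9, 12, 5};
    int n = (int)(sizeof(a) / sizeof(a[0]));

    printf("9 is at index %d\n", linearsearch(a, n, 9));   /* prints: 9 is at index 2  */
    printf("4 is at index %d\n", linearsearch(a, n, 4));   /* prints: 4 is at index -1 */
    return 0;
}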

5. Analysis of Linear Search
• In the best case, the item occurs at the first position. In that case, the search operation terminates in success with just one comparison.
• The worst case occurs when either the item is present at the last position or it is missing from the array. In the former case, the search terminates in success with n comparisons.
• In the latter case, the search terminates in failure with n comparisons. Thus, in the worst case, linear search is an O(n) operation.

6. Binary Search
Suppose the elements of the array are sorted in ascending order. A much faster search algorithm, called binary search, can then be used to find the location of the given element.

7. Example
Given array: 3, 10, 15, 20, 35, 40, 60. We want to search for the element 15.
• We take beg = 0, end = 6 and compute the location of the middle element as mid = (beg + end)/2 = (0 + 6)/2 = 3.

8. Compare the item with a[mid]: a[3] = 20 is not equal to 15 and beg < end, so we start the next iteration.
• As a[mid] = 20 > 15, we take end = mid - 1 = 3 - 1 = 2, whereas beg remains the same. Thus mid = (beg + end)/2 = (0 + 2)/2 = 1.
• Since a[mid], i.e. a[1] = 10 < 15, we take beg = mid + 1 = 1 + 1 = 2, whereas end remains the same. Now beg = end.
• Compute the middle element: mid = (beg + end)/2 = (2 + 2)/2 = 2. Since a[mid], i.e. a[2] = 15, the search terminates in success.

9. Algorithm
Binarysearch(a, n, item, loc)
Begin
  set beg = 0
  set end = n - 1
  set mid = (beg + end)/2
  while ((beg <= end) and (a[mid] != item)) do
    if (item < a[mid]) then
      set end = mid - 1
    else
      set beg = mid + 1
    endif
    set mid = (beg + end)/2
  endwhile
  if (beg > end) then
    set loc = -1
  else
    set loc = mid
  endif
End

10. C Implementation
int binarysearch(int *a, int n, int item)
{
  int beg, end, mid;
  beg = 0;
  end = n - 1;
  mid = (beg + end)/2;
  while ((beg <= end) && (a[mid] != item))
  {
    if (item < a[mid])
      end = mid - 1;
    else
      beg = mid + 1;
    mid = (beg + end)/2;
  }
  if (beg > end)
    return -1;
  else
    return mid;
}

11. Analysis of binary search
• In each iteration, or in each recursive call, the search is reduced to one half of the array. Therefore, for n elements in the array, there will be log2 n iterations or recursive calls.
• Thus the complexity of binary search is O(log2 n).
• This complexity is the same irrespective of the position of the element, even if it is not present in the array.
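The implementation on slide 10 is iterative; the following recursive variant is a sketch of my own (the name binarysearchrec is not from the slides) that makes the "one recursive call per halving" idea in this analysis explicit, using the sorted array from the earlier example.

#include <stdio.h>

/* Search for item in the sorted slice a[beg..end]; return its index, or -1. */
int binarysearchrec(int *a, int beg, int end, int item)
{
    int mid;
    if (beg > end)
        return -1;                     /* empty slice: item is not present */
    mid = (beg + end) / 2;
    if (a[mid] == item)
        return mid;                    /* found */
    if (item < a[mid])
        return binarysearchrec(a, beg, mid - 1, item);   /* search left half  */
    else
        return binarysearchrec(a, mid + 1, end, item);   /* search right half */
}

int main(void)
{
    int a[] = {3, 10, 15, 20, 35, 40, 60};   /* the sorted array from the example */
    printf("15 is at index %d\n", binarysearchrec(a, 0, 6, 15));   /* prints 2  */
    printf("25 is at index %d\n", binarysearchrec(a, 0, 6, 25));   /* prints -1 */
    return 0;
}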

12. Hash table and Hashing
Objectives:
• Understand the problem with direct address tables.
• Understand the concept of hash tables.
• Understand different hash functions.
• Understand the different collision resolution schemes.

13. Introduction
In all the search algorithms considered so far, the location of the item is determined by a sequence of comparisons. In each case, the data item sought is repeatedly compared with items in certain locations of the data structure. The number of comparisons depends on the data structure and the search algorithm used. For example:
• In an array or a linked list, linear search requires O(n) comparisons.
• In a sorted array, binary search requires O(log n) comparisons.
• In a balanced binary search tree, search requires O(log n) comparisons.

14. Contd..
However, there are some applications that require the search to be performed in constant time, i.e. O(1). Ideally this may not always be possible, but we can still achieve performance very close to it. This is possible using a data structure known as a hash table.

15. Contd..
A hash table, in a basic sense, is a generalization of the simpler notion of an ordinary array. Directly addressing into an array makes it possible to access any data element of the array in O(1) time. For example, if a[1..100] is an ordinary array, then the nth data element, 1 <= n <= 100, can be directly accessed as a[n]. However, direct addressing is applicable only when we can allocate an array that has one position for every possible key. In addition, direct addressing suffers from the following problems:
• If the number of possible keys is very large, it may not be possible to allocate an array of that size, because the memory available in the system, or permitted for the application software, does not allow it.
• If the actual number of keys is very small compared to the total number of possible keys, a lot of space in the array will be wasted.

16. Direct Address Tables
Direct addressing is a simple technique that works quite well when the universe U of keys is reasonably small. As an example, consider an application that needs a dynamic set in which each element has a key drawn from the universe U = {0, 1, 2, 3, …, m-1}, where m is not very large. We also assume that all elements are unique, i.e. no two elements have the same key.

17. Contd..
The figure on the next page shows the implementation of a dynamic set by a direct address table T, where the elements are stored in the table itself. Here each key in the universe U = {0, 1, 2, …, 9} corresponds to an index in the table. The set K = {1, 4, 7, 8} of actual keys determines the slots in the table that contain elements. The empty/vacant slots are marked with the slash character '/'.

18. Direct Addressing Tables
[Figure] Implementing a dynamic set by a direct address table T, where the elements are stored in the table itself. The universe of keys is U = {0, …, 9}; the actual keys K = {1, 4, 7, 8} occupy slots 1, 4, 7 and 8 of T, and the remaining slots are vacant (marked '/').

19. Contd..
The figure on the previous page shows the implementation of a dynamic set where a pointer to each element is stored in the direct address table T. To represent the dynamic set, we can use an array T[0..m-1] in which each position, or slot, corresponds to a key in the universe U.

20. Direct Addressing Tables
[Figure] Implementing a dynamic set by a direct address table T: the keys from the universe U = {0, …, 9} index the slots of T, elements are stored for the actual keys K = {1, 4, 7, 8}, and the remaining slots are vacant (marked '/').

21. Operations on Direct Address Table
Initializing a direct address table
In order to initialize a direct address table T[0..m-1], the sentinel value -1 is assigned to each slot.
void initializeDAT(int t[], int m)
{
  int i;
  for (i = 0; i < m; i++)
    t[i] = -1;
}
This initialization operation is O(m).

22. Operations on Direct Address Table
Searching for an element in a direct address table
To search for an element x in a direct address table T[0..m-1], the element at index key[x] is returned.
int searchDAT(int t[], int x)
{
  return t[key[x]];
}

23. Operations on Direct Address Table
Inserting a new element into a direct address table
To insert a new element x into a direct address table T[0..m-1], the element is stored at index key[x].
void insertDAT(int t[], int x)
{
  t[key[x]] = x;
}

24. Operations on Direct Address Table
Deleting an element from a direct address table
To delete an element x from a direct address table T[0..m-1], the sentinel value -1 is stored at index key[x].
void deletefromDAT(int t[], int x)
{
  t[key[x]] = -1;
}
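Putting the four operations together, the following is a minimal self-contained sketch (my own simplification, not from the slides) in which an element is just an int and its key is the value itself, so key[x] reduces to x.

#include <stdio.h>

#define M     10    /* table size: keys are drawn from U = {0, 1, ..., 9} */
#define EMPTY -1    /* sentinel value marking a vacant slot               */

/* In this sketch the key of element x is x itself, so x is stored at slot x. */
void initializeDAT(int t[], int m) { int i; for (i = 0; i < m; i++) t[i] = EMPTY; }
void insertDAT(int t[], int x)     { t[x] = x; }
int  searchDAT(int t[], int x)     { return t[x]; }
void deletefromDAT(int t[], int x) { t[x] = EMPTY; }

int main(void)
{
    int t[M];
    initializeDAT(t, M);
    insertDAT(t, 4);
    insertDAT(t, 7);
    printf("search 4 -> %d\n", searchDAT(t, 4));   /* prints 4: found           */
    deletefromDAT(t, 4);
    printf("search 4 -> %d\n", searchDAT(t, 4));   /* prints -1: slot is vacant */
    return 0;
}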

25. DAT
Each of these operations is fast: only O(1) time is required. However, the difficulties with the direct address table are obvious, as stated below.
• If the universe U is large, storing a table T of size |U| may be impractical, or even impossible, given the memory available on a typical computer.
• If the set K of actual keys is very small relative to U, most of the space allocated for T will be wasted.

26. Hash Table
• A hash table is a data structure in which the location of a data item is determined directly as a function of the data item itself, rather than by a sequence of comparisons.
• Under ideal conditions, the time required to locate a data item in a hash table is O(1), i.e. it is constant and does not depend on the number of data items stored.

27. Hashing
• Hashing is a technique by which we can compute the location of the desired record in order to retrieve it in a single access.
• Here, the hash function h maps the universe U of keys into the slots of a hash table T[0..m-1]. This process of mapping keys to appropriate slots in a hash table is known as hashing.

28. [Figure] Implementing a dynamic set by a hash table T[0..m-1], where the elements are stored in the table itself. The hash function h maps each key in the universe U to a slot of T; the actual keys k1–k7 are placed at slots h(k1), h(k2), h(k3), h(k6), with h(k2) = h(k4) = h(k7) and h(k3) = h(k5), i.e. these keys collide. Vacant slots are marked '/'.

29. Hash table
• The figure on the previous page shows the implementation of a dynamic set by a hash table T[0..m-1], where the elements are stored in the table itself.
• Here each key in the dynamic set K of actual keys is mapped to a hash table slot using the hash function h.
• Note that the keys k2, k4 and k7 map to the same slot.
• Mapping of more than one key to the same slot is known as a collision.
• We can also say that the keys k2, k4 and k7 collide.
• We usually say that an element with key k hashes to slot h(k), and that h(k) is the hash value of key k.
• The purpose of the hash function is to reduce the range of array indices that need to be handled. Therefore, instead of |U| values, we need to handle only m values, which leads to a reduction in the storage requirements.

30. What is a hash function?
• A hash function h is simply a mathematical formula that manipulates the key in some form to compute the index for this key in the hash table. For example, a hash function can divide the key by some number, usually the size of the hash table, and return the remainder as the index of the key.
• In general, we say that a hash function h maps the universe U of keys into the slots of a hash table T[0..m-1]. This process of mapping keys to appropriate slots in a hash table is known as hashing.

31. Different hash functions
• There is a variety of hash functions. The main considerations while choosing a particular hash function h are:
• It should be possible to compute it efficiently.
• It should distribute the keys uniformly across the hash table, i.e. it should keep the number of collisions as low as possible.

32. Hash Functions
• Division method: In the division method, the key k to be mapped into one of the m slots in the hash table is divided by m, and the remainder of this division is taken as the index into the hash table. That is, the hash function is
h(k) = k mod m

33. Division method
Consider a hash table with 9 slots, i.e. m = 9. Then the hash function h(k) = k mod m will map the key 132 to slot 6, since h(132) = 132 mod 9 = 6. Since it requires only a single division operation, hashing is quite fast.

34. Example
• Let a company have 90 employees, and let 00, 01, 02, …, 89 be the 90 two-digit memory addresses (or indices, or hash addresses) available to store their records. We take the employee code as the key.
• Choose m so that it is greater than 90. Suppose m = 93; then for the following employee codes (keys k):
h(2103) = 2103 mod 93 = 57
h(6147) = 6147 mod 93 = 9
h(3750) = 3750 mod 93 = 30
So if we feed the employee code to the hash function, we can retrieve the record at table[h(k)] directly.
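A minimal C sketch of the division method using the employee codes from the example above (the function name h_division is my own):

#include <stdio.h>

/* Division-method hash: h(k) = k mod m. */
int h_division(int k, int m)
{
    return k % m;
}

int main(void)
{
    int m = 93;   /* table size from the employee example above */
    printf("h(2103) = %d\n", h_division(2103, m));   /* 57 */
    printf("h(6147) = %d\n", h_division(6147, m));   /*  9 */
    printf("h(3750) = %d\n", h_division(3750, m));   /* 30 */
    return 0;
}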

35. Midsquare method
• The midsquare method operates in two steps. In the first step, the square of the key value k is taken. In the second step, the hash value is obtained by deleting digits from both ends of the squared value k². It is important to note that the same positions of k² must be used for all keys. Thus the hash function is
h(k) = s
where s is obtained by deleting digits from both sides of k².

36. Midsquare method
Consider a hash table with 100 slots, i.e. m = 100, and the key values k = 2103, 3750, 4147.
Solution:
k      k²         h(k)
2103   4422609    22
3750   14062500   62
4147   17197609   97
The hash values are obtained by taking the fourth and fifth digits, counting from the right.
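A C sketch of this midsquare computation (the function name and the use of long long are my own choices); keeping the fourth and fifth digits from the right of k² amounts to computing (k*k / 1000) % 100.

#include <stdio.h>

/* Midsquare hash for a 100-slot table: square the key and keep the fourth
   and fifth digits from the right of the square.                          */
int h_midsquare(long long k)
{
    return (int)((k * k / 1000) % 100);
}

int main(void)
{
    printf("h(2103) = %d\n", h_midsquare(2103));   /* 4422609  -> 22 */
    printf("h(3750) = %d\n", h_midsquare(3750));   /* 14062500 -> 62 */
    printf("h(4147) = %d\n", h_midsquare(4147));   /* 17197609 -> 97 */
    return 0;
}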

37. Folding method
• The folding method also operates in two steps. In the first step, the key value k is divided into a number of parts k1, k2, …, kr, where each part has the same number of digits except the last part, which may have fewer digits.
• In the second step, these parts are added together, and the hash value is obtained by ignoring the last carry, if any:
h(k) = k1 + k2 + … + kr
• For example, if the hash table has 1000 slots, each part will have three digits, and the sum of these parts, after ignoring the last carry, will also be a three-digit number in the range 0 to 999.

38. Folding method
• Here we are dealing with a hash table with indices from 00 to 99, i.e. a two-digit hash table. So we divide the key k into parts of two digits each.

39. Folding method
• Extra milling can also be applied: the even-numbered parts k2, k4, … are each reversed before the addition.
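A C sketch of the plain folding variant (without the extra milling) for a two-digit table, splitting the key from the right for simplicity; the function name and the sample keys in main are my own.

#include <stdio.h>

/* Folding hash for a two-digit table (slots 00..99): split the key into
   two-digit parts, add the parts, and ignore any carry beyond two digits. */
int h_folding(long k)
{
    int sum = 0;
    while (k > 0) {
        sum += (int)(k % 100);   /* take the next two-digit part */
        k /= 100;
    }
    return sum % 100;            /* ignore the final carry */
}

int main(void)
{
    printf("h(2103)   = %d\n", h_folding(2103));     /* 21 + 03      = 24        */
    printf("h(987654) = %d\n", h_folding(987654));   /* 98 + 76 + 54 = 228 -> 28 */
    return 0;
}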

40. Multiplication method
The multiplication method operates in two steps. In the first step, the key value k is multiplied by a constant A in the range 0 < A < 1, and the fractional part of the value kA is extracted. In the second step, this fractional value is multiplied by m and the floor of the result is taken as the hash value. That is, the hash function is
h(k) = ⌊m (kA mod 1)⌋
where "kA mod 1" means the fractional part of kA, i.e. kA − ⌊kA⌋. Note that ⌊x⌋ is read as "floor of x" and represents the largest integer less than or equal to x. Although this method works with any value of A, it works better with some values than others. The best choice depends on the characteristics of the key values. Knuth has suggested that the following value of A is likely to work reasonably well:
A = (√5 − 1)/2 = 0.6180339887…

41. Multiplication method
Consider a hash table with 10000 slots, i.e. m = 10000. Then the hash function h(k) = ⌊m (kA mod 1)⌋ will map the key 123456 to slot 41, since
h(123456) = ⌊10000 × (123456 × 0.61803… mod 1)⌋
          = ⌊10000 × (76300.0041151… mod 1)⌋
          = ⌊10000 × 0.0041151…⌋
          = ⌊41.151…⌋
          = 41
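The same computation in C (the function name h_multiplication is my own; link with the math library):

#include <stdio.h>
#include <math.h>

/* Multiplication-method hash: h(k) = floor(m * (k*A mod 1)),
   with Knuth's suggested constant A = (sqrt(5) - 1) / 2.       */
int h_multiplication(long k, int m)
{
    const double A = 0.6180339887;
    double frac = fmod((double)k * A, 1.0);   /* fractional part of k*A */
    return (int)floor(m * frac);
}

int main(void)
{
    printf("h(123456) = %d\n", h_multiplication(123456, 10000));   /* prints 41 */
    return 0;
}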

42. Hash Collision
• It is possible that two non-identical keys k1, k2 are hashed into the same hash address. This situation is called a hash collision.

  43. Hash Collision

44. Hash collision
Let us consider a hash table having 10 locations, as shown in the previous figure. The division method is used to hash the key:
h(k) = k mod m
Here m is chosen as 10, so the hash function produces an integer between 0 and 9, depending on the value of the key. If we want to insert a new record with key 500, then
h(500) = 500 mod 10 = 0
The location 0 in the table is already filled; thus a collision has occurred. Collisions are almost impossible to avoid, but they can be minimized considerably by introducing a few techniques.

45. Resolving Collisions
• A collision is a phenomenon that occurs when more than one key maps to the same slot in the hash table.
• Though we can keep collisions to a minimum, we cannot eliminate them altogether. Therefore we need some mechanism to handle them.

46. Collision Resolution by Synonym Chaining
• In this scheme, all the elements whose keys hash to the same hash table slot are put in a linked list.
• Thus the slot i in the hash table contains a pointer to the head of the linked list of all the elements that hash to the value i.
• If there is no element that hashes to the value i, the slot i contains the NULL value.

47. [Figure] Collision resolution by separate chaining. Each hash table slot T[i] contains a linked list of all the keys whose hash value is i; the actual keys k1–k7 are distributed over the chains, and slots with no keys are marked '/'.

48. Collision Resolution by Synonym Chaining
• The structure of a node of the linked list will look like:
typedef struct nodetype
{
  int info;
  struct nodetype *next;
} node;
• Initializing a chained hash table:
void iniHT(node *t[], int m)
{
  int i;
  for (i = 0; i < m; i++)
    t[i] = NULL;
}
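The slides stop at initialization; the following sketch (the function names are my own) shows how insertion and search could be completed for a chained hash table, using the node type above and the division-method hash h(k) = k mod m.

#include <stdio.h>
#include <stdlib.h>

typedef struct nodetype
{
    int info;
    struct nodetype *next;
} node;

/* Insert key k at the head of the chain for slot h(k) = k mod m. */
void insertChainedHT(node *t[], int m, int k)
{
    node *p = (node *)malloc(sizeof(node));   /* allocation assumed to succeed in this sketch */
    p->info = k;
    p->next = t[k % m];      /* link the new node in front of the existing chain */
    t[k % m] = p;
}

/* Return a pointer to the node holding key k, or NULL if k is absent. */
node *searchChainedHT(node *t[], int m, int k)
{
    node *p = t[k % m];
    while (p != NULL && p->info != k)
        p = p->next;
    return p;
}

int main(void)
{
    node *t[10] = { NULL };                        /* all chains start empty */
    insertChainedHT(t, 10, 500);
    insertChainedHT(t, 10, 70);                    /* 500 and 70 both hash to slot 0 and chain together */
    printf("%s\n", searchChainedHT(t, 10, 70) ? "70 found" : "70 not found");
    return 0;
}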
