
Indexing and Hashing



  1. Indexing and Hashing

  2. Outline • Basic Concepts • Ordered Indices • B+-Tree Index Files • B-Tree Index Files • Static Hashing • Dynamic Hashing • Comparison of Ordered Indexing and Hashing • Index Definition in SQL • Multiple-Key Access

  3. Basic Concepts • Indexing mechanisms are used to speed up access to desired data. • Search key • set of attributes used to look up records in a file • An index file consists of records (called index entries) of the form (search-key, pointer) • Index files are typically much smaller than the original file • Two basic kinds of indices: • Ordered indices • search keys are stored in sorted order • Hash indices • search keys are distributed uniformly across “buckets” using a “hash function”.

  4. Index Evaluation Metrics • Access types supported efficiently, such as finding records with • a specific value for the search key • search key falling in a given range of values • given values for only parts of the search key • Access time • Insertion time • Deletion time • Space overhead • These times are usually estimated by measuring the number of disk blocks that need to be read/written

  5. Ordered Indices • index entries are sorted on the search key • Primary index • in a sequentially ordered file, the index whose search key specifies the sequential order of the file. • Also called clustering index • The search key of a primary index is usually but not necessarily the primary key of the data file. • Secondary index • an index whose search key specifies an order different from the sequential order of the file. Also called non-clustering index • Index-sequential file • ordered sequential file with a primary index.

  6. Dense and Sparse Index Files • Dense index • has an index entry for every search-key value in the file • Sparse index • contains index entries for only some search-key values • typically the smallest search key in each block of records • useful when records are sequentially ordered on the search key • To locate a record with search-key value K using a sparse index • find the index entry with the largest search-key value ≤ K • search the file sequentially starting at the record pointed to by that entry • Sparse vs. dense indices • sparse indices need less space and less maintenance overhead for insertions and deletions • but are generally slower for locating records • Good tradeoff • a sparse index with one entry per block of the file, holding the smallest search-key value in the block.
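
As a rough illustration of the sparse-index lookup just described (not part of the original slides), the following Python sketch stores a sorted file as a list of blocks and keeps one index entry per block; the sample records and the names blocks, sparse_index and sparse_lookup are hypothetical.

    from bisect import bisect_right

    # Hypothetical sequential file: each block holds records sorted by search key.
    blocks = [
        [("Brighton", 750), ("Clearview", 500)],
        [("Downtown", 600), ("Mianus", 700)],
        [("Perryridge", 400), ("Redwood", 300)],
    ]

    # Sparse index: one entry per block, holding that block's smallest search key.
    sparse_index = [(blk[0][0], blk_no) for blk_no, blk in enumerate(blocks)]

    def sparse_lookup(key):
        """Find the entry with the largest search key <= key, then scan the block."""
        keys = [k for k, _ in sparse_index]
        pos = bisect_right(keys, key) - 1
        if pos < 0:
            return None                           # key precedes the whole file
        _, block_no = sparse_index[pos]
        for k, record in blocks[block_no]:        # sequential scan of the block
            if k == key:
                return record
        return None

    print(sparse_lookup("Mianus"))                # -> 700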

  7. Multilevel Index • If primary index does not fit in memory, access becomes expensive. • An index of an index may fit in memory. Build a sparse one! • outer index – a sparse index of primary index • inner index – the primary index file • Scheme can be repeated as needed until the top index fits in memory

  8. Multilevel Index (Cont.)

  9. Index Update when Deleting Records • If deleted record was the only record in the file with its particular search-key value, the search-key is deleted from the index also. • Single-level index deletion • Dense indices • deletion of search-key is similar to file record deletion. • Sparse indices • if an entry for the search key exists in the index, it is deleted by replacing the entry in the index with the next search-key value in the file (in search-key order). • If the next search-key value already has an index entry, the entry is deleted instead of being replaced.

  10. Index Update when Inserting Records • Single-level index insertion: • Perform a lookup using the search-key value appearing in the record to be inserted. • Dense indices • if the search-key value does not appear in the index, insert it. • Sparse indices • if index stores an entry for each block of the file, no change needs to be made to the index unless a new block is created. In this case, the first search-key value appearing in the new block is inserted into the index. • Inserting index entries may result in overflow blocks • Multilevel insertion (as well as deletion) algorithms are simple extensions of the single-level algorithms
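
A minimal sketch of the single-level dense-index insertion rule above, assuming the index fits in memory as a sorted Python list of (search-key, pointer) pairs; the names dense_index and dense_insert are illustrative.

    from bisect import bisect_left, insort

    # Hypothetical dense index, kept as a sorted list of (search-key, pointer) pairs.
    dense_index = [("Brighton", 0), ("Downtown", 1)]

    def dense_insert(key, pointer):
        """Single-level dense-index insertion: add an entry only if the
        search-key value does not already appear in the index."""
        keys = [k for k, _ in dense_index]
        pos = bisect_left(keys, key)
        if pos < len(keys) and keys[pos] == key:
            return                                # value already indexed
        insort(dense_index, (key, pointer))

    dense_insert("Clearview", 2)
    print(dense_index)
    # [('Brighton', 0), ('Clearview', 2), ('Downtown', 1)]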

  11. Secondary Indices • Secondary indices are useful when searching for all the records whose values in certain fields, which are not in the search-key of the primary index, satisfy some condition. • Build a dense index with search-key consisting of those fields • index entry points to a bucket that contains pointers to all the actual data records with that particular search-key value.

  12. Primary vs. Secondary Indices • Secondary indices have to be dense. • Indices offer substantial benefits when searching for records. • When a file is modified, every index on the file must be updated • Updating indices imposes overhead on database modification. • Sequential scan using primary index is efficient, but a sequential scan using a secondary index is expensive • each record access may fetch a new block from disk

  13. B+-Tree Index Files • Disadvantage of indexed-sequential files • performance degrades as the file grows, since many overflow blocks get created • periodic reorganization of the entire file is required • B+-tree indices are an alternative to indexed-sequential files • Advantage of B+-tree index files • the tree reorganizes itself automatically with small, local changes in the face of insertions and deletions • reorganization of the entire file is not required to maintain performance • Disadvantage of B+-trees • extra insertion and deletion overhead, and space overhead • Advantages of B+-trees outweigh the disadvantages, and they are used extensively.

  14. B+-Tree Index Files • A B+-tree is a rooted search tree satisfying the following properties: • All paths from the root to a leaf are of the same length • The root node has • at least 2 children, if not a leaf • between 0 and n–1 values (index entries), if a leaf • All other nodes are at least half full, i.e. they have • between ⌈n/2⌉ and n children, if not a leaf • between ⌈(n–1)/2⌉ and n–1 values (index entries), if a leaf • the fanout n depends on the size of the search key and the size of disk blocks

  15. B+-Tree Node Structure • Typical node: P1, K1, P2, K2, …, Pn–1, Kn–1, Pn • The Ki are the search-key values, stored in sorted order K1 < K2 < K3 < … < Kn–1 • The subtree rooted at Pi has search-key values in the interval [Ki–1, Ki) • K0 and Kn are taken to be –∞ and +∞, respectively • Ki is always the smallest key value in the subtree to its right • For non-leaf nodes, the Pi are pointers to child nodes • For leaf nodes, the Pi are pointers to records or buckets of records • Pn points to the next leaf node in search-key order
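
One way to sketch this node layout in Python (a hypothetical in-memory representation, not the on-disk block format):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class BPlusNode:
        # Search-key values K1 < K2 < ... < Kn-1, kept in sorted order.
        keys: List[str] = field(default_factory=list)
        # Internal node: child pointers P1..Pn (one more than the keys).
        # Leaf node: record (or bucket) pointers, one per key.
        pointers: List[object] = field(default_factory=list)
        is_leaf: bool = True
        # Leaf nodes only: Pn, the pointer to the next leaf in key order.
        next_leaf: Optional["BPlusNode"] = None

    leaf = BPlusNode(keys=["Brighton", "Clearview"], pointers=[750, 500])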

  16. Example B+-Trees • B+-tree for account file (n = 3) • B+-tree for account file (n = 5)

  17. Queries on B+-Trees • Since a B+-tree is a search tree, the algorithm for finding records with a particular search-key value k is to • start at the root and, at each node, follow the pointer whose interval contains (covers) k • when a leaf node is reached, get the data records by using the pointers stored there • Access time analysis • The height of a B+-tree with K index entries is at most ⌈log⌈n/2⌉(K)⌉ • For example, with • a search key of 36 bytes • block pointers of 4 bytes, and • a block (node) size of 4 KB • we have • a node fanout n of around 100 • for K = 1,000,000 index entries, at most ⌈log50(1,000,000)⌉ = 4 nodes (disk blocks) are accessed in a lookup.
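
A minimal, self-contained Python sketch of this lookup, using plain dictionaries for nodes (the helper names are hypothetical); the last line checks the height bound from the example.

    import math

    # Minimal node representation for illustration: internal nodes hold sorted
    # separator keys plus child pointers; leaves hold keys plus record values.
    def make_leaf(keys, records):
        return {"leaf": True, "keys": keys, "ptrs": records}

    def make_internal(keys, children):
        return {"leaf": False, "keys": keys, "ptrs": children}

    def bplus_search(node, k):
        """Descend from the root to the leaf whose key interval covers k."""
        while not node["leaf"]:
            i = 0
            # Skip every separator key K_i with K_i <= k, so we follow the
            # pointer to the subtree covering the interval [K_{i-1}, K_i).
            while i < len(node["keys"]) and k >= node["keys"][i]:
                i += 1
            node = node["ptrs"][i]
        # At the leaf, look for k and return the associated record pointer.
        for key, rec in zip(node["keys"], node["ptrs"]):
            if key == k:
                return rec
        return None

    leaves = [make_leaf(["Brighton", "Clearview"], [750, 500]),
              make_leaf(["Mianus", "Perryridge"], [700, 400])]
    root = make_internal(["Mianus"], leaves)
    print(bplus_search(root, "Perryridge"))        # -> 400

    # Height bound from the slide: at most ceil(log_{ceil(n/2)} K) levels.
    print(math.ceil(math.log(1_000_000, 50)))      # -> 4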

  18. Updates on B+-Trees • Updates on B+-Trees are essentially along the lines of updates in standard binary search trees • Find the leaf node that should contain a given search-key value and either insert a pointer to the inserted data record or delete the pointer to the deleted data record • Special care needs to be taken due to the requirements that • Each node needs to have a certain number of children/pointers to data records • All root-leaf paths have the same length • The search-key values stored at a node reflect the search-key values stored in the corresponding subtrees • This can be accomplished with splitting/merging nodes, and updating the (key-value, pointer) pairs along the path from the affected leaf back to the root

  19. Splitting Nodes • When a node X with parent P does not have enough space for all its (key-value, pointer) pairs • create a new node Y • move the upper half of X’s pairs to Y • recursively insert the pair (Kmin, Y) into P, where Kmin is the smallest key value in the subtree rooted at Y • If X is the root, then first • create a new root node P, and insert into it a (key-value, pointer) pair for X • Update the key values for the pairs that point to X, Y, and/or P in their parents as needed
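
A simplified sketch of the split step on a leaf's sorted (key-value, pointer) pairs; recursively inserting (Kmin, Y) into the parent, as described above, is omitted, and all names are illustrative.

    def split_pairs(pairs):
        """Split an over-full node's sorted (key-value, pointer) pairs into X
        (lower half) and the new node Y (upper half); also return K_min of Y,
        the separator that would be inserted into the parent P."""
        mid = len(pairs) // 2
        left, right = pairs[:mid], pairs[mid:]
        return left, right, right[0][0]

    pairs = [("Brighton", 0), ("Clearview", 1), ("Downtown", 2), ("Mianus", 3)]
    print(split_pairs(pairs))
    # ([('Brighton', 0), ('Clearview', 1)], [('Downtown', 2), ('Mianus', 3)], 'Downtown')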

  20. Merging Nodes • When a node X with parent P has too few (key-value, pointer) pairs • if its pairs and those of its left or right sibling Y fit in a single block, move them to Y and recursively delete X from its parent P • else, redistribute some of the (key-value, pointer) pairs from Y to X, so that both nodes have enough pairs • If P is the root and has only one child, delete P and make the single child the new root • Update the (key-value, pointer) pairs for X, Y, and/or P as needed
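
A correspondingly simplified sketch of the merge-or-redistribute decision, treating nodes as sorted lists of (key-value, pointer) pairs; updating the separator keys in the parent P is omitted, and all names are illustrative.

    def merge_or_redistribute(x, y, max_pairs):
        """x is under-full and y is a sibling, both sorted lists of
        (key-value, pointer) pairs. Merge them into one node if everything
        fits, otherwise split the combined pairs evenly between the two."""
        combined = sorted(x + y)
        if len(combined) <= max_pairs:
            return combined, None          # merged; x is then deleted from P
        mid = len(combined) // 2
        return combined[:mid], combined[mid:]

    x = [("Mianus", 3)]
    y = [("Brighton", 0), ("Clearview", 1), ("Downtown", 2)]
    print(merge_or_redistribute(x, y, max_pairs=3))
    # -> ([('Brighton', 0), ('Clearview', 1)], [('Downtown', 2), ('Mianus', 3)])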

  21. Insertion into B+-Trees Before and after inserting “Clearview”

  22. Deleting from a B+-Tree Before and after deleting “Downtown”

  23. Deleting from a B+-Tree Before and after deleting “Perryridge”

  24. Deleting from a B+-Tree Before and after deleting “Perryridge”

  25. B+-Tree File Organization • B+-Trees can be used to organize data files. • The leaf nodes in a B+-tree file organization store data records, instead of pointers to data records. • Since records are larger than pointers, the maximum number of records that can be stored in a leaf node is less than the number of pointers in a nonleaf node. • Leaf nodes are still required to be half full. • Insertion and deletion are handled in the same way as insertion and deletion of entries in a B+-tree index.

  26. B+-Tree File Organization

  27. B-Tree Index Files • Similar to a B+-tree, but a B-tree allows each search-key value to appear only once; this eliminates redundant storage of search keys. • Search keys in non-leaf nodes appear nowhere else in the B-tree; an additional pointer field for each search key in a non-leaf node must be included. • Generalized B-tree leaf node and non-leaf node: in a non-leaf node, the pointers Bi are the bucket or file-record pointers.

  28. B-Tree Indices B-tree B+-tree

  29. B-Tree Indices • Advantages • may use fewer tree nodes than a corresponding B+-tree • sometimes possible to find a search-key value before reaching a leaf node • Disadvantages • only a small fraction of all search-key values are found early • non-leaf nodes are larger, so fan-out is reduced; thus B-trees typically have greater depth than the corresponding B+-tree • insertion and deletion are more complicated than in B+-trees • implementation is harder than for B+-trees • Typically, the advantages of B-trees do not outweigh the disadvantages.

  30. Static Hashing • A bucket is a unit of storage containing one or more records (a bucket is typically a disk block). • In a hash file organization we obtain the bucket of a record directly from its search-key value using a hash function h • h is used to locate records for access, insertion, and deletion. • Records with different search-key values may be mapped to the same bucket • thus the entire bucket has to be searched sequentially to locate a record. • Hashing can be used not only for file organization, but also for index-structure creation. • A hash index organizes the search keys, with their associated record pointers, into a hash file structure. • Typically, a secondary index
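
For concreteness, a tiny in-memory sketch of a static hash file organization with a fixed number of buckets; the hash function and the bucket count are arbitrary illustrations, not part of the original slides.

    NUM_BUCKETS = 10                       # fixed set B of bucket addresses

    def h(search_key):
        # Illustrative hash function: character codes summed, modulo the
        # (fixed) number of buckets.
        return sum(ord(c) for c in search_key) % NUM_BUCKETS

    buckets = [[] for _ in range(NUM_BUCKETS)]   # each bucket ~ one disk block

    def insert(search_key, record):
        buckets[h(search_key)].append((search_key, record))

    def lookup(search_key):
        # Different keys may hash to the same bucket, so the whole bucket
        # must be scanned to find the matching records.
        return [rec for k, rec in buckets[h(search_key)] if k == search_key]

    insert("Perryridge", 400)
    insert("Brighton", 750)
    print(lookup("Perryridge"))            # -> [400]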

  31. Example of Hash File Organization

  32. Example of Hash Index

  33. Handling of Bucket Overflows • Bucket overflow can occur because of • insufficient buckets • skew in the distribution of records, which can occur for two reasons • multiple records have the same search-key value • the hash function produces a non-uniform distribution of key values • Although the probability of bucket overflow can be reduced, it cannot be eliminated • it is handled by using overflow buckets • Overflow chaining – the overflow buckets of a given bucket are chained together in a linked list. • Open addressing (hashing), which does not use overflow buckets, is not suitable for database applications.

  34. Static Hashing Disadvantages • Uses a fixed hash function h and a fixed set B of bucket addresses. • Databases grow with time. If the initial number of buckets is too small, performance will degrade due to too many overflows. • If the number of buckets is allocated for an anticipated future file size, a significant amount of space will be wasted initially. • If the database shrinks, space will again be wasted. • One option is periodic reorganization of the file with a new hash function, but this is very expensive. • These problems can be avoided by using techniques that allow the number of buckets to be modified dynamically.

  35. Dynamic Hashing: Extendable Hashing • Dynamic hashing allows the hash function h to be modified dynamically • Good for files that grow and shrink in size • Extendable hashing is one form of dynamic hashing • Uses a hash function h that generates (integer) values over a large range [0, 2^b) • At any time it uses only a prefix of i bits of the hash values, 0 ≤ i ≤ b, • and a table of 2^i bucket addresses • Initially i = 0 • Call i the bucket table prefix • The value of i grows and shrinks as the size of the database grows and shrinks • The prefixes are used to index into a table of bucket addresses • Multiple entries in the bucket address table may point to the same bucket • the actual number of buckets is at most 2^i • Each bucket j stores with it a bucket prefix ij • all the keys it contains have the same value for their ij-bit prefix • there are 2^(i − ij) pointers to bucket j from the table of bucket addresses • The number of buckets also changes dynamically due to coalescing and splitting of buckets.

  36. Extendable Hashing

  37. Use of Extendable Hashing • To look up the bucket containing search-key value K • use the i-bit prefix of h(K) as an index into the bucket address table, and follow the pointer to the appropriate bucket • To insert a search-key value K • look up the bucket j that should contain it • if there is room in bucket j, insert the record there; else split bucket j and attempt the insertion again (use overflow buckets if it is full again) • To delete a search-key value K • look up the bucket j that contains it and delete it from there • remove the bucket if it becomes empty • Coalescing of buckets can be done • a bucket can coalesce only with a “buddy” bucket having the same value of ij and the same ij – 1 bit prefix, if it is present • Decreasing the size of the bucket address table is also possible
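
A sketch of the lookup step, assuming a 32-bit hash function and a bucket address table indexed by the first i bits of h(K); zlib.crc32 merely stands in for the real hash function, and the bucket layout shown is hypothetical.

    import zlib

    B = 32                                 # h generates values in [0, 2^32)
    i = 1                                  # current bucket table prefix

    def h(key):
        return zlib.crc32(key.encode()) & 0xFFFFFFFF

    def prefix(value, bits):
        """The 'bits' high-order bits of a hash value (the table index)."""
        return value >> (B - bits) if bits > 0 else 0

    # Bucket address table with 2^i entries; each bucket records its prefix i_j.
    directory = [{"local_i": 1, "entries": []} for _ in range(2 ** i)]

    for key, rec in [("Brighton", 750), ("Perryridge", 400)]:
        directory[prefix(h(key), i)]["entries"].append((key, rec))

    def lookup(key):
        bucket = directory[prefix(h(key), i)]
        return [rec for k, rec in bucket["entries"] if k == key]

    print(lookup("Perryridge"))            # -> [400]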

  38. Extendable Hashing: Splitting Buckets • How bucket j is split depends on the number of pointers to it in the bucket address table • If i > ij (more than one pointer to bucket j) • allocate a new bucket z with bucket prefix ij + 1 • set the bucket prefix of j to ij + 1 • make the second half of the bucket address table entries that point to j point to z instead • remove and reinsert each record of bucket j • If i = ij (only one pointer to bucket j) • increment i and double the size of the bucket address table • replace each entry in the table by two entries that point to the same bucket • now i > ij, so proceed as in the first case
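
A compact, self-contained sketch of insertion with both splitting cases. To stay short it indexes the table by the i low-order bits of h(K) instead of a prefix (the effect is the same), uses zlib.crc32 as the hash function, a bucket capacity of 2 records, and no overflow buckets; all names are illustrative.

    import zlib

    CAPACITY = 2                                   # records per bucket

    def h(key):
        return zlib.crc32(key.encode()) & 0xFFFFFFFF

    class Bucket:
        def __init__(self, local_i):
            self.local_i = local_i                 # bucket prefix i_j
            self.items = {}                        # search-key -> record

    i = 0                                          # bucket table prefix
    directory = [Bucket(0)]                        # 2^i slots

    def slot(key):
        return h(key) & ((1 << i) - 1)             # i low-order bits of h(key)

    def insert(key, record):
        global i, directory
        b = directory[slot(key)]
        if key in b.items or len(b.items) < CAPACITY:
            b.items[key] = record
            return
        # Bucket overflow: split b.
        if b.local_i == i:                         # case i = i_j: double table
            directory = directory + directory
            i += 1                                 # now i > i_j, fall through
        # Case i > i_j: new bucket with prefix i_j + 1; repoint half of the
        # directory entries that used to point to b.
        b.local_i += 1
        new_b = Bucket(b.local_i)
        for s in range(len(directory)):
            if directory[s] is b and (s >> (b.local_i - 1)) & 1:
                directory[s] = new_b
        # Re-insert b's records so each lands in b or new_b, then retry.
        old, b.items = b.items, {}
        for k, r in old.items():
            directory[slot(k)].items[k] = r
        insert(key, record)

    for name, rec in [("Brighton", 750), ("Downtown", 600), ("Mianus", 700),
                      ("Perryridge", 400), ("Redwood", 300)]:
        insert(name, rec)
    print(i, len(directory))                       # table prefix and table size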

  39. Example of Extendable Hashing Initial Extendable Hash structure, bucket size = 2 records

  40. Example of Extendable Hashing • After inserting one Brighton and two Downtown records • After inserting Mianus

  41. Example of Extendable Hashing After inserting three Perryridge records

  42. Example of Extendable Hashing Hash structure after insertion of Redwood and Round Hill records

  43. Extendable Hashing vs. Other Schemes • Benefits of extendable hashing • hash performance does not degrade with growth of the file • minimal space overhead • Disadvantages of extendable hashing • extra level of indirection to find the desired record • the bucket address table may itself become very big (larger than memory), requiring a tree structure to locate the desired entry • changing the size of the bucket address table is an expensive operation • Linear hashing is an alternative mechanism that avoids these disadvantages at the possible cost of more bucket overflows

  44. Comparison of Ordered Indexing and Hashing • Issues to consider when choosing between them • cost of periodic reorganization • relative frequency of insertions and deletions • is it desirable to optimize average access time at the expense of worst-case access time? • expected type of queries • hashing is generally better at retrieving records having a specified value of the key • if range queries are common, ordered indices are to be preferred

  45. Index Definition in SQL • To create an index: CREATE INDEX <index-name> ON <relation-name> (<attribute-list>) • E.g.: create index b-index on branch(branch-name) • Use create unique index to indirectly specify and enforce the condition that the search key is a candidate key. • To drop an index: DROP INDEX <index-name>

  46. Multiple-Key Access • Use multiple indices for certain types of queries, e.g.: SELECT account-number FROM account WHERE branch-name = “Perryridge” AND balance = 1000 • Possible strategies for processing the query using indices on single attributes: 1. Use the index on branch-name to find records for the Perryridge branch; test balance = 1000. 2. Use the index on balance to find accounts with balances of $1000; test branch-name = “Perryridge”. 3. Use the branch-name index to find pointers to all records pertaining to the Perryridge branch; similarly use the index on balance; take the intersection of both sets of pointers obtained.

  47. Multiple-Key Access • Use an index on the combined search key (branch-name, balance) • fetches only records that satisfy both conditions • Using separate indices is less efficient • we may fetch many records (or pointers) that satisfy only one of the conditions • The combined index can also efficiently handle where branch-name = “Perryridge” and balance < 1000 • But it cannot efficiently handle where branch-name < “Perryridge” and balance = 1000 • it may fetch many records that satisfy the first but not the second condition • What is needed are multi-dimensional index structures

  48. Grid Files • A structure used to speed up the processing of general multiple search-key queries involving one or more comparison operators. • A grid file has a single grid array and one linear scale for each search-key attribute; the grid array has a number of dimensions equal to the number of search-key attributes. • Multiple cells of the grid array can point to the same bucket • To find the bucket for a search-key value, locate the row and column of its cell using the linear scales and follow the pointer
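
A toy Python sketch of this lookup path, with one linear scale per attribute and a two-dimensional grid of buckets; the partition points are made up, and giving every cell its own bucket is a simplification (real grid files let several cells share a bucket).

    from bisect import bisect_right

    # Hypothetical grid file on (branch-name, balance).
    name_scale = ["C", "M", "R"]                   # linear scale for branch-name
    balance_scale = [250, 500, 750]                # linear scale for balance

    # Grid array of buckets; here every cell gets its own (initially empty) bucket.
    grid = [[[] for _ in range(len(balance_scale) + 1)]
            for _ in range(len(name_scale) + 1)]

    def cell(name, balance):
        """Use the linear scales to find the row and column of the cell."""
        return bisect_right(name_scale, name), bisect_right(balance_scale, balance)

    def insert(name, balance):
        r, c = cell(name, balance)
        grid[r][c].append((name, balance))

    def lookup(name, balance):
        r, c = cell(name, balance)
        return [rec for rec in grid[r][c] if rec == (name, balance)]

    insert("Perryridge", 400)
    print(lookup("Perryridge", 400))               # -> [('Perryridge', 400)]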

  49. Example Grid File

  50. Queries on a Grid File • A grid file on two attributes A and B can handle queries of all of the following forms with reasonable efficiency • (a1 ≤ A ≤ a2) • (b1 ≤ B ≤ b2) • (a1 ≤ A ≤ a2 ∧ b1 ≤ B ≤ b2) • E.g., to answer (a1 ≤ A ≤ a2 ∧ b1 ≤ B ≤ b2), use the linear scales to find the corresponding candidate grid array cells, and look up all the buckets pointed to from those cells.
