
Data Management: Databases and Organizations
Richard Watson

Summary of Selections from Chapter 11 prepared by Kirk Scott


Outline of Topics

  • Relationship between O/S and dbms

  • Indexing

  • Hashing

  • File organization and access

  • Joining

  • B+ trees


Relationship between O/S and dbms

  • Most performance concerns in dbms internals eventually hinge on secondary storage

  • In other words, by definition, a db is persistent, stored on disk

  • The dbms is responsible for managing and accessing this data on disk

  • As such, dbms internals are either closely related to or integrated with O/S functions



  • If the dbms and O/S are integrated, the db administrator may have the ability to specify physical storage characteristics of tables:

  • Clustering in sectors, tracks, cylinders, etc.



Indexing

  • Indexes were introduced when considering SQL

  • In simple terms, they provide key based access to the contents of tables

  • It turns out that devising a special kind of index was one of the critical parts of making relational dbms’s practical to implement








  • In reality, an index is not typically implemented as a simple look-up table

  • The full scale details of one kind of indexing scheme are given in the section on B+ trees

  • In the meantime, it is worth considering one nuance that results from the interaction between dbms records and O/S pages


Sparse Indexes

  • A given file may be stored in order, sorted by a given field of interest

  • Superficially, this might suggest that an index on that field is not needed

  • However, the size of database tables means that you don’t want to have to do linear search through secondary storage in order to find a desired record


  • The reality is that what you want is not an RRN or an address—what you want is the page that the desired record would be on

  • This is because the O/S returns data in pages anyway

  • An RRN request would be translated into a page request and the complete set of records on the page would be returned as a block anyway


  • For a sorted file, an index may be sparse

  • The index can again be envisioned as a simple look-up table

  • The look-up key values would correspond only to the first records on each page

  • If a desired value fell between two entries in the index, it would be on the page of the first of those two entries

  • Note again that this only works if the table is stored in sorted order on the look-up key
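  • To make the idea concrete, here is a minimal sketch in Python (the helper names and toy data are hypothetical, not from the chapter), assuming the file is a list of pages, each a sorted list of (key, record) pairs:

    import bisect

    def build_sparse_index(pages):
        """One index entry per page: the key of the page's first record."""
        return [page[0][0] for page in pages]

    def find_page(index, key):
        """Return the number of the page the desired record would be on."""
        # The last index entry <= key marks the page where the key falls.
        return max(bisect.bisect_right(index, key) - 1, 0)

    # Example: three pages of a file sorted on an integer key.
    pages = [[(1, "a"), (5, "b")], [(9, "c"), (12, "d")], [(20, "e")]]
    index = build_sparse_index(pages)   # [1, 9, 20]
    print(find_page(index, 10))         # 1 -- key 10 would be on page 1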


Clustering Tables

  • The issue of whether a table is stored in some sorted order is significant and will be treated in general in a later section

  • In the meantime, note that SQL supports this with the keyword CLUSTER


  • This is an example of its use:

  • CREATE INDEX indexname

  • ON tablename(fieldname) CLUSTER

  • This means that as records are entered, the table is organized in sorted order in secondary storage


  • The term inter-file clustering refers to storing records from related tables in order

  • For example, you could have mothers followed by their children

  • This violates every precept of relational databases

  • However, in rare circumstances this may be within the db administrator’s power for performance reasons


Index Access

  • Indexing supports two kinds of access into a file:

  • Random access: Given a single look-up key value, it’s possible to find the one (or more) corresponding record(s)

  • Sequential access: Reading through the index from beginning to end produces all of the records in a table sorted in the order of the index key field


Index Support for Queries

  • Not only do keys support the simple access schemes given above, they can also support various aspects of SQL queries

  • Take a query with a WHERE clause for example

  • Let the table be indexed on the field in the WHERE clause

  • Then a query optimizer could use the index in order to restrict the results of the query without having to search through the whole table looking for matches


Hashing

  • Hashing has many uses in computer science

  • It turns out to have particularly useful applications in dbms internals

  • In a perfect world, a primary key field might be an unbroken set of integers

  • The identifiers for records would map directly into a linear address space



  • The reason for this is the following:

  • In general, you have alternatives on how to store the records of a table

  • They can be stored in arrival sequence

  • You could cluster them

  • If you hash them, they can be saved at a particular address (offset), without wasting space due to the sparseness of the key values


  • The utility of hashing comes from the following:

  • The location of a record is computed based on the key on its way in

  • That means that, given the key value, the location of the corresponding record can be computed again for easy access upon retrieval


  • Indexing supports both direct access and sequential access

  • Hashing doesn’t support sequential access, but it does support direct access

  • As a matter of fact, no better scheme for implementing direct access exists

  • It is quicker to hash than it is to search an index


  • The classic hashing algorithm, which is relatively easy to illustrate, is division-remainder hashing

  • The look-up or hashing key of interest may be of any type

  • If it’s not actually an integer field, let it be converted into a unique integer value

  • Let the number of expected records, the size of the desired address space, be n

  • Let p be the smallest prime number greater than or equal to n



  • The idea is that for key values of integer form which are larger (or smaller) than p, you can do integer division by p

  • What you are interested in is the remainder—in other words, the modulus

  • The range of possible modulus values when dividing by p is 0 through p – 1

  • This is the new, limited address space defined by the hashing scheme


  • A simple example will illustrate the idea

  • Let the key of interest be 9 digit social security numbers

  • Let the desired address space be 20

  • 23 is the smallest prime number larger than 20
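  • A minimal sketch of the computation in Python (the sample social security number is made up for illustration):

    p = 23            # smallest prime larger than the address space of 20

    ssn = 123456789   # hypothetical 9-digit key, not from the slides
    address = ssn % p # integer-divide by p and keep the remainder
    print(address)    # 11 -- an address in the 0..22 space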




  • In other words, hashing is deterministic: you get the same hash value back every time for a given key value

  • This means that the collision occurs again, but this is not a problem

  • The only thing you have to worry about is where you store two different things that hash to the same location in the 0-22 address space

  • There are basically two approaches


  • The first approach is to maintain an overflow area at the end of a page

  • Suppose you hash something on look-up

  • You go to the hash address obtained

  • When you get there, you do not find the desired key value

  • Then go to the overflow area at the end of the page and do linear search in it


  • Alternatively, if records collide, they can simply be displaced

  • In other words, let there be a collision upon data entry

  • Simply search forward in the address space until the next empty slot is found and place the new record there

  • The same strategy is used for finding the right record when accessing data later
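  • Below is a minimal Python sketch of this second, displacement approach (linear probing), assuming a fixed table of p = 23 slots and integer keys; the names are hypothetical:

    P = 23                  # prime address space from the earlier example
    table = [None] * P      # each slot holds a (key, record) pair or None

    def insert(key, record):
        """Hash the key; on collision, probe forward to the next empty slot."""
        for i in range(P):
            slot = (key % P + i) % P
            if table[slot] is None:
                table[slot] = (key, record)
                return slot
        raise RuntimeError("address space is full")

    def lookup(key):
        """Recompute the hash and probe forward the same way until found."""
        for i in range(P):
            slot = (key % P + i) % P
            if table[slot] is None:
                return None              # reached an empty slot: not stored
            if table[slot][0] == key:
                return table[slot][1]
        return None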



  • Note that the possibility of collisions imposes a restriction on this scheme

  • It won’t conveniently support existence queries or incorrect key value input

  • If the key input on search isn’t valid, then you start a fruitless search for where that value hashed to.


File Organization and Access

  • The previous discussions of indexing and hashing may have seemed somewhat disjointed

  • Now that both topics have been covered, it’s possible to summarize some of the choices for maintaining tables in secondary storage and their advantages and disadvantages

  • Choices like indexing are available to users

  • Other choices, like clustering and hashing, would only be available to database administrators


Arrival Order

  • File organization: Arrival order—this is standard

  • Indexed: Yes, possibly on >1 field

  • Access: Random and sequential by index

  • Performance: Good for both

  • Maintenance and cost: None on base file; update and deletion maintenance costs on index(es)


Clustered

  • File organization: Sequential—in other words, maintained in sorted order by some field

  • Indexed: Not necessarily—possibly desirable on non-key fields, sparse if on key field

  • Access: Sequential on key. No other unless indexed

  • Performance: Perfect sequential access on the key

  • Maintenance and cost: Overflow and reorganization of base table—cost is horrendous


Hashed

  • File organization: Hashed (on one field only)

  • Indexed: Typically, no—hashing implies that the principal goal is random access; you don’t need sequential access and don’t want the cost of index maintenance

  • Access: Direct/random (only)

  • Performance: The best possible direct access

  • Maintenance and cost: Reorganization if the address space fills or there are too many collisions


  • Notice that choosing hashed file organization is a specialized option that would be available to a database administrator

  • It is not necessarily part of a standard database application

  • It is used when direct access is critical to performance

  • Historically, things like airline ticketing databases have driven the need for extremely quick direct access


Joining

  • Historically, the performance costs of joining were one of the things that made the relational model impractical

  • As noted in the chapter on the relational model, implementing the theoretical definition of a join is out of the question:

  • Form the Cartesian product of two tables and then perform selection and projection on the results…



  • There is also a concept known as inter-file clustering:

  • This means that the records for two different tables are stored intermixed

  • In other words, the physical data, for example, follows a pattern such as this:

  • Mother 1, child a, child b, mother 2, child c, mother 3, child d, child e, child f, …

  • In rare cases, a database administrator may specify this as the only way to get acceptable performance

  • However, it is very costly to maintain tables in this way, and very rare


  • Nested loop would be the naïve option for files in arrival order with no indexes on the joining fields

  • This kind of algorithm would be O(n²)

  • It is not out of the question, but it is important to remember an underlying reality

  • The costs of these algorithms are in secondary storage access, not main memory access

  • Anything above linear is painfully expensive
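  • A minimal Python sketch of nested loop join, assuming each table is a list of dicts joined on a hypothetical field "k":

    def nested_loop_join(table_a, table_b, field="k"):
        """Compare every record of A with every record of B: O(n^2)."""
        result = []
        for a in table_a:        # one pass over A
            for b in table_b:    # full pass over B for *each* A record
                if a[field] == b[field]:
                    result.append({**a, **b})
        return result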


  • If both tables are indexed on their joining fields, then merge join becomes possible on the indexes

  • This is certainly better than having nothing to work with at all, but there is still a warning:

  • Even though progression through indexes is linear, the retrieval of records from the tables may not follow this pattern

  • In other words, if the records themselves are not clustered, you may read the same page more than once at different times in order to access the various records on it
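  • A minimal Python sketch of merge join, assuming both inputs arrive already sorted on the join field, as scanning the two indexes would provide; the names are hypothetical:

    def merge_join(sorted_a, sorted_b, field="k"):
        """Advance two cursors through inputs pre-sorted on the join field."""
        result, i, j = [], 0, 0
        while i < len(sorted_a) and j < len(sorted_b):
            ka, kb = sorted_a[i][field], sorted_b[j][field]
            if ka < kb:
                i += 1
            elif ka > kb:
                j += 1
            else:
                # Pair this A record with the whole run of equal B records.
                k = j
                while k < len(sorted_b) and sorted_b[k][field] == ka:
                    result.append({**sorted_a[i], **sorted_b[k]})
                    k += 1
                i += 1           # the B cursor stays at the start of the run
        return result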


Hash Join

  • It turns out that hashing was the basis for a joining algorithm that finally made the relational model practical

  • The fundamental of hashing is the same as explained above, but it’s applied in a different way

  • A preliminary picture of main memory and secondary storage is given on the next overhead

  • The assumptions about and relationships between the tables, records, buckets, and pages will be explained following it





  • The term “bucket” refers to one collection of things (table records) that hash to the same value

  • As seen in the picture given earlier, the expectation is that multiple pages worth of records will hash into a single bucket

  • Hashing now is being used to group together things that have something in common, not to map individual items into an address space


  • What follows is a numbered list of assumptions needed in order to explain how hash join works:

  • 1. More than one table record fits into a page of memory

  • 2. When hashing, more than one page’s worth of records will hash to the same value

  • Let the collection of things that hash to the same value be referred to as a bucket


  • 3. Note that you’re not worried about collisions

  • If there is more than one record with the same key value that hashes to the same bucket, that’s OK

  • It’s also OK if there are genuine collisions, where different key values hash to the same bucket


  • 4. The parameters that have to be tuned in order for this to work involve the size of the hash space (i.e., the number of different buckets) relative to the sizes of the tables to be joined.

  • Hash join proceeds in two phases

    • 1. a reading/hashing/writing phase

    • 2. a reading/joining/writing phase


  • In order for the scheme to work:

  • During phase 1, at least one page for each bucket has to fit in memory at the same time

  • During phase 2, all of the pages for corresponding buckets of A and B have to fit in memory at the same time


  • For phase 1, this means that the number of buckets total can’t exceed the size of memory in pages allocated to the process

  • For phase 2, this mathematical expression will clarify the limitation on buckets/pages vs. the memory allocated to the process:

  • max over all buckets i: size(bucket Aᵢ) + size(bucket Bᵢ) <= size(main memory)


  • Specifically, the tuning involves the following:

  • It depends on how the hash key values are distributed in the tables A and B

  • It depends on the hashing algorithm and how many buckets are chosen for it

  • It involves the overall amount or size of memory that will be available at run time for the algorithm



  • Phase 2 of the hash join algorithm can be described as follows:

  • Read tables A and B back in from secondary storage bucket by matching bucket

  • Note that collisions don’t matter

  • You may have a mixture of different key values, but the same key values from A and B will be present

  • Use a memory resident algorithm to form the join of the bucket contents

  • Write them back out


  • Observe that it doesn’t matter if the memory-resident algorithm is something inefficient like O(n²)

  • The critical point is the following: Access to secondary storage, paging, has been optimized

  • In total, each of the records of A and B, that is, each page containing records of A and B, is read exactly twice

  • A and B are read once during phase 1 and again during phase 2
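  • The following minimal Python sketch mirrors the two phases just described, with in-memory lists standing in for the bucket files on disk; all names are hypothetical:

    def hash_join(table_a, table_b, field="k", n_buckets=8):
        """Two phases: (1) hash both tables into buckets, (2) join per bucket."""
        # Phase 1: one linear read of each table; write each record to the
        # bucket its join-key hash selects.
        buckets_a = [[] for _ in range(n_buckets)]
        buckets_b = [[] for _ in range(n_buckets)]
        for a in table_a:
            buckets_a[hash(a[field]) % n_buckets].append(a)
        for b in table_b:
            buckets_b[hash(b[field]) % n_buckets].append(b)

        # Phase 2: read matching buckets back in pairs. Records can only
        # join if they hashed to the same bucket; genuine collisions are
        # filtered out by the equality test.
        result = []
        for bucket_a, bucket_b in zip(buckets_a, buckets_b):
            for a in bucket_a:             # memory-resident join;
                for b in bucket_b:         # O(n^2) here costs no extra I/O
                    if a[field] == b[field]:
                        result.append({**a, **b})
        return result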


  • Hash join is O(n) in I/O costs

  • Access to secondary storage is potentially around 3 orders of magnitude slower than memory access

  • I/O costs will dominate any algorithm that involves access to secondary storage

  • Devising an algorithm that’s linear in I/O costs means that whatever you have to do in memory is of no performance consequence


B+ Trees

  • As noted above, the application of hashing was critical to making relational database systems practical

  • The development of indexes was equally important

  • As stated before, real indexes are not, in fact, simple look-up tables

  • In reality, indexes take a tree-like form

  • Also, they are not simply indexes

  • The records in a table can be stored in a tree-like structure that simultaneously serves as an index


  • The classic, original development of this was known at IBM as VSAM

  • This stood for virtual storage access method

  • The tree structure is known as a B tree, or depending on the implementation, a B+ tree

  • How B trees work is the next topic

  • Before considering them in detail, it is useful to compare what they accomplish with the other file organization and access options listed earlier


  • File organization: VSAM—to be described soon

  • Indexed: This is a B tree index with data records stored in the index nodes

  • Access: Random and sequential—where this doesn’t necessarily require a separate index hit, since the data and index are unified

  • Performance: Access to any record (page) is bounded by log_n(number of records in file), where n = the number of records per index node

  • Maintenance and cost: The insertion and deletion algorithms automatically maintain the data and indexing simultaneously


B+ Tree Background

  • 1. You can think of B+ trees as being the hard-coded equivalent of binary (or in general, base n) search.

  • 2. The B in the name means balanced.

  • The nodes in the tree may vary in how many entries they contain, but balanced means that all of the leaves are the same distance from the root.


  • 3. The balance of the tree is desirable because it places an upper bound on the number of pages that have to be read in order to get any value.

  • The bound is O(log_n(number of records in file)), where n = the number of records per index node; a short worked example follows this list

  • 4. If + is included in the name of the data structure, this signifies that in addition to providing indexed access to file records, links are provided which allow the records to be accessed in sequential order without traversing the index tree.
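  • As a quick illustration (the numbers are chosen for convenience, not taken from the chapter): with n = 100 records per index node and 1,000,000 records in the file, any record can be reached in at most log_100(1,000,000) = 3 node (page) reads, since 100³ = 1,000,000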


Example

  • An example of a B+ tree at a certain stage of development is shown on the next overhead.

  • It is taken from page 4 of part 1 of the assignment keys.

  • The question of how insertions and deletions are made will be addressed later.

  • At this point it is simply desirable to see a tree and explain its contents.


  • The tree structure represents an index on a field in a table.

  • The tree consists of nodes which each fit on a single page of memory.

  • In this diagram, the pairs of parentheses and their contents represent the nodes in the tree.

  • The integers are values of the field that is being indexed.


  • This field may not be a key field in the table, but in general, when indexing, the field that is being indexed on can be referred to as the key.

  • The nodes also contain pointers.

  • In this diagram the pointers are represented by arrows.

  • In reality, the pointers would be stored in the nodes as addresses referring to other nodes.


  • This illustration is set up as a pure index.

  • The idea behind VSAM is that each node (page) is large enough to hold not just a key value, but the complete record containing it.

  • This would mean that the contents of a file were conceptually stored as a tree—

  • And that the file contents would be essentially self indexing.



  • The leaf nodes of the tree form what is called the sequence set; the nodes above them form the index set.

  • From the sequence set it is possible to point to the pages containing the actual table records containing those key values.

  • This is indicated by the vertical arrows pointing down from the leaf nodes.

  • The horizontal arrows between the leaf nodes represent the linkage that makes it possible to access the key values in sequential order using this index.





  • A node is required to remain at least half full; if n (the number of pointers) is odd, you round up, and the minimum is (n / 2) + 1, using integer division.

  • Some books use the ceiling-function notation, ⌈n/2⌉, which means the same thing.

  • Because fullness is measured by the number of pointers, it is possible for it to appear less than half full when looking at the number of key values present in a node.

  • Finally, it is permissible in general for the root node to fall below half full.


  • Another thing becomes apparent about B+ trees from looking at the example.

  • In each node the key values are in order.

  • There is also a relationship between the order of the key values in one node, the pointers coming from it, and the values in the nodes pointed to by these pointers.

  • This relationship is intrinsic to the meaning of the contents of the tree and will be explained further below when covering the rules for inserting and deleting entries.


  • It is also apparent that the index set is sparse while the sequence set is dense.

  • In other words, the leaves contain all key values occurring in the table being indexed.

  • Some of these key values occur in the index set, but the majority do not.

  • If a key value does occur in the index set, it can only occur there once.


  • It will become evident when looking at the rules for inserting values how this situation comes about.

  • When the tree is growing, a value in a sequence set node can be copied into the index set node above it.

  • However, when values are promoted from one index set node to another they are not copied; they are moved.


  • A final remark can be made in this vein.

  • The example shows creating a B+ tree on the primary key of a table, in other words, a field that is unique.

  • All of the example problems on this topic will do the same.

  • If the index were on a non-unique field, the difference would show up only in the sequence set.

  • It would be necessary at the leaf level to arrange for multiple pointers from a single key value, pointing to the multiple records that contained that key value.


Creating and Updating B+ Trees

  • Some authors present the rules for creating and maintaining B+ trees as a set of mathematical algorithms.

  • Others give pseudo-code or code for implementations.

  • There is also a certain degree of choice in both the algorithm and its implementation.

  • What will be given here are sets of rules of thumb that closely parallel Korth and Silberschatz.


  • The kinds of test questions you should be able to answer about B+ trees would be like the assignment questions.

  • In other words, given the number of key values and pointers that a node can contain, and given a sequence of unique key values to insert and delete, you need to be able to create and update the corresponding B+ tree index.


Summary of the Characteristics of a Correctly Formed Tree

  • Some general rules of thumb that explain the contents of a tree are given beginning on the next overhead.

  • More specific rules for insertion and deletion are given in following lists.

  • At the outset, however, it’s helpful to have a few overall observations.


General Rules of Thumb, 1—Sequence Set

  • At the very beginning the whole tree structure would consist of only one node, which would be both the index set and the sequence set at the same time.

  • After the first node is split there is a distinction.

  • The meaning of pointers coming from and between sequence set nodes has already been given above and no further explanation is needed.

  • The remaining remarks below address the considerations of index set nodes specifically.


General Rules of Thumb, 2—Index Set

  • If a key value appears in a node, it has to have pointers on each side of it.

  • In other words, the existence of a value in a node fundamentally signals “branch left” or “branch right”.

  • In the algorithm for the insertion of values it will become apparent that as the tree grows, a new value in an index set node is promoted from a lower node to indicate branching to the left or right.


General Rules of Thumb, 3—Index Set

  • The pointer to the left of a key value points to the subtree where all of the entries are strictly less than that key value.

  • The pointer to the right of a key value points to the subtree where all of the entries are greater than or equal to that key value.

  • The “greater than or equal to” is part of the logic of the tree that allows sequence set values to appear in the index set, thereby creating the index.
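  • A minimal Python sketch of descending a tree by this branching rule, assuming a hypothetical node structure in which keys is sorted and children holds one more pointer than there are keys:

    import bisect

    class Node:
        def __init__(self, keys, children=None):
            self.keys = keys                  # sorted key values in the node
            self.children = children or []    # len(children) == len(keys) + 1

    def find_leaf(node, key):
        """Descend to the sequence-set node where `key` is or would be."""
        while node.children:                  # an empty list marks a leaf
            # bisect_right sends equal keys to the right-hand subtree,
            # matching the "greater than or equal to" rule above.
            i = bisect.bisect_right(node.keys, key)
            node = node.children[i]
        return node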


General Rules of Thumb, 4—Index Set

  • As insertions are made, it is possible for a node to become full.

  • If it is necessary to insert another value into a full node, that node has to be split in two.

  • The detailed rules for splitting are given below.


General Rules of Thumb, 5—Index Set

  • Deletions can reduce a node to less than half full.

  • If this happens, sibling nodes have to be merged.

  • The detailed rules for merging are given below.


Inserting and Deleting

  • There is an important conceptual difference between balanced trees and other tree structures you might be familiar with.

  • In other trees you work from the root down when inserting and deleting.

  • This leads to the characteristic that different branches of the tree may be of different lengths.


  • In order to maintain balance in a tree, it’s necessary to work from the leaves up.

  • You use the tree to search downward to the leaf (sequence set) node where a value either would fall, or is.

  • You then either insert or delete accordingly, and adjust the index set above to correspond to the new situation in the leaves.


  • Enforcing the requirements on the fullness of nodes leads to either splitting or merging.

  • As a consequence of the adjustment to the index set, the depth of the whole tree might grow or shrink depending on whether the inserting/splitting or deleting/merging propagate all the way back up to the current root node of the tree.


Rules of Thumb for Inserting

  • Here is a list of the rules of thumb involved in inserting a new value into the tree.

  • 1. Search through the tree as it exists until you find the sequence set node where the key value belongs.

  • 2. If there is room in the node, simply insert the key value in order. Such an insertion has no effect upwards in the index set.

  • 3. If the destination leaf node is full, split it into 2 nodes and divide the key values evenly between them.




  • 7. In general, when a node is split, the leftmost value in the new right sibling is promoted to the parent.

  • The fact that it is always the leftmost value that is promoted is explained by the fact that after promotion its right pointer points to a subtree containing values greater than or equal to that value.

  • Promoting itself takes on two different meanings.


  • When a value is inserted into a sequence set node and is promoted from there into the index set, what is promoted is a copy of that value.

  • This explains how sequence set values appear in the index set.

  • However, if further up a value is promoted from one index set node into another, it is moved, not copied.

  • This explains why a value can appear at most twice in the tree, once in the sequence set and only once in the index set.


  • 8. The splitting and promoting process is recursive.

  • If the parent is already full and a value is to be added to it, the parent is split into two siblings and its parent is adjusted accordingly.



  • In other words, when the parent is split, 2 new pointers arise when the number of children only rises by one.

  • However, the problem is resolved because the split in the parent requires that the leftmost value in the new right parent also be promoted, and this promotion is a move, not a copy.




Deleting

  • As described above, the splitting of nodes is binary, resulting in two new sibling nodes.

  • This is a simple and unremarkable result of the insertion algorithm.

  • Deletion and merging introduce a slight complication.

  • If a deletion causes a node to fall below half full, it needs to be merged with another node.

  • The question is, which one, the left sibling or the right sibling?


  • Except for the root, every node will have at least one sibling.

  • In general, it may have zero or more on each side.

  • Should it be merged only with an immediate neighbor, and if so, should it be the one on the left or the right?

  • The rules of thumb below embody the arbitrary decision to merge with the sibling on the immediate right, if there is one, and otherwise take the one on the immediate left.


  • In developing rules of thumb for this there is another consideration with deletion that leads to more complication than with insertion.

  • It may be that the sibling that you merge with has the minimum permissible number of values in it.

  • If this is the case, the total number of values would fit into one node and you would truly merge.


  • If, however, the sibling to be merged with is over half full, merging alone would not result in the loss of a node.

  • The values would simply have to be redistributed between the nodes.

  • The situation where the two nodes would actually merge into one would be rare in practice.

  • However, it is quite possible with examples where the nodes can only contain a small number of values and pointers.


  • Just as with splitting, merging can trickle all of the way back up to the root.

  • If it reaches the point where the immediate children of the root are merged into a single node, then the original root is no longer needed.

  • This is how the tree shrinks in a balanced way.

  • Situations where nodes are merged and the values are redistributed between them will still require that the values and pointers in their parent be adjusted.


  • Finally, a simple deletion from the sequence set which does not even cause a merge can have an effect on the index set.

  • This is because values in the index set have to be values that exist in the sequence set.

  • If the value disappears from the sequence set, then it also has to be replaced in the index set.

  • This is as true for the root node as for any other.


  • Here is one final note of explanation that is directly related to the examples given.

  • In order to make the examples more interesting, the following assumption has been made:

  • You measure the fullness of a sequence set node strictly according to the same standard as an index node.


  • Take the case where an index set node contains 3 key values and 4 pointers for example

  • In the sequence set a node would contain 3 key values and 3 pointers

  • An index set node might have only one key value in it, but is considered half full because it has two pointers in it.

  • If a sequence set node falls to one key value, then it only has one pointer in it, the pointer to the record.

  • Thus, this sequence set node has to be merged with a sibling.


Rules of Thumb for Deleting

  • Here is a list of the rules of thumb involved in deleting a value from the tree.

  • 1. Search through the tree as it exists until you find the sequence set node where the key value exists.


  • 2. Delete the value.

  • If the value can be deleted without having the node drop below half full, no merging is needed.

  • However, if the deleted value was the leftmost in a sequence set node (other than the leftmost sequence set node), that value appears in the index set and has to be replaced there.

  • Its replacement will end up being the new leftmost value in the sequence set node from which the value was deleted.





  • 6. Now check the parent to see whether due to the adjustments it has fallen below half full.

  • Recall that the measure of fullness has to do with whether the number of pointers has fallen below half.

  • In most of the small scale examples given, the sure sign of trouble is when a parent has only one child.

  • A tree which doesn’t branch at each level is by definition not balanced.



  • 8. Deletions can be roughly grouped into four categories with corresponding concerns.

  • 8.1. A deletion of a value that doesn’t appear in the index set and which doesn’t cause a merge:

  • This requires no further action.

  • 8.2. A deletion of a value that appears in the index set and which doesn’t cause a merge:

  • Promote another value into its spot in the index set.


  • 8.3. A deletion which causes a redistribution of values between nodes:

  • This will affect the immediate parent; this may also be a value that appeared higher in the index set, requiring the promotion of a replacement.

  • 8.4. A deletion which causes the merging of two nodes:

  • Work back up the tree, recursively merging as necessary; also promote a value if necessary to replace the deleted one in the index set.


  • 9. If the merging process trickles all of the way back up to the root and the children of the current root are merged into one node, then the current root is replaced with this new node.

  • This illustrates how balance is maintained when deleting, because the length of all branches of the tree is decreased at the same time when the root is replaced in this way.


B+-Tree Examples

  • The first three example exercises were taken from a previous edition of Korth and Silberschatz.

  • The same problems live on in a more recent edition with different numbering.

  • They're given in the fifth edition as shown on the following overheads.


  • 12.3 Construct a B+-tree for the following set of key values:

  • (2, 3, 5, 7, 11, 17, 19, 23, 29, 31)

  • Assume that the tree is initially empty and values are added in ascending order. Construct B+-trees for the cases where the number of pointers that will fit in one node is as follows:

  • a. Four

  • b. Six

  • c. Eight



  • The example exercises are worked out on the following overheads.

  • As usual, the idea is that this may provide a helpful illustration.

  • If you decide to work the exercises yourself, it is unlikely that you would be able to memorize the given solutions.

  • Instead, they are available for you to check your own work if you want to.


B+-Trees, Example 1

  • Let the index set nodes of the tree contain 4 pointers.

  • Construct a B+-tree for the following set of key values:

  • (2, 3, 5, 7, 11, 17, 19, 23, 29, 31)



B+-Trees, Example 2

  • Let the index set nodes of the tree contain 6 pointers.

  • Construct a B+-tree for the following set of key values:

  • (2, 3, 5, 7, 11, 17, 19, 23, 29, 31)



B+-Trees, Example 3

  • Let the index set nodes of the tree contain 8 pointers.

  • Construct a B+-tree for the following set of key values:

  • (2, 3, 5, 7, 11, 17, 19, 23, 29, 31)



B+-Trees, Example 4

  • This example is not taken from Korth and Silberschatz.

  • Let the index set nodes of the tree contain 4 pointers.

  • Construct a B+-tree for the following set of key values:

  • (3, 8, 6, 9, 15, 20, 4, 25, 30, 13, 11, 7)

  • Then delete 20 and 7.



The End

