
File Storage and Indexing in Relational DBs

Learn about the physical storage of data in relational databases, indexes, and other means to speed up data access. Understand how tables are stored on the hard disk and the operations involved in accessing and modifying data.

Presentation Transcript


  1. IELM 230: File Storage and Indexes. Agenda: physical storage of data in relational DBs; indexes and other means to speed up data access; defining indexes in SQL.

  2. Physical Data Storage. All data in a DB is stored on hard disks (HD). All data in a file is a series of bits (0, 1). Each bit is stored (0 → magnetised, 1 → demagnetised) at points along tracks (concentric circles).

  3. Physical Data Storage.. [Image: a typical IDE HD controller; the 40-pin socket connects to the motherboard.]

  4. HD storage details. Schematic of data storage on a 1024-track disk. SECTOR: the smallest unit of data exchange (typical size: 512 bytes) [why?]. CLUSTER: a group of four sectors. The R/W heads move together → four R/W heads can read at the same time. CYLINDER: the tracks on different platters that can be read simultaneously.

  5. Delays in HD ↔ CPU data communication. Block: the amount of data in one sector. The four steps of one block access: (1) [request time] the CPU/RAM sends the block address to the HD controller; (2) [seek time] a stepper motor positions the R/W head above the correct track; (3) [read time] one block of data is read from the HD and stored in the HD buffer; (4) [transfer time] the block of data is sent over the data bus to RAM.

  6. Typical DB concerns. Fast data access is needed even when many users are simultaneously accessing a DB, or when the data is in a table with millions of rows. Typical operations: search for a particular row of data in a table (SELECT…); create a row in a table (INSERT…); modify some data in a row of a table (UPDATE…); delete a row of data from the table (DELETE…).

  7. How to store tables on the HD • 1. Each table is stored as an independent file. • 2. The attributes in a table are often accessed together [Why?] → we need to store the attribute values of each record contiguously, and the attributes MUST be stored in the same sequence for each record. • 3. We can choose the sequence in which different records are stored [Why?]

  8. Storage format for records. Employee(Lname, Fname, ID, DeptNo). [Diagram: Record 1, Record 2, … laid out one after another, with a field separator between attribute values and a record separator between records; the records fill Block n, Block n+1, …, and the unused space at the end of each block is wasted disk space.]

  9. Approximate time for different operations. (1) [request time] < 10^-6 sec: send the block address to the HD controller; (2) [seek time] ~3×10^-3 sec: a stepper motor positions the R/W head on the correct track; (3) [read time] ~2–4×10^-3 sec (including mean latency): read one block of data from the HD and store it in the HD buffer; (4) [transfer time] ~3×10^-3 sec: send one block of data over the data bus to RAM. CPU: searching for a record in one block of data already in RAM takes ~10^-6 sec. Example specifications (Seagate Cheetah 15K.7 600GB hard drive): spindle speed 15000 RPM; average read seek time 3.4 ms; average rotational latency 2.0 ms; SCSI transfer rate 600 MB/s; capacity 600 GB; buffer (cache) size 16 MB; 512 bytes per sector; 4 disks, 8 heads.
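
To see why the disk dominates, here is a minimal Python sketch that adds up the per-block costs using the approximate figures above (the exact values are assumptions taken from this slide, with the read time taken as roughly 3×10^-3 sec):

# Rough cost of fetching a single block, using the figures above.
REQUEST_TIME  = 1e-6   # CPU -> HD controller: send the block address
SEEK_TIME     = 3e-3   # position the R/W head over the correct track
READ_TIME     = 3e-3   # read one block into the HD buffer (incl. mean latency)
TRANSFER_TIME = 3e-3   # move the block over the data bus into RAM
CPU_SEARCH    = 1e-6   # scan one block that is already in RAM

t_block = REQUEST_TIME + SEEK_TIME + READ_TIME + TRANSFER_TIME
print(f"fetching one block ~ {t_block * 1e3:.1f} ms")            # about 9 ms
print(f"CPU share of the total ~ {CPU_SEARCH / t_block:.1e}")    # about 1e-4

The block fetch is roughly a thousand times more expensive than the in-RAM search, which is why the following slides count only block transfers.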

  10. Time analysis of operations on a DB. For each block: 1. TRANSFER the block of data to RAM — a few 10^-3 sec; 2. search for the data in the BLOCK ([transfer data from RAM to CPU] + [examine data] + [report output]) — a few 10^-6 sec. The total time for an operation (e.g. searching for a record in a DB) is therefore a few 10^-3 sec per block. Since the TRANSFER time dominates, we will ignore CPU time in all further analysis.

  11. Heap Files. HEAP file: all records of the table are stored in the order of creation, in one large file, on contiguous blocks on the HD. Operation: insert a new record. Method: get the file data (location of the 1st block, size of the file); transfer the last block from HD → RAM (t sec); if there is enough space, add the record to the block and write the updated block back to the HD (t sec); else increment the file size by 1 block, add the record to the new block, and write the updated block back to the HD. Worst case time = 2t sec (very fast).
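
As an illustration, a minimal Python sketch of this insert, modelling the heap file in memory as a list of fixed-capacity blocks (the capacity of 4 records per block and the record format are illustrative assumptions):

BLOCK_CAPACITY = 4  # records per block; illustrative value

def heap_insert(blocks, record):
    """Heap-file insert: look at the last block (one transfer), add the
    record there if it fits, otherwise grow the file by one new block;
    the updated block is then written back (one more transfer)."""
    if blocks and len(blocks[-1]) < BLOCK_CAPACITY:
        blocks[-1].append(record)      # room in the last block
    else:
        blocks.append([record])        # file grows by one block
    return blocks

heap = [[("Smith", "John", 1001, 5)]]
heap_insert(heap, ("Wong", "Amy", 1002, 5))   # worst case ~2 block transfers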

  12. Heap file operations.. Operation: search for a record. Method: linear search. (1) Transfer the 1st block → RAM; (2) (CPU) search for the record in this block; (3) if no match is found: (3.1) copy the next block into RAM, (3.2) go to (2). Performance: let the size of the file be B blocks. The worst case is when the data is in the last block, or not in the table at all: worst case time = Bt (very slow); average case time = Bt/2 (very slow).
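
A minimal Python sketch of the linear search over the same in-memory block model (records are assumed to be dictionaries keyed by attribute name):

def heap_search(blocks, attr, value):
    """Linear search: fetch the blocks one at a time and scan each in RAM;
    the worst case touches all B blocks (match in the last block, or none)."""
    for block_no, block in enumerate(blocks):   # one block transfer per pass
        for record in block:                    # CPU scan of the block in RAM
            if record.get(attr) == value:
                return block_no, record
    return None                                 # record is not in the table

emp = [[{"ID": 1001, "Lname": "Smith"}], [{"ID": 1002, "Lname": "Wong"}]]
print(heap_search(emp, "ID", 1002))             # -> (1, {...})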

  13. Heap file operations.. Operation: update a record. Method: (1) search for the record to update (linear search); (2) if found, modify the record and write the updated block back to the HD. Performance: let the size of the file be B blocks. Worst case: step (1) = Bt, step (2) = t, so worst case time = Bt + t (very slow); average case time ≈ (Bt + t)/2 (very slow).

  14. Heap file operations.. Operation: delete a record. Method: same as for update. Performance: same as for update. Problem: extra space (a 'hole') is left in the block that held the deleted record. Typical solutions: (a) periodic consolidation of blocks; (b) use of a 1-bit 'RECORD_DELETED' marker per record.
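
A minimal Python sketch of solution (b) plus periodic consolidation, using the same in-memory block model (the 'deleted' flag stands in for the 1-bit RECORD_DELETED marker):

def heap_delete(blocks, attr, value):
    """Mark the matching record as deleted instead of moving data around;
    the block containing it would then be rewritten to the HD."""
    for block in blocks:
        for record in block:
            if record.get(attr) == value and not record.get("deleted"):
                record["deleted"] = True        # leaves a 'hole' in the block
                return True
    return False

def consolidate(blocks, capacity=4):
    """Periodic consolidation: rebuild the file without the deleted records."""
    live = [r for b in blocks for r in b if not r.get("deleted")]
    return [live[i:i + capacity] for i in range(0, len(live), capacity)]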

  15. Sorted Files. Main idea: sort the records in the file based on one attribute value (the ordering attribute/field). [Example diagram: a table sorted by SSN.]

  16. Sorted file operations. Operation: search for a record, given the value of the ordering field. Method: binary search. Let the file size be b blocks. 1. Look in block number ⌈b/2⌉. If the searched record is in this block, DONE. If the searched value > the last ordering field value in this block, binary search in blocks (⌈b/2⌉ + 1) to b; else binary search in blocks 1 to (⌈b/2⌉ − 1). Performance: worst case time = t(1 + log2 b).
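
A minimal Python sketch of this block-level binary search, assuming each block is a list of records kept sorted on the ordering field:

def sorted_search(blocks, key, value):
    """Binary search over whole blocks: each probe costs one block transfer,
    so a b-block file needs at most about 1 + log2(b) transfers."""
    lo, hi = 0, len(blocks) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        block = blocks[mid]                 # one block transfer
        if value < block[0][key]:
            hi = mid - 1                    # continue in the lower half
        elif value > block[-1][key]:
            lo = mid + 1                    # continue in the upper half
        else:                               # value falls inside this block
            return next((r for r in block if r[key] == value), None)
    return None

ssn_file = [[{"SSN": 1101}, {"SSN": 1105}], [{"SSN": 1203}, {"SSN": 1208}]]
print(sorted_search(ssn_file, "SSN", 1208))   # -> {'SSN': 1208}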

  17. Sorted file operations.. Heap file vs. sorted file: search time comparison. ASSUME: file size = 8192 blocks. Heap file: worst case time = 8192t. Sorted file: worst case time = t(1 + log2 8192) = t(1 + 13) = 14t. Searching the sorted file is 8192/14 ≈ 585 times faster.

  18. Sorted file operations… Operations: delete a record / update a value in a record. Method: binary search for the record; modify and write the block. Performance: worst case time = t(1 + log2 b) + t (fast), i.e. the worst case search time plus one block write. NOTE: 1. We still need to perform occasional 'file compacting' after deletions. 2. What if we want to modify the ordering attribute value?

  19. Sorted file operations…. Operations: insert a new record, or update the ordering attribute value of a record. Method 1: insert the record in its correct position by ordering field. 1. Search for the correct block in which to insert the record. 2. If the block is full: 2.1 remove the last record in the block; 2.2 insert the new record and rewrite the block; 2.3 insert the record removed in step 2.1 into the next block, and so on… Performance: searching for the insertion point ≈ t(1 + log2 b), plus reading and writing each following block ≈ 2bt. Very inefficient (a sketch follows below).
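
A minimal Python sketch of Method 1 on the in-memory block model, showing the cascade of block rewrites (the position search is shown as a simple scan; in practice it would be the binary search above):

def sorted_insert(blocks, key, record, capacity=2):
    """Insert into a sorted file by shifting: put the record in its correct
    block; if that block overflows, push its last record into the next block,
    and so on -- in the worst case every remaining block is read and rewritten."""
    pos = 0
    while pos < len(blocks) - 1 and record[key] > blocks[pos][-1][key]:
        pos += 1                            # locate the correct block
    carry = record
    for block in blocks[pos:]:              # cascade through the tail blocks
        block.append(carry)
        block.sort(key=lambda r: r[key])
        if len(block) <= capacity:
            return blocks                   # cascade stops here
        carry = block.pop()                 # overflowing last record moves on
    blocks.append([carry])                  # file grows by one block
    return blocks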

  20. Sorted file operations….. Operation: search for a record in a table stored as Sorted + Overflow files (Method 2, next slide). Method: 1. binary search in the Main file; 2. if not found there, linear search in the Overflow file. Performance: [exercise]

  21. Sorted file operations…. Operations: insert a new record, or update the ordering attribute value of a record. Method 2: overflow files. Use two files to store a table: the Main file contains most of the records and is SORTED; the Overflow file holds recently inserted records and is a HEAP. At periodic intervals, the Overflow file records are merged into the Main file. Performance: insertion time = 2t (constant time, very fast), plus the occasional time to consolidate the Overflow and Main files.
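
A minimal Python sketch of Method 2: inserts go to a heap-style overflow list, and a periodic merge rebuilds the sorted main file (the function names and block capacity are illustrative):

def overflow_insert(overflow, record):
    """New records simply go to the heap-style overflow file, so insertion
    stays at roughly two block transfers regardless of the main file size."""
    overflow.append(record)
    return overflow

def merge_overflow(main_blocks, overflow, key, capacity=4):
    """Periodic consolidation: fold the overflow records back into the
    sorted main file and empty the overflow file."""
    records = [r for b in main_blocks for r in b] + overflow
    records.sort(key=lambda r: r[key])
    overflow.clear()
    return [records[i:i + capacity] for i in range(0, len(records), capacity)]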

  22. Faster search: Hashing. Main idea: divide the data into a series of organized “buckets”. Setting up a hash table: 1. estimate the maximum size of the table (e.g. 10,000 blocks); 2. specify the maximum search time for a record (e.g. 10t); 3. determine the bucket size (here, 10 blocks); 4. determine a hashing attribute; 5. determine a hashing function h( ), with h(hash_attribute_value) = Bucket_number; 6. reserve max_size contiguous blocks on the HD.

  23. Using a hash file. Insert a record (let bucket size = b blocks): 1. compute the bucket address Addr = h(hash key value); 2. get the block at address Addr into RAM; 2.1 if there is enough space, insert the record and rewrite the block to the HD; 2.2 else set Addr = Addr + 1 and go to step 2. NOTE: 1. the selection of h( ) is critical: h(hash_key_values) must be uniformly distributed over the 1..n buckets; 2. what happens if a bucket is full? Performance: constant time for Search, Insert, Delete, Update (a toy sketch follows below).
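
A toy Python sketch of the hash-file setup from the previous slide and of this insert procedure; the bucket count, the bucket capacity, and the use of Python's built-in hash() as h( ) are illustrative assumptions, and the buckets here hold records directly rather than blocks:

NUM_BUCKETS = 1000      # e.g. 10,000 blocks / bucket size of 10 blocks
BUCKET_SIZE = 10        # capacity of one bucket (records here, blocks on disk)

def h(hash_key_value):
    """Hashing function: must spread key values uniformly over the buckets."""
    return hash(hash_key_value) % NUM_BUCKETS

def hash_insert(buckets, record, attr):
    """Compute the bucket address from the hash key; if that bucket is full,
    probe the following buckets (the 'bucket is full' case noted above)."""
    addr = h(record[attr])
    while len(buckets[addr]) >= BUCKET_SIZE:   # bucket full -> try the next one
        addr = (addr + 1) % NUM_BUCKETS
    buckets[addr].append(record)
    return addr

buckets = [[] for _ in range(NUM_BUCKETS)]
hash_insert(buckets, {"SSN": 1208, "Lname": "Smith"}, "SSN")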

  24. Indexes. Hash files sacrifice extra disk space [why?] for operation speed. Another way to use extra space for faster operations: index files. A primary index file is an index constructed on the sorting attribute of the main file (default sorting attribute: the primary key).

  25. Primary Index. [Example diagram: a primary index file whose entries pair an ordering key value with the address of the main-file block that holds it.]

  26. Primary Index.. Operation: search for a record in the main file. Procedure: 1. binary search for the block address of the record in the primary index file; 2. fetch the block of the main file containing the searched record into RAM; 2.1 search this block for the data. Performance: let the size of the primary index file be P blocks. Worst case time to locate the block address ≈ t(1 + log2 P); time to fetch the located block from the main file = t; total worst case time ≈ t(1 + log2 P) + t = t(2 + log2 P) (very fast).
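
A minimal Python sketch of this lookup, assuming the index is held as (first key in block, block address) pairs, one entry per main-file block:

def index_search(index, main_blocks, key, value):
    """Binary-search the index for the last entry whose first key is <= value
    (~log2 P probes), then fetch that one block of the main file and scan it."""
    lo, hi = 0, len(index) - 1
    addr = None
    while lo <= hi:
        mid = (lo + hi) // 2
        first_key, block_addr = index[mid]
        if value < first_key:
            hi = mid - 1
        else:
            addr = block_addr               # best candidate block so far
            lo = mid + 1
    if addr is None:
        return None                         # value is below every block
    block = main_blocks[addr]               # one more transfer for the block
    return next((r for r in block if r[key] == value), None)

index = [(1101, 0), (1203, 1)]              # first SSN in each main-file block
main  = [[{"SSN": 1101}, {"SSN": 1105}], [{"SSN": 1203}, {"SSN": 1208}]]
print(index_search(index, main, "SSN", 1208))   # -> {'SSN': 1208}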

  27. Primary Index… Example: search for the record with SSN = ‘1208’. 1. Binary search in the P blocks of the index file → the entry for SSN = ‘1208’ points to Block 3 of the main file. 2. Fetch Block 3 of the main file. 3. Find the data for SSN = ‘1208’.

  28. Primary Index…. Operation: insert a record into the main file. Problem: the main file must stay sorted by the sorting attribute → inserting into the correct position is too expensive. Solution: newly inserted records are stored in an Overflow file. NOTE: the Overflow file may be a hash file (fast) or a heap file. Performance analysis: constant time (add the record to the last block of the Overflow file).

  29. Secondary Indexes. A secondary index file is an index constructed on any non-sorting attribute of the main table. The secondary index is a two-column file storing the block address for every value of the secondary index attribute that appears in the table.
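
As an illustration, a small Python sketch that builds such a two-column structure for a non-ordering attribute over the in-memory block model (an attribute value may map to several block addresses, since the main file is not sorted on it):

def build_secondary_index(main_blocks, attr):
    """Secondary index: one entry per attribute value, listing the block
    address(es) where records with that value are stored; entries are kept
    sorted by the attribute value so the index itself can be binary-searched."""
    entries = {}
    for block_no, block in enumerate(main_blocks):
        for record in block:
            entries.setdefault(record[attr], set()).add(block_no)
    return sorted(entries.items())          # [(value, {block addresses}), ...]

main = [[{"SSN": 1101, "Lname": "Wong"}], [{"SSN": 1208, "Lname": "Smith"}]]
print(build_secondary_index(main, "Lname")) # -> [('Smith', {1}), ('Wong', {0})]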

  30. Secondary Indexes.. Example: Secondary Index on Lname

  31. Secondary Indexes… Operations and time analysis: similar to the primary index. Each table can have only one primary index, but you can define more than one secondary index file. Why would we create more than one index for the same table?

  32. Creating, Deleting Indexes in SQL.
  Example 1: create an index file on the Lname attribute of EMPLOYEE.
  CREATE INDEX myLnameIndex ON EMPLOYEE(Lname);
  Example 2: you can also create an index on a combination of attributes.
  CREATE INDEX myNamesIndex ON EMPLOYEE(Lname, Fname);
  Example 3: delete the index created in Example 2.
  DROP INDEX myNamesIndex;
