
Database Performance Part 1—Topics


Presentation Transcript


  1. Database Performance Part 1—Topics • Storing Data • Retrieving Data • Costs of Retrieving Data • Reasons for Concern • Data Volume Analysis • Data Usage Analysis • Enhancement Mechanisms • Indexes

  2. Default SQL Server Data Storage • Data in tables is stored on pages, and there are eight pages per extent. • When more space is needed, an entire extent is added to the database • Each row (record) in the database is physically stored on a page and in an extent • Each row has a RowID and PageOffset that identifies it and its location in the page
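
As a hedged illustration of the RowID/page idea, newer versions of SQL Server expose an undocumented virtual column, %%physloc%%, that reports the file, page, and slot where each row currently lives; the table name below is hypothetical and the feature is for exploration only, not production code.

    -- Illustrative only: %%physloc%% and sys.fn_PhysLocFormatter are undocumented
    -- SQL Server features (2008 and later); dbo.Enrollment is a hypothetical table.
    SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS [File:Page:Slot],
           *
    FROM   dbo.Enrollment;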

  3. SQL Server Data Storage (cont.) • Without a clustered index (covered later), rows are added to pages in the order of insertion. • When pages are full, rows are added to the next page in the extent. • When extents are full, new extents are created • Tables keep track of the sequence of extents that contain their contents to create a logical sequence

  4. Data Retrieval • By default, queries of tables require that each page be loaded into memory in sequence and each row examined to see if it meets the query conditions • "Full Table Scan"
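
A minimal sketch of observing a full table scan, assuming a hypothetical dbo.Enrollment table with no index on Grade; SET SHOWPLAN_TEXT asks SQL Server to return the query plan instead of running the query (it must be the only statement in its batch, hence the GO separators).

    SET SHOWPLAN_TEXT ON;
    GO
    SELECT StudentID, SectionID
    FROM   dbo.Enrollment
    WHERE  Grade = 'F';     -- with no usable index, the plan shows a Table Scan
    GO
    SET SHOWPLAN_TEXT OFF;
    GO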

  5. Data Retrieval (cont.) • The page is the basic unit of IO • The entire page is moved from physical storage to RAM for evaluation • In a pure table scan (the default method of retrieval), each record is examined to see if it matches the WHERE clause conditions (if any) • The test value and column value are moved to the CPU for testing • Records where the condition is TRUE are added to the result set • Pages are cached, and the cached copy will be read if available and needed

  6. Data Retrieval (cont.) • In SQL Server, page sizes are fixed at 8 KB • (An entire extent is 64 KB) • Some DBMSs have different page sizes • Some DBMSs allow tuning on a table-by-table basis • 8 KB is also the maximum record size • The number of records on a page depends on record size • Sum of the data sizes of each column • IO time for a pure scan increases with • Number of records • Record size (see the rough arithmetic below)
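
Rough arithmetic tying these numbers together; the 200-byte average row size and the 1,000,000-row table are assumptions for illustration.

    -- Back-of-the-envelope arithmetic (illustrative row size assumed):
    --   usable row space per 8 KB page  ~ 8,060 bytes in SQL Server
    --   assumed average row size        =   200 bytes
    --   rows per page                   ~ 8,060 / 200      = ~40 rows
    --   pages for 1,000,000 rows        ~ 1,000,000 / 40   = ~25,000 page reads per full scan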

  7. Data Retrieval Costs • Two levels of costs associated with data retrieval • Most Important: IO moving page from disk storage to RAM • Less Important: CPU effort to evaluate records • In default mode records cannot be evaluated until they have been moved into RAM • We also care about physical storage space • Less important as a performance issue • We also care about costs of reorganizing data as it is added to the DB or updated (later)
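
Both cost levels can be observed per query with the standard SET STATISTICS IO and SET STATISTICS TIME options; the table and predicate below are hypothetical.

    -- Show page reads (IO) and CPU time for a query; names are hypothetical.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*)
    FROM   dbo.Enrollment
    WHERE  Grade = 'A';
    -- The Messages output reports "logical reads" (pages touched) and "CPU time".

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;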

  8. Data Retrieval Costs (cont.) • ALL retrieval enhancement mechanisms must be evaluated on the dimensions from the previous slide • None of the enhancements come without cost • Decisions are affected by use of the data, not just pure database characteristics • Understanding organizational tasks and priorities is key • Requires a balance between technical and organizational knowledge • MIS graduates are ideally positioned to participate in this analysis

  9. Data Retrieval Costs (cont.) • The degree of the cost changes with many factors • Table sizes • Access mechanisms (paths—more later) • Nature of the query • Number of tables needed in the query • Nature of the enhancement approach • Remember that our DB design goal of minimizing storage space and redundancy (normalization) caused data to be spread around the database • More tables containing transaction logic • More complicated queries

  10. Reasons for Concern • Write the SQL query to calculate your GPA • How is query executed (if no enhancements)?
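
One possible answer, assuming hypothetical ENROLLMENT, SECTION, COURSE, and GRADE tables in which GRADE maps letter grades to grade points and COURSE carries credit hours. With no enhancements, every table in the join is read by full table scan.

    -- Hypothetical schema: ENROLLMENT(StudentID, SectionID, Grade),
    -- SECTION(SectionID, CourseID), COURSE(CourseID, CreditHours),
    -- GRADE(Grade, GradePoints).
    SELECT SUM(g.GradePoints * c.CreditHours) * 1.0 / SUM(c.CreditHours) AS GPA
    FROM   ENROLLMENT e
           JOIN SECTION s ON s.SectionID = e.SectionID
           JOIN COURSE  c ON c.CourseID  = s.CourseID
           JOIN GRADE   g ON g.Grade     = e.Grade
    WHERE  e.StudentID = 12345;   -- hypothetical student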

  11. Data Volume Analysis • We don’t have retrieval problems with small tables • Need to know how big a table will get over the life of the system to understand the potential magnitude of the problem • Q: How many records are expected in the ENROLLMENT table? • Document in the data dictionary • Estimate of number of records expected • How estimate was computed

  12. Data Volume Analysis (cont.) • Estimating DV • Absolute count: We know there are 12 possible grades that can be contained in the GRADE table • Estimate: We think that we will have 32,000 students next year (use your statistics!) • Derived: Each enrolled student takes an average of four sections per semester • Historical trends: Enrollment is growing at 2% per year • System Lifetime: Specify the expected useful life of the system→Cap on records
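
For example, combining the derived and estimated figures above, and assuming (purely for illustration) two semesters per year and a five-year system lifetime: 32,000 students × 4 sections × 2 semesters ≈ 256,000 new ENROLLMENT rows per year, or roughly 1.28 million rows over the life of the system if nothing is ever purged.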

  13. Data Volume Analysis (cont.) • Don’t forget historical data! • Are graduated or withdrawn student records retained in the STUDENT and ENROLLMENT tables? • How long will they be kept? • What is the potential size of the ENROLLMENT table if records are never discarded? • Precise entity definitions are critical in DVA • Document where or how you came up with volume estimates

  14. Data Usage Analysis • DUA is concerned with three factors • How frequently are tables accessed? • How urgent are the table accesses? • What is the access path into the table? • Usually means what fields are being compared in a WHERE clause • Including Join ON expressions • Goal is to find the high frequency, important retrievals and to put enhancements on the path used by the retrieval

  15. Data Usage Analysis (cont.) • Many frequency and urgency estimates will come from an analysis of the organization’s business practices and needs • What is max time a customer can be allowed to wait for a response? • How many sales take place a day? • Can this transaction take place in batch overnight? • How many sales are made per hour? Do we expect it to grow? • Consider electronic credit card clearing from retail stores

  16. Data Usage Analysis (cont.) • The access path is the fields being searched to find appropriate records in a transaction • What is the path taken through the sample ERD to: • Calculate your GPA? • Determine if you have met a course prerequisite? • Don’t forget checks of operational business rules made in conjunction with a transaction • What if we had a business rule that said only students with a 3.0 GPA could take ISM 4212? • How about checking prerequisites?
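
A sketch of the prerequisite check, assuming hypothetical PREREQUISITE, SECTION, ENROLLMENT, and GRADE tables; the access path runs through StudentID, CourseID, and the grade-point comparison.

    -- Hypothetical schema: PREREQUISITE(CourseID, PrereqCourseID),
    -- ENROLLMENT(StudentID, SectionID, Grade), SECTION(SectionID, CourseID),
    -- GRADE(Grade, GradePoints). Returns prerequisites not yet passed.
    DECLARE @StudentID INT, @CourseID VARCHAR(8);
    SET @StudentID = 12345;        -- hypothetical values
    SET @CourseID  = 'ISM4212';

    SELECT p.PrereqCourseID
    FROM   PREREQUISITE p
    WHERE  p.CourseID = @CourseID
      AND  NOT EXISTS (SELECT 1
                       FROM   ENROLLMENT e
                              JOIN SECTION s ON s.SectionID = e.SectionID
                              JOIN GRADE   g ON g.Grade     = e.Grade
                       WHERE  e.StudentID   = @StudentID
                         AND  s.CourseID    = p.PrereqCourseID
                         AND  g.GradePoints >= 2.0);   -- "C or better" as grade points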

  17. For Your Group's Project… • Which business transaction will be conducted the most frequently? • What SQL does it require? • Include triggers • What tables are used? • How large are the tables? • How time sensitive is the transaction? • Identify a report your organization will need • What SQL does it require? • Is it needed near-real-time or can it wait?

  18. Enhancement Mechanisms • Indices • Denormalizing tables • Partitioning tables • Hardware enhancements

  19. Indexes • If SQL Server knows the extent address, page address, and RowID of the desired data, it can go directly to the page in question (one page read into memory) and directly to the desired record • Indexes are separate storage structures that map from values in columns of tables to the location of the row from which the value was taken

  20. Indexes (cont.) • Indexes let the system search a small record to find the exact address of a large record • More records fit per page than in the main table

  21. Indexes (cont.) • There are a multitude of algorithms and techniques for implementing indexes • Computer scientists develop, test, and evaluate various indexing methods • Our indexing techniques will usually be determined by our choice of RDBMS

  22. The B-Tree (Balanced Tree) Index • [Diagram: a root page at the top points to leaf pages, which in turn point to the data pages]

  23. The B-Tree Index (cont.) • Rows in each index page are in order according to the column(s) on which the index was created • Upper-level pages have sparse populations of index values • Not all values are listed • Each entry points to a lower page with denser values • Leaf pages (nodes) contain all values within a range • Leaf pages point to the actual data page and RowID from which the index value came
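
Illustrative fan-out arithmetic (the 20-byte entry size is an assumption): if each index entry—a key value plus a page pointer—takes about 20 bytes, an 8 KB index page holds roughly 400 entries. A root page, one intermediate level, and a leaf level can then address about 400 × 400 × 400 ≈ 64 million rows, so any single key value can be located in three index page reads plus one data page read.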

  24. Clustered Index • In a clustered index the data rows are physically in the order specified by the index key • Leaf nodes in the index are actually the data pages • Sample data, ordered by the CustomerID key:

      CustomerID CompanyName
      ---------- ----------------------------------------
      ALFKI      Alfreds Futterkiste
      ANATR      Ana Trujillo Emparedados y helados
      ANTON      Antonio Moreno Taquería
      AROUT      Around the Horn
      BERGS      Berglunds snabbköp
      BLAUS      Blauer See Delikatessen
      BLONP      Blondesddsl père et fils
      BOLID      Bólido Comidas preparadas
      BONAP      Bon app'
      BOTTM      Bottom-Dollar Markets

  25. Clustered Index (cont.) • Because data rows are physically ordered by the index value, records must be moved around to allow insertions • Example: inserting BERNI forces the records after it to move

      CustomerID CompanyName
      ---------- ----------------------------------------
      ALFKI      Alfreds Futterkiste
      ANATR      Ana Trujillo Emparedados y helados
      ANTON      Antonio Moreno Taquería
      AROUT      Around the Horn
      BERGS      Berglunds snabbköp
      BERNI      Bernie’s Fish-O-Rama         <-- insertion
      BLAUS      Blauer See Delikatessen      <-- other records must be moved
      BLONP      Blondesddsl père et fils
      BOLID      Bólido Comidas preparadas
      BONAP      Bon app'
      BOTTM      Bottom-Dollar Markets

  26. Clustered Indexes (cont.) • When a clustered index page is full, it must “split” • Half of the records are moved to a new page and half remain in place • New pages may end up in new extents • Pointers must link the pages in the logical order of the data • Tables with extensive insertions that are not naturally in clustered index order can take extensive processing time • E.g., adding Employees to a table with an SSN primary key • Page splits may cascade upward to splits of index pages

  27. Clustered Indexes (cont.) • Clustered indexes have significant advantages when performing range queries or when the desired index value is a ‘natural’ sequence for the data • Timestamp • CustomerID • There can only be one clustered index per table (Why?) • Nonclustered indexes on a table with a clustered index point to the clustered index
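
A sketch of exploiting a clustered index for a range query; the table, column, and index names are hypothetical.

    -- Hypothetical Orders table clustered on OrderDate; a date-range query can
    -- read one contiguous run of leaf (= data) pages instead of scanning the table.
    CREATE CLUSTERED INDEX CIX_Orders_OrderDate
        ON dbo.Orders (OrderDate);

    SELECT OrderID, CustomerID, OrderDate
    FROM   dbo.Orders
    WHERE  OrderDate >= '2024-01-01'
      AND  OrderDate <  '2024-02-01';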

  28. Implementing Indexes • Use the Manage Indexes & Keys window in Enterprise Manager • Default for PK index is to make it clustered • Override if you don’t want this • Do not automatically accept the default
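
The same choices can be made in T-SQL; a sketch, assuming a hypothetical Employee table where we override the default so the primary key is nonclustered and the clustered index goes on a column better suited to it.

    -- Keep the PK nonclustered and place the clustered index elsewhere
    -- (table and column names are hypothetical).
    ALTER TABLE dbo.Employee
        ADD CONSTRAINT PK_Employee PRIMARY KEY NONCLUSTERED (SSN);

    CREATE CLUSTERED INDEX CIX_Employee_HireDate
        ON dbo.Employee (HireDate);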

  29. Using Indexes • SQL Server will automatically select indices to use in queries • Where clauses • Inner Join clauses • First column of the index must match the criteria • Additional columns will be used if available
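
A sketch illustrating the first-column rule with a hypothetical composite index.

    -- Hypothetical composite index on (LastName, FirstName).
    CREATE NONCLUSTERED INDEX IX_Customer_Name
        ON dbo.Customer (LastName, FirstName);

    -- Can use the index: the leading column (LastName) appears in the predicate.
    SELECT * FROM dbo.Customer WHERE LastName = 'Smith' AND FirstName = 'Ana';
    SELECT * FROM dbo.Customer WHERE LastName = 'Smith';

    -- Generally cannot seek on the index: the leading column is missing.
    SELECT * FROM dbo.Customer WHERE FirstName = 'Ana';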

  30. Indexes (cont.) • Places to consider implementing indexes • Primary Keys (required in most RDBMS) • Foreign Keys • Other ‘access fields’ • E.g., Customer phone number if used as a lookup field • Look at data usage analysis for other potential targets • Fields in WHERE clause of SQL statements • Fields in ORDER BY clause of SQL query

  31. Indexes (cont.) • Contraindications for indexes • Very little variation among the attribute values in the indexed field(s) • Class (Freshman, Sophomore, etc.) • Gender • Many null values in the indexed field(s) • Small tables (Index may be as large as the table)

  32. Index Benefits • Avoid the table scan • Quick location of the record address—then one page read to get the data • Small row size for each index entry→many fewer page reads to find the record address • The B-tree algorithm discards a high percentage of records with each level of index pages evaluated • SQL stops looking when it knows it has finished—indices can determine this • Indexes may be used for IF EXISTS queries without accessing data pages (see the sketch below)
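
A sketch of an existence check that an index on the searched columns can answer from index pages alone; the table and values are hypothetical.

    -- With a nonclustered index on (StudentID, SectionID), this check can be
    -- satisfied entirely from index pages (hypothetical table and values).
    IF EXISTS (SELECT 1
               FROM   dbo.Enrollment
               WHERE  StudentID = 12345
                 AND  SectionID = 987)
        PRINT 'Student is already enrolled in this section.';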

  33. Index Costs • Extra storage space • Each table index must be updated with each data modification to the table • Increased processing time • Easy to implement and sometimes overused

  34. Index Tricks and Techniques • Consider dropping and then rebuilding indices when bulk updates are required • Nonclustered indices can have additional data included in the leaf node • Avoid retrieval of main data page • Increases index size and therefore reduces efficiency
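
Two hedged sketches of these techniques; the object names are hypothetical, and both the DROP INDEX ... ON syntax and INCLUDE columns require SQL Server 2005 or later.

    -- Drop the index before a bulk load, then rebuild it once,
    -- rather than maintaining it row by row during the load.
    DROP INDEX IX_Enrollment_StudentID ON dbo.Enrollment;
    -- ... perform the bulk insert here ...
    CREATE NONCLUSTERED INDEX IX_Enrollment_StudentID
        ON dbo.Enrollment (StudentID);

    -- Carry an extra column in the leaf level so a query on StudentID that
    -- only needs Grade never has to touch the main data pages.
    CREATE NONCLUSTERED INDEX IX_Enrollment_Student_Grade
        ON dbo.Enrollment (StudentID)
        INCLUDE (Grade);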
