MIS 385/MBA 664Systems Implementation with DBMS/Database Management

Dave Salisbury

salisbury@udayton.edu (email)

http://www.davesalisbury.com/ (web site)

Physical Database Design
  • The purpose of the physical design process is to translate the logical description of the data into technical specifications for storing and retrieving data
  • Goal: create a design that will provide adequate performance and ensure database integrity, security, and recoverability
  • Decisions made in this phase have a major impact on data accessibility, response times, security, and user friendliness
Physical Design Process

Inputs:
  • Normalized relations
  • Volume estimates
  • Attribute definitions
  • Response time expectations
  • Data security needs
  • Backup/recovery needs
  • Integrity expectations
  • DBMS technology used

These inputs lead to decisions about:
  • Attribute data types
  • Physical record descriptions (doesn’t always match logical design)
  • File organizations
  • Indexes and database architectures
  • Query optimization
Determining volume and usage
  • Data volume statistics represent the size of the business
    • calculated assuming business growth over a period of several years
  • Usage is estimated from the timing of events, transaction volumes, and reporting and query activity.
    • Less precise than volume statistics
Usage statistics for PVFC

Figure 6.1 - Composite usage map

(Pine Valley Furniture Company)

Data Volumes

Figure 6.1 - Composite usage map
(Pine Valley Furniture Company)

Access Frequencies (per hour)

Figure 6.1 - Composite usage map
(Pine Valley Furniture Company)

One Usage Analysis

  • 140 purchased parts accessed per hour
  • 80 quotations accessed from these 140 purchased part accesses
  • 70 suppliers accessed from these 80 quotation accesses

Figure 6.1 - Composite usage map
(Pine Valley Furniture Company)

Another Usage Analysis

  • 75 suppliers accessed per hour
  • 40 quotations accessed from these 75 supplier accesses
  • 40 purchased parts accessed from these 40 quotation accesses

Figure 6.1 - Composite usage map
(Pine Valley Furniture Company)

Physical Design Decisions
  • Specify the data type for each attribute from the logical data model
    • minimize storage space and maximize integrity
  • Specify physical records by grouping attributes from the logical data model
  • Specify the file organization technique to use for physical storage of data records
  • Specify indexes to optimize data retrieval
  • Specify query optimization strategies
Designing Fields
  • Field: smallest unit of data in database
  • Field design
    • Choosing data type
    • Coding, compression, encryption
    • Controlling data integrity
Choosing Data Types (Oracle examples)
  • CHAR – fixed-length character data
  • VARCHAR2 – variable-length character data
  • LONG – variable-length character data up to 2 GB
  • NUMBER – positive/negative numbers, up to 38 digits of precision
  • INTEGER – whole numbers (ANSI type, equivalent to NUMBER(38))
  • DATE – calendar date and time – stores century, year, month, day, hour, minute, second
  • BLOB – binary large object (good for graphics, sound clips, etc.)
Data Format
  • Data type selection goals
    • minimize storage
    • represent all possible values
      • eliminate illegal values
    • improve integrity
    • support manipulation
  • Note: these have different relative importance
Data Coding
  • Substitute a short code for a longer field value – e.g., C (OAK), B (MAPLE), etc.
  • Implement by creating a look-up table that maps each code to its full value
  • Trade-off: you must create and store a second table, and you must access that table to decode each value
  • Consider using when a field has a limited number of possible values, each of which occupies a relatively large amount of space, and the number of records is large and/or the number of record accesses is small
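The look-up-table trade-off above can be sketched in SQLite via Python's sqlite3. The table and column names (finish_code, product) are invented for illustration; the point is that the short code saves space in every product row, at the cost of a join to decode it.

```python
import sqlite3

# Sketch of data coding: a one-character code replaces a longer description,
# which is stored once in a look-up table. Names here are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE finish_code (
        code        CHAR(1) PRIMARY KEY,
        description VARCHAR(20)          -- the long value, stored only once
    );
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY,
        finish     CHAR(1) REFERENCES finish_code(code)  -- 1 char per row
    );
    INSERT INTO finish_code VALUES ('C', 'OAK'), ('B', 'MAPLE');
    INSERT INTO product VALUES (1, 'C'), (2, 'B'), (3, 'C');
""")
# Decoding requires the extra table access (a join) the slide warns about
rows = con.execute("""
    SELECT p.product_id, f.description
    FROM product p JOIN finish_code f ON p.finish = f.code
    ORDER BY p.product_id
""").fetchall()
print(rows)   # [(1, 'OAK'), (2, 'MAPLE'), (3, 'OAK')]
```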
Data Format Decisions (Integrity)
  • Data integrity controls
    • Default value
    • Range control
    • Null value control
    • Referential integrity
  • Handling missing data
    • substitute an estimate
    • report missing data
    • sensitivity testing
  • Triggers can be used to perform these operations
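A minimal sketch of the integrity controls listed above, using SQLite syntax through Python's sqlite3; the schema names are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # referential integrity is opt-in in SQLite
con.executescript("""
    CREATE TABLE department (dept_id INTEGER PRIMARY KEY);
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,                          -- null value control
        age     INTEGER CHECK (age BETWEEN 16 AND 99),  -- range control
        status  TEXT DEFAULT 'active',                  -- default value
        dept_id INTEGER REFERENCES department(dept_id)  -- referential integrity
    );
    INSERT INTO department VALUES (10);
""")
con.execute("INSERT INTO employee (emp_id, name, age, dept_id) "
            "VALUES (1, 'Ann', 30, 10)")
try:
    # an out-of-range age is rejected by the CHECK constraint
    con.execute("INSERT INTO employee (emp_id, name, age, dept_id) "
                "VALUES (2, 'Bob', 150, 10)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```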
For example...
  • Suppose you were designing the age field in a student record at your university. What decisions would you make about:
    • data type
    • integrity (range, default, null)
    • How might your decision vary by other characteristics about the student such as degree sought?
Physical Records
  • Physical Record: A group of fields stored in adjacent memory locations and retrieved together as a unit
  • Page: The amount of data read or written in one I/O operation
  • Blocking Factor: The number of physical records per page

Denormalization
  • Transforming normalized relations into unnormalized physical record specifications
  • Benefits:
    • Can improve performance (speed) by reducing the number of table lookups (i.e., fewer join queries)
  • Costs (due to data duplication):
    • Wasted storage space
    • Data integrity/consistency threats
  • Common denormalization opportunities:
    • One-to-one relationship
    • Many-to-many relationship with attributes
    • Reference data (1:N relationship where the 1-side has data not used in any other relationship)
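A hedged sketch of the "reference data" opportunity: instead of joining an employee table to a store table for every query that needs the region, the region is copied into each employee record. Table and column names are illustrative, not from the deck's PVFC schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- normalized: region lives only in store
    CREATE TABLE store    (store_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, store_id INTEGER, name TEXT);
    -- denormalized: region duplicated into each employee row
    CREATE TABLE employee_denorm (emp_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
    INSERT INTO store VALUES (1, 'East'), (2, 'West');
    INSERT INTO employee VALUES (100, 1, 'Ann'), (101, 2, 'Bob');
    INSERT INTO employee_denorm VALUES (100, 'Ann', 'East'), (101, 'Bob', 'West');
""")
joined = con.execute("""
    SELECT e.name, s.region
    FROM employee e JOIN store s USING (store_id)
    ORDER BY e.emp_id
""").fetchall()
flat = con.execute(
    "SELECT name, region FROM employee_denorm ORDER BY emp_id").fetchall()
# both forms return the same answer; the denormalized table avoids the join
# at the cost of duplicated region values (the integrity threat above)
```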
Denormalization: many-to-many relationship with nonkey attributes

Extra table access required

Null description possible

Consider the following normalized relations
  • STORE(Store_Id, Region, Manager_Id, Square_Feet)
  • EMPLOYEE(Emp_Id, Store_Id, Name, Address)
  • DEPARTMENT(Dept#, Store_ID, Manager_Id, Sales_Goal)
  • SCHEDULE(Dept#, Emp_Id, Date, hours)

What opportunities might exist for denormalization?

Partitioning
  • Horizontal Partitioning: Distributing the rows of a table into several separate files
    • Useful for situations where different users need access to different rows
    • Three types: Key Range Partitioning, Hash Partitioning, or Composite Partitioning
  • Vertical Partitioning: Distributing the columns of a table into several separate files
    • Useful for situations where different users need access to different columns
    • The primary key must be repeated in each file
  • Combinations of horizontal and vertical partitioning are possible
  • Partitions often correspond with user schemas (user views)
  • Advantages of partitioning:
    • Records used together are grouped together
    • Each partition can be optimized for performance
    • Security, recovery
    • Partitions stored on different disks reduce contention
    • Takes advantage of parallel processing capability
  • Disadvantages of partitioning:
    • Slow retrievals across partitions
    • Complexity
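The key-range and hash flavors of horizontal partitioning can be sketched in plain Python. Rows are assumed to be (customer_id, region) tuples and the partition names are invented; a real DBMS would do this transparently in storage.

```python
from collections import defaultdict

rows = [(1, 'East'), (2, 'West'), (17, 'East'), (42, 'West'), (99, 'East')]

# Key-range partitioning: split on customer_id boundaries
def range_partition(row):
    return 'p_low' if row[0] < 50 else 'p_high'

# Hash partitioning: spread rows evenly regardless of key order
def hash_partition(row, n_parts=4):
    return f'p{row[0] % n_parts}'   # division-remainder assignment

by_range = defaultdict(list)
by_hash = defaultdict(list)
for r in rows:
    by_range[range_partition(r)].append(r)  # rows used together stay together
    by_hash[hash_partition(r)].append(r)    # rows spread for parallel access
```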
Data Replication
  • Purposely storing the same data in multiple locations of the database
  • Improves performance by allowing multiple users to access the same data at the same time with minimum contention
  • Sacrifices data integrity due to data duplication
  • Best for data that is not updated often
Designing Physical Files
  • Physical File:
    • A named portion of secondary memory allocated for the purpose of storing physical records
  • Constructs to link two pieces of data:
    • Sequential storage.
    • Pointers.
  • File Organization:
    • How the files are arranged on the disk.
  • Access Method:
    • How the data can be retrieved based on the file organization.
Sequential File Organization
  • Records of the file are stored in sequence by the primary key field values
  • If sorted – every insert or delete requires a resort
  • If not sorted – average time to find a desired record = n/2

Sequential Retrieval
  • Consider a file of 10,000 records each occupying 1 page
  • Queries that require processing all records will require 10,000 accesses
  • e.g., Find all items of type 'E'
  • Many disk accesses are wasted if few records meet the condition
  • However, very effective if most or all records will be accessed (e.g., payroll)
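The 10,000-record example above can be sketched directly: a sequential scan reads every record whether or not it qualifies, so the access count is fixed at the file size. The record layout here is invented for illustration.

```python
# Sequential scan of 10,000 records for items of type 'E': every record
# is read, so 10,000 accesses occur even if few records match.
records = [{'id': i, 'type': 'E' if i % 100 == 0 else 'F'}
           for i in range(10_000)]

accesses = 0
matches = []
for rec in records:          # one access per record, qualifying or not
    accesses += 1
    if rec['type'] == 'E':
        matches.append(rec['id'])
# accesses == 10_000 although only 100 records met the condition
```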
Indexed File Organizations
  • Index – a separate table that contains organization of records for quick retrieval – like an index in a book.
  • Primary keys are automatically indexed
  • Oracle has a CREATE INDEX operation, and MS ACCESS allows indexes to be created for most field types
  • Indexing approaches:
    • B-tree index
    • Bitmap index
    • Hash Index
    • Join Index
Tree Search
  • Uses a tree search
  • Average time to find desired record = depth of the tree
  • Leaves of the tree are all at the same level – consistent access time

Fig. 6-7b – B-tree index

Hashed File Organization
  • A technique for reducing disk accesses for direct access
  • Avoids an index
  • Number of accesses per record can be close to one
  • The hash field (typically the primary key) is converted to a hash address by a hashing algorithm
  • Division-remainder is a common hashing algorithm; records that hash to the same position are grouped in lists
  • e.g., hash address = remainder after dividing SSN by 10,000
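The division-remainder method with overflow chaining can be sketched in a few lines: colliding keys ("synonyms") are grouped in a list at the same address, exactly as the slide describes. The SSN values are made up.

```python
# Division-remainder hashing with synonym chains.
N_BUCKETS = 10_000            # the divisor from the SSN example above

def hash_address(key, n=N_BUCKETS):
    return key % n            # remainder after dividing by n

buckets = [[] for _ in range(N_BUCKETS)]

def insert(key, record):
    buckets[hash_address(key)].append((key, record))

def find(key):
    for k, rec in buckets[hash_address(key)]:   # scan the synonym chain
        if k == key:
            return rec
    return None

insert(123456789, 'Ann')
insert(123446789, 'Bob')      # same remainder mod 10,000 -> a synonym
```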

Shortcomings of Hashing
  • Different hash fields may convert to the same hash address
    • these are called synonyms
    • colliding records are stored in an overflow area
  • Long synonym chains degrade performance: more collisions mean slower access
  • There can be only one hash field per record
  • The file can no longer be processed sequentially
Bitmap Index Organization
  • Bitmap saves on space requirements
  • Rows of the bitmap represent the possible values of the attribute
  • Columns of the bitmap represent the table rows
  • Each bit indicates whether the corresponding table row has that value
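The row/column layout above can be sketched with ordinary Python lists: one bit vector per attribute value, one bit position per table row. The "finish" column data is invented.

```python
# A tiny bitmap index over a low-cardinality column.
finishes = ['OAK', 'MAPLE', 'OAK', 'OAK', 'MAPLE']   # table rows 0..4

bitmap = {}                       # one bit-row per distinct value
for row_no, value in enumerate(finishes):
    bits = bitmap.setdefault(value, [0] * len(finishes))
    bits[row_no] = 1              # this table row has this value

# answer "WHERE finish = 'OAK'" by reading one compact bit vector
oak_rows = [i for i, bit in enumerate(bitmap['OAK']) if bit]
```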

Clustering Files
  • In some relational DBMSs, related records from different tables can be stored together in the same disk area
  • Useful for improving performance of join operations
  • Primary key records of the main table are stored adjacent to associated foreign key records of the dependent table
  • e.g. Oracle has a CREATE CLUSTER command

An index is a separate table (file) that is used to determine the location of rows in another file that satisfy some condition

Querying with an Index
  • Read the index into memory
  • Search the index to find records meeting the condition
  • Access only those records containing required data
  • Disk accesses are substantially reduced when the query involves few records
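The steps above can be observed in SQLite: after creating an index on a search field, EXPLAIN QUERY PLAN shows the optimizer searching the index rather than scanning the table. Table and index names are invented; the exact plan wording varies by SQLite version.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE item (item_id INTEGER PRIMARY KEY, item_type TEXT);
    CREATE INDEX idx_item_type ON item(item_type);
""")
con.executemany("INSERT INTO item VALUES (?, ?)",
                [(i, 'E' if i % 5 == 0 else 'F') for i in range(1000)])

# Ask the optimizer how it would run a query on the indexed search field
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM item WHERE item_type = 'E'"
).fetchall()
# the plan should mention idx_item_type instead of a full table scan
```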
Maintaining an Index
  • Adding a record requires at least two disk accesses:
    • Update the file
    • Update the index
  • Trade-off:
    • Faster queries
    • Slower maintenance (additions, deletions, and updates of records)
    • Thus, more static databases benefit more overall
Rules for Using Indexes
  • Use on larger tables
  • Index the primary key of each table
  • Index search fields (fields frequently referenced in WHERE clauses)
  • Index fields used in SQL ORDER BY and GROUP BY clauses
  • Index fields with more than 100 distinct values, but not fields with fewer than 30 distinct values
Rules for Using Indexes
  • DBMS may have limit on number of indexes per table and number of bytes per indexed field(s)
  • Null values will not be referenced from an index
  • Use indexes heavily for non-volatile databases; limit the use of indexes for volatile databases
    • Why? Because modifications (e.g. inserts, deletes) require updates to occur in index files
Rules for Adding Derived Columns
  • Use when aggregate values are regularly retrieved.
  • Use when aggregate values are costly to calculate.
  • Permit updating only of source data.
  • Create triggers to cascade changes from source data.
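The last two rules can be sketched in SQLite: updates go only to the source table, and a trigger cascades each change into the derived (aggregate) column. The order_line/order_total schema is invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE order_line  (order_id INTEGER, amount REAL);
    CREATE TABLE order_total (order_id INTEGER PRIMARY KEY,
                              total REAL DEFAULT 0);   -- derived column
    CREATE TRIGGER trg_line_insert AFTER INSERT ON order_line
    BEGIN
        -- cascade the change from the source data into the derived column
        INSERT OR IGNORE INTO order_total (order_id) VALUES (NEW.order_id);
        UPDATE order_total SET total = total + NEW.amount
        WHERE order_id = NEW.order_id;
    END;
""")
# only the source table is updated directly; the trigger does the rest
con.executemany("INSERT INTO order_line VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 7.5)])
totals = dict(con.execute("SELECT order_id, total FROM order_total"))
```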
Another Rule of Thumb to Increase Performance
  • Consider contriving a shorter field or selecting another candidate key to substitute for a long, multi-field primary key (and all associated foreign keys)

RAID: Redundant Arrays of Inexpensive Disks
  • Exploits economies of scale of disk manufacturing for the computer market
  • Can give greater security
  • Increases fault tolerance of systems
  • Not a replacement for regular backup
  • The operating system sees a set of physical drives as one logical drive
  • Data are distributed across physical drives
  • All levels, except 0, have data redundancy or error-correction features
  • Parity codes or redundant data are used for data recovery

RAID with Mirroring
  • Write: identical copies of the file are written to each drive in the array
  • Read: alternate pages are read simultaneously from each drive and put together in memory; access time is reduced by approximately the number of disks in the array
  • Read error: read the required page from another drive
  • Tradeoffs: provides data security and reduces access time, but uses more disk space

Figure: with mirroring, each drive holds the complete data set; no parity is stored.

RAID with Parity (three-drive model)
  • Write: half of the file to the first drive, half to the second drive, parity bits to the third drive
  • Read: portions from each drive are put together in memory
  • Read error: lost bits are reconstructed from the third drive's parity data
  • Tradeoffs: provides data security and uses less storage space than mirroring, but is not as fast
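The three-drive parity scheme above reduces to XOR: the parity drive stores the bitwise XOR of the two data halves, so either half can be rebuilt from the other half plus the parity. The sample data is arbitrary.

```python
# Three-drive parity sketch: two data drives plus one XOR-parity drive.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = b'PHYSICALDESIGN!!'            # even length, so it splits cleanly
half = len(data) // 2
drive1, drive2 = data[:half], data[half:]
drive3 = xor_bytes(drive1, drive2)    # parity bits

# simulate a read error on drive2: rebuild it from drive1 and the parity
recovered = xor_bytes(drive1, drive3)
```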

Figure: each data drive holds one-half of the data set; the third drive holds the parity codes.

RAID with four disks and striping

Here, pages 1-4 can be read/written simultaneously
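The striping shown in the figure can be sketched as round-robin page placement: with four disks, one parallel transfer moves pages 1-4 at once. The page labels are invented.

```python
# RAID-0-style striping: pages assigned round-robin across four disks.
N_DISKS = 4
pages = [f'page-{i}' for i in range(1, 9)]   # eight logical pages

disks = [[] for _ in range(N_DISKS)]
for i, page in enumerate(pages):
    disks[i % N_DISKS].append(page)          # stripe round-robin

# one "parallel" transfer reads the next page from every disk at once
first_stripe = [d[0] for d in disks]
```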

RAID Types

RAID 0
  • Maximized parallelism
  • No redundancy, no error correction, no fault tolerance

RAID 1
  • Redundant data – fault tolerant
  • Most common form

RAID 2
  • No redundancy
  • One record spans across data disks
  • Error correction in multiple disks – can reconstruct damaged data

RAID 3
  • Error correction in one disk
  • Record spans multiple data disks (more than RAID 2)
  • Not good for multi-user environments

RAID 4
  • Error correction in one disk
  • Multiple records per stripe
  • Parallelism, but slow updates due to error-correction contention

RAID 5
  • Rotating parity array
  • Error correction takes place in the same disks as data storage
  • Parallelism; better performance than RAID 4
Database Architectures
Query Optimization
  • Parallel Query Processing
  • Override Automatic Query Optimization
  • Data Block Size -- Performance tradeoffs:
    • Block contention
    • Random vs. sequential row access speed
    • Row size
    • Overhead
  • Balancing I/O Across Disk Controllers
Query Optimizer Factors
  • Type of Query
    • Highly selective.
    • All or most of the records of a file.
  • Unique fields
  • Size of files
  • Indexes
  • Join Method
    • Nested-Loop
    • Merge-Scan (Both files must be ordered or indexed on the join columns.)
Query Optimization
  • Use indexes wisely
  • Use compatible data types in comparisons
  • Write simple queries
  • Avoid query nesting
  • Use temporary tables for query groups
  • Select only the columns you need
  • Avoid sorting on fields that have no index
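Two of these guidelines, avoiding query nesting and selecting only needed columns, can be sketched in SQLite: the nested IN-subquery form and the flattened join return the same answer, and the join form is generally easier for an optimizer to handle. The supplier/quotation schema and data are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE supplier  (supplier_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE quotation (supplier_id INTEGER, part_id INTEGER, price REAL);
    INSERT INTO supplier VALUES (1, 'Acme'), (2, 'Best');
    INSERT INTO quotation VALUES (1, 10, 9.99), (2, 20, 19.99);
""")
# nested form: a subquery inside the predicate
nested = con.execute("""
    SELECT name FROM supplier
    WHERE supplier_id IN (SELECT supplier_id FROM quotation WHERE part_id = 10)
""").fetchall()
# flattened join, selecting only the one column actually needed
joined = con.execute("""
    SELECT s.name
    FROM supplier s JOIN quotation q ON s.supplier_id = q.supplier_id
    WHERE q.part_id = 10
""").fetchall()
```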