
Conventional Files Versus the Database




Presentation Transcript


  1. Conventional Files Versus the Database • Introduction • All information systems create, read, update and delete data. This data is stored in files and databases. • Files are collections of similar records. • Databases are collections of interrelated files. • The key word is interrelated. • The records in each file must allow for relationships (think of them as ‘pointers’) to the records in other files. • In the file environment, data storage is built around the applications that will use the files. • In the database environment, applications will be built around the integrated database.

  2. Conventional Files Versus the Database • The Pros and Cons of Database • Pros: • The principal advantage of a database is the ability to share the same data across multiple applications and systems. • Database technology offers the advantage of storing data in flexible formats. • Databases allow the use of the data in ways not originally specified by the end-users - data independence. • The database scope can even be extended without impacting existing programs that use it. • New fields and record types can be added to the database without affecting current programs.

  3. Conventional Files Versus the Database • The Pros and Cons of Database • Cons: • Database technology is more complex than file technology. • Special software, called a database management system (DBMS), is required. • A DBMS is still somewhat slower than file technology. • Database technology requires a significant investment. • The cost of developing databases is higher because analysts and programmers must learn how to use the DBMS. • In order to achieve the benefits of database technology, analysts and database specialists must adhere to rigorous design principles. • Another potential problem with the database approach is the increased vulnerability inherent in the use of shared data.

  4. Conventional Files Versus the Database • Database Design in Perspective • To fully exploit the advantages of database technology, a database must be carefully designed. • The end product is called a database schema, a technical blueprint of the database. • Database design translates the data models that were developed for the system users during the definition phase, into data structures supported by the chosen database technology. • Subsequent to database design, system builders will construct those data structures using the language and tools of the chosen database technology.

  5. Database Concepts • Databases • Databases provide for the technical implementation of entities and relationships. • The history of information systems has led to one inescapable conclusion: • Data is a resource that must be controlled and managed! • Out of necessity, database technology was created so an organization could maintain and use its data as an integrated whole instead of as separate data files.

  6. Database Concepts • Databases • Database Architecture: • Database architecture refers to the database technology including the database engine, database management utilities, database CASE tools for analysis and design, and database application development tools. • The control center of a database architecture is its database management system. • A database management system (DBMS) is specialized computer software available from computer vendors that is used to create, access, control, and manage the database. The core of the DBMS is often called its database engine. The engine responds to specific commands to create database structures, and then to create, read, update, and delete records in the database.

  7. Database Concepts • Databases • Database Architecture: • A systems analyst, or database analyst, designs the structure of the data in terms of record types, fields contained in those record types, and relationships that exist between record types. • These structures are defined to the database management system using its data definition language. • Data definition language (or DDL) is used by the DBMS to physically establish those record types, fields, and structural relationships. Additionally, the DDL defines views of the database. Views restrict the portion of a database that may be used or accessed by different users and programs. DDLs record the definitions in a permanent data repository.
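As an illustrative sketch (not part of the original slides), Python's built-in sqlite3 module can show DDL at work: CREATE TABLE establishes a record type and its fields, and CREATE VIEW restricts which portion of the database different users and programs may access. The table and column names here are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# DDL: physically establish a record type (table) and its fields
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        dept   TEXT,
        salary REAL
    )
""")
# DDL: define a view that restricts which fields are visible
conn.execute("""
    CREATE VIEW employee_public AS
    SELECT emp_id, name, dept FROM employee
""")
conn.execute("INSERT INTO employee VALUES (1, 'Ada', 'IT', 90000.0)")
# The view exposes only the three permitted columns; salary is hidden
view_columns = [c[1] for c in conn.execute("PRAGMA table_info(employee_public)")]
```

A program granted access only through the view never sees the salary field at all.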

  8. Database Concepts • Databases • Database Architecture: • Some data dictionaries include formal, elaborate software that helps database specialists track metadata – the data about the data – such as record and field definitions, synonyms, data relationships, validation rules, help messages, and so forth. • The database management system also provides a data manipulation language to access and use the database in applications. • A data manipulation language (or DML) is used to create, read, update, and delete records in the database, and to navigate between different records and types of records. The DBMS and DML hide the details concerning how records are organized and allocated to the disk.
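The four DML operations can be sketched in SQLite's SQL dialect (an illustrative example, with a made-up customer table; the slides do not prescribe any particular DBMS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT)")

# Create
conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
# Read
name = conn.execute("SELECT name FROM customer WHERE cust_id = 1").fetchone()[0]
# Update
conn.execute("UPDATE customer SET name = 'Acme Corp' WHERE cust_id = 1")
updated = conn.execute("SELECT name FROM customer WHERE cust_id = 1").fetchone()[0]
# Delete
conn.execute("DELETE FROM customer WHERE cust_id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
```

Note that nothing in these statements says how or where the records live on disk; the DBMS hides that entirely.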

  9. Database Concepts • Databases • Database Architecture: • Many DBMSs don’t require the use of a DDL to construct the database, or a DML to access the database. • They provide their own tools and commands to perform those tasks. This is especially true of PC-based DBMSs. • Many DBMSs also include proprietary report writing and inquiry tools to allow users to access and format data without directly using the DML. • Some DBMSs include a transaction processing monitor (or TP monitor) that manages on-line accesses to the database, and ensures that transactions that impact multiple tables are fully processed as a single unit.

  10. Database Concepts • Databases • Relational Database Management Systems: • There are several types of database management systems and they can be classified according to the way they structure records. • Early database management systems organized records in hierarchies or networks implemented with indexes and linked lists. • Relational databases implement data in a series of tables that are ‘related’ to one another via foreign keys. • Files are seen as simple two-dimensional tables, also known as relations. • The rows are records. • The columns correspond to fields.

  11. Database Concepts for the Systems Analyst • Databases • Relational Database Management Systems: • Both the DDL and DML of most relational databases are called SQL (which stands for Structured Query Language). • SQL supports not only queries, but complete database creation and maintenance. • A fundamental characteristic of relational SQL is that commands return ‘a set’ of records, not necessarily just a single record (as in non-relational database and file technology).
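The set-at-a-time behavior can be demonstrated with a small sketch (hypothetical orders table, using SQLite): one SELECT returns every matching record, not just the first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "open"), (3, "closed")])

# A single SQL command returns a SET of records -- both open orders at once,
# where record-at-a-time file technology would loop and test each record itself
open_orders = conn.execute(
    "SELECT order_id FROM orders WHERE status = 'open' ORDER BY order_id"
).fetchall()
```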

  12. Database Concepts for the Systems Analyst • Databases • Relational Database Management Systems: • High-end relational databases also extend the SQL language to support triggers and stored procedures. • Triggers are programs embedded within a table that are automatically invoked by updates to that table. • Stored procedures are programs embedded within a table that can be called from an application program. • Both triggers and stored procedures are reusable because they are stored with the tables themselves. • This eliminates the need for application programmers to create the equivalent logic within each application that uses the tables.
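As a sketch of the trigger half (SQLite supports triggers but not stored procedures, so only triggers are shown; the account/audit tables are invented for illustration), the trigger below is stored with the table and fires automatically on every update, so no application has to duplicate the auditing logic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (acct_id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE audit_log (acct_id INTEGER, old_balance REAL, new_balance REAL);

    -- Trigger: automatically invoked by updates to the account table
    CREATE TRIGGER log_balance_change AFTER UPDATE ON account
    BEGIN
        INSERT INTO audit_log VALUES (OLD.acct_id, OLD.balance, NEW.balance);
    END;
""")
conn.execute("INSERT INTO account VALUES (1, 100.0)")
# The application only updates the balance; the trigger writes the audit row
conn.execute("UPDATE account SET balance = 150.0 WHERE acct_id = 1")
log = conn.execute("SELECT * FROM audit_log").fetchall()
```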

  13. Data Analysis for Database Design • What is a Good Data Model? • A good data model is simple. • As a general rule, the data attributes that describe an entity should describe only that entity. • A good data model is essentially non-redundant. • This means that each data attribute, other than foreign keys, describes at most one entity. • A good data model should be flexible and adaptable to future needs. • We should make the data models as application-independent as possible to encourage database structures that can be extended or modified without impact to current programs.

  14. Data Analysis for Database Design • Data Analysis • Data analysis is a process that prepares a data model for implementation as a simple, non-redundant, flexible, and adaptable database. The specific technique is called normalization. • Normalization is a technique that organizes data attributes such that they are grouped together to form stable, flexible, and adaptive entities.

  15. Data Analysis for Database Design • Data Analysis • Normalization is a three-step technique that places the data model into first normal form, second normal form, and third normal form. • An entity is in first normal form (1NF) if there are no attributes that can have more than one value for a single instance of the entity. • An entity is in second normal form (2NF) if it is already in 1NF, and if the values of all non-primary key attributes are dependent on the full primary key – not just part of it. • An entity is in third normal form (3NF) if it is already in 2NF, and if the values of its non-primary key attributes are not dependent on any other non-primary key attributes.

  16. Data Analysis for Database Design • Normalization Example • First Normal Form: • The first step in data analysis is to place each entity into 1NF.
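A minimal sketch of that first step, using hypothetical customer data as plain Python records: the multivalued phones attribute violates 1NF, and the fix is to move the repeating group into its own entity with one record per value.

```python
# Hypothetical un-normalized record: 'phones' holds more than one value
# for a single customer instance, which violates 1NF
unnormalized = {"cust_id": 1, "name": "Acme", "phones": ["555-1000", "555-2000"]}

# 1NF: split the repeating group into a separate entity,
# one record per phone number, keyed back to the customer
customer = {"cust_id": 1, "name": "Acme"}
customer_phones = [
    {"cust_id": 1, "phone": "555-1000"},
    {"cust_id": 1, "phone": "555-2000"},
]
```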

  17. Data Analysis for Database Design • Normalization Example • Second Normal Form: • The next step of data analysis is to place the entities into 2NF. • It is assumed that you have already placed all entities into 1NF. • 2NF looks for an anomaly called a partial dependency, meaning an attribute(s) whose value is determined by only part of the primary key. • Entities that have a single attribute primary key are already in 2NF. • Only those entities that have a concatenated key need to be checked.
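A sketch of a partial dependency and its repair (entity and attribute names are invented): the concatenated key is (order_id, product_id), but product_name is determined by product_id alone, so it migrates to its own entity.

```python
# Hypothetical entity with concatenated key (order_id, product_id).
# 'product_name' depends on product_id only -- a partial dependency,
# and the name is stored redundantly on every order line
ordered_product = [
    {"order_id": 1, "product_id": 10, "product_name": "Widget", "quantity": 5},
    {"order_id": 2, "product_id": 10, "product_name": "Widget", "quantity": 3},
]

# 2NF: move the partially dependent attribute into its own entity,
# keyed by the part of the key it actually depends on
product = {10: {"product_name": "Widget"}}
ordered_product_2nf = [
    {"order_id": 1, "product_id": 10, "quantity": 5},
    {"order_id": 2, "product_id": 10, "quantity": 3},
]
```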

  18. Data Analysis for Database Design • Normalization Example • Third Normal Form: • Entities are assumed to be in 2NF before beginning 3NF analysis. • Third normal form analysis looks for two types of problems, derived data and transitive dependencies. • In both cases, the fundamental error is that non-key attributes are dependent on other non-key attributes. • Derived attributes are those whose values can either be calculated from other attributes, or derived through logic from the values of other attributes. • A transitive dependency exists when a non-key attribute is dependent on another non-key attribute (other than by derivation). • Transitive analysis is only performed on those entities that do not have a concatenated key.

  19. Data Analysis for Database Design • Normalization Example • Third Normal Form: • Third normal form analysis looks for two types of problems, derived data and transitive dependencies. (continued) • A transitive dependency exists when a non-key attribute is dependent on another non-key attribute (other than by derivation). • This error usually indicates that an undiscovered entity is still embedded within the problem entity. • Transitive analysis is only performed on those entities that do not have a concatenated key. • “An entity is said to be in third normal form if every non-primary key attribute is dependent on the primary key, the whole primary key, and nothing but the primary key.”
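The "undiscovered entity" point can be sketched with invented data: region_name depends on region_id, a non-key attribute of customer, so a transitive dependency exists and a hidden region entity is extracted.

```python
# Hypothetical entity keyed on cust_id. 'region_name' depends on
# 'region_id', which is itself a non-key attribute -- a transitive
# dependency revealing an embedded 'region' entity
customer = [
    {"cust_id": 1, "name": "Acme", "region_id": 7, "region_name": "West"},
    {"cust_id": 2, "name": "Bolt", "region_id": 7, "region_name": "West"},
]

# 3NF: extract the embedded entity; customer keeps only the foreign key
region = {7: {"region_name": "West"}}
customer_3nf = [
    {"cust_id": 1, "name": "Acme", "region_id": 7},
    {"cust_id": 2, "name": "Bolt", "region_id": 7},
]
```

After the split, every non-key attribute of customer depends on the key, the whole key, and nothing but the key.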

  20. Data Analysis for Database Design • Simplification by Inspection: • When several analysts work on a common application, it is not unusual to create problems that won’t be taken care of by normalization. • These problems are best solved through simplification by inspection, a process wherein a data entity in 3NF is further simplified by such efforts as addressing subtle data redundancy.

  21. Data Analysis for Database Design • CASE Support for Normalization: • Most CASE tools can only normalize to first normal form. • They accomplish this in one of two ways. • They look for many-to-many relationships and resolve those relationships into associative entities. • They look for attributes specifically described as having multiple values for a single entity instance. • It is exceedingly difficult for a CASE tool to identify second and third normal form errors. • That would require the CASE tool to have the intelligence to recognize partial and transitive dependencies.

  22. Database Design • The Database Schema • The design of a database is depicted as a special model called a database schema. • A database schema is the physical model or blueprint for a database. It represents the technical implementation of the logical data model. • A relational database schema defines the database structure in terms of tables, keys, indexes, and integrity rules. • A database schema specifies details based on the capabilities, terminology, and constraints of the chosen database management system.

  23. Database Design • The Database Schema • Rules and guidelines for transforming the logical data model into a physical relational database schema: • Each fundamental, associative, and weak entity is implemented as a separate table. • The primary key is identified as such and implemented as an index into the table. • Each secondary key is implemented as its own index into the table. • Each foreign key will be implemented as such. • Attributes will be implemented with fields. • These fields correspond to columns in the table.

  24. Database Design • The Database Schema • Rules and guidelines for transforming the logical data model into a physical relational database schema: (continued) • The following technical details must usually be specified for each attribute. • Data type. Each DBMS supports different data types, and terms for those data types. • Size of the Field. Different DBMSs express precision of real numbers differently. • NULL or NOT NULL. Must the field have a value before the record can be committed to storage? • Domains. Many DBMSs can automatically edit data to ensure that fields contain legal data. • Default. Many DBMSs allow a default value to be automatically set in the event that a user or programmer submits a record without a value.
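Those attribute-level details can be sketched in one SQLite table definition (an illustrative example with a made-up product table): a data type, a NOT NULL rule, a domain expressed as a CHECK constraint, and a default value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One column spec per technical detail: type, NOT NULL, domain (CHECK), default
conn.execute("""
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        price      REAL CHECK (price >= 0),
        status     TEXT DEFAULT 'active'
    )
""")
# 'status' is omitted, so the DBMS supplies the default automatically
conn.execute("INSERT INTO product (product_id, name, price) VALUES (1, 'Widget', 9.99)")
status = conn.execute("SELECT status FROM product WHERE product_id = 1").fetchone()[0]

# Domain enforcement: a value outside the legal range is rejected
try:
    conn.execute("INSERT INTO product (product_id, name, price) VALUES (2, 'Bad', -5)")
    check_rejected = False
except sqlite3.IntegrityError:
    check_rejected = True
```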

  25. Database Design • The Database Schema • Rules and guidelines for transforming the logical data model into a physical relational database schema: (continued) • Supertype/subtype entities present additional options as follows: • Most CASE tools do not currently support object-like constructs such as supertypes and subtypes. • Most CASE tools default to creating a separate table for each entity supertype and subtype. • If the subtypes are of similar size and data content, a database administrator may elect to collapse the subtypes into the supertype to create a single table. • Evaluate and specify referential integrity constraints.

  26. Database Design • Data and Referential Integrity • There are at least three types of data integrity that must be designed into any database - key integrity, domain integrity and referential integrity. • Key Integrity: • Every table should have a primary key (which may be concatenated). • The primary key must be controlled such that no two records in the table have the same primary key value. • The primary key for a record must never be allowed to have a NULL value.
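Key integrity enforcement can be sketched with SQLite (a hypothetical dept table): once the primary key is declared, the DBMS itself refuses a second record with the same key value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO dept VALUES (1, 'Sales')")

# Key integrity: no two records in the table may share a primary key value
try:
    conn.execute("INSERT INTO dept VALUES (1, 'Marketing')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```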

  27. Database Design • Data and Referential Integrity • Domain Integrity: • Appropriate controls must be designed to ensure that no field takes on a value that is outside of the range of legal values. • Referential Integrity: • A referential integrity error exists when a foreign key value in one table has no matching primary key value in the related table.

  28. Database Design • Data and Referential Integrity • Referential Integrity: • Referential integrity is specified in the form of deletion rules as follows: • No restriction. • Any record in the table may be deleted without regard to any records in any other tables. • Delete:Cascade. • A deletion of a record in the table must be automatically followed by the deletion of matching records in a related table. • Delete:Restrict. • A deletion of a record in the table must be disallowed until any matching records are deleted from a related table.

  29. Database Design • Data and Referential Integrity • Referential Integrity: • Referential integrity is specified in the form of deletion rules as follows: (continued) • Delete:Set Null. • A deletion of a record in the table must be automatically followed by setting any matching keys in a related table to the value NULL.
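One of the deletion rules, Delete:Cascade, can be sketched in SQLite with invented customer and orders tables (note that SQLite enforces foreign keys only when the pragma is switched on; other DBMSs enforce them by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific: enable FK enforcement
conn.executescript("""
    CREATE TABLE customer (cust_id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER REFERENCES customer(cust_id) ON DELETE CASCADE
    );
""")
conn.execute("INSERT INTO customer VALUES (1)")
conn.execute("INSERT INTO orders VALUES (100, 1)")

# Delete:Cascade -- deleting the customer automatically deletes matching orders,
# so no orphaned foreign key values can remain
conn.execute("DELETE FROM customer WHERE cust_id = 1")
orders_left = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

Replacing ON DELETE CASCADE with RESTRICT or SET NULL would implement the other two rules from the slide.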

  30. Database Design • Roles • Some database shops insist that no two fields have exactly the same name. • This presents an obvious problem with foreign keys. • A role name is an alternate name for a foreign key that clearly distinguishes the purpose that the foreign key serves in the table. • The decision to require role names or not is usually established by the data or database administrator.
