
Introduction to Database Systems (Sajid Banday)


Presentation Transcript


  1. Introduction to Database Systems. Sajid Banday

  2. Basic Database Concept Part 1 • Relational database theory • Normalization theory • Database engine structure and architecture concept • Database Engine Component • Database Engine structure and architecture

  3. Database Design Concepts Part 2 • Design Data Model • Design Entity Relationship Model

  4. Querying and Manipulating Data Part 3 • Data Definition Language (DDL) • Data Manipulation Language (DML) • Data Control Language (DCL) • Data Query Language (DQL) • Group functions • SQL Joins • Basic SQL programming (Transact-SQL & CLR / PL-SQL / SQL & PSM) • Stored procedure • User-defined function • Trigger • Views/Synonyms

  5. Basic Database Concept Part 1 Relational database theory • The relational database model was conceived in 1969 by E. F. Codd, then a researcher at IBM. The model is based on branches of mathematics called set theory and predicate logic. The basic idea behind the relational model is that a database consists of a series of unordered tables (or relations) that can be manipulated using non-procedural operations that return tables. This model stood in stark contrast to the more traditional database theories of the time, which were more complicated, less flexible, and dependent on the physical storage methods of the data. • Relational database theory uses a set of mathematical terms that are roughly equivalent to SQL database terminology. The table below summarizes the most important relational database terms and their SQL equivalents.
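
The standard correspondence between the two vocabularies (reconstructing the mapping the slide refers to) is:

    Relational term    SQL equivalent
    ---------------    --------------
    Relation           Table
    Tuple              Row
    Attribute          Column
    Domain             Column data type (set of legal values)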

  6. Basic Database Concept Part 1 Relational database theory • In a relational database, the schema is the structural description of the relations in the database, and an instance is the actual content (data) at a given point in time. • A database is a set of named relations, or tables, and each relation has a set of named attributes, or columns. A tuple, or row, has a value for each attribute, and each attribute has a data type, or domain. Key: • The relational model dictates that each row in a table be unique. If you allow duplicate rows in a table, there is no way to uniquely address a given row programmatically, which creates ambiguities and problems that are best avoided. • You guarantee uniqueness for a table by designating a primary key: a column that contains unique values for the table. Each table can have only one primary key, even though several columns or combinations of columns may contain unique values. All columns (or combinations of columns) in a table with unique values are referred to as candidate keys, from which the primary key must be drawn. • All other candidate key columns are referred to as alternate keys. Keys can be simple or composite: a simple key is made up of one column, whereas a composite key is made up of two or more columns. The sketch below shows both kinds.
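
A minimal sketch of both kinds of key; the table and column names (DEPT_TBL, ORDER_LINE_TBL) are hypothetical:

    -- Simple (single-column) primary key; DEPT_CODE is a candidate key
    -- not chosen as primary, i.e. an alternate key
    CREATE TABLE DEPT_TBL (
        DEPT_ID   INT         NOT NULL,
        DEPT_CODE CHAR(4)     NOT NULL UNIQUE,
        DEPT_NAME VARCHAR(40),
        PRIMARY KEY (DEPT_ID)
    );

    -- Composite (multi-column) primary key
    CREATE TABLE ORDER_LINE_TBL (
        ORD_ID  INT NOT NULL,
        LINE_NO INT NOT NULL,
        QTY     INT,
        PRIMARY KEY (ORD_ID, LINE_NO)
    );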

  7. Basic Database Concept Part 1 Relational database theory Relationships: You define foreign keys in a database to model relationships in the real world. Relationships between real-world entities can be quite complex, involving numerous entities, each having multiple relationships with the others. For example, a family has multiple relationships between multiple people, all at the same time. Two tables can be related in one of three ways: one-to-one, one-to-many, or many-to-many. • One-to-One Relationships Two tables are related in a one-to-one (1:1) relationship if, for every row in the first table, there is at most one row in the second table. True one-to-one relationships seldom occur in the real world. This type of relationship is often created to get around some limitation of the database management software rather than to model a real-world situation.

  8. Basic Database Concept Part 1 Relational database theory Cardinality: • One-to-Many Relationships Two tables are related in a one-to-many (1:M) relationship if, for every row in the first table, there can be zero, one, or many rows in the second table, but for every row in the second table there is exactly one row in the first table.

  9. Basic Database Concept Part 1 Relational database theory Cardinality: • Many-to-Many Relationships Two tables are related in a many-to-many (M:M) relationship when, for every row in the first table, there can be many rows in the second table, and for every row in the second table, there can be many rows in the first table. Many-to-many relationships can't be directly modeled in a relational database; they must be broken into multiple one-to-many relationships, as in the sketch below.
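
A hedged sketch of that decomposition with hypothetical tables (students and courses): the junction table turns one M:M relationship into two 1:M relationships.

    CREATE TABLE STUDENT_TBL (
        STUDENT_ID INT PRIMARY KEY,
        NAME       VARCHAR(40)
    );

    CREATE TABLE COURSE_TBL (
        COURSE_ID INT PRIMARY KEY,
        TITLE     VARCHAR(40)
    );

    -- Junction table: one row per enrollment; each side is 1:M
    CREATE TABLE ENROLLMENT_TBL (
        STUDENT_ID INT NOT NULL,
        COURSE_ID  INT NOT NULL,
        PRIMARY KEY (STUDENT_ID, COURSE_ID),
        FOREIGN KEY (STUDENT_ID) REFERENCES STUDENT_TBL (STUDENT_ID),
        FOREIGN KEY (COURSE_ID)  REFERENCES COURSE_TBL (COURSE_ID)
    );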

  10. Basic Database Concept Part 1 Normalization theory • Normalization is a process of reducing redundancy: a set of guidelines used to design a database so that redundant data is minimized. The actual guidelines of normalization are called normal forms. • Normalization is an important process that, if understood, will increase your understanding of SQL. At this point, don't be overly concerned with all the specifics of normalization; it is most important to understand the basic concepts.

  11. Basic Database Concept Part 1 Normal Forms • The normal forms are an integral concept in the process of database normalization. • A normal form is a way of measuring the level, or depth, to which a database has been normalized. • The following are the three most common normal forms in the normalization process: 1. First normal form 2. Second normal form 3. Third normal form • Each subsequent normal form depends on the normalization steps taken in the previous normal form. For example, to normalize a database using the second normal form, the database must first be in the first normal form.

  12. Basic Database Concept Part 1 First Normal Form • The objective of the first normal form is to divide the base data into logical units called tables. When each table has been designed, a primary key is assigned to most or all tables. The figure below illustrates how the raw database is redeveloped using the first normal form.

  13. Basic Database Concept Part 1 First Normal Form • You can see that to achieve the first normal form, the data had to be broken into logical units of related information, each having a primary key, while ensuring that there are no repeating groups in any of the tables. Instead of one large table, there are now smaller, more manageable tables: EMPLOYEE_TBL, CUSTOMER_TBL, and PRODUCTS_TBL. The primary keys are normally the first columns listed in a table; in this case, EMP_ID, CUST_ID, and PROD_ID.
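
As a rough sketch of the idea (the table names and keys come from the slide; everything else is an assumption):

    -- Not in 1NF: one wide row with a repeating group of order columns
    -- CUSTOMER(CUST_ID, NAME, ORD1_PROD, ORD1_QTY, ORD2_PROD, ORD2_QTY, ...)

    -- In 1NF: the repeating group becomes its own table, one row per order
    CREATE TABLE CUSTOMER_TBL (
        CUST_ID INT PRIMARY KEY,
        NAME    VARCHAR(40)
    );

    CREATE TABLE ORDERS_TBL (
        ORD_ID  INT PRIMARY KEY,
        CUST_ID INT NOT NULL,   -- identifies which customer placed the order
        QTY     INT
    );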

  14. Basic Database Concept Part 1 Second Normal Form • The objective of the second normal form is to take data that is only partly dependent on the primary key and move that data into another table. The figure below illustrates the second normal form.

  15. Basic Database Concept Part 1 Second Normal Form • According to the figure, the second normal form is derived from the first normal form by breaking two tables down into more specific units. • EMPLOYEE_TBL was split into two tables called EMPLOYEE_TBL and EMPLOYEE_PAY_TBL. • Personal employee information is fully dependent on the primary key (EMP_ID), so that information remained in EMPLOYEE_TBL (EMP_ID, LAST_NAME, FIRST_NAME, MIDDLE_NAME, ADDRESS, CITY, STATE, ZIP, PHONE, and PAGER). • On the other hand, the information that is only partly dependent on EMP_ID was used to populate EMPLOYEE_PAY_TBL (EMP_ID, POSITION, POSITION_DESC, DATE_HIRE, PAY_RATE, and DATE_LAST_RAISE). Notice that both tables contain the column EMP_ID: it is the primary key of each table and is used to match corresponding data between the two. A sketch of the split follows below. • CUSTOMER_TBL was split into two tables called CUSTOMER_TBL and ORDERS_TBL, in the same way as EMPLOYEE_TBL: columns that were only partly dependent on the primary key were moved to another table. The order information for a customer depends on each CUST_ID but does not directly depend on the general customer information in the original table.
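
In SQL terms, that split looks roughly like this; the column lists come from the slide, while the data types are assumptions:

    CREATE TABLE EMPLOYEE_TBL (
        EMP_ID      INT PRIMARY KEY,
        LAST_NAME   VARCHAR(40),
        FIRST_NAME  VARCHAR(40),
        MIDDLE_NAME VARCHAR(40),
        ADDRESS     VARCHAR(60),
        CITY        VARCHAR(30),
        STATE       CHAR(2),
        ZIP         CHAR(10),
        PHONE       CHAR(12),
        PAGER       CHAR(12)
    );

    CREATE TABLE EMPLOYEE_PAY_TBL (
        EMP_ID          INT PRIMARY KEY,   -- matches EMPLOYEE_TBL.EMP_ID
        POSITION        VARCHAR(30),
        POSITION_DESC   VARCHAR(60),
        DATE_HIRE       DATE,
        PAY_RATE        DECIMAL(8,2),
        DATE_LAST_RAISE DATE
    );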

  16. Basic Database Concept Part 1 Third Normal Form • The third normal form's objective is to remove data from a table that is not dependent on the primary key. The figure below illustrates the third normal form.

  17. Basic Database Concept Part 1 Third Normal Form • Another table was created to illustrate the third normal form. EMPLOYEE_PAY_TBL is split into two tables: one containing the actual employee pay information, the other containing the position descriptions, which do not need to reside in EMPLOYEE_PAY_TBL. The POSITION_DESC column is totally independent of the primary key, EMP_ID.
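
A hedged sketch of that second split; the POSITION_TBL name is an assumption, the columns come from the slide:

    -- Position descriptions move to their own table; POSITION_DESC now
    -- depends on POSITION, not on EMP_ID
    CREATE TABLE POSITION_TBL (
        POSITION      VARCHAR(30) PRIMARY KEY,
        POSITION_DESC VARCHAR(60)
    );

    CREATE TABLE EMPLOYEE_PAY_TBL (
        EMP_ID          INT PRIMARY KEY,
        POSITION        VARCHAR(30),   -- now just a reference into POSITION_TBL
        DATE_HIRE       DATE,
        PAY_RATE        DECIMAL(8,2),
        DATE_LAST_RAISE DATE
    );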

  18. Basic Database Concept Part 1 Benefits of Normalization Normalization provides numerous benefits to a database. Some of the major benefits include the following: • Greater overall database organization • Reduction of redundant data • Data consistency within the database • A much more flexible database design • A better handle on database security • Organization is brought about by the normalization process, making everyone's job easier, from the user who accesses tables to the database administrator (DBA) who is responsible for the overall management of every object in the database. • Data redundancy is reduced, which simplifies data structures and conserves disk space. Because duplicate data is minimized, the possibility of inconsistent data is greatly reduced. • Once the database has been normalized and broken into smaller tables, you have more flexibility in modifying existing structures: it is much easier to modify a small table with little data than one big table that holds all the vital data in the database. • Data integrity is the assurance of consistent and accurate data within a database.

  19. Basic Database Concept Part 1 Drawbacks of Normalization • Although most successful databases are normalized to some degree, there is one substantial drawback of a normalized database: reduced performance. • When a query or transaction request is sent to the database, factors such as CPU usage, memory usage, and input/output (I/O) are involved. • In short, a normalized database requires much more CPU, memory, and I/O to process transactions and queries than a de-normalized database does: it must locate the requested tables and then join data from those tables to retrieve the requested information or process the desired data.

  20. Basic Database Concept Part 1 Referential Integrity • Referential integrity means that the values of one column in a table depend on the values of a column in another table. For instance, for a customer to have a record in the ORDERS_TBL table, there must first be a record for that customer in the CUSTOMER_TBL table. Integrity constraints can also control values by restricting the range of values for a column; an integrity constraint should be created at the table's creation. Referential integrity is typically controlled through the use of primary and foreign keys. • In a table, a foreign key, normally a single field, directly references a primary key in another table to enforce referential integrity. In the example above, CUST_ID in ORDERS_TBL is a foreign key that references CUST_ID in CUSTOMER_TBL, as in the sketch below.
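
A minimal sketch of that foreign key (other columns omitted):

    CREATE TABLE ORDERS_TBL (
        ORD_ID  INT PRIMARY KEY,
        CUST_ID INT NOT NULL,
        FOREIGN KEY (CUST_ID) REFERENCES CUSTOMER_TBL (CUST_ID)
    );

    -- This INSERT fails unless customer 42 already exists in CUSTOMER_TBL
    INSERT INTO ORDERS_TBL (ORD_ID, CUST_ID) VALUES (1001, 42);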

  21. Basic Database Concept Part 1 Database engine structure and architecture concept Database Engine Component • The DBMS engine accepts logical requests from the various other DBMS subsystems, converts them into physical equivalents, and actually accesses the database and data dictionary as they exist on a storage device. • The data definition subsystem helps the user create and maintain the data dictionary and define the structure of the files in a database. • The data manipulation subsystem helps the user add, change, and delete information in a database and query it for valuable information. Software tools within the data manipulation subsystem are most often the primary interface between the user and the information contained in a database, allowing users to specify their logical information requirements. • The application generation subsystem contains facilities to help users develop transaction-intensive applications. It usually requires that the user perform a detailed series of tasks to process a transaction, and it provides easy-to-use data entry screens, programming languages, and interfaces. • The data administration subsystem helps users manage the overall database environment by providing facilities for backup and recovery, security management, query optimization, concurrency control, and change management.

  22. Basic Database Concept Part 1 Database engine structure and architecture concept Database Engine structure and architecture 1. SQL Server architecture

  23. Basic Database Concept Part 1 Database engine structure and architecture concept Database Engine structure and architecture 2. Oracle architecture

  24. Basic Database Concept Part 1 Database engine structure and architecture concept Database Engine structure and architecture 3. MySQL architecture

  25. Basic Database Concept Part 1 Database engine structure and architecture concept MySQL storage engines • MySQL's pluggable storage engine architecture gives users the flexibility to choose from a portfolio of purpose-built storage engines that are optimized for specific application domains: OLTP, read-intensive web scale-out, high-availability clustering, data archiving, data warehousing, and more. • In addition, the pluggable storage engine architecture provides a standard set of server, driver, tool, management, and support services that are leveraged across all the underlying storage engines; a short sketch of selecting an engine follows below. MySQL Native Storage Engines MySQL currently offers a number of its own native storage engines, including: 1. InnoDB 2. MyISAM 3. Example 4. Federated 5. Archive 6. Merge 7. Memory 8. CSV 9. Blackhole
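
As a quick, hedged illustration of the pluggable model (the table and column names are made up):

    -- List the storage engines this MySQL server supports
    SHOW ENGINES;

    -- Choose an engine per table at creation time
    CREATE TABLE session_cache (
        k VARCHAR(64) PRIMARY KEY,
        v VARCHAR(255)
    ) ENGINE = MEMORY;

    -- Convert an existing table to another engine
    ALTER TABLE session_cache ENGINE = InnoDB;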

  26. Basic Database Concept Part 1 Database engine structure and architecture concept MySQL storage engines Partner Storage Engines (Third party) • Calpont's InfiniDB storage engine is purpose-built for analytical data, data warehouses and marts, and any read-intensive application environment. The column-oriented and modular scale up/out architecture ensures the fastest possible response times no matter the data volume. • Infobright's BrightHouse engine: InfoBright's storage engine delivers high performance on complex queries across terabytes of data for analytic data warehouse solutions. InfoBright's features include a column-oriented architecture, high-speed data loading, advanced optimizer, and 10:1 data compression. • Kickfire analytics appliances bring together hardware and software innovations that deliver high-performance data warehousing capabilities. By combining the ease of use of MySQL with patented hardware and state-of-the-art software technology, Kickfire is able to deliver high-end performance via an appliance-based approach.

  27. Basic Database Concept Part 1 Database engine structure and architecture concept MySQL Native Storage Engines • InnoDB: A transaction-safe (ACID-compliant) storage engine for MySQL that has commit, rollback, and crash-recovery capabilities to protect user data. InnoDB's row-level locking (without escalation to coarser-granularity locks) and Oracle-style consistent non-locking reads increase multi-user concurrency and performance. InnoDB stores user data in clustered indexes to reduce I/O for common queries based on primary keys. To maintain data integrity, InnoDB also supports FOREIGN KEY referential-integrity constraints. InnoDB is the default storage engine. • MyISAM: These tables have a small footprint. Table-level locking limits performance in read/write workloads, so MyISAM is often used in read-only or read-mostly workloads in web and data warehousing configurations. • Federated: Offers the ability to link separate MySQL servers to create one logical database from many physical servers. Very good for distributed or data mart environments.

  28. Basic Database Concept Part 1 Database engine structure and architecture concept MySQL Native Storage Engines • Archive: These compact, un-indexed tables are intended for storing and retrieving large amounts of seldom-referenced historical, archived, or security audit information. • Example: This engine serves as an example in the MySQL source code that illustrates how to begin writing new storage engines. It is primarily of interest to developers. The storage engine is a "stub" that does nothing: you can create tables with this engine, but no data can be stored in them or retrieved from them. • Merge: Enables a MySQL DBA or developer to logically group a series of identical MyISAM tables and reference them as one object. Good for VLDB environments such as data warehousing. • Memory: Stores all data in RAM, for fast access in environments that require quick lookups of non-critical data. This engine was formerly known as the HEAP engine. Its use cases are decreasing; InnoDB with its buffer pool memory area provides a general-purpose and durable way to keep most or all data in memory, and NDB Cluster provides fast key-value lookups for huge distributed data sets.

  29. Basic Database Concept Part 1 Database engine structure and architecture concept MySQL Native Storage Engines • CSV : Its tables are really text files with comma-separated values. CSV tables let you import or dump data in CSV format, to exchange data with scripts and applications that read and write that same format. Because CSV tables are not indexed, you typically keep the data in InnoDB tables during normal operation, and only use CSV tables during the import or export stage. • Blackhole : The Blackhole storage engine accepts but does not store data, similar to the Unix /dev/null device. Queries always return an empty set. These tables can be used in replication configurations where DML statements are sent to slave servers, but the master server does not keep its own copy of the data.

  30. Database Design Concepts Part 2 Design Data Model • Data modeling is a process used to define and analyze the data requirements needed to support the business processes within the scope of corresponding information systems in organizations. The process of data modeling therefore involves professional data modelers working closely with business stakeholders, as well as potential users of the information system. • Data modeling defines not just data elements but also their structures and the relationships between them. Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization. There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system: • Conceptual data model • Logical data model • Physical data model

  31. Database Design Concepts Part 2 Design Data Model • Conceptual data model: The data requirements are initially recorded as a conceptual data model, which is essentially a set of technology-independent specifications about the data and is used to discuss initial requirements with the business stakeholders. • Logical data model: The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. • Physical data model: The last step in data modeling is transforming the logical data model into a physical data model that organizes the data into tables and accounts for access, performance, and storage details.

  32. Database Design Concepts Part 2 Design Entity Relationship Model • There are several notations for data modeling. The actual model is frequently called an "entity-relationship model" because it depicts data in terms of the entities and relationships described in the data. • An entity-relationship model (ERM) is an abstract conceptual representation of structured data. Entity-relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion. • These models are used in the first stage of information system design, during requirements analysis, to describe the information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e., an overview and classification of used terms and their relationships) for a certain universe of discourse, i.e., an area of interest.

  33. Database Design Concepts Part 2 Design Entity Relationship Model

  34. Querying and Manipulating Data Part 3 Data Definition Language (DDL) The Data Definition Language (DDL) is used to create and destroy databases and database objects. These commands are primarily used by database administrators during the setup and removal phases of a database project. Let's take a look at the structure and usage of the basic DDL commands, with a short sketch after the list: • CREATE: Create a database object (table, view, stored procedure, trigger, function, index, constraint, etc.) • ALTER: Modify a database object (table, view, stored procedure, trigger, function, index, constraint, etc.) • DROP: Delete a database object (table, view, stored procedure, trigger, function, index, constraint, etc.) • TRUNCATE: Delete all data from a table very quickly, removing the data inside the table but not the table itself. It usually implies a subsequent COMMIT operation, i.e., it cannot be rolled back (the data is not written to the logs for later rollback, unlike DELETE).
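
A minimal sketch of the four commands against a hypothetical table:

    -- CREATE: define a new object
    CREATE TABLE demo_tbl (
        id   INT PRIMARY KEY,
        name VARCHAR(40)
    );

    -- ALTER: change its definition
    ALTER TABLE demo_tbl ADD email VARCHAR(60);

    -- TRUNCATE: remove every row quickly; the table definition remains
    TRUNCATE TABLE demo_tbl;

    -- DROP: remove the object itself
    DROP TABLE demo_tbl;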

  35. Querying and Manipulating Data Part 3 Data Manipulation Language (DML) The Data Manipulation Language (DML) is used to retrieve, insert, and modify database information. These commands are used by all database users during the routine operation of the database. Let's take a brief look at the basic DML commands, sketched below: • INSERT: Insert data into database tables • UPDATE: Update data in database tables • DELETE: Delete data from database tables
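
Assuming the hypothetical demo_tbl from the DDL sketch exists:

    INSERT INTO demo_tbl (id, name) VALUES (1, 'Sajid');   -- add a row
    UPDATE demo_tbl SET name = 'Banday' WHERE id = 1;      -- change it
    DELETE FROM demo_tbl WHERE id = 1;                     -- remove it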

  36. Querying and Manipulating Data Part 3 Data Control Language (DCL) • The Data Control Language (DCL) is a subset of the Structured Query Language (SQL) that allows database administrators to configure security access to relational databases. It complements the Data Definition Language (DDL), which is used to add and delete database objects, and the Data Manipulation Language (DML), which is used to retrieve, insert, and modify the contents of a database. • It consists of three commands: GRANT, REVOKE, and DENY. Combined, these three commands give administrators the flexibility to set and remove database permissions in an extremely granular fashion. • GRANT: The GRANT command is used by administrators to add new permissions to a database user. • REVOKE: The REVOKE command is used to remove database access from a user previously granted such access. • DENY: The DENY command may be used to explicitly prevent a user from receiving a particular permission. This is helpful when a user may be a member of a role or group that is granted a permission and you want to prevent that user from inheriting it by creating an exception. A short sketch follows.
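
A short sketch; report_user is a made-up principal, and note that DENY is a platform extension (e.g., SQL Server) rather than standard SQL:

    GRANT SELECT, INSERT ON demo_tbl TO report_user;   -- add permissions
    REVOKE INSERT ON demo_tbl FROM report_user;        -- take one back
    DENY DELETE ON demo_tbl TO report_user;            -- block it even if inherited via a role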

  37. Querying and Manipulating Data Part 3 Data Query Language (DQL) • The most common operation in SQL is the query, which is performed with the declarative SELECT statement. SELECT retrieves data from one or more tables, or expressions. Standard SELECT statements have no persistent effects on the database. Some non-standard implementations of SELECT can have persistent effects, such as the SELECT INTO syntax that exists in some databases. • SELECT: A query includes a list of columns to be included in the final result immediately following the SELECT keyword. An asterisk ("*") can also be used to specify that the query should return all columns of the queried tables. SELECT is the most complex statement in SQL, with optional keywords and clauses that include: • FROM: The FROM clause indicates the table(s) from which data is to be retrieved. The FROM clause can include optional JOIN subclauses to specify the rules for joining tables. • WHERE: The WHERE clause includes a comparison predicate, which restricts the rows returned by the query. The WHERE clause eliminates all rows from the result set for which the comparison predicate does not evaluate to True.

  38. Querying and Manipulating Data Part 3 Data Query Language (DQL) • GROUP BY: The GROUP BY clause is used to project rows having common values into a smaller set of rows. GROUP BY is often used in conjunction with SQL aggregation functions or to eliminate duplicate rows from a result set. The WHERE clause is applied before the GROUP BY clause. • HAVING: The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. • ORDER BY: The ORDER BY clause identifies which columns are used to sort the resulting data, and in which direction (ascending or descending). Without an ORDER BY clause, the order of rows returned by an SQL query is undefined. All of these clauses appear together in the sketch below.
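
A hedged sketch that counts employees per state, assuming the EMPLOYEE_TBL from the earlier slides plus an illustrative PAY_RATE column:

    SELECT   STATE, COUNT(*) AS employees
    FROM     EMPLOYEE_TBL
    WHERE    PAY_RATE > 10.00        -- filters rows before grouping
    GROUP BY STATE                   -- one output row per state
    HAVING   COUNT(*) > 5            -- filters the grouped rows
    ORDER BY employees DESC;         -- sorts the final result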

  39. Querying and Manipulating Data Part 3 Group functions Group functions are built-in SQL functions that operate on groups of rows and return one value for the entire group (see the sketch after the list): • COUNT: Returns the number of rows in the table that satisfy the condition specified in the WHERE clause. If no WHERE condition is specified, the query returns the total number of rows in the table. • MAX: Gets the maximum value from a column. • MIN: Gets the minimum value from a column. • AVG: Gets the average value of a numeric column. • SUM: Gets the sum of a numeric column. • DISTINCT: Selects the distinct rows.
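
A sketch against EMPLOYEE_PAY_TBL from the normalization slides (PAY_RATE and POSITION are its columns there):

    SELECT COUNT(*)      AS employees,
           MIN(PAY_RATE) AS lowest,
           MAX(PAY_RATE) AS highest,
           AVG(PAY_RATE) AS average,
           SUM(PAY_RATE) AS total_rate
    FROM   EMPLOYEE_PAY_TBL;

    -- DISTINCT removes duplicate values before the rows are returned
    SELECT DISTINCT POSITION FROM EMPLOYEE_PAY_TBL;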

  40. Querying and Manipulating Data Part 3 SQL Joins The SQL JOIN clause is used whenever we have to select data from two or more tables. To use the SQL JOIN clause to extract data from two (or more) tables, we need a relationship between certain columns in these tables. The SQL joins are: • INNER JOIN: only rows satisfying the selection criteria from both joined tables are selected. • LEFT OUTER JOIN: rows satisfying the selection criteria from both joined tables are selected, and all remaining rows from the left table are kept, with NULLs in place of the right table's values. • RIGHT OUTER JOIN: rows satisfying the selection criteria from both joined tables are selected, and all remaining rows from the right table are kept, with NULLs in place of the left table's values. • FULL OUTER JOIN: rows satisfying the selection criteria from both joined tables are selected, and all remaining rows from both the left and the right tables are kept, with NULLs in place of the other table's values. • SELF JOIN: a table is joined to itself. A few of these are sketched below.
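
A hedged sketch using the CUSTOMER_TBL and ORDERS_TBL tables from Part 1 (the MANAGER_ID column in the self join is an assumption):

    -- INNER JOIN: only customers that actually have orders
    SELECT c.CUST_ID, o.ORD_ID
    FROM   CUSTOMER_TBL c
    INNER JOIN ORDERS_TBL o ON o.CUST_ID = c.CUST_ID;

    -- LEFT OUTER JOIN: every customer; ORD_ID is NULL where no order exists
    SELECT c.CUST_ID, o.ORD_ID
    FROM   CUSTOMER_TBL c
    LEFT OUTER JOIN ORDERS_TBL o ON o.CUST_ID = c.CUST_ID;

    -- SELF JOIN: EMPLOYEE_TBL joined to itself to pair employees with managers
    SELECT e.EMP_ID, m.EMP_ID AS manager_id
    FROM   EMPLOYEE_TBL e
    INNER JOIN EMPLOYEE_TBL m ON m.EMP_ID = e.MANAGER_ID;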

  41. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) SQL is designed for a specific purpose: to query data contained in a relational database. SQL is a set-based, declarative query language, not an imperative language such as C or BASIC. However, there are extensions to standard SQL that add procedural programming language functionality, such as control-of-flow constructs. These include SQL/PSM (the ISO-standard Persistent Stored Modules), Transact-SQL (Microsoft SQL Server and Sybase), and PL/SQL (Oracle).

  42. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) In addition to the standard SQL/PSM extensions and proprietary SQL extensions, procedural and object-oriented programmability is available on many SQL platforms via DBMS integration with other languages. • SQL Server 2005 uses the SQLCLR (SQL Server Common Language Runtime) to host managed .NET assemblies in the database, while prior versions of SQL Server were restricted to unmanaged extended stored procedures, which were primarily written in C. • PostgreSQL allows functions to be written in a wide variety of languages, including Perl, Python, Tcl, and C. • In Oracle Database, you can invoke public static methods of Java classes in the Oracle JVM directly from Java clients without defining PL/SQL call specifications and calling them through JDBC or SQLJ. Instead, you can use the JPublisher utility to generate a client-proxy class with the same signature as the server-side Java class. Once you have instantiated a client-proxy instance with a JDBC connection, you can call the proxy methods directly.

  43. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) Stored procedures • A stored procedure is a subroutine available to applications that access a relational database system. A stored procedure (sometimes called a proc, sproc, StoPro, StoredProc, or SP) is actually stored in the database data dictionary. • Typical uses for stored procedures include data validation (integrated into the database) or access control mechanisms. Furthermore, stored procedures can consolidate and centralize logic that was originally implemented in applications. Extensive or complex processing that requires execution of several SQL statements is moved into stored procedures, and all applications call the procedures. One can use nested stored procedures, by executing one stored procedure from within another.
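
A minimal Transact-SQL sketch; the procedure name and body are illustrative, reusing ORDERS_TBL from Part 1:

    CREATE PROCEDURE GetCustomerOrders
        @CustId INT
    AS
    BEGIN
        SELECT ORD_ID, CUST_ID
        FROM   ORDERS_TBL
        WHERE  CUST_ID = @CustId;
    END;

    -- Applications then call the procedure instead of embedding the SQL:
    EXEC GetCustomerOrders @CustId = 42;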

  44. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) User-defined function • In SQL databases, a user-defined function provides a mechanism for extending the functionality of the database server by adding a function that can be evaluated in SQL statements. • The SQL standard distinguishes between scalar and table functions. A scalar function returns only a single value (or NULL), whereas a table function returns a (relational) table comprising zero or more rows, each row with one or more columns. • Once created, a user-defined function may be used in expressions in SQL statements. For example, it can be invoked where most other intrinsic functions are allowed. This also includes SELECT statements, where the function can be used against data stored in tables in the database. Conceptually, the function is evaluated once per row in such usage.
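
A scalar-function sketch in Transact-SQL; the function itself and the 40-hour-week assumption are illustrative:

    CREATE FUNCTION dbo.AnnualPay (@rate DECIMAL(8,2))
    RETURNS DECIMAL(12,2)
    AS
    BEGIN
        RETURN @rate * 40 * 52;   -- hourly rate to yearly pay, assuming 40 h/week
    END;

    -- Evaluated once per row when used in a SELECT:
    SELECT EMP_ID, dbo.AnnualPay(PAY_RATE) AS yearly_pay
    FROM   EMPLOYEE_PAY_TBL;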

  45. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) Trigger • A database trigger is procedural code that is automatically executed in response to certain events on a particular table or view in a database. Triggers are mostly used to maintain the integrity of the information in the database. Triggers are commonly used to: • Audit changes (e.g., keep a log of the users and roles involved in changes) • Enhance changes (e.g., ensure that every change to a record is time-stamped by the server's clock) • Enforce business rules (e.g., require that every invoice have at least one line item) • Execute business rules (e.g., notify a manager every time an employee's bank account number changes) • Replicate data (e.g., store a record of every change, to be shipped to another database later) • Enhance performance (e.g., update the account balance after every detail transaction, for faster queries)
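
For the audit case in the first bullet, a hedged Transact-SQL sketch (PAY_AUDIT_TBL is a hypothetical log table):

    CREATE TRIGGER trg_pay_audit
    ON EMPLOYEE_PAY_TBL
    AFTER UPDATE
    AS
    BEGIN
        -- "inserted" holds the rows as they look after the UPDATE
        INSERT INTO PAY_AUDIT_TBL (EMP_ID, CHANGED_AT)
        SELECT EMP_ID, GETDATE()   -- time-stamped by the server's clock
        FROM   inserted;
    END;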

  46. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) Trigger • The examples above are called Data Manipulation Language (DML) triggers because the triggers are defined as part of the Data Manipulation Language and are executed at the time the data is manipulated. • Some systems also support non-data triggers, which fire in response to Data Definition Language (DDL) events such as creating tables, or to runtime events such as logon, commit, and rollback. Such DDL triggers can be used for auditing purposes. The following are major features of database triggers and their effects: • Triggers do not accept parameters or arguments (but may store affected data in temporary tables) • Triggers cannot perform commit or rollback operations, because they are part of the triggering SQL statement (except through autonomous transactions) • Triggers are normally slow (they slow down processing)

  47. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) Trigger Triggers in Oracle : The four main types of triggers are: • Row Level Trigger: This gets executed before or after any column value of a row changes • Column Level Trigger: This gets executed before or after the specified column changes • For Each Row Type: This trigger gets executed once for each row of the result set caused by insert/update/delete • For Each Statement Type: This trigger gets executed only once for the entire result set, but fires each time the statement is executed.

  48. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) Trigger Triggers in MS SQL Server: The three main types of triggers are: • AFTER triggers: An AFTER trigger fires after an insert, update, or delete operation is performed on a table. • INSTEAD OF triggers: INSTEAD OF triggers are executed in place of the usual triggering action. They can also be defined on views with one or more base tables, where they can extend the types of updates a view can support. • CLR triggers: A CLR trigger can be either an AFTER or an INSTEAD OF trigger, and can also be a DDL trigger. Instead of executing a Transact-SQL stored procedure, a CLR trigger executes one or more methods written in managed code that are members of an assembly created in the .NET Framework and uploaded into SQL Server.

  49. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) Trigger Triggers in MySQL: The three main types of triggers are: • Insert trigger • Update trigger • Delete trigger MySQL allows only one trigger of each type on each table (i.e., one before insert, one after insert, one before update, one after update, one before delete, and one after delete). MySQL also does not fire triggers for changes made outside of SQL statements (e.g., through APIs or foreign-key cascade actions).

  50. Querying and Manipulating Data Part 3 Basic SQL programming (Transact-SQL & CLR /PL-SQL/ SQL & PSM) Views • In database theory, a view consists of a stored query accessible as a virtual table in a relational database. • Unlike ordinary tables (base tables), a view does not form part of the physical schema: it is a dynamic, virtual table computed or collated from data in the database. Changing the data in a base table alters the data shown in subsequent invocations of the view. Views can provide advantages over tables: • Views can represent a subset of the data contained in a table • Views can join and simplify multiple tables into a single virtual table • Views can act as aggregated tables, where the database engine aggregates data (sum, average, etc.) and presents the calculated results as part of the data • Views can hide the complexity of data; for example, a view could appear as Sales2000 or Sales2001, transparently partitioning the actual underlying table • Views take very little space to store; the database contains only the definition of a view, not a copy of all the data it presents • Depending on the SQL engine used, views can provide extra security • Views can limit the degree of exposure of a table or tables to the outer world. A minimal sketch follows below.
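
A minimal sketch, reusing the Part 1 tables; only the view definition is stored, not the joined data:

    CREATE VIEW customer_orders AS
    SELECT c.CUST_ID, o.ORD_ID
    FROM   CUSTOMER_TBL c
    INNER JOIN ORDERS_TBL o ON o.CUST_ID = c.CUST_ID;

    -- Queried like a table; the underlying SELECT runs against live data
    SELECT * FROM customer_orders WHERE CUST_ID = 42;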
