
Presentation Transcript


  1. Get a Handle on DDL Using Oracle 9iR2 Streams • James F. Koopmann, Director of Technical Consulting • Confio Software, 4750 Walnut Street, Suite 106, Boulder, Colorado 80301 • 303.938.8282 x115 • 720.219.3773 (mobile) • jameskoopmann@confio.com • www.confio.com • NoCOUG Summer Conference, Aug. 19, 2004

  2. James F. Koopmann Where to Find Me on the NET • N years of IT / core RDBMS experience • Oracle (V6-10g), 8i & 9i OCP • Writer • Various publications & portals • Speaker: Here, There, & Anywhere • 3 technical articles a month • Forum expert / moderator • Database-centric vendor reviews • Database general-interest issues

  3. Content The DBA nightmare How do you detect DDL changes STREAMS overview STREAMS environment STREAMS setup STREAMS sample DDL session OEM Sweet Dreams for the DBA Running STREAMS

  4. The DBA Nightmare Problem Detection & Viable Solutions • Problems • Are problems “problems” only if they are seen? • Is it a problem if no one sees a benefit? • Should I be working on something that doesn’t solve a problem? • Problems are caused by change • We need to be able to answer the hard questions • What has happened? • When did it happen? • Will it happen again? • We have to fix problems • Reduce finger pointing • Research viable sources…Test / Validate…Acceptance • Be wary of “Your mileage may vary” type statements

  5. The DBA Nightmare Lazy Detection Methodology Shortcomings • Your actions are reactive, not proactive, in nature • The events that trigger an investigation are often specific to a single incident, narrow in scope, and the solutions typically do not take the full health of the database into consideration • Most of your time is spent in problem investigation & detection, not problem solving • Investigation & detection inherently waste money • Customers / users drive the workflow of the database administrators • The database administration group is seen as ineffective

  6. The DBA Nightmare Intelligent Detection Methodology Benefits • Has the issue been seen by others? • Will someone benefit from you working on the issue? • Are you solving a real problem that is causing pain for your users? • Before solving the problem, are you able to determine the true root cause? • Will solving the problem keep it from recurring?

  7. The DBA Nightmare What Can We Detect / Solve?

  8. The DBA Nightmare Buffer Cache - Hit Ratio • SQL> SELECT name, block_size, • 100*(1 - (physical_reads / (db_block_gets+consistent_gets))) buffhit • FROM v$buffer_pool_statistics; • How we size the buffer cache • Just issue the SQL to see the hit ratio • Increase the buffer cache if the hit ratio is low • Done if the hit ratio is high

  9. The DBA Nightmare Buffer Cache - diff’d Hit Ratio • SQL> SELECT b.name, b.block_size, • 100*(1 - ((e.physical_reads-b.physical_reads) / • ((e.db_block_gets-b.db_block_gets)+ • (e.consistent_gets-b.consistent_gets)))) buffhit • FROM beg_buffer_pool_statistics b, end_buffer_pool_statistics e • WHERE b.name=e.name AND b.block_size=e.block_size; • How to diff • CREATE TABLE beg_buffer_pool_statistics AS SELECT * FROM v$buffer_pool_statistics • Run workload through the system • CREATE TABLE end_buffer_pool_statistics AS SELECT * FROM v$buffer_pool_statistics • Just issue the SQL to see the hit ratio • Increase the buffer cache if the hit ratio is low • Done if the hit ratio is high

  10. The DBA Nightmare Buffer Cache - Advisory • SQL> SELECT name, block_size, size_for_estimate, • estd_physical_read_factor, estd_physical_reads • FROM V$DB_CACHE_ADVICE • WHERE advice_status = 'ON'; • How to get the statistics • Set db_cache_advice to 'READY' • Set db_cache_advice to 'ON' • Run a valid workload through the system • Just issue the SQL to see the report

  11. The DBA Nightmare Buffer Cache - Advisory Report

               BLOCK  Cache  Estd Phys    Estd Phys
      NAME     SIZE   Size   Read Factor  Reads
      -------- -----  -----  -----------  ----------
      DEFAULT  8192     48   2.1133       343669940
      DEFAULT  8192     96   1.7266       280783364
      DEFAULT  8192    144   1.4763       240091867
      DEFAULT  8192    192   1.3573       220733606
      DEFAULT  8192    240   1.2801       208181172
      DEFAULT  8192    288   1.2165       197842182
      DEFAULT  8192    336   1.1686       190052917
      DEFAULT  8192    384   1.1202       182180544
      DEFAULT  8192    432   1.0877       176884743
      DEFAULT  8192    480   1.0602       172420984
      DEFAULT  8192    528   1.0196       165812231
      DEFAULT  8192    544   1.0000       162626093  <== CURRENT size
      DEFAULT  8192    576    .9765       158797352
      DEFAULT  8192    624    .9392       152735392
      DEFAULT  8192    672    .9216       149879874
      DEFAULT  8192    720    .9013       146571255
      DEFAULT  8192    768    .8850       143928671
      DEFAULT  8192    816    .8726       141908868
      DEFAULT  8192    864    .8607       139972381

      • It’s all about reads
      • Don’t reduce the size of your cache if you are going to incur more physical reads
      • Don’t increase the size of your cache if you are not going to reduce the number of reads

  12. How Do You Detect DDL Changes Do I Care About Capturing DDL? • What kind of shop are you in? • You run a tight / secure database shop (does it need to be tighter?) • Changes are kept to a minimum (will one slip by?) • The time window between DDL changes is not sufficiently large • You need an inexpensive investigative approach for DDL extraction • Do you err on the side of caution or paranoia? • Do you need to be concerned with each and every DDL statement? • Are you proactive? • Do you like systematic approaches to solving problems? • Do you need to be notified of problems?

  13. The DBA Nightmare Attempts at DDL Stability Through Policy • Have you tried to set policies? • I don’t know how many shops I have worked in and tried to set policies such as: • NO changes after 3:00pm on Friday • Can’t use production for Test / Stress / QA • ALL changes must be approved through the DBA • ALL changes must be implemented by the DBA • ALL source must go through QA & Test • and the list goes on and on and on and on • Bottom line • Startup fallacy • Everyone is in a rush • Most don’t want to do the job the right way the first time • The result: You get a call at 4am on Sunday • You spend your time on data integrity • “This change MUST be in today” • You don’t know what anyone is doing • No one will tell you what they are doing • You have to pick up the pieces

  14. How Do You Detect DDL Changes Methodologies • Take a look at change DATEs in DBA views • SELECT statements against DBA views • Compare database objects over time • Take snapshot at T1 • Take snapshot at T2 • Compare T1 to T2 • Use LogMiner • Search and extract DDL from current redo logs & archived logs • Streams • A set of database structures to capture and replicate changes from a source database to a destination database

  15. How Do You Detect DDL Changes Take a Look at Dates in DBA Views (a sample query follows) The Good - We know when an object was added / changed The Bad - No granularity The Ugly - We still have to validate a change by examining the object as a whole
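A minimal sketch of this approach, using the LAST_DDL_TIME column of DBA_OBJECTS (the 24-hour window is an arbitrary choice):

      -- Objects created or altered in the last 24 hours
      SELECT owner, object_name, object_type,
             TO_CHAR(last_ddl_time, 'YYYY-MM-DD HH24:MI:SS') last_ddl
      FROM   dba_objects
      WHERE  last_ddl_time > SYSDATE - 1
      ORDER  BY last_ddl_time;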

  16. How Do You Detect DDL Changes Compare Database Objects Over Time • T0 - CREATE TABLE time_dba_tables with a DATE-type column • T1 - INSERT INTO time_dba_tables (SELECT sysdate,* FROM dba_tables) • T2 - INSERT INTO time_dba_tables (SELECT sysdate,* FROM dba_tables) • TC - SELECT <changes> FROM time_dba_tables t1, time_dba_tables t2 • OUTER JOINs - for what was added and deleted • DECODEs - for proper column comparison • t1.column != t2.column - the WHERE clause is OR’d across the columns of interest (see the sketch below) The Good - Build it yourself. No reliance on 3rd-party software or the database vendor The Bad - Complicated code (you had better be good) The Ugly - Oracle could change / add tables that are important to object change
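A hedged sketch of the T1/T2 comparison; the snapshot table layout and the compared columns (PCT_FREE, TABLESPACE_NAME) are illustrative choices, not from the slide:

      -- T0: empty snapshot table shaped like DBA_TABLES plus a timestamp
      CREATE TABLE time_dba_tables AS
        SELECT SYSDATE snap_time, t.* FROM dba_tables t WHERE 1 = 0;

      -- T1 and later T2: take snapshots
      INSERT INTO time_dba_tables SELECT SYSDATE, t.* FROM dba_tables t;

      -- TC: report tables whose storage settings changed between snapshots
      SELECT t1.owner, t1.table_name
      FROM   time_dba_tables t1, time_dba_tables t2
      WHERE  t1.snap_time = (SELECT MIN(snap_time) FROM time_dba_tables)
      AND    t2.snap_time = (SELECT MAX(snap_time) FROM time_dba_tables)
      AND    t1.owner = t2.owner
      AND    t1.table_name = t2.table_name
      AND   (t1.pct_free != t2.pct_free
             OR t1.tablespace_name != t2.tablespace_name);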

  17. How Do You Detect DDL Changes LogMiner • Lets you look at the redo log activity in your database • You can look in the current redo logs or archived logs • New in 9i is the ability to look at DDL statements

  18. How Do You Detect DDL Changes LogMiner • Create a dictionary • EXECUTE DBMS_LOGMNR_D.BUILD( • dictionary_filename => 'dictionary.log', • dictionary_location => '/ora/oradata/hcmc/log'); • Specify which log files are to be mined • EXECUTE DBMS_LOGMNR.ADD_LOGFILE( • LogFileName => '/ora/oradata/hcmc/arch/arc00075.001', • Options => dbms_logmnr.NEW); • Start LogMiner • EXECUTE DBMS_LOGMNR.START_LOGMNR( • DictFileName => '/ora/oradata/hcmc/log/dictionary.log'); • Query the LogMiner views • SELECT sql_redo FROM V$LOGMNR_CONTENTS • WHERE operation = 'DDL' The Good - Easy to set up and use The Bad - Pinpointing the time when something might have happened The Ugly - Manual process of adding logs and searching through them
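One step the slide omits: ending the LogMiner session when you are finished, which releases the resources the session holds:

      EXECUTE DBMS_LOGMNR.END_LOGMNR;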

  19. Oracle Streams Overview Oracle Streams A feature within the Oracle database that allows for the replication of database structures and information between two separate databases • Is a new feature introduced in 9iR2 • Is a more robust replication environment that allows for the sharing of information and structures • Improves replication performance because Streams extracts DML and DDL from the redo log files using separate background processes • Supports hot mining of the current active redo log for change data at the same time it is written to the log • Supports mining of archived log files • No loss of data • Directed networks

  20. Oracle Streams Directed Networks • A directed network is a networked system of hosts that allows information to be passed to a destination database even when the destination host is not directly accessible to the source host. • Two methods of information propagation: • Queue Forwarding - information is received by an intermediate database and automatically forwarded to another intermediate or destination database. • Apply Forwarding - information is received by an intermediate database, applied, re-captured, re-queued, and made ready for propagation to another destination database.

  21. Streams Environment CAPTURE SET_UP_QUEUE ADD_GLOBAL_RULES User DDL Change HCMC (source) PROPAGATION ADD_GLOBAL_PROPAGATION_RULES APPLY SET_UP_QUEUE CREATE TABLE history_ddl_lcrs CREATE PROCEDURE history_ddl ADD_GLOBAL_RULES ALTER_APPLY SAIGON (destination)

  22.–28. Streams Environment (animation) - a user DDL change on HCMC (source) is picked up by CAPTURE, moved by PROPAGATION, and consumed by APPLY on SAIGON (destination)

  29. Streams Environment CAPTURE User DDL Change HCMC (source) PROPAGATION APPLY SAIGON (destination) The Good - Able to report on every DDL statement issued without intervention The Bad - Learning curve is a bit high The Ugly - Intensive & cumbersome setup

  30. Streams Environment CAPTURE SET_UP_QUEUE ADD_GLOBAL_RULES User DDL Change HCMC (source) PROPAGATION ADD_GLOBAL_PROPAGATION_RULES APPLY SET_UP_QUEUE CREATE TABLE history_ddl_lcrs CREATE PROCEDURE history_ddl ADD_GLOBAL_RULES ALTER_APPLY SAIGON (destination)

  31. Streams Environment Software Requirements • All databases using Oracle Streams must be upgraded to version 9.2.0.2 (patch #2632931) • Run catalog.sql & catproc.sql after you have upgraded to version 9.2.0.2
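As a sketch, the post-patch steps would look like this in SQL*Plus (the ? shorthand expands to ORACLE_HOME):

      CONNECT sys/<passwd> AS SYSDBA
      @?/rdbms/admin/catalog.sql
      @?/rdbms/admin/catproc.sql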

  32. Streams Environment Archive Log Requirement • The source database must be running in ARCHIVELOG mode in order to capture changes • No data loss • No database hangs caused by LGWR waiting for the capture process to complete • Tune your redo log generation before implementing Streams • Do not use the NOLOGGING option
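If the source is not already in ARCHIVELOG mode, the standard steps are (a sketch; the archived-log destination is left to local convention):

      SHUTDOWN IMMEDIATE
      STARTUP MOUNT
      ALTER DATABASE ARCHIVELOG;
      ALTER DATABASE OPEN;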

  33. Streams Environment Parameter Requirements
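A hedged sketch of the initialization parameters this slide covers; the names and values below are assumptions drawn from Oracle's 9iR2 Streams documentation, not from the original table, and assume an spfile is in use:

      ALTER SYSTEM SET compatible          = '9.2.0' SCOPE=SPFILE; -- 9.2.0 or higher on all Streams databases
      ALTER SYSTEM SET global_names        = TRUE;                 -- database link names must match global names
      ALTER SYSTEM SET job_queue_processes = 2;                    -- at least one per propagation job
      ALTER SYSTEM SET aq_tm_processes     = 1;                    -- queue-monitor process for Advanced Queuing
      ALTER SYSTEM SET log_parallelism     = 1 SCOPE=SPFILE;       -- required on the capture database
      ALTER SYSTEM SET open_links          = 4 SCOPE=SPFILE;       -- >= number of databases propagated to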

  34. Intermission

  35. Streams Setup Create Administrator • CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA • Normal database stuff • CREATE TABLESPACE ddlman • DATAFILE 'C:\ORACLE\ORADATA\[HCMC|SAIGON]\DDLMAN01.DBF' SIZE 100M • REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; • CREATE USER ddlman IDENTIFIED BY ddlman • DEFAULT TABLESPACE ddlman • QUOTA UNLIMITED ON ddlman; • GRANT CONNECT, RESOURCE TO ddlman;

  36. Streams Setup Grant Privileges to Administrator • CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA • Advanced Queuing Administration • GRANT EXECUTE ON DBMS_AQADM TO DDLMAN; • Streams Administration • GRANT EXECUTE ON DBMS_STREAMS_ADM TO DDLMAN; • GRANT EXECUTE ON DBMS_APPLY_ADM TO DDLMAN; • GRANT EXECUTE ON DBMS_CAPTURE_ADM TO DDLMAN; • GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO DDLMAN; • Instantiation Purposes • GRANT EXECUTE ON DBMS_FLASHBACK TO DDLMAN; • Easy Monitoring and Usage • GRANT SELECT_CATALOG_ROLE TO DDLMAN; • GRANT SELECT ANY DICTIONARY TO DDLMAN; • GRANT SELECT ON DBA_APPLY_ERROR TO DDLMAN;

  37. Streams Setup Grant Privileges to Administrator to Create Rules • CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA • BEGIN • DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE( • privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ, • grantee => 'DDLMAN', • grant_option => FALSE); • END; • / • BEGIN • DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE( • privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ, • grantee => 'DDLMAN', • grant_option => FALSE); • END; • /

  38. Streams Setup Switch LogMiner out of the SYSTEM Tablespace • Why • By default, LogMiner tables are in the SYSTEM tablespace. • When you create a capture or apply process, Oracle will create a subset of the data dictionary to keep track of changes to structures. • Streams will actually keep multiple versions of object information. • The SYSTEM tablespace may not have enough room for these tables. • How • CONNECT sys/<passwd>@hcmc AS SYSDBA • CREATE TABLESPACE logminer DATAFILE 'C:\ORACLE\ORADATA\HCMC\LOGMINER01.DBF' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED; • EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER');

  39. Streams Setup LogMiner / Streams Issues • What • If you move LogMiner after you have captured or applied, you will lose the Streams data dictionary changes that have been recorded. • Oracle does not keep the Streams data dictionary clean, which puts greater strain on the Streams dictionary and allows it to grow uncontrollably. • To Do • Simple: don’t forget to move LogMiner first • When an object is no longer being used, clean out the Streams dictionary by calling DBMS_STREAMS_ADM.PURGE_SOURCE_CATALOG for that object (a sketch follows)
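A hedged sketch of the purge call; the object name SCOTT.EMP is illustrative:

      BEGIN
        DBMS_STREAMS_ADM.PURGE_SOURCE_CATALOG(
          source_database    => 'hcmc',
          source_object_name => 'scott.emp',
          source_object_type => 'TABLE');
      END;
      /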

  40. Streams Setup Database Link from Source to Destination • Why • For transporting the captured DDL from the source database to the destination database • How • CONNECT ddlman/ddlman@hcmc • CREATE DATABASE LINK saigon CONNECT TO ddlman IDENTIFIED BY ddlman USING 'saigon';
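A quick sanity check that the link resolves and logs in (illustrative):

      SELECT * FROM dual@saigon;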

  41. Streams Setup Capture • CONNECT ddlman/ddlman@hcmc • BEGIN • DBMS_STREAMS_ADM.SET_UP_QUEUE( • queue_table => 'ddl_cap_table', • queue_name => 'ddl_cap_q', • queue_user => 'ddlman'); • END; • / • BEGIN • DBMS_STREAMS_ADM.ADD_GLOBAL_RULES( • streams_type => 'capture', • streams_name => 'cap_ddl', • queue_name => 'ddl_cap_q', • include_dml => false, • include_ddl => true, • include_tagged_lcr => false, • source_database => 'hcmc'); • END; • /

  42. Streams Setup Propagation Rules • CONNECT ddlman/ddlman@hcmc • BEGIN • DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES( • streams_name => 'prop_ddl', • source_queue_name => 'ddl_cap_q', • destination_queue_name => 'ddlman.ddl_apply_q@saigon', • include_dml => false, • include_ddl => true, • include_tagged_lcr => false, • source_database => 'hcmc'); • END; • /

  43. Streams Setup Create Queue • CONNECT ddlman/ddlman@saigon • BEGIN • DBMS_STREAMS_ADM.SET_UP_QUEUE( • queue_table => 'ddl_apply_table', • queue_name => 'ddl_apply_q', • queue_user => 'ddlman'); • END; • /

  44. Streams Setup Create Table to hold DDL • CONNECT ddlman/ddlman@saigon • CREATE TABLE ddlman.ddl_history( • timestamp DATE, • source_database_name VARCHAR2(128), • command_type VARCHAR2(30), • object_owner VARCHAR2(32), • object_name VARCHAR2(32), • object_type VARCHAR2(18), • ddl_text CLOB, • logon_user VARCHAR2(32), • current_schema VARCHAR2(32), • base_table_owner VARCHAR2(32), • base_table_name VARCHAR2(32), • tag RAW(10), • transaction_id VARCHAR2(10), • scn NUMBER);

  45. Streams Setup Logical Change Records (LCRs) When the capture process mines information from the redo log, it reformats that information into LCRs. These LCRs are specific to the type of information captured and completely define the change that has occurred. SYS.ANYDATA • This is an overloaded object type that can hold any scalar (number, varchar, char, date…) or user-defined data type. It has methods that let us query which true data type it holds, as well as methods to retrieve the values (a small sketch follows) • $ORACLE_HOME/rdbms/admin/dbmsany.sql
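A small, self-contained sketch of the SYS.ANYDATA methods in action (illustrative values):

      SET SERVEROUTPUT ON
      DECLARE
        a  SYS.ANYDATA;
        v  VARCHAR2(30);
        rc PLS_INTEGER;
      BEGIN
        a := SYS.ANYDATA.CONVERTVARCHAR2('hello');
        DBMS_OUTPUT.PUT_LINE(a.GETTYPENAME());  -- reports SYS.VARCHAR2
        rc := a.GETVARCHAR2(v);                 -- retrieves the stored value
        DBMS_OUTPUT.PUT_LINE(v);                -- hello
      END;
      /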

  46. Streams Setup Create Procedure to “handle” DDL • CONNECT ddlman/ddlman@saigon • CREATE PROCEDURE history_ddl(in_any IN SYS.ANYDATA) IS • lcr SYS.LCR$_DDL_RECORD; • rc PLS_INTEGER; • ddl_text CLOB; • BEGIN • rc := in_any.GETOBJECT(lcr); • DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE); • lcr.GET_DDL_TEXT(ddl_text); • INSERT INTO ddlman.ddl_history • VALUES(SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(), lcr.GET_OBJECT_OWNER(), • lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(), ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(), • lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(), lcr.GET_TAG(), lcr.GET_TRANSACTION_ID(), lcr.GET_SCN()); • COMMIT; • DBMS_LOB.FREETEMPORARY(ddl_text); • END; • /

  47. Streams Setup Create Rules • CONNECT ddlman/ddlman@saigon • BEGIN • DBMS_STREAMS_ADM.ADD_GLOBAL_RULES( • streams_type => 'apply', • streams_name => 'apply_ddl', • queue_name => 'ddl_apply_q', • include_dml => false, • include_ddl => true, • include_tagged_lcr => false, • source_database => 'hcmc'); • END; • /

  48. Streams Setup Hook in the DDL handler • CONNECT ddlman/ddlman@saigon • BEGIN • DBMS_APPLY_ADM.ALTER_APPLY( • apply_name => 'apply_ddl', • ddl_handler => 'ddlman.history_ddl'); • END; • /
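Once the stream is instantiated and running (next slides), the captured DDL can be reviewed with an ordinary query against the history table; a sketch:

      SELECT timestamp, logon_user, command_type,
             object_owner, object_name, ddl_text
      FROM   ddlman.ddl_history
      ORDER  BY timestamp;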

  49. Streams Setup Instantiate the Stream Environment • Definition • Before we can start capturing, propagating, and applying within our Streams environment, we must instantiate the destination database. This is nothing more than registering the source SCN with the destination database so that it knows the point in time from which it can start applying captured information. • There are three methods for instantiating objects • Instantiating and creating database objects by using Export/Import • Setting the instantiation SCN for an existing table, schema, or database manually, by executing procedures in the DBMS_APPLY_ADM package at the destination database • Performing a metadata-only export and import for existing tables or schemas • You can verify the instantiation by querying DBA_APPLY_INSTANTIATED_OBJECTS (a sample query follows)
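A minimal verification query, run at the destination; the column list is an assumption drawn from the 9iR2 data dictionary:

      SELECT source_database, source_object_owner,
             source_object_name, instantiation_scn
      FROM   dba_apply_instantiated_objects;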

  50. Streams Setup Instantiate the Stream Environment • CONNECT ddlman/ddlman@hcmc • EXEC DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION; • DECLARE • iscn NUMBER; • BEGIN • iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER(); • DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN@saigon( • source_database_name => 'hcmc', • instantiation_scn => iscn, • apply_database_link => 'saigon'); • END; • /
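With instantiation done, the capture and apply processes still have to be started before any DDL flows; a hedged sketch using the names created earlier:

      -- On saigon: start the apply process (optionally keep it running past errors)
      CONNECT ddlman/ddlman@saigon
      EXEC DBMS_APPLY_ADM.SET_PARAMETER('apply_ddl', 'disable_on_error', 'n');
      EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_ddl');

      -- On hcmc: start the capture process
      CONNECT ddlman/ddlman@hcmc
      EXEC DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'cap_ddl');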
