Get a handle on DDL using Oracle 9iR2 Streams
NoCOUG Summer Conference, Aug. 19, 2004
Presentation Transcript
slide1

Get a handle on DDL using Oracle 9iR2 Streams

4750 Walnut Street, Suite 106 James F. Koopmann

Boulder, Colorado 80301 Director of Technical Consulting

303.938.8282 x115

720.219.3773 (mobile)

jameskoopmann@confio.com

www.confio.com

NoCOUG Summer Conference, Aug. 19, 2004

james f koopmann
James F. Koopmann

Where to Find Me on the NET

  • N-years of IT/core RDBMS experience
  • Oracle (V6-10g) 8i & 9i OCP
  • Writer
  • Various Publications & Portals
  • Speaker : Here, There, & Anywhere
  • 3 technical articles a month
  • Forum expert / moderator
  • Database centric vendor reviews
  • Database general interest issues
slide3

Content

The DBA nightmare

How do you detect DDL changes

STREAMS overview

STREAMS environment

STREAMS setup

STREAMS sample DDL session

OEM

Sweet Dreams for the DBA

Running STREAMS

the dba nightmare
The DBA Nightmare

Problem Detection & Viable Solutions

  • Problems
      • Are problems “problems” only if they are seen?
      • Is it a problem if no one sees a benefit?
      • Should I be working on something that doesn’t solve a problem?
  • Problems are caused by change
    • We need to be able to answer the hard questions
      • What has happened?
      • When did it happen?
      • Will it happen again?
  • We have to fix Problems
      • Reduce finger pointing
      • Research viable sources…Test / Validate…Acceptance
      • Be wary of “Your mileage may vary” type statements
slide5
Your actions are reactive, not proactive, in nature

The events that trigger an investigation are often very specific to an incident, narrow in scope, and the solutions typically do not take the full health of a database into consideration.

Most of your time is spent in problem investigation & detection, not problem solving

Investigation & detection inherently wastes money

Customers / users drive the work flow of the database administrators

Database administration group is seen as ineffective

The DBA Nightmare

Lazy Detection Methodology Shortcomings

slide6
Has the issue been seen by others?

Will someone benefit from you working on the issue?

Are you solving a real problem that is causing pain for your users?

Before solving the problem, are you able to determine the true root cause?

Will solving the problem keep it from recurring?

The DBA Nightmare

Intelligent Detection Methodology Benefits

the dba nightmare7
The DBA Nightmare

What Can We Detect / Solve

slide8

The DBA Nightmare

Buffer Cache - Hit Ratio

  • SQL> SELECT name, block_size,
  • 100*(1 - (physical_reads / (db_block_gets+consistent_gets))) buffhit
  • FROM v$buffer_pool_statistics;
  • How we size the buffer cache
  • Just issue the SQL to see the hit ratio
  • Increase the buffer cache if hit ratio is low
  • Done if hit ratio is high
slide9

The DBA Nightmare

Buffer Cache - diff’d Hit Ratio

  • SQL> SELECT b.name, b.block_size,
    • 100*(1 - ((e.physical_reads-b.physical_reads) /
    • ((e.db_block_gets-b.db_block_gets)+
    • (e.consistent_gets-b.consistent_gets)))) buffhit
    • FROM beg_buffer_pool_statistics b, end_buffer_pool_statistics e
    • WHERE b.name=e.name AND b.block_size=e.block_size;
  • How to diff
  • Create table beg_buffer_pool_statistics as select * from v$buffer_pool_statistics
  • Run workload through system
  • Create table end_buffer_pool_statistics as select * from v$buffer_pool_statistics
  • Just issue the SQL to see hit ratio
  • Increase the buffer cache if hit ratio is low
  • Done if hit ratio is high
slide10

The DBA Nightmare

Buffer Cache - Advisory

  • SQL> SELECT name, block_size, size_for_estimate,
    • estd_physical_read_factor, estd_physical_reads
  • FROM V$DB_CACHE_ADVICE
  • WHERE advice_status = 'ON';
  • How to get the statistics
  • Set the db_cache_advice to ‘READY’
  • Set the db_cache_advice to ‘ON’
  • Run a valid workload through the system
  • Just issue the SQL to see report
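The parameter changes named in the bullets above can be sketched as follows (a minimal sketch; READY pre-allocates the advisory memory, ON starts collection):

```sql
-- Allocate advisory bookkeeping memory but do not start collecting yet
ALTER SYSTEM SET db_cache_advice = READY;

-- Begin gathering advisory statistics (adds some CPU and memory overhead)
ALTER SYSTEM SET db_cache_advice = ON;

-- ...run a representative workload, then query V$DB_CACHE_ADVICE...

-- Turn the advisory off when finished
ALTER SYSTEM SET db_cache_advice = OFF;
```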
slide11

The DBA Nightmare

Buffer Cache - Advisory Report

NAME     BLOCK  Cache  Estd Phys    Estd Phys
         SIZE   Size   Read Factor  Reads
-------- -----  -----  -----------  ----------
DEFAULT   8192     48       2.1133   343669940
DEFAULT   8192     96       1.7266   280783364
DEFAULT   8192    144       1.4763   240091867
DEFAULT   8192    192       1.3573   220733606
DEFAULT   8192    240       1.2801   208181172
DEFAULT   8192    288       1.2165   197842182
DEFAULT   8192    336       1.1686   190052917
DEFAULT   8192    384       1.1202   182180544
DEFAULT   8192    432       1.0877   176884743
DEFAULT   8192    480       1.0602   172420984
DEFAULT   8192    528       1.0196   165812231
DEFAULT   8192    544       1.0000   162626093  <-- CURRENT size
DEFAULT   8192    576        .9765   158797352
DEFAULT   8192    624        .9392   152735392
DEFAULT   8192    672        .9216   149879874
DEFAULT   8192    720        .9013   146571255
DEFAULT   8192    768        .8850   143928671
DEFAULT   8192    816        .8726   141908868
DEFAULT   8192    864        .8607   139972381

  • It’s all about reads
  • Don’t reduce the size of your cache if you are going to incur more physical reads
  • Don’t increase the size of your cache if you are not going to reduce the number of reads

how do you detect ddl changes
How Do You Detect DDL Changes

Do I Care About Capturing DDL

  • What kind of shop are you in?
  • You run a tight / secure database shop (does it need to be tighter?)
  • Changes are kept to a minimum (will one slip by?)
  • The time window between DDL changes is not sufficiently large.
  • You need an inexpensive investigative approach for DDL extraction.
  • Do you err on the side of caution or of paranoia?
  • Do you need to be concerned with each and every DDL statement?
  • Are you proactive?
  • Do you like systematic approaches to solving problems?
  • Do you need to be notified of problems?
the dba nightmare13
The DBA Nightmare

Attempts at DDL stability through policy

  • Have you tried to set policies?
  • I don’t know how many shops I have worked in where I tried to set policies such as:
  • NO changes after 3:00pm on Friday
  • Can’t use production for Test / Stress / QA
  • ALL changes must be approved through DBA
  • ALL changes must be implemented by DBA
  • ALL source must go through QA & Test
  • and the list goes on and on and on and on
  • Bottom line
  • Startup fallacy
  • Everyone is in a rush
  • Most don’t want to do the job the right way the first time

You get a call at 4am on Sunday

You spend time with data integrity

This change MUST be in today

You don’t know what anyone is doing

No one will tell you what they are doing

You have to pick up the pieces

how do you detect ddl changes14
How Do You Detect DDL Changes

Methodologies

  • Take a look at Change DATEs in DBA views
    • SELECT Statements against DBA views
  • Compare Database Objects Over Time
    • Take Snapshot at T1
    • Take Snapshot at T2
    • Compare T1 to T2
  • Use LogMiner
    • Search and Extract DDL from Current Redo Logs & Archived Logs
  • Streams
    • Set of database structures to capture and replicate changes from a source database to a destination database
how do you detect ddl changes15
How Do You Detect DDL Changes

Take a Look at Dates In DBA Views

The Good - We know when object is added / changed

The Bad - No Granularity

The Ugly - We still have to validate a change by looking at the whole object
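As a sketch of this method, DBA_OBJECTS carries CREATED and LAST_DDL_TIME columns; the 24-hour window below is only illustrative:

```sql
-- Objects created or touched by DDL in the last 24 hours
SELECT owner, object_name, object_type,
       TO_CHAR(created, 'DD-MON-YY HH24:MI')       created,
       TO_CHAR(last_ddl_time, 'DD-MON-YY HH24:MI') last_ddl
FROM   dba_objects
WHERE  last_ddl_time > SYSDATE - 1
ORDER  BY last_ddl_time;
```

Note the "No Granularity" complaint above: LAST_DDL_TIME tells you something changed, not what changed.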

how do you detect ddl changes16
How Do You Detect DDL Changes

Compare Database Objects Over time

  • T0 - CREATE TABLE time_dba_tables with DATE-TYPE column
  • T1 - INSERT INTO time_dba_tables (SELECT sysdate, t.* FROM dba_tables t)
  • T2 - INSERT INTO time_dba_tables (SELECT sysdate, t.* FROM dba_tables t)
  • TC - SELECT <changes> FROM time_dba_tables t1, time_dba_tables t2
      • OUTER JOINS - for what was added and deleted
      • DECODES - for proper column comparison
      • t1.column != t2.column - WHERE clause is “OR’d” for the columns of interest

The Good - Build it yourself. No reliance on 3rd-party software or the database vendor

The Bad - Complicated code. (you better be good)

The Ugly - Oracle could change / add tables that are important to object change
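A minimal sketch of the TC comparison step for one column, assuming the DATE column created at T0 is named snap_date (that name and the bind variables are illustrative; the full version ORs every column of interest and outer-joins to catch adds and drops):

```sql
-- Tables whose PCT_FREE changed between the two snapshots
SELECT t1.owner, t1.table_name,
       t1.pct_free old_pct_free, t2.pct_free new_pct_free
FROM   time_dba_tables t1, time_dba_tables t2
WHERE  t1.owner      = t2.owner
AND    t1.table_name = t2.table_name
AND    t1.snap_date  = :t1_date   -- snapshot taken at T1
AND    t2.snap_date  = :t2_date   -- snapshot taken at T2
AND    t1.pct_free  != t2.pct_free;
```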

how do you detect ddl changes17
How Do You Detect DDL Changes

LogMiner

  • Lets you look at the redo log activity in your database
  • You can look in the current redo logs or archived logs
  • New in 9i is the ability to look at DDL statements
how do you detect ddl changes18
How Do You Detect DDL Changes

LogMiner

  • Create a dictionary
    • EXECUTE DBMS_LOGMNR_D.BUILD(
    • dictionary_filename => 'dictionary.log',
    • dictionary_location => '/ora/oradata/hcmc/log');
  • Specify which log files to be monitored
    • EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
    • LogFileName => '/ora/oradata/hcmc/arch/arc00075.001',
    • Options => dbms_logmnr.NEW);
  • Start logminer
    • EXECUTE DBMS_LOGMNR.START_LOGMNR(
    • DictFileName =>'/ora/oradata/hcmc/log/dictionary.log');
  • Query logminer views
    • SELECT sql_redo FROM V$LOGMNR_CONTENTS
    • WHERE operation='DDL';

The Good - Easy to setup and use

The Bad - Pinpointing the time when something might have happened

The Ugly - Manual process of adding logs and searching through them

oracle streams
Oracle Streams

Overview

Oracle Streams

A feature within the Oracle database that allows for the replication of database structures and information between two separate databases

  • Is a new feature that was introduced in 9iR2
  • Is a more robust replication environment that allows for the sharing of information and structures
  • Improves replication performance because Streams extracts DML and DDL from the redo log files through separate background processes.
  • Supports the hot mining of the current active redo log for change data at the same time it is written to the log.
  • Supports the mining of archived log files.
  • No loss of data
  • Directed Networks
oracle streams20
Oracle Streams

Directed Networks

  • A directed network is a networked system of hosts that allows information to be passed to a destination database whose host is not directly accessible from the source host.
  • There are two methods of information propagation.
  • Queue forwarding: information is received by an intermediate database and automatically forwarded to another intermediate or destination database.
  • Apply forwarding: information is received by an intermediate database, applied, captured, re-queued, and made ready for propagation to another destination database.
slide21

Streams Environment

User DDL Change on HCMC (source)
  -> CAPTURE: SET_UP_QUEUE, ADD_GLOBAL_RULES
  -> PROPAGATION: ADD_GLOBAL_PROPAGATION_RULES
  -> APPLY on SAIGON (destination): SET_UP_QUEUE, CREATE TABLE history_ddl_lcrs, CREATE PROCEDURE history_ddl, ADD_GLOBAL_RULES, ALTER_APPLY

slide29

Streams Environment

User DDL Change on HCMC (source) -> CAPTURE -> PROPAGATION -> APPLY on SAIGON (destination)

The Good - Able to report on every DDL statement issued without intervention

The Bad - Learning curve is a bit high

The Ugly - Intensive & cumbersome setup


slide31

Streams Environment

Software Requirements

  • All databases using Oracle Streams must be upgraded to version 9.2.0.2 (patch #2632931)
  • Run catalog.sql & catproc.sql after you have upgraded to version 9.2.0.2
slide32

Streams Environment

Archive Log Requirement

  • The source database must be running in ARCHIVELOG mode in order to capture changes.
    • No data loss
    • No database hanging caused by LGWR waiting for capture process to complete
    • Tune your redo log generation before implementing streams
    • Do not use the NOLOGGING option
slide33

Streams Environment

Parameter Requirements

streams setup
Streams Setup

Create Administrator

  • CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA
    • Normal Database Stuff
  • CREATE TABLESPACE ddlman
    • DATAFILE 'C:\ORACLE\ORADATA\[HCMC|SAIGON]\DDLMAN01.DBF' SIZE 100M
    • REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  • CREATE USER ddlman IDENTIFIED BY ddlman
    • DEFAULT TABLESPACE ddlman
    • QUOTA UNLIMITED ON ddlman;
  • GRANT CONNECT, RESOURCE TO ddlman;
streams setup36
Streams Setup

Grant Privileges to Administrator

  • CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA
    • Advanced Queuing Administration
  • GRANT EXECUTE ON DBMS_AQADM TO DDLMAN;
    • Streams Administration
  • GRANT EXECUTE ON DBMS_STREAMS_ADM TO DDLMAN;
  • GRANT EXECUTE ON DBMS_APPLY_ADM TO DDLMAN;
  • GRANT EXECUTE ON DBMS_CAPTURE_ADM TO DDLMAN;
  • GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO DDLMAN;
    • Instantiation Purposes
  • GRANT EXECUTE ON DBMS_FLASHBACK TO DDLMAN;
    • Easy Monitoring and Usage
  • GRANT SELECT_CATALOG_ROLE TO DDLMAN;
  • GRANT SELECT ANY DICTIONARY TO DDLMAN;
  • GRANT SELECT ON DBA_APPLY_ERROR TO DDLMAN;
streams setup37
Streams Setup

Grant Privileges to Administrator to Create Rules

  • CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA
  • BEGIN
  • DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    • privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    • grantee => 'DDLMAN',
    • grant_option => FALSE);
    • END;
    • /
  • BEGIN
    • DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    • privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    • grantee => 'DDLMAN',
    • grant_option => FALSE);
    • END;
    • /
streams setup38
Streams Setup

Switch LogMiner out of the SYSTEM Tablespace

  • Why
  • By default, LogMiner tables are in the SYSTEM tablespace.
  • When you create a capture or apply process, Oracle will create a subset of the data dictionary to keep track of changes to structures.
  • Streams will actually keep multiple versions of object information.
  • The SYSTEM tablespace may not have enough room for these tables.
  • How
  • CONNECT sys/<passwd>@hcmc AS SYSDBA
  • CREATE TABLESPACE logminer DATAFILE 'C:\ORACLE\ORADATA\HCMC\LOGMINER01.DBF' SIZE 100M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
  • EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER');
streams setup39
Streams Setup

LogMiner / Streams Issues

  • What
  • If you move LogMiner after you have created a capture or apply process, you will lose the Streams dictionary changes that have been recorded.
  • The Streams data dictionary is not kept clean by Oracle, which can put greater strain on the Streams dictionary and allow it to grow uncontrollably.
  • To Do
  • Simple: move LogMiner before creating any capture or apply process
  • When an object is no longer being used, you can clean it out of the Streams dictionary by calling DBMS_STREAMS_ADM.PURGE_SOURCE_CATALOG for that particular object.
streams setup40
Streams Setup

Database Link from source to destination

  • Why
  • For transporting the captured DDL from the source database to the destination database
  • How
  • CONNECT ddlman/ddlman@hcmc
  • CREATE DATABASE LINK saigon CONNECT TO ddlman IDENTIFIED BY ddlman USING 'saigon';
streams setup41
Streams Setup

Capture

  • CONNECT ddlman/ddlman@hcmc
  • BEGIN DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'ddl_cap_table', queue_name => 'ddl_cap_q', queue_user => 'ddlman'); END; /
  • BEGIN
  • DBMS_STREAMS_ADM.ADD_GLOBAL_RULES( streams_type => 'capture', streams_name => 'cap_ddl', queue_name => 'ddl_cap_q', include_dml => false, include_ddl => true, include_tagged_lcr => false, source_database => 'hcmc'); END; /
streams setup42
Streams Setup

Propagation Rules

  • CONNECT ddlman/ddlman@hcmc
  • BEGIN
    • DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
    • streams_name => 'prop_ddl',
    • source_queue_name => 'ddl_cap_q',
    • destination_queue_name => 'ddlman.ddl_apply_q@saigon',
    • include_dml => false,
    • include_ddl => true,
    • include_tagged_lcr => false,
    • source_database => 'hcmc' );
    • END;
  • /
streams setup43
Streams Setup

Create Queue

  • CONNECT ddlman/ddlman@saigon
  • BEGIN
    • DBMS_STREAMS_ADM.SET_UP_QUEUE(
    • queue_table => 'ddl_apply_table',
    • queue_name => 'ddl_apply_q',
    • queue_user => 'ddlman');
    • END;
    • /
streams setup44
Streams Setup

Create Table to hold DDL

  • CONNECT ddlman/ddlman@saigon
  • CREATE TABLE ddlman.ddl_history(
    • timestamp DATE,
    • source_database_name VARCHAR2(128),
    • command_type VARCHAR2(30),
    • object_owner VARCHAR2(32),
    • object_name VARCHAR2(32),
    • object_type VARCHAR2(18),
    • ddl_text CLOB,
    • logon_user VARCHAR2(32),
    • current_schema VARCHAR2(32),
    • base_table_owner VARCHAR2(32),
    • base_table_name VARCHAR2(32),
    • tag RAW(10),
    • transaction_id VARCHAR2(10),
    • scn NUMBER);
streams setup45
Streams Setup

Logical Change Records (LCRs)

When the capture process mines information from the redo log, it reformats that information into LCRs. Each LCR is specific to the type of information captured and completely defines the change that has occurred.

SYS.ANYDATA

  • This is an overloaded object type that can hold any scalar (number, varchar, char, date…) or user-defined data type. It provides methods that let us query which actual data type it holds, as well as methods to retrieve the values
    • $ORACLE_HOME/rdbms/admin/dbmsany.sql
streams setup46
Streams Setup

Create Procedure to “handle” DDL

  • CONNECT ddlman/ddlman@saigon
  • CREATE PROCEDURE history_ddl(in_any IN SYS.ANYDATA) IS
    • lcr SYS.LCR$_DDL_RECORD;
    • rc PLS_INTEGER;
    • ddl_text CLOB;
    • BEGIN
    • rc := in_any.GETOBJECT(lcr);
    • DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
    • lcr.GET_DDL_TEXT(ddl_text);
    • INSERT INTO ddlman.ddl_history
    • VALUES(SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),lcr.GET_OBJECT_OWNER(),
    • lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(),ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(),
    • lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(),lcr.GET_TAG(),lcr.GET_TRANSACTION_ID(), lcr.GET_SCN());
    • COMMIT;
    • DBMS_LOB.FREETEMPORARY(ddl_text);
    • END;
    • /
streams setup47
Streams Setup

Create Rules

  • CONNECT ddlman/ddlman@saigon
  • BEGIN
    • DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
    • streams_type => 'apply',
    • streams_name => 'apply_ddl',
    • queue_name => 'ddl_apply_q',
    • include_dml => false,
    • include_ddl => true,
    • include_tagged_lcr => false,
    • source_database => 'hcmc');
    • END;
    • /
streams setup48
Streams Setup

Hook in the DDL handler

  • CONNECT ddlman/ddlman@saigon
  • BEGIN
    • DBMS_APPLY_ADM.ALTER_APPLY(
    • apply_name => 'apply_ddl',
    • ddl_handler => 'ddlman.history_ddl');
    • END;
    • /
streams setup49
Streams Setup

Instantiate the Stream Environment

  • Definition
  • Before we can start capturing, propagating, and applying within our Streams environment we must instantiate the destination database. This is nothing more than registering the source SCN with the destination database so it knows the point in time it can start applying captured information.
  • There are three methods for instantiating objects
  • Instantiating and creating database objects by using Export/Import
  • Setting the instantiation SCN for an existing table, schema, or database manually, by executing procedures in the DBMS_APPLY_ADM package at the destination database
  • Performing a metadata-only export and import for existing tables or schemas. You can verify the instantiation by querying DBA_APPLY_INSTANTIATED_OBJECTS.
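The verification query mentioned above might look like this when run as the Streams administrator on the destination:

```sql
-- Confirm which source objects have an instantiation SCN registered
SELECT source_object_owner, source_object_name, instantiation_scn
FROM   dba_apply_instantiated_objects;
```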
streams setup50
Streams Setup

Instantiate the Stream Environment

  • CONNECT ddlman/ddlman@hcmc
  • exec dbms_capture_adm.PREPARE_GLOBAL_INSTANTIATION;
  • DECLARE
    • iscn NUMBER;
    • BEGIN
    • iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    • DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN@saigon(
    • source_database_name => 'hcmc',
    • instantiation_scn => iscn,
    • apply_database_link => 'saigon');
    • END;
    • /
streams setup51
Streams Setup

Start the Apply Process

  • CONNECT ddlman/ddlman@saigon
  • BEGIN
    • DBMS_APPLY_ADM.START_APPLY(
    • apply_name => 'apply_ddl');
    • END;
    • /
    • BEGIN
    • DBMS_APPLY_ADM.STOP_APPLY(
    • apply_name => 'apply_ddl');
    • END;
    • /
  • Sets the status of the apply process to enabled
  • Starts the reader server PX process (pnn)
  • Starts the background coordinator process (apNN)
  • Starts the apply server PX process (pnn)

Stop the Apply Process

streams setup52
Streams Setup

Start the Capture Process

  • CONNECT ddlman/ddlman@hcmc
  • BEGIN
    • DBMS_CAPTURE_ADM.START_CAPTURE(
    • capture_name => 'cap_ddl');
    • END;
    • /
    • BEGIN
    • DBMS_CAPTURE_ADM.STOP_CAPTURE(
    • capture_name => 'cap_ddl');
    • END;
    • /
  • Sets the status of the capture process to enabled
  • Starts the background capture process (cpnn)

Stop the Capture Process
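After starting (or stopping) the processes, their state can be confirmed from the data dictionary; a quick sanity check on each database:

```sql
-- On hcmc: the capture process should show ENABLED
SELECT capture_name, status FROM dba_capture;

-- On saigon: the apply process should show ENABLED
SELECT apply_name, status FROM dba_apply;
```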

slide53
OEM

Streams

slide54
OEM

Streams - Capture

slide55
OEM

Streams - Propagate

slide56
OEM

Streams - Apply

sample ddl session
Sample DDL Session

Make Some Changes

  • HCMC-SQL> connect scott/tigger@hcmc
  • Connected.
  • HCMC-SQL> CREATE TABLE DDL_CHECK_TABLE (COL1 NUMBER);
  • Table created.
  • HCMC-SQL> ALTER TABLE DDL_CHECK_TABLE ADD (COL2 VARCHAR2(500));
  • Table altered.
  • HCMC-SQL> TRUNCATE TABLE DDL_CHECK_TABLE;
  • Table truncated.
  • HCMC-SQL> ALTER TABLE DDL_CHECK_TABLE DROP COLUMN COL2;
  • Table altered.
  • HCMC-SQL> DROP TABLE DDL_CHECK_TABLE;
  • Table dropped.
sample ddl session58
Sample DDL Session

View From DDL_HISTORY Table

SAIGON-SQL> SELECT timestamp,substr(source_database_name,1,4) source,
  2  logon_user,command_type,object_owner owner,object_name name,
  3  object_type type
  4  FROM ddl_history;

TIMESTAMP SOURCE LOGON_USER COMMAND_TYPE         OWNER     NAME            TYPE
--------- ------ ---------- -------------------- --------- --------------- -----
11-OCT-03 HCMC   SYS        CREATE USER                    AA              USER
11-OCT-03 HCMC   SYS        CREATE TABLESPACE
11-OCT-03 HCMC   SYS        CREATE USER                    TEMPUSER        USER
11-OCT-03 HCMC   SYS        ALTER USER                     SCOTT           USER
11-OCT-03 HCMC   SCOTT      CREATE TABLE         SCOTT     DDL_CHECK_TABLE TABLE
11-OCT-03 HCMC   SCOTT      ALTER TABLE          SCOTT     DDL_CHECK_TABLE TABLE
11-OCT-03 HCMC   SCOTT      TRUNCATE TABLE       SCOTT     DDL_CHECK_TABLE TABLE
11-OCT-03 HCMC   SCOTT      ALTER TABLE          SCOTT     DDL_CHECK_TABLE TABLE
11-OCT-03 HCMC   SCOTT      DROP TABLE           SCOTT     DDL_CHECK_TABLE TABLE

sample ddl session59
Sample DDL Session

View From DDL_HISTORY Table

SAIGON-SQL> SELECT ddl_text
  2  FROM ddl_history;

DDL_TEXT
---------------------------------------------------------------------------
CREATE user aa identified by VALUES '1468620FBA6271E8'
create temporary tablespace temp01
create user tempuser identified by VALUES '2B4C9C62A2919AEF'
alter user scott identified by VALUES 'A7E7E0150C6D5EF3'
CREATE TABLE DDL_CHECK_TABLE (COL1 NUMBER)
ALTER TABLE DDL_CHECK_TABLE ADD (COL2 VARCHAR2(500))
TRUNCATE TABLE DDL_CHECK_TABLE
ALTER TABLE DDL_CHECK_TABLE DROP COLUMN COL2
DROP TABLE DDL_CHECK_TABLE

sweet dreams for the dba
Sweet Dreams for The DBA

SQL Statements Executed Over Time

[Chart: wait time plotted over time, annotated with a DROP INDEX and a later CREATE INDEX; waits shown include db file sequential reads, buffer busy waits, and parse calls]

running streams
Running Streams

DDL Types Captured

CREATE/ALTER/DROP Tables (includes table comments)

CREATE/ALTER/DROP Tablespace (requires global rules to be set)

CREATE/ALTER/DROP Indexes

CREATE/ALTER/DROP Triggers

CREATE/ALTER/DROP Views

CREATE/ALTER/DROP Synonyms

CREATE/ALTER/DROP Sequences

Creation of PL/SQL packages, procedures and functions

Changes to users/roles

GRANT or REVOKE on users/roles

COMMIT

ROLLBACK

AUDIT (can be done on user objects)

running streams63
Running Streams

DDL Types Captured But NOT Applied

CREATE, ALTER, or DROP MATERIALIZED VIEW LOG

CREATE, ALTER, or DROP MATERIALIZED VIEW

CREATE or ALTER TABLE for index-organized tables

CREATE SCHEMA AUTHORIZATION

CREATE or DROP DATABASE LINK

RENAME (use ALTER TABLE instead)

CREATE TABLE ... AS SELECT for clustered tables

running streams64
Running Streams

DDL Types NOT Captured


CREATE or ALTER DATABASE

ALTER SESSION

ALTER SYSTEM

TRUNCATE

CREATE/ALTER/DROP ROLLBACK SEGMENT

CREATE/ALTER/DROP TYPE

CREATE/ALTER/DROP PROFILE

CREATE/DROP LIBRARY

CREATE/DROP DIRECTORY

SET ROLE

SET TRANSACTION

SET CONSTRAINT

CREATE CONTROLFILE

CREATE SPFILE

CREATE PFILE

ANALYZE

EXPLAIN

CALL

PL/SQL Procedural calls

LOCK TABLE

running streams65
Running Streams

Problems You May Encounter

Setup / Running Streams

  • Status of dba_capture & dba_apply shows ABORTED
  • ORA-01925: Maximum of 30 enabled roles exceeded
  • What object is that?
  • Mixed-case global_name causing a mismatch between Streams and LogMiner

Remedy

  • Stop the Capture and apply processes and start them again
  • Increase the current value of MAX_ENABLED_ROLES
    • ALTER SYSTEM SET max_enabled_roles=100 SCOPE=SPFILE; (static parameter; takes effect after restart)
  • Avoid using system generated names
    • Creation will work
    • ALTER / DROP statements will more than likely fail
  • Change db_name & db_domain to uppercase
running streams66
Running Streams

Problems You May Encounter

Bugs

  • The ANALYZE statement is not propagated to the target database when using DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES.
  • DDL issued through EXECUTE IMMEDIATE is not applied at the destination
  • Checkpoints are skipped if there is any DDL activity in the redo logs, causing the capture process to reprocess logs during startup.
  • Dropping the Streams environment is difficult to clean up.

Remedy

  • Use DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES instead.
  • Do not use dynamic DDL
  • Ensure that no DDL has been issued around scheduled shutdowns.
  • Get it right the first time, or choose a versioned naming scheme.
running streams67
Running Streams

Monitoring

  • Just Some
  • DBA_QUEUES
  • DBA_QUEUE_TABLES
  • DBA_APPLY
  • DBA_APPLY_PARAMETERS
  • DBA_CAPTURE
  • DBA_CAPTURE_PARAMETERS
  • DBA_PROPAGATION
  • DBA_APPLY_ERROR
  • DBA_RULES
  • DBA_RULE_SETS
  • DBA_RULE_SET_RULES
  • DBA_JOBS
  • DBA_QUEUE_SCHEDULES
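As one example of putting these views to work, LCRs that fail to apply land in the apply error queue and can be re-executed once the cause is fixed; a sketch (the transaction id shown is illustrative):

```sql
-- Inspect any LCRs that failed to apply
SELECT apply_name, source_database, local_transaction_id, error_message
FROM   dba_apply_error;

-- Re-attempt a specific failed transaction after fixing the cause
EXECUTE DBMS_APPLY_ADM.EXECUTE_ERROR('5.4.312');
```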
slide68

Get a handle on DDL using Oracle 9iR2 Streams

4750 Walnut Street, Suite 106 James F. Koopmann

Boulder, Colorado 80301 Director of Technical Consulting

303.938.8282 x115

720.219.3773 (mobile)

jameskoopmann@confio.com

www.confio.com

NoCOUG Summer Conference, Aug. 19, 2004