
Flashback Archive Data Guard Streams Online Maintenance Data Recovery Advisor



  1. High Availability • Flashback Archive • Data Guard • Streams • Online Maintenance • Data Recovery Advisor

  2. Flashback Data Archive

  3. Data History and Retention • Data retention and change control requirements are growing • Regulatory oversight and Compliance • Sarbanes-Oxley, HIPAA, Basel-II, Internal Audit • Business needs • Extract “temporal” dimension of data • Understand past behavior and manage customer relationships profitably • Failure to maintain appropriate history & retention is expensive • Legal risks • Loss of Reputation • Current approaches to manage historical data are inefficient and often ineffective

  4. Data History and Retention - Requirements • Historical data needs to be secure and tamper proof • Unauthorized users should not be able to access historical data • No one should be able to update historical data • Easily accessible from existing applications • Seamless access • Should not require special interfaces or application changes • Minimal performance overhead • Optimal Storage footprint • Historical data volume can easily grow into hundreds of terabytes • Easy to set up historical data capture and configure retention policies

  5. Managing Data History – Current Approaches • Application or mid-tier level • Combines business logic and archive policies • Increases complexity • No centralized management • Data integrity issues if underlying data is updated directly • Database level • Enabled using Triggers • Significant performance and maintenance overhead • External or Third-party • Mine redo logs • History stored in separate database • Cannot seamlessly query OLTP and history data • None of the above approaches meet all customer requirements • Customers are therefore forced to make significant compromises

  6. Introducing Flashback Data Archive • Transparently tracks historical changes to all Oracle data in a highly secure and efficient manner • Historical data is stored in the database and can be retained for as long as you want • Special kernel optimizations to minimize performance overhead of capturing historical data • Historical data is stored in compressed form to minimize storage requirements • Automatically prevents end users from changing historical data • Seamless access to archived historical data • Using "AS OF" SQL construct: SELECT * FROM product_information AS OF TIMESTAMP '02-MAY-05 12.00 AM' WHERE product_id = 3060

  7. Introducing Flashback Data Archive • Extremely easy to set up • enable history capture in minutes! • Completely transparent to applications • Centralized and automatic management • policy-based • multiple tables can share the same Retention and Purge policies • automatic purge of aged history [Diagram: data older than the retention period is purged automatically]

  8. How Does Flashback Data Archive Work? • Primary source for history is the undo data • History is stored in automatically created history tables inside the archive • Transactions and their undo records on tracked tables are marked for archival • undo records are not recycled until history is archived • History is captured asynchronously by a new background process (fbda) • default capture interval is 5 minutes • capture interval is self-tuned based on system activity • process tries to maximize undo data reads from the buffer cache for better performance • INSERTs do not generate history records

  9. Flashback Data Archive And DDLs • Possible to add columns to tracked tables • Automatically disallows any other DDL that invalidates history • Dropping or truncating a table • Dropping or modifying a column • Must disable archiving before performing any major changes • Disabling archiving discards already collected history • Flashback Data Archive guarantees historical data capture and maintenance • Any operation that invalidates history or prevents historical capture will be disallowed
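The disable/re-enable cycle around a blocked DDL can be sketched as follows (table and archive names follow the examples on these slides):

```sql
-- Tracking must be switched off before DDL that would invalidate history.
-- Caution: disabling archiving discards the history already collected.
ALTER TABLE employees NO FLASHBACK ARCHIVE;

-- ... perform the otherwise-disallowed structural change here ...

-- Re-enable tracking; history capture starts afresh from this point.
ALTER TABLE employees FLASHBACK ARCHIVE fda1;
```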

  10. Creating a Flashback Data Archive & Enabling History Tracking • Create tablespace (Automatic Segment Space Management is required) • Create a flashback data archive • Set the retention period • Enable archiving on desired tables: CREATE FLASHBACK ARCHIVE fda1 TABLESPACE tbs1 RETENTION 5 YEAR; ALTER TABLE EMPLOYEES FLASHBACK ARCHIVE fda1;
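The setup steps above can be sketched end to end; the datafile name, sizes, and the timestamp are illustrative:

```sql
-- 1. Tablespace with Automatic Segment Space Management (required).
CREATE TABLESPACE tbs1 DATAFILE 'tbs1_01.dbf' SIZE 1G
  SEGMENT SPACE MANAGEMENT AUTO;

-- 2 & 3. Create the archive and set its retention policy.
CREATE FLASHBACK ARCHIVE fda1 TABLESPACE tbs1 RETENTION 5 YEAR;

-- 4. Enable history tracking on the desired table.
ALTER TABLE employees FLASHBACK ARCHIVE fda1;

-- Later: query the table as of any point inside the retention window.
SELECT * FROM employees
  AS OF TIMESTAMP TO_TIMESTAMP('02-MAY-08 12:00:00', 'DD-MON-RR HH24:MI:SS');
```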

  11. Managing Flashback Data Archive • SYS_FBA_HIST_* - Internal History Table • replica of tracked table with additional timestamp columns • partitioned for faster performance • no modifications allowed to internal partitions • compression reduces disk space required • no out-of-box indexes • support for copying primary key indexes from tracked table in later releases (TBD) • Applications don't need to access internal tables directly • use 'AS OF' to seamlessly query history • Alerts generated when flashback data archive is 90% full • Automatically purges historical data after expiration of specified retention period • supports ad-hoc purge by administrators (privileged operation)
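Assuming the standard data dictionary views for this feature, configured archives and tracked tables can be inspected like this:

```sql
-- Archives and their retention settings.
SELECT flashback_archive_name, retention_in_days, status
  FROM dba_flashback_archive;

-- Which tracked table maps to which internal SYS_FBA_HIST_* table.
SELECT table_name, flashback_archive_name, archive_table_name
  FROM dba_flashback_archive_tables;
```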

  12. Summary • Managing historical data should no longer be an onerous task • Flashback Data Archive provides a secure, efficient, easy-to-use and application-transparent solution • Easy to implement • Centralized, integrated and queryable • Highly storage- and performance-efficient • Automatic, policy-based management • Reduces costs of compliance • Can be used for a variety of other purposes • Auditing, human error correction, etc.

  13. Data Guard

  14. Data Guard Enhancements • Better standby resource utilization • Enhanced HA / DR functionality • Improved performance • Data Guard becomes an integral part of IT operations

  15. Physical Standby with Real-Time Query • Read-only queries on physical standby concurrent with redo apply • Supports RAC on primary and/or standby • Queries see transactionally consistent results • Handles all data types, but not as flexible as logical standby [Diagram: continuous redo shipment and apply from the primary database to the physical standby, with concurrent real-time query on the standby]
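Enabling real-time query on the standby can be sketched as follows (run on the physical standby; 11g syntax):

```sql
-- Stop redo apply and open the standby read-only...
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;

-- ...then restart apply so queries run concurrently with recovery.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```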

  16. Real-time Query – Benefits • Immediate appeal to a large installed customer base for physical standby • Allows leveraging existing physical standby assets for excellent business use • Satisfies several customers with specific requirements • Telecom – service schedules for technicians • Medical – access patient medical reports • Finance – provide management-level reports • Transportation – provide responses to package tracking queries • Web-business – scale-out read access for catalog browsing • Significant differentiator compared to storage mirroring • Mirror volumes are offline during mirroring

  17. Snapshot Standby: Leverage the Standby Database for Testing • Preserves zero data loss, although no real-time query or fast failover • Truly leverages DR hardware for multiple purposes • Similar to storage snapshots, but provides DR at the same time and uses a single copy of storage [Diagram: updates flow from the primary database to the physical standby; the standby is converted to an updatable snapshot standby that accepts test queries and updates]

  18. Snapshot Standby: Easier than the Manual Steps in 10.2
  10.2 • Standby: alter database recover managed standby database cancel; create restore point before_lt guarantee flashback database; • Primary: alter system archive log current; alter system set log_archive_dest_state_2=defer; • Standby: alter database activate standby database; startup mount force; alter database set standby database to maximize performance; alter system set log_archive_dest_state_2=defer; alter database open; • PERFORM TESTING, ARCHIVE LOGS NOT SHIPPED • Standby: startup mount force; flashback database to restore point before_lt; alter database convert to physical standby; startup mount force; alter database recover managed standby database disconnect from session; • Primary: alter system set log_archive_dest_state_2=enable;
  11.1 • Standby: alter database convert to snapshot standby; • PERFORM TESTING, ARCHIVE LOGS CONTINUE TO BE SHIPPED • Standby: alter database convert to physical standby;

  19. Use Physical Standby to Detect Lost Writes • Use new initialization parameter DB_LOST_WRITE_PROTECT • Compare versions of blocks on the standby with those in the incoming redo stream • Version discrepancy implies lost writes • Can use the standby to failover and restore data consistency
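A minimal sketch of enabling the check (valid values are NONE, TYPICAL, and FULL):

```sql
-- TYPICAL checks lost writes for data blocks; FULL adds undo blocks.
-- Set on both primary and standby for end-to-end detection.
ALTER SYSTEM SET db_lost_write_protect = TYPICAL SCOPE = BOTH;
```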

  20. Enhanced SQL Apply • Support • XMLType data type (CLOB) • Transparent Data Encryption (TDE) • DBMS_FGA (Fine Grained Auditing) • DBMS_RLS (Virtual Private Database) • Role-specific DBMS_SCHEDULER jobs • (PRIMARY, LOGICAL STANDBY, BOTH) • Dynamic SQL Apply parameter changes • Support for Parallel DDL execution on the standby database

  21. Enhanced Fast-Start Failover • Supports Maximum Performance (ASYNC) Mode • Automatic failover for long-distance standby • Data loss exposure limited using Broker property FastStartFailoverLagLimit (default=30 seconds, min=6 seconds) • Immediate fast-start failover for user-configurable health conditions: ENABLE FAST_START FAILOVER [CONDITION <value>]; • Condition examples: • Datafile Offline • Corrupted Controlfile • Corrupted Dictionary • Inaccessible Logfile • Stuck Archiver • Any explicit ORA-xyz error • Apps can request fast-start failover using the API DBMS_DG.INITIATE_FS_FAILOVER
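In Broker terms, the pieces above fit together roughly like this (a DGMGRL sketch; the lag-limit value and the chosen condition are illustrative):

```
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 30;
DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Corrupted Controlfile";
DGMGRL> ENABLE FAST_START FAILOVER;
```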

  22. Data Guard Performance Improvements • Faster Failover • Failover in seconds with Fast-Start Failover • Faster Redo Transport • Optimized async transport for Maximum Performance Mode • Redo Transport Compression for gap fetching: new compression attribute for log_archive_dest_n • Faster Redo Apply • Parallel media recovery optimization • Faster SQL Apply • Internal optimizations • Fast incremental backup on physical standby database • Support for block change tracking

  23. Rolling Database Upgrades Using Transient Logical Standby • Start rolling database upgrades with physical standbys • Temporarily convert physical standby to logical to perform the upgrade • Data type restrictions are limited to the short upgrade window • No need for a separate logical standby for upgrade • Also possible in 10.2 (more manual steps) [Diagram: physical standby → logical standby → upgrade → back to physical] Leverage your physical standbys!

  24. Streams

  25. Streams Overview • All sites active and updateable • Automatic conflict detection & optional resolution • Supports data transformations • Flexible configurations – n-way, hub & spoke, … • Database platform / release / schema structure can differ • Provides HA for applications where update conflicts can be avoided or managed [Diagram: capture reads the redo logs at the source database and propagates changes to apply processes at an Oracle target database and, via Transparent Gateway, a non-Oracle database]

  26. Streams Enhancements in Oracle Database 11g • Additional Data Type Support • Table data comparison • Synchronous capture • Manageability & Diagnosability improvements • Performance improvements • Streams AQ Enhancements

  27. Newly Supported Datatypes • XMLType • stored as CLOB • Transparent Data Encryption (TDE) • Default: data captured as TDE is applied as TDE • PRESERVE_ENCRYPTION apply parameter controls behaviour when destination columns are not encrypted

  28. Table Data Comparison (DBMS_COMPARISON) • Compare data between live sources • Compare 11.1 with 10.1, 10.2 or 11.1 • Recheck • In-flight data • Rows that are different • Converge feature • Identify "truth" database (local or remote) for row diffs
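A hedged PL/SQL sketch of the compare-then-converge flow; the comparison name, schema, table, and database link are illustrative:

```sql
BEGIN
  -- Define what to compare: local app.orders vs. the same table
  -- reachable over the (hypothetical) database link remote_db.
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name => 'cmp_orders',
    schema_name     => 'app',
    object_name     => 'orders',
    dblink_name     => 'remote_db');
END;
/
DECLARE
  consistent BOOLEAN;
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  -- Run the comparison, recording per-row differences.
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'cmp_orders',
    scan_info       => scan_info,
    perform_row_dif => TRUE);
  IF NOT consistent THEN
    -- Converge: make remote rows match the local "truth" database.
    DBMS_COMPARISON.CONVERGE(
      comparison_name  => 'cmp_orders',
      scan_id          => scan_info.scan_id,
      scan_info        => scan_info,
      converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS);
  END IF;
END;
/
```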

  29. Synchronous Capture (DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE) • Available in all editions of Oracle Database 11g • Efficient internal mechanism to immediately capture changes • Changes captured as part of the user transaction • DML only • LCRs enqueued persistently to disk • When to use: • Replicating a few low-activity tables of a highly active source database • Where capture from redo logs cannot be implemented
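A sketch of setting up synchronous capture via DBMS_STREAMS_ADM, which creates the capture implicitly when streams_type is 'sync_capture'; the queue, capture, and table names are illustrative:

```sql
BEGIN
  -- Persistent queue for the captured LCRs.
  DBMS_STREAMS_ADM.SET_UP_QUEUE(queue_name => 'strmadmin.sync_q');

  -- Adding table rules with streams_type 'sync_capture' creates
  -- the synchronous capture for the (hypothetical) app.rates table.
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'app.rates',
    streams_type => 'sync_capture',
    streams_name => 'sync_cap',
    queue_name   => 'strmadmin.sync_q');
END;
/
```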

  30. Streams Performance Advisor • Auto-discovery of Streams topology on multiple databases • Automatic performance analysis across all databases • Per-Stream Analysis: • Time-based analysis of each component (waits, CPU, etc.) using ASH • Bottleneck components • Top wait events of bottleneck • Per-Component Analysis: • Throughput and latency • Aborted or Enabled • Integrated with ADDM • Streams errors are integrated with Server-generated Alerts [Diagram: capture → changes → apply]

  31. Split and Merge of Streams • Challenge: with hub & spoke configurations, when one destination is unavailable, all destinations are hit with a performance impact because the capture queue spills to disk after 5 minutes • Solution: split the queue between live and down destinations; merge queues after recovery • Maintains high performance for all replicas • Automated, fast "catch-up" for the unavailable replica

  32. Streams: Hub with 3 Spokes [Diagram: the capture process at the source database enqueues LCRs; propagations A, B and C deliver them to queues at destination databases A, B and C, where apply processes dequeue the LCRs]

  33. Split Streams: Site A Unavailable [Diagram: destination A is unreachable; a cloned (disabled) capture process and a cloned propagation A are split off onto their own queue, while the original capture process and propagations B and C continue to serve destinations B and C]

  34. Split Streams: Site A Available [Diagram: destination A is reachable again; the cloned capture process is enabled and the cloned propagation A catches destination A up while propagations B and C continue unaffected]

  35. Merge Streams: Original Configuration [Diagram: once destination A has caught up, the cloned stream is merged back and a single capture process again feeds propagations A, B and C]

  36. Streams: more manageability improvements • Automated Alerts • abort of capture, propagation, or apply • 1st error in DBA_APPLY_ERROR • Propagation uses the scheduler • Improved error messages • ORA-1403 -> ORA-26786 or ORA-26787 • custom DML handlers need to handle these new exceptions • more detail added to many error messages • Cross-database LCR tracking • trace Streams messages from start to finish with DBMS_CAPTURE_ADM.SET_MESSAGE_TRACKING('mylabel') and V$STREAMS_MESSAGE_TRACKING
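A sketch of the LCR tracking mentioned above, assuming the tracking_label parameter name shown in the slide's API reference:

```sql
-- Label LCRs originating from this session...
EXEC DBMS_CAPTURE_ADM.SET_MESSAGE_TRACKING(tracking_label => 'mylabel');

-- ...then follow them through capture, propagation and apply.
SELECT * FROM v$streams_message_tracking
 WHERE tracking_label = 'mylabel';
```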

  37. Streams performance • CERN reporting >5000 txns/s in 10.2.0.3 • OpenLAB presentation http://openlab-mu-internal.web.cern.ch/openlab-mu-internal/Documents/3_Presentations/Slides/2007/DW_openlab_qr1_2007.pdf • 11g performance improvements • common-case throughput almost doubles from 10.2 to 11.1

  38. Streams Advanced Queuing (AQ) New Features • JMS/AQ performance improvements • Direct Streams AQ support in JDBC • Scalable event notification • Grouping notification by time • Multiple processes notification for scalability • Improved Manageability • Scheduler support • Performance views

  39. Flashback Transaction

  40. Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT) • Automatically finds and backs out a transaction and all its dependent transactions • Utilizes undo, archived redo logs, supplemental logging • Finalize changes with commit, or roll back • "Dependent" transactions include • Write after write • Primary key insert after delete • Faster and easier than the laborious manual approach
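A hedged PL/SQL sketch of backing out one transaction and its dependents; the XID literal is a placeholder (real XIDs come from FLASHBACK_TRANSACTION_QUERY or V$TRANSACTION):

```sql
DECLARE
  -- Placeholder XID; substitute one found via FLASHBACK_TRANSACTION_QUERY.
  xids SYS.XID_ARRAY := SYS.XID_ARRAY('09000100B2050000');
BEGIN
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    numtxns => 1,
    xids    => xids,
    options => DBMS_FLASHBACK.CASCADE);  -- also back out dependents
  COMMIT;  -- finalize the backout, or ROLLBACK to discard it
END;
/
```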

  41. Flashback Transaction: Enterprise Manager Support

  42. Data Recovery Advisor

  43. Data Recovery Advisor: The Motivation • Oracle provides robust tools for data repair: • RMAN – physical media loss or corruptions • Flashback – logical errors • Data Guard – physical or logical problems • However, problem diagnosis and choosing the right solution can be error-prone and time-consuming • Errors are more likely during emergencies [Diagram: investigation & planning precede recovery]

  44. Data Recovery Advisor • Oracle Database tool that automatically diagnoses data failures, presents repair options, and executes repairs at the user's request • Determines failures based on symptoms • E.g. an "open failed" because datafiles f045.dbf and f003.dbf are missing • Failure information recorded in the diagnostic repository (ADR) • Flags problems before the user discovers them, via automated health monitoring • Intelligently determines recovery strategies • Aggregates failures for efficient recovery • Presents only feasible recovery options • Indicates any data loss for each option • Can automatically perform selected recovery steps • Reduces downtime by eliminating confusion

  45. Data Recovery Advisor: Enterprise Manager Support

  46. Data Recovery Advisor: RMAN Command-Line Interface • RMAN> list failure – lists all previously detected failures • RMAN> advise failure – presents recommended recovery options • RMAN> repair failure – repairs database failures (defaults to the first repair option from the most recent ADVISE FAILURE) • RMAN> change failure 5 priority low – changes failure priority (with the exception of 'critical' priority failures, e.g. missing control file)

  47. Recovery Manager, Ultra-Safe Mode and Online Operations

  48. RMAN Enhancements • Better performance • Intra-file parallel backup and restore of single data files (multi-section backup) • Faster backup compression (ZLIB, ~40% faster) • Better security • Virtual private catalog allows a consolidation of RMAN repositories and maintains a separation of responsibilities. • Lower space consumption • Duplicate database or create standby database over the network, avoiding intermediate staging areas • Integration with Windows Volume Shadow Copy Services (VSS) API • Allows database to participate in snapshots coordinated by VSS-compliant backup management tools and storage products • Database is automatically recovered upon snapshot restore via RMAN
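Two of the enhancements above can be sketched at the RMAN prompt; the datafile number and duplicate database name are illustrative, and active duplication additionally requires a prepared auxiliary instance:

```
RMAN> # Multi-section backup: channels back up one large datafile
RMAN> # in parallel, 500 MB sections at a time.
RMAN> BACKUP DATAFILE 5 SECTION SIZE 500M;

RMAN> # Duplicate over the network with no intermediate staging area.
RMAN> DUPLICATE TARGET DATABASE TO dupdb FROM ACTIVE DATABASE;
```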

  49. Ultra-Safe Mode • The DB_ULTRA_SAFE parameter provides an easy way to turn on the safest mode. It affects the default values of the following parameters: • DB_BLOCK_CHECKING, which initiates checking of database blocks. This check can often prevent memory and data corruption. • DB_BLOCK_CHECKSUM, which initiates the calculation and storage of a checksum in the cache header of every data block when writing it to disk. Checksums assist in detecting corruption caused by underlying disks, storage systems or I/O systems. • DB_LOST_WRITE_PROTECT, which initiates checking for "lost writes". A lost write occurs when the I/O subsystem signals the completion of a block write that was never actually made persistent; on a physical standby, the stale block is detected when its version does not match the incoming redo from the primary, where the write did complete.
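A minimal sketch of turning the mode on (valid values are OFF, DATA_ONLY, and DATA_AND_INDEX):

```sql
-- DATA_AND_INDEX implies DB_BLOCK_CHECKING=FULL, DB_BLOCK_CHECKSUM=FULL
-- and DB_LOST_WRITE_PROTECT=TYPICAL; DATA_ONLY is a lighter setting.
ALTER SYSTEM SET db_ultra_safe = DATA_AND_INDEX SCOPE = SPFILE;
-- Static parameter: takes effect at the next instance restart.
```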
