Consolidating Databases on Oracle Exadata: Key Learnings at Intel


  • Gagan Singh, Intel Corporation
  • Subhadra Sampathkumaran, Intel Corporation
  • James Harding, Oracle America

Consolidating Databases on Oracle Exadata: Key Learnings at Intel

Agenda
  • Intel – Database Environment Overview
  • Legacy Environment Overview & Configuration
  • Limitations of Legacy Architecture
  • Goals of Migration to Exadata
  • Proof of Concept
  • Exadata Solution Architecture
  • Value
  • Key Learnings & Challenges
  • Summary
Intel – Database Environment Overview
  • Highly automated factories with hundreds of complex integrated systems
  • Goals include yield analysis, process improvement, failure mode analysis, and test time reduction
  • Geographically distributed, independent systems
  • Monitoring and availability are key
  • 24x7 uptime
  • Strict reporting SLAs
  • Decision support (DSS) and OLTP setup
Legacy Environment Overview
  • DB size ranges from a few GBs to ~80 TB per site (6-month retention)
  • Large data growth projected
  • Reliability, availability, and performance are high priorities
  • Application tier includes 3rd-party products and in-house apps
  • Robust backup and recovery, though performance is lacking
  • Long MTTR for disaster recovery
  • Monitoring – Oracle Grid Control 10.2.x
Legacy Configuration
  • Each site hosts an independent RAC cluster with SAN storage
  • Database: Oracle 10.2.x on Windows 2003 x64
  • GigE with jumbo frames for the interconnect
  • >80 TB on ASM external redundancy
  • RMAN incrementally merged backup to FRA → tape (see the RMAN sketch below)
  • Application tier – OCI, ODP.NET, OLE DB, ODBC

[Slide graphic: EMC DMX-4 storage racks]
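A minimal sketch of the incrementally merged RMAN backup mentioned above, assuming an image copy kept in the FRA that is rolled forward each cycle and periodically swept to tape over a configured SBT channel (the tag name is illustrative):

    RUN {
      -- Roll the FRA image copy forward with the previous level 1 incremental,
      -- then take a new level 1 incremental for the next merge cycle.
      RECOVER COPY OF DATABASE WITH TAG 'fra_merge';
      BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'fra_merge' DATABASE;
    }

    -- Periodically sweep the FRA contents to tape through the SBT channel.
    BACKUP RECOVERY AREA;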

Exadata Proof of Concept Requirements
  • Run data warehouse queries with at least a 2x improvement
  • Run OLTP read (small queries) and read-write (loader) workloads with a 40% performance improvement
  • Achieve a 5x reduction in data size using Advanced Compression and Hybrid Columnar Compression
  • Demonstrate a 2x improvement in backup and restore
  • Meet or beat current RAP (reliability, availability, performance) targets as defined by Intel
Executing The Proof Of Concept

Less Risk, Better Results

  • Validating the success criteria
    • Performance: Data Warehouse Queries, Data Loaders
    • Data Compression
    • Backup/recovery with the ZFS Storage Appliance
    • Reliability, Availability and Performance
  • Exadata/ZFS pre-delivery process
  • Exadata/ZFS Delivery
  • Data Migration
  • Execute test plan and capture data

Ready-to-Run

Exadata – Performance
  • RMAN backup
    • Disk backup rate increased to 8 TB/hour
    • Restore rate 7.2 TB/hour
  • Write-back flash cache enabled
  • Availability
    • Application not impacted under various failure scenarios
  • Storage reduction
    • Achieved through index reduction and data compression
  • Query performance
    • 2x improvement
    • Application tuned to leverage 11.2 DB features
  • Compression (see the SQL sketch after this list)
    • Up to 10x with HCC; 5x typical
    • Current data uses OLTP compression
    • Archival data – HCC (Query High)
  • Data loading times
    • Faster by 40%
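A minimal SQL sketch of the compression split described above, assuming an 11.2 range-partitioned table (table, column, and partition names are hypothetical): archival partitions use HCC Query High, the current partition uses OLTP (advanced row) compression.

    -- Hypothetical fact table: older partitions use Hybrid Columnar Compression,
    -- the current partition uses OLTP (advanced row) compression.
    CREATE TABLE test_results (
      lot_id     NUMBER,
      test_date  DATE,
      result_val NUMBER
    )
    PARTITION BY RANGE (test_date) (
      PARTITION p_archive VALUES LESS THAN (DATE '2012-01-01')
        COMPRESS FOR QUERY HIGH,    -- archival data: HCC Query High
      PARTITION p_current VALUES LESS THAN (MAXVALUE)
        COMPRESS FOR OLTP           -- current data: OLTP compression
    );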
Exadata – Performance (contd.)
  • Resource management (see the DBMS_RESOURCE_MANAGER sketch after this list)
    • DBRM & IORM configured
    • High, medium, and low consumer groups defined
    • Resource limits for CPU, I/O, and parallelism configured for each group
    • Users categorized into consumer groups based on the services used to connect
  • Monitoring & tuning
    • Oracle 12c Grid Control
    • SQL Monitoring used extensively for tuning queries
  • Efficient setup and startup time
    • HW/OS/storage/DB set up to best practices
    • ~70% less time than conventional infrastructure startup
  • Support
    • Leverage Platinum Support
    • ~50% reduction in infrastructure + DBA operational calls
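A minimal DBRM sketch of the consumer-group setup described above; group, plan, service names, and resource values are hypothetical, and IORM on the storage cells (configured separately through CellCLI) is not shown.

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

      -- Hypothetical consumer groups and plan
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('HIGH_GRP',   'Critical reporting');
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('MEDIUM_GRP', 'Standard workload');
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('LOW_GRP',    'Ad hoc queries');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN('SITE_PLAN', 'Consolidated site plan');

      -- CPU shares and parallelism limits per group (values are illustrative)
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'SITE_PLAN', group_or_subplan => 'HIGH_GRP',
        comment => 'high', mgmt_p1 => 55, parallel_degree_limit_p1 => 32);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'SITE_PLAN', group_or_subplan => 'MEDIUM_GRP',
        comment => 'medium', mgmt_p1 => 30, parallel_degree_limit_p1 => 8);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'SITE_PLAN', group_or_subplan => 'LOW_GRP',
        comment => 'low', mgmt_p1 => 10, parallel_degree_limit_p1 => 2);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'SITE_PLAN', group_or_subplan => 'OTHER_GROUPS',
        comment => 'everything else', mgmt_p1 => 5);

      -- Map sessions to groups by the service they connect through
      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
        DBMS_RESOURCE_MANAGER.SERVICE_NAME, 'RPT_CRITICAL', 'HIGH_GRP');

      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
    END;
    /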
Application Changes – Key Learnings
  • Leverage 11g R2 features
    • Tuning queries from 10g to 11g – used 12c SQL Monitoring
  • Changes for Exadata
    • Smart Scan; compression (OLTP for the most recent data, HCC for older data)
    • Minimize indexes to enable offload to storage
  • Optimizer
  • Parallelization & partitioning
  • Globalization with GMT (see the sketch after this list)
    • Date/time values stored in TIMESTAMP WITH LOCAL TIME ZONE columns
    • Date/time retrieved based on client time zone settings – managed using services and a logon trigger
  • Centralized DB
    • Site-specific schemas
    • One global core schema for common functionality
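A minimal sketch of the time-zone handling described above, assuming hypothetical table, service, and time-zone names: values are stored as TIMESTAMP WITH LOCAL TIME ZONE (normalized to the database time zone), and a logon trigger sets each session's time zone from the service it connected through.

    -- Hypothetical table: values are normalized to the DB time zone on insert
    -- and converted to the session time zone on retrieval.
    CREATE TABLE lot_events (
      lot_id     NUMBER,
      event_time TIMESTAMP WITH LOCAL TIME ZONE
    );

    -- Hypothetical logon trigger: derive the session time zone from the
    -- connecting service (service and zone names are illustrative).
    CREATE OR REPLACE TRIGGER set_site_time_zone
    AFTER LOGON ON DATABASE
    BEGIN
      IF SYS_CONTEXT('USERENV', 'SERVICE_NAME') = 'SITE_IRELAND' THEN
        EXECUTE IMMEDIATE q'[ALTER SESSION SET TIME_ZONE = 'Europe/Dublin']';
      ELSIF SYS_CONTEXT('USERENV', 'SERVICE_NAME') = 'SITE_ARIZONA' THEN
        EXECUTE IMMEDIATE q'[ALTER SESSION SET TIME_ZONE = 'America/Phoenix']';
      END IF;
    END;
    /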
Key Challenges
  • Query performance
    • Adapting from the 10g to the 11g optimizer plus Exadata-specific features
    • Significant effort to tune queries, as the legacy system had embedded SQL hints
  • Storage reduction
    • Identifying indexes to eliminate – several still needed to support OLTP queries (see the index-usage sketch after this list)
    • Testing with varying levels of compression
  • Globalization using GMT
    • Date/time from geographically distributed sources stored in the DB
    • Geographically distributed users require data retrieved in their local time zone
  • Centralization
    • Large DB size – manageability
    • Resource management – thousands of ad hoc users
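One possible way to identify candidate indexes to eliminate is Oracle's built-in index usage monitoring; a minimal sketch follows (index name is hypothetical, and this is an illustration rather than necessarily the approach used at Intel).

    -- Flag a candidate index for usage monitoring, let the workload run,
    -- then check whether it was ever used before deciding to drop it.
    ALTER INDEX test_results_ix1 MONITORING USAGE;

    SELECT index_name, monitoring, used, start_monitoring, end_monitoring
      FROM v$object_usage
     WHERE index_name = 'TEST_RESULTS_IX1';

    ALTER INDEX test_results_ix1 NOMONITORING USAGE;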
Key Challenges (contd.)
  • Data migration
    • Two approaches
      • As-is for certain existing data domains – load from raw source files into an empty schema
      • Complete re-architecture – incrementally load historical data from the existing legacy systems (see the sketch after this list)
    • Data loading runs 24x7 – does not follow conventional data warehouse batch loading
  • Application cut-over
    • Phased approach
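A minimal sketch of the incremental historical load approach, assuming a database link back to a legacy site database (link, table, and column names are hypothetical):

    -- Hypothetical incremental load: pull one month of history at a time
    -- from the legacy site over a database link, using direct-path inserts.
    INSERT /*+ APPEND */ INTO test_results
    SELECT lot_id, test_date, result_val
      FROM test_results@legacy_site1
     WHERE test_date >= DATE '2011-01-01'
       AND test_date <  DATE '2011-02-01';

    COMMIT;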
Support
  • Single vendor support
    • One number to call for all components.
    • No triage time – single vendor for servers, storage, networking, OS, DB
  • Platinum Services
    • Platinum Gateway
    • Quarterly Rolling patching
    • Automated Service Requests
    • Grid Control Monitoring
  • Single patch set
    • 40% savings in patching time
    • Single application validation - Reduced time for application validation