
Oracle Real Application Clusters (RAC) Database Implementation Course

Learn how to implement Oracle RAC databases, including installation, administration, backup and recovery, performance tuning, and more.


Presentation Transcript


  1. Introduction

  2. Overview • This course is designed for anyone interested in implementing a Real Application Clusters (RAC) database. • Although coverage is general, most of the examples and labs in this course are Linux based. • Knowledge of and experience with Oracle Database 10g architecture are assumed. • Lecture material is supplemented with hands-on practices.

  3. Course Objectives • In this course, you: • Learn the principal concepts of RAC • Install the RAC components • Administer database instances in a RAC and ASM environment • Manage services • Back up and recover RAC databases • Monitor and tune performance of a RAC database • Administer Oracle Clusterware

  4. Typical Schedule

  5. A History of Innovation (timeline diagram of Oracle clustering innovations, including OPS, nonblocking queries, resource manager, low-cost commodity clusters, Data Guard, Oracle Clusterware, RAC, Grid Control, Enterprise Grids, Automatic Storage Management, and Automatic Workload Management)

  6. What Is a Cluster? • Interconnected nodes act as a single server. • Cluster software hides the structure. • Disks are available for read and write by all nodes. • Operating system is the same on each machine. (Diagram: nodes running clusterware, linked by a private interconnect, each attached to the public network and to shared disks)

  7. Oracle Real Application Clusters • Multiple instances accessing the same database • One instance per node • Physical or logical access to each database file • Software-controlled data access (Diagram: instances spread across nodes share a cache over the interconnect and access the same database files)

  8. Benefits of Using RAC • High availability: Surviving node and instance failures • Scalability: Adding more nodes as you need them in the future • Pay as you grow: Paying for only what you need today • Key grid computing features: • Growth and shrinkage on demand • Single-button addition of servers • Automatic workload management for services

  9. Clusters and Scalability (diagram comparing the SMP model and the RAC model: in an SMP machine, CPU caches over shared memory are kept consistent by cache coherency; in RAC, the SGAs and background processes (BGPs) of instances over shared storage are kept consistent by Cache Fusion)

  10. Levels of Scalability • Hardware: Disk input/output (I/O) • Internode communication: High bandwidth and low latency • Operating system: Number of CPUs • Database management system: Synchronization • Application: Design

  11. Scaleup and Speedup (diagram: on the original system, the hardware completes 100% of a task in a given time; with cluster scaleup, added hardware completes up to 200% or 300% of the task in the same time; with cluster speedup, added hardware completes 100% of the task in half the time)

  12. Speedup/Scaleup and Workloads

  13. I/O Throughput Balanced: Example • Each machine has 2 CPUs: 2 × 200 MB/s × 4 machines = 1,600 MB/s • Each machine has 2 HBAs: 8 × 200 MB/s = 1,600 MB/s • Each FC switch needs to support 800 MB/s to guarantee a total system throughput of 1,600 MB/s • Each disk array has one 2-Gbit controller: 8 disk arrays × 200 MB/s = 1,600 MB/s

  14. Performance of Typical Components
Component          Theoretical throughput (bit/s)   Maximum throughput (byte/s)
HBA                1/2 Gbit/s                        100/200 MB/s
16-port switch     8 × 2 Gbit/s                      1,600 MB/s
Fibre Channel      2 Gbit/s                          200 MB/s
Disk controller    2 Gbit/s                          200 MB/s
GigE NIC           1 Gbit/s                          80 MB/s
InfiniBand         10 Gbit/s                         890 MB/s
CPU                n/a                               200–250 MB/s

  15. Complete Integrated Clusterware (diagram comparing the 9i RAC and 10g stacks: with 9i RAC, the layers above the hardware/OS kernel, namely connectivity, membership, messaging and locking, cluster control, volume manager/file system, management APIs, and event services, had to come from external clusterware; with Oracle Database 10g, Oracle Clusterware provides a complete integrated stack: connectivity, membership, messaging and locking, Automatic Storage Management, cluster control/recovery APIs, a services framework for applications/RAC, event services, and system management applications)

  16. Necessity of Global Resources (diagram: two instances each read block 1008 into their SGAs; each updates its copy to 1009 and writes the block back to disk; without global coordination of the cached block, one instance's update overwrites the other's, resulting in lost updates)

  17. Global Resources Coordination (diagram: each instance in the cluster maintains its share of the Global Resource Directory (GRD) and a master cache, and runs the background processes LMON, LMD0, LMSx, LCK0, and DIAG; the Global Enqueue Service (GES) and the Global Cache Service (GCS) coordinate global resources across the interconnect)

  18. Global Cache Coordination: Example (diagram: an instance needs a block and asks which instance masters it; the block is mastered by instance 1, and the GCS determines that instance 2 holds the current version (1009); instance 2 ships the block across the interconnect to the requester, so no disk I/O is required)

  19. Write to Disk Coordination: Example (diagram: an instance needs to make room in its cache and must flush a block; it asks the GCS who has the current version of that block; instance 2 owns the current version (1010), so the GCS tells instance 2 to flush the block to disk; only one disk I/O is performed)

  20. Dynamic Reconfiguration (diagram: the GRD records, for each resource R1 through R6, which instance masters it and which instances hold grants; when an instance leaves the cluster, for example instance 2, its grants are removed and the resources it mastered are remastered among the surviving instances)

  21. Object Affinity and Dynamic Remastering (diagram: before dynamic remastering, an instance reading an object's blocks into its cache sends GCS messages to the remote node that masters those resources; after dynamic remastering moves mastership of the object to the reading node, no messages are sent to the remote node when reading into the cache)

  22. Global Dynamic Performance Views • Retrieve information about all started instances • Have one global view for each local view • Use one parallel slave on each instance (Diagram: GV$INSTANCE aggregates the V$INSTANCE view of every instance in the cluster)
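As a minimal illustration (a sketch, assuming SQL*Plus access to a running RAC instance with SYSDBA privileges), a GV$ view returns one row per started instance, keyed by INST_ID:

$ sqlplus -s / as sysdba <<'EOF'
-- GV$INSTANCE adds INST_ID to the columns of V$INSTANCE,
-- returning one row for each started instance in the cluster.
SELECT inst_id, instance_name, host_name, status
FROM   gv$instance
ORDER  BY inst_id;
EOF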

  23. Additional Memory Requirement for RAC • Heuristics for scalability cases: • 15% more shared pool • 10% more buffer cache • Smaller buffer cache per instance when a single-instance workload is distributed across multiple instances • Current values: SELECT resource_name, current_utilization, max_utilization FROM v$resource_limit WHERE resource_name like 'g%s_%'; SELECT * FROM v$sgastat WHERE name like 'g_s%' or name like 'KCL%';

  24. Efficient Internode Row-Level Locking (diagram, four steps: 1. Instance 1 issues an UPDATE against a row in a block. 2. Instance 2 issues an UPDATE against a different row in the same block; no block-level lock is taken. 3. Instance 1 commits. 4. A further UPDATE proceeds on instance 2; row-level locks are not escalated to block-level locks across instances)

  25. Parallel Execution with RAC • Execution slaves have node affinity with the execution coordinator but will expand if needed. (Diagram: the execution coordinator runs on one node with parallel execution servers on that node, expanding onto other nodes when necessary; all access the shared disks)

  26. RAC Software Principles (diagram: on each node, the instance runs the global-resource background processes LMON, LMD0, LMSx, LCK0, and DIAG; Oracle Clusterware runs the CRSD and RACGIMON, EVMD, and OCSSD and OPROCD daemons over the cluster interface; node applications include the VIP, ONS, EMD, and listener; managed resources include ASM, the database, services, and the OCR; cluster-wide management is done with SRVCTL, DBCA, and Enterprise Manager)

  27. RAC Software Storage Principles (diagram: CRS_HOME, ORACLE_HOME, and ASM_HOME can be installed on each node's local storage, which permits rolling patch upgrades and keeps the software from being a single point of failure; ORACLE_HOME and ASM_HOME can alternatively be placed on shared storage; the voting files and OCR files reside on shared storage)

  28. RAC Database Storage Principles (diagram: archived log files can be kept on each node's local storage; the shared storage holds the data files, control files, temp files, SPFILE, change tracking file, TDE wallet, flash recovery area files, online redo log files for each instance, and undo tablespace files for each instance)

  29. RAC and Shared Storage Technologies • Storage is a critical component of grids: • Sharing storage is fundamental. • New technology trends • Supported shared storage for Oracle grids: • Network Attached Storage • Storage Area Network • Supported file storage for Oracle grids: • Raw volumes • Cluster file system • ASM

  30. Oracle Cluster File System • Is a shared disk cluster file system for Linux and Windows • Improves management of data for RAC by eliminating the need to manage raw devices • Provides open solution on the operating system side • Can be downloaded from OTN: • http://oss.oracle.com/projects/ocfs/ (Linux) • http://www.oracle.com/technology/software/products/database/oracle10g/index.html (Windows)

  31. Automatic Storage Management (diagram: the stack runs application, database, ASM, and operating system, with ASM replacing the file system and volume manager layers) • Provides the first portable and high-performance database file system • Manages Oracle database files • Spreads data across disks to balance load • Provides integrated mirroring across disks • Solves many storage management challenges
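To make the mirroring concrete, here is a minimal sketch of creating an ASM disk group with two-way mirroring (assuming a running ASM instance; the disk paths and disk group name are illustrative, not from the course):

$ export ORACLE_SID=+ASM
$ sqlplus -s / as sysdba <<'EOF'
-- NORMAL redundancy mirrors each extent across two failure groups;
-- with no FAILGROUP clause, each disk forms its own failure group.
CREATE DISKGROUP data NORMAL REDUNDANCY
  DISK '/dev/raw/raw1', '/dev/raw/raw2';
EOF

On Oracle Database 11g, connecting AS SYSASM is the preferred way to administer ASM.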

  32. CFS or Raw? • Using CFS: • Simpler management • Use of OMF with RAC • Single Oracle software installation • Autoextend • Using raw: • Performance • Use when CFS not available • Cannot be used for archivelog files • ASM eases work

  33. Typical Cluster Stack with RAC (diagram: servers are linked by a high-speed interconnect, Gigabit Ethernet with UDP or a proprietary interconnect with a proprietary protocol; the clusterware layer is either Oracle Clusterware (Linux, UNIX, Windows) or OS vendor clusterware (AIX, HP-UX, Solaris); database shared storage is provided by ASM, OCFS, or raw devices with Oracle Clusterware, or by ASM, raw devices, or a cluster file system on an OS cluster volume manager with OS clusterware)

  34. RAC Certification Matrix • Connect and log in to http://metalink.oracle.com. • Click the Certify tab on the menu frame. • Click the “View Certifications by Product” link. • Select Real Application Clusters. Then click Submit. • Select the correct platform and click Submit.

  35. RAC and Services (diagram: the application server hosts connection pools for services such as ERP and CRM, with run-time connection load balancing and service location transparency; listeners perform connection load balancing and are service-availability aware; service connections can be stopped or started and the service-to-instance mapping modified, for example to backup or priority instances; Oracle Clusterware restarts failed components, and its notification engine publishes up, down, and load-balancing advisory events, alerts, and tuning information)
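Services in a RAC database are typically managed with srvctl. A minimal sketch (the database name RACDB and instance names RACDB1 and RACDB2 are illustrative, not from the course):

$ # Create an ERP service with a preferred and an available instance.
$ srvctl add service -d RACDB -s ERP -r RACDB1 -a RACDB2
$ srvctl start service -d RACDB -s ERP
$ srvctl status service -d RACDB -s ERP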

  36. Available Demonstrations • RAC scalability and transaction throughput • RAC speedup and parallel queries • Use TAF with SELECT statements http://www.oracle.com/technology/obe/demos/admin/demos.html
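For reference, TAF for SELECT statements is configured in the client connect descriptor with a FAILOVER_MODE clause. A minimal tnsnames.ora sketch (host names, port, and retry settings are illustrative):

ERP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = clnode-1vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = clnode-2vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = ERP)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )

With TYPE = SELECT, an in-flight query resumes on a surviving instance after an instance failure instead of returning an error to the client.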

  37. Oracle Clusterware Installation and Configuration

  38. Objectives • After completing this lesson, you should be able to: • Describe the installation of Oracle RAC 11g • Perform RAC preinstallation tasks • Perform cluster setup tasks • Install Oracle Clusterware

  39. Oracle RAC 11g Installation • Oracle RAC 11g incorporates a two-phase installation process: • Phase one installs Oracle Clusterware. • Phase two installs the Oracle Database 11g software with RAC.

  40. Oracle RAC 11g Installation: Outline • Complete preinstallation tasks: • Hardware requirements • Software requirements • Environment configuration, kernel parameters, and so on • Perform Oracle Clusterware installation. • Perform ASM installation. • Perform Oracle Database 11g software installation. • Install EM agent on cluster nodes if using Grid Control. • Perform cluster database creation. • Complete postinstallation tasks.

  41. Windows and UNIX Installation Differences • Startup and shutdown services • Environment variables • DBA account for database administrators • Account for running OUI

  42. Preinstallation Tasks • Check system requirements. • Check software requirements. • Check kernel parameters. • Create groups and users. • Perform cluster setup.
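Many of these checks can be run automatically with the Cluster Verification Utility. A minimal sketch (assuming a two-node cluster with nodes named node1 and node2; the utility is staged with the Clusterware installation media and the node names are illustrative):

$ # Verify the cluster before installing Oracle Clusterware.
$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
$ # After installation, cluvfy can verify the post-install state.
$ cluvfy stage -post crsinst -n node1,node2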

  43. Hardware Requirements • At least 1 GB of physical memory is needed. • A minimum of 1 GB of swap space is required. • The /tmp directory should be at least 400 MB. • The Oracle Database 11g software requires up to 4 GB of disk space.
# grep MemTotal /proc/meminfo
MemTotal: 1126400 kB
# grep SwapTotal /proc/meminfo
SwapTotal: 1566328 kB
# df -k /tmp
Filesystem 1K-blocks Used Available Use%
/dev/sda6 6198556 3137920 2745756 54%

  44. Network Requirements • Each node must have at least two network adapters. • Each public network adapter must support TCP/IP. • The interconnect adapter must support User Datagram Protocol (UDP). • The host name and IP address associated with the public interface must be registered in the domain name service (DNS) or the /etc/hosts file.
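As an illustration, the public, VIP, and private interconnect names could be registered in /etc/hosts on every node. A minimal sketch (all addresses and the -priv naming convention are illustrative, not from the course):

# Public interfaces
192.168.1.101   clnode-1
192.168.1.102   clnode-2
# Virtual IP addresses (VIPs)
192.168.1.111   clnode-1vip
192.168.1.112   clnode-2vip
# Private interconnect
10.0.0.1        clnode-1priv
10.0.0.2        clnode-2priv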

  45. Virtual IP Addresses and RAC (diagram: clients connect to the ERP service using a connect descriptor that lists both nodes; without VIPs, the addresses are the host names clnode-1 and clnode-2, and after a node failure a client must wait out a TCP timeout before its connection attempt moves to the surviving node; with VIPs, the addresses are clnode-1vip and clnode-2vip, and after a failure the failed node's VIP relocates to a surviving node, which immediately rejects connections on it so the client fails over without waiting for a timeout)

  46. RAC Network Software Requirements • Supported interconnect software protocols are required: • TCP/IP • UDP • Reliable Datagram • Token Ring is not supported on AIX platforms.

  47. Package Requirements • Package versions are checked by the cluvfy utility. • For example, required packages and versions for Red Hat 4.0 and Oracle Enterprise Linux 4 include: • glibc-2.3.4-2.25 • glibc-common-2.3.4.2-25 • glibc-devel-2.3.4.2-25 • gcc-3.4.6-3 • gcc-c++-3.4.6-3 • libaio-0.3.105-2 • libaio-devel-0.3.105-2 • libstdc++-3.4.6-3.1 • make-3.80-6 • sysstat-5.0.5-11
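A quick way to confirm that these packages are present is rpm (a sketch; exact version checking is left to cluvfy or to comparing the output against the list above):

$ # Query the installed version of each required package.
$ rpm -q glibc glibc-common glibc-devel gcc gcc-c++ \
        libaio libaio-devel libstdc++ make sysstat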

  48. Required UNIX Groups and Users • Create an oracle user and the dba and oinstall groups on each node: • Verify the existence of the nobody nonprivileged user:
# groupadd -g 500 oinstall
# groupadd -g 501 dba
# useradd -u 500 -d /home/oracle -g "oinstall" \
  -G "dba" -m -s /bin/bash oracle
# grep nobody /etc/passwd
nobody:x:99:99:Nobody:/:/sbin/nobody

  49. oracle User Environment • Set umask to 022. • Set the DISPLAY environment variable. • Set the ORACLE_BASE environment variable. • Set the TMP and TMPDIR variables, if needed.
$ cd
$ vi .bash_profile
umask 022
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
TMP=/u01/mytmp; export TMP
TMPDIR=$TMP; export TMPDIR

  50. User Shell Limits • Add the following lines to the /etc/security/limits.conf file: • Add the following line to the /etc/pam.d/login file:
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536
session required /lib/security/pam_limits.so
