
Siebel Maximum Availability Architecture MAA



    1. Siebel Maximum Availability Architecture (MAA) Richard Exley High Availability Systems and Maximum Availability Architecture Group Oracle Server Technologies

    2. Agenda Maximum Availability Architecture (MAA) Siebel MAA Target Architecture Oracle Database MAA Siebel High Availability Transparent Application Failover Unplanned Outage Solutions Planned Maintenance Solutions Tips and Best Practices Resources

    3. Maximum Availability Architecture (MAA)

    4. Maximum Availability Architecture (MAA) Maximum Availability = Unbreakable Architecture + Best Practices Oracle's best practices blueprint based on proven Oracle high availability technologies and recommendations Technology + Configuration + Operational Practices Applications, Enterprise Manager, Application Server, Collaboration Suite and Database Constantly validated and enhanced as new products and features become available Focused on reducing unplanned and planned downtime Papers published to the Oracle Technology Network (OTN) http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm

    5. Siebel MAA

    6. Siebel MAA Target Architecture

    7. Siebel MAA Oracle Database MAA Speaker notes: explain RAC, Data Guard, ASM, Flashback, and TAF, and Siebel's support for each. All existing database MAA papers apply to Siebel (pointer).

    8. Siebel MAA Siebel HA Deployment

    9. Siebel MAA Siebel HA Deployment Options

    Load Balancing: Client-initiated workload is distributed across multiple component instances running on multiple servers. If an instance or server fails, requests are automatically routed to the remaining instances. Not all components support load balancing.

    Distributed Services: Siebel Server-initiated workload is distributed across multiple component instances running on multiple servers. If one instance or server fails, the remaining instances take over processing the requests. Not all components support distributed services.

    Clustering: Server clusters consist of two or more physical servers linked together so that if one server fails, resources such as disks, network addresses, Siebel Servers, and Gateway Servers can be switched over to another server. All components support deployment in a clustered Siebel Server. In a clustered configuration, each component instance runs in active/passive mode; however, component instances can be distributed across multiple servers and mixed with load-balanced components so that all available servers are utilized (active) during normal operation. For example, in a two-node cluster you would install two clustered Siebel Servers that run on separate nodes when both are available; if one node fails, both run on the surviving node. It is important to plan carefully so that you have sufficient capacity to run Siebel in the event of node failure.

    10. Siebel MAA Available Siebel Component Deployment Options

    11. Siebel MAA Siebel Clustering Requirements

    Shared High Availability File System: shared for failover but accessed by only one node at any given time (Siebel software home, name server backing file, etc.).

    Cluster Manager that supports: Virtual IP management with failover, so a single Siebel Server and Gateway network address is independent of the physical service location; Service Monitoring, the ability to monitor Siebel service availability; and Service Control, the ability to restart and relocate Siebel services in the event of failure.
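    For the Service Monitoring requirement, a cluster manager's check script might probe Siebel availability with the Server Manager command line. This is a sketch only; the gateway address, enterprise, server name, and credentials below are hypothetical placeholders, not values from the slides:

    ```
    srvrmgr /g gateway-vip /e SBA_ENT /s SIEBSRVR1 /u SADMIN /p password /c "list servers"
    ```

    A non-zero exit status or an unreachable gateway would signal the cluster manager to restart or relocate the service.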

    12. Siebel MAA Transparent Application Failover Works for: RAC instance or node failure; local Data Guard standby failover and switchover; database shutdown/startup
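    As a sketch, a TAF-enabled Oracle Net alias for the Siebel connect string could look like the following; the host names, service name, and retry values are illustrative assumptions, not taken from the slides:

    ```
    SIEBEL =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = siebel)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 180)
            (DELAY = 5)
          )
        )
      )
    ```

    TYPE = SELECT lets in-flight queries resume on the surviving instance; RETRIES and DELAY govern how long clients keep retrying during a failover or switchover.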

    13. Demo 1

    14. Transparent Application Failover Siebel Client Behavior on Failover or Switchover

    15. Siebel MAA Unplanned Outage Solutions Recovery time for human errors depends primarily on detection time. If it takes seconds to detect a malicious DML or DDL transaction, it typically requires only seconds to flash back the appropriate transactions. Longer detection time usually leads to longer recovery time to repair the appropriate transactions. An exception is undropping a table, which is effectively instantaneous regardless of detection time. The Data Guard recovery time indicated applies to database and Siebel recovery; network connection changes and other site-specific failover activities may lengthen overall recovery time.
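    The Flashback repairs described above can be sketched in SQL. The table name here is an illustrative placeholder, and row movement must be enabled before a table can be flashed back to a timestamp:

    ```
    -- Undrop a table: effectively instantaneous regardless of detection time
    FLASHBACK TABLE s_contact TO BEFORE DROP;

    -- Rewind a table past a bad DML transaction detected 15 minutes ago
    ALTER TABLE s_contact ENABLE ROW MOVEMENT;
    FLASHBACK TABLE s_contact TO TIMESTAMP SYSTIMESTAMP - INTERVAL '15' MINUTE;
    ```

    For wider damage, FLASHBACK DATABASE rewinds the entire database to a point before the error, at the cost of discarding all later changes.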

    16. Siebel MAA Unplanned Outage Solutions (continued)

    17. Siebel MAA Planned Siebel Maintenance Solutions

    18. Siebel MAA Planned Database 10gR2 Maintenance Solutions Performing a rolling patch application with RAC is possible only for patches that are certified for rolling upgrades. Typically, patches that can be installed in a rolling upgrade include: patches that do not affect the contents of the database, such as the data dictionary; patches not related to Oracle RAC internode communication; patches related to client-side tools such as SQL*Plus, Oracle utilities, development libraries, and Oracle Net; and patches that do not change shared database resources, such as data file headers, control files, and common header definitions of kernel modules. Do not use Oracle RAC to perform rolling upgrades of patch sets.
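    For a patch certified as rolling-eligible, the per-node sequence can be sketched as follows; the database name SIEB and instance names are placeholders:

    ```
    # Node 1: stop only the local instance, patch locally, restart
    srvctl stop instance -d SIEB -i SIEB1
    opatch apply -local          # run from the unzipped patch directory
    srvctl start instance -d SIEB -i SIEB1

    # Repeat on node 2 for instance SIEB2, and so on.
    # The database stays available on the other nodes throughout.
    ```

    Always check the patch README first: only patches explicitly marked as rolling-installable may be applied this way.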

    19. Siebel MAA Planned Database 11g Maintenance Solutions Performing a rolling patch application with RAC is possible only for patches that are certified for rolling upgrades. Typically, patches that can be installed in a rolling upgrade include: patches that do not affect the contents of the database, such as the data dictionary; patches not related to Oracle RAC internode communication; patches related to client-side tools such as SQL*Plus, Oracle utilities, development libraries, and Oracle Net; and patches that do not change shared database resources, such as data file headers, control files, and common header definitions of kernel modules. Do not use Oracle RAC to perform rolling upgrades of patch sets.
    Online patches are a special type of interim patch that can be applied while the instance remains online. Oracle provides online patches when the changed code is small in scope and complexity, such as with diagnostic patches or small bug fixes, and when the patch does not change shared memory structures in the System Global Area (SGA) or other critical internal code structures. Applying an online patch increases memory consumption on the system because each Oracle process uses more memory from the Program Global Area (PGA) during the patch application, so take your memory requirements into consideration before you begin; each online patch is unique and its memory requirements are patch specific. As always, the best practice is to apply the patch on your test system first, which also enables you to assess the effect of the online patch on your production system and estimate any additional memory usage.
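    Assuming the patch README confirms it is an online patch, applying it to a running 11g instance can be sketched as follows; the SID, credentials, and node name are placeholders:

    ```
    opatch apply online -connectString SIEB1:SYS:syspwd:node1
    ```

    The instance stays up during the apply, at the cost of the extra per-process PGA memory noted above.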

    20. Siebel MAA Siebel Database Upgrade Using Logical Standby

    21. Demo 2

    22. Siebel MAA Siebel Database Upgrade using Logical Standby 11.1.0.6 – apply patch for bug 7198303

    23. Siebel MAA Tips and Best Practices Configure Siebel with MAA best practices, see:

    24. Siebel MAA Tips and Best Practices Apply RAC and Data Guard MAA best practices For RAC failover best practices, see: For Data Guard best practices, see:

    25. Siebel MAA Tips and Best Practices Automate Siebel Startup Siebel Shutdown Data Guard Broker Consider Fast Start Failover Test, Tune and Practice Recovery Procedures RAC node failure Site failure Database Recovery
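    The Data Guard Broker automation mentioned above can be sketched in DGMGRL; the credentials are placeholders:

    ```
    DGMGRL> CONNECT sys/syspwd
    DGMGRL> ENABLE FAST_START FAILOVER;
    DGMGRL> SHOW FAST_START FAILOVER;
    ```

    Fast-Start Failover also requires an observer process (started with START OBSERVER, typically on a third host) to arbitrate automatic failover between the primary and standby.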

    26. Siebel MAA Resources For demos of Siebel MAA RAC and DR failover, see:

    27. Q & A

    28. Siebel Maximum Availability Architecture (MAA) Richard Exley High Availability Systems and Maximum Availability Architecture Group Oracle Server Technologies

    29. Oracle is the Information Company

    30. Copyright
