
ASE121: Shared Disk Cluster Technical Review



  1. ASE121: Shared Disk Cluster Technical Review
     Linda Lin, Senior Architect, llin@sybase.com
     Ganesan Gopal, Senior Manager, gopal@sybase.com
     August 15-19, 2004

  2. The Enterprise. Unwired.

  3. The Enterprise. Unwired. Industry and Cross Platform Solutions: Manage Information, Unwire Information, Unwire People • Adaptive Server Enterprise • Adaptive Server Anywhere • Sybase IQ • Dynamic Archive • Dynamic ODS • Replication Server • OpenSwitch • Mirror Activator • PowerDesigner • Connectivity Options • EAServer • Industry Warehouse Studio • Unwired Accelerator • Unwired Orchestrator • Unwired Toolkit • Enterprise Portal • Real Time Data Services • SQL Anywhere Studio • M-Business Anywhere • Pylon Family (Mobile Email) • Mobile Sales • XcelleNet Frontline Solutions • PocketBuilder • PowerBuilder Family • AvantGo • Sybase Workspace

  4. Agenda • ASE HA Architecture • Shared Disk Cluster Feature Overview • Shared Disk Cluster Architecture Overview • What Benefits Will Customers See? • Q&A

  5. ASE HA Architecture • Current • Standby (active-passive): ASE on the standby node is not started until the primary node fails • Distributed workloads (active-active): the 12.0 implementation of the companion server architecture • Work in progress • Concurrent access (shared disk cluster)

  6. Active-Passive • Two-node failover cluster with a single database store • ASE on the primary node is active • ASE on the standby node is inactive until primary failover [Diagram: HA system with SAN-attached disk storage holding the ASE databases and the quorum device]

  7. Active-Active (symmetric configuration) • Two-node failover cluster with two database stores • Both ASE servers are active, servicing disjoint applications • Both ASE servers are companions of each other for availability [Diagram: HA system with ASE #1 and ASE #2 on SAN-attached disk storage, each with its own database store, plus a quorum device]

  8. Motivations for Shared Disk Cluster • TCO • Emerging low-cost Intel/Linux combinations: 4-8 nodes with a high-speed interconnect • Ability to consolidate multiple apps for effective utilization of unused H/W capacity • Ease of administration • Continuous Availability • Uninterrupted operation in case a component fails • High Performance • Increase capacity by adding SMP boxes with no restructuring of data • Single System Presentation • Single-server appearance to the application • Applications run at multiple nodes with shared data • Technology Trends

  9. Shared Disk Cluster • All servers provide service to a single database store • Applications run on multiple nodes with shared data • Connections are directed to the least loaded servers [Diagram: servers S1-S4 on a private interconnect, attached via SAN to shared disk storage holding the cluster DB and quorum device]

  10. Shared Disk Cluster - An instance of server failover • Failover connections are redirected to the least loaded servers • Delivers continuous availability [Diagram: same cluster as slide 9, with one server failed and its connections redirected to the remaining servers]

  11. Agenda • ASE HA Architecture • Shared Disk Cluster Feature Overview • Shared Disk Cluster Architecture Overview • What Benefits Will Customers See? • Q&A

  12. Shared Disk Cluster Highlights • Shared Disk Cluster (Concurrent Access) • Single database store • Single system presentation • All servers provide services at all times • All servers can access all DBs directly • Independent of platform-specific clustering services • Incremental node growth and shrink • Cluster-aware application partitioning • Workload-based load balancing • Instantaneous server failover • Workload is redistributed on a server failure Improved TCO, Reliability and Availability!

  13. Shared Disk Cluster • All servers provide service to a single database store • Applications run on multiple nodes with shared data • Connections are directed to the least loaded servers [Diagram: servers S1-S4 on a private interconnect, attached via SAN to shared disk storage holding the cluster DB and quorum device]

  14. Shared Disk Cluster Feature – Logical Clusters • Similar to the Logical Process Management (LPM) feature currently available in ASE • Rules can be specified at the instance level rather than at the engine level (LPM will be supported as well) • DBAs can set up application, user, and other bindings to the various instances

  15. Logical Cluster with Application Partitioning • Client connections are transparently redirected to a logical cluster • Cluster-aware application partitioning improves scalability • Easy to implement Service Level Agreements [Diagram: Finance and Sales applications connect to their respective logical clusters across servers S1-S4; SAN shared disk storage holds the cluster DB, quorum, Finance DB, Sales DB, and shared data]

  16. Logical Clusters for Disjoint Applications • Single system presentation with transparent connection redirection to a logical cluster • Cluster-aware application partitioning improves scalability • Frequently accessed data is available locally and infrequently accessed data is shared by all servers [Diagram: Finance and Sales applications are served by separate logical clusters; SAN shared disk storage holds the cluster DB, quorum, Finance DB, Sales DB, and shared data]

  17. Logical Cluster with Incremental Node Growth • Add more low-cost SMP boxes to a logical cluster as demand grows (TCO) • New connections are directed to the least loaded server within the logical cluster [Diagram: additional servers are added to a logical cluster as application demand grows; the shared SAN storage (cluster DB, quorum, Finance DB, Sales DB, shared data) is unchanged]

  18. Logical Cluster with Server Failover • Continuous availability with instantaneous failover • Failover connections are redirected to another server in the same logical cluster [Diagram: when a server in a logical cluster fails, its connections move to another server in that logical cluster; the shared SAN storage is unchanged]

  19. Logical Cluster Failover • Continuous availability with instantaneous failover across logical clusters • When a logical cluster fails, failover connections are redirected to an alternate logical cluster • Databases are available as long as one server is available [Diagram: connections from the failed logical cluster are redirected to the surviving logical cluster; the shared SAN storage is unchanged]

  20. Logical Cluster Recovery • After a logical cluster recovers, connections are redirected back to it [Diagram: the recovered logical cluster resumes serving its application; the shared SAN storage is unchanged]

  21. Shared Disk Cluster Installation and Administration Topics • Platform Recommendations • Installation • Configuration and Administration • Upgrade

  22. Platform Recommendations • Homogeneous operating systems – Linux or Solaris • Similar HW configuration • Gigabit Ethernet as private interconnects • Raw devices on SAN connected storage for globally accessible databases and quorum disk devices • Other options: clustered file systems or virtualized storage with volume managers • Single installation of $SYBASE on globally mounted file system

  23. Cluster Installation • Procedure similar to that of a standalone SMP server • Done from a single node in the cluster • Installation is on globally accessible raw devices and file systems • All instances in the cluster share a single installation: databases, server binaries and scripts, and configuration files

  24. Cluster Administration – sybcluster • Client-side cluster administration tool • Command line interface • Java-based program supporting multiple platforms • Communicates with the Unified Agent Framework (UAF) and ASEAgent on the server side • UAF and ASEAgent are configured through the UAF Web Console

  25. Cluster Administration – sybcluster architecture [Diagram: the sybcluster client talks to Server Node 1 and Server Node 2; each node runs the Unified Agent Framework with an ASEAgent plug-in, which drives the server binary and the start, stop, and monitor scripts]

  26. Cluster Administration – sybcluster usage • Examples:
      sybcluster addcluster -cluster <cluster_name> ..
      sybcluster addserver -server <server_name> -cluster <cluster_name> ..
      sybcluster startcluster -cluster <cluster_name> ..
      sybcluster startserver -server <server_name> -cluster <cluster_name> ..
      sybcluster stopserver -server <server_name> -cluster <cluster_name> ..
      sybcluster stopcluster -cluster <cluster_name> ..
      sybcluster dropserver -server <server_name> -cluster <cluster_name> ..
      sybcluster dropcluster -cluster <cluster_name> ..
      sybcluster serverstatus -server <server_name> -cluster <cluster_name> ..
      sybcluster clusterstatus -cluster <cluster_name> ..
   • Command line input also takes user authentication and UAF agent discovery information
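  As an illustrative walkthrough only (not taken from product documentation), the subcommands above could be combined as follows to define, start, check, and stop a small cluster; the names cluster1, S1, and S2 are hypothetical, and the trailing ".." stands for the user authentication and UAF agent discovery arguments, which are left elided here:
      sybcluster addcluster -cluster cluster1 ..
      sybcluster addserver -server S1 -cluster cluster1 ..
      sybcluster addserver -server S2 -cluster cluster1 ..
      sybcluster startcluster -cluster cluster1 ..
      sybcluster clusterstatus -cluster cluster1 ..
      sybcluster stopcluster -cluster cluster1 ..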

  27. Cluster Administration – Integration with third-party cluster admin tools [Diagram: a third-party cluster admin tool communicates with a third-party cluster admin agent on each server node; the agent drives the server binary and the start, stop, and monitor scripts]

  28. ASE Cluster Management • Clusters are contained within the Clusters folder • Operations are organized by task • Servers are color-coded by status

  29. ASE Cluster Management – Cluster Summary

  30. Upgrade to Shared Disk Cluster • Upgrade an SMP Installation to Shared Disk Cluster • Upgrade the existing installation as for an SMP server • Configure the cluster and add additional servers to the cluster • Upgrade an Active-Passive Installation to Shared Disk Cluster • Disable and de-configure Active-Passive mode • Upgrade the existing installation as for an SMP server • Configure the cluster and add additional servers to the cluster • Upgrade an Active-Active Installation to Shared Disk Cluster • Disable and de-configure Active-Active companionship • Upgrade one installation as for an SMP server • Dump and load databases from the other installation (if needed) • Configure the cluster and add additional servers to the cluster

  31. Shared Disk Cluster Feature – Single System Presentation Topics • Cluster Configuration File • Interfaces File • Server Name • ASE Configuration Parameters • Stored Procedures • Cluster Database Administration • DDL and DML • RPC and DTM Handling • Error Log • Tempdb • Trace Flag Handling • Load Balancing • Connection Failover

  32. Single System Presentation – Cluster configuration file • Format:
      <cluster_name>
          quorum <quorum disk device absolute path>
          interfaces_file <interfaces file>
          config_file <server configuration file>
          traceflags <trace flags separated by spaces>
          interconnect primary <protocol>
              server <id1> <server name 1> <primary addr1> <protocol spec info>
              server <id2> <server name 2> <primary addr2> <protocol spec info>
              server <id3> <server name 3> <primary addr3> <protocol spec info>
          interconnect secondary <protocol>
              server <id1> <server name 1> <secondary addr1> <protocol spec info>
              server <id2> <server name 2> <secondary addr2> <protocol spec info>
              server <id3> <server name 3> <secondary addr3> <protocol spec info>

  33. Single System Presentation – Cluster configuration file (cont.) • Example:
      cluster1
          quorum /dev/rdsk/c1d2s4
          interfaces_file /opt/sybase/interfaces
          config_file /opt/sybase/server.cfg
          traceflags 3623 3605
          interconnect primary udp
              server 1 S1 node1 24040 24051
              server 2 S2 node2 24040 24051
              server 3 S3 node3 24000 24011
          interconnect secondary udp
              server 1 S1 node1 24060 24071
              server 2 S2 node2 24060 24071
              server 3 S3 node3 24060 24071
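  To make the relationship between the format and the example concrete, here is a minimal Python sketch, not part of ASE or sybcluster, that parses text laid out as above into a dictionary; the parsing rules are assumptions based solely on the layout shown on these two slides:
      # Illustrative parser (not ASE code) for the cluster configuration layout above.
      def parse_cluster_config(text):
          lines = [ln.split() for ln in text.splitlines() if ln.strip()]
          cfg = {"cluster": lines[0][0], "interconnects": {}}
          current = None  # interconnect section currently being read
          for tokens in lines[1:]:
              key = tokens[0].lower()
              if key == "interconnect":
                  current = tokens[1]  # "primary" or "secondary"
                  cfg["interconnects"][current] = {"protocol": tokens[2], "servers": []}
              elif key == "server" and current:
                  sid, name, addr, *proto = tokens[1:]
                  cfg["interconnects"][current]["servers"].append(
                      {"id": int(sid), "name": name, "address": addr, "protocol_info": proto})
              elif key == "traceflags":
                  cfg["traceflags"] = [int(t) for t in tokens[1:]]
              else:  # quorum, interfaces_file, config_file
                  cfg[key] = tokens[1]
          return cfg
  For the example above, cfg["quorum"] would be "/dev/rdsk/c1d2s4" and cfg["interconnects"]["primary"]["servers"] would list S1, S2, and S3 with their addresses and protocol-specific port information.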

  34. Single System Presentation – Interfaces file • The location must be specified in the cluster configuration file • The "-i" option will not be allowed in a cluster environment • Each instance must have a server entry in the interfaces file • The ASE instance on each node will use its server name just as it does today in an SMP environment

  35. Single System Presentation – Interfaces file (cont.) • The recommended way on the client side: • specify the cluster name as the server name entry in the interfaces file • add query lines for all the instances in the cluster • This provides transparency on the client side: the client does not have to know which instance it is communicating with • Continuing from the cluster configuration file example, the recommended entries would be as follows

  36. Single System Presentation – Interfaces file (cont.)
      # Will be used by server S1
      S1
          master tcp ether node1 4048
          query tcp ether node1 4048
      # Will be used by server S2
      S2
          master tcp ether node2 4048
          query tcp ether node2 4048
      # Will be used by server S3
      S3
          master tcp ether node3 4048
          query tcp ether node3 4048
      # Will be used by the clients when connecting to the cluster
      cluster1
          query tcp ether node1 4048
          query tcp ether node2 4048
          query tcp ether node3 4048
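  As a minimal illustration (this is not Open Client code), the Python sketch below parses entries laid out as above and resolves the cluster1 name to the query endpoints a client would use; the parsing rules are assumptions based only on the layout shown:
      # Illustrative sketch (not Open Client code): resolve an interfaces-file
      # entry name to its list of query endpoints.
      def query_endpoints(text, name):
          endpoints, current = {}, None
          for raw in text.splitlines():
              line = raw.strip()
              if not line or line.startswith("#"):
                  continue
              tokens = line.split()
              if len(tokens) == 1:          # entry name line: S1, S2, S3, cluster1
                  current = tokens[0]
                  endpoints[current] = []
              elif tokens[0] == "query":    # query tcp ether <host> <port>
                  endpoints[current].append((tokens[3], int(tokens[4])))
          return endpoints.get(name, [])
      # query_endpoints(interfaces_text, "cluster1")
      # -> [("node1", 4048), ("node2", 4048), ("node3", 4048)]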

  37. Single System Presentation – Server name • @@servername – Cluster vs SMP

  38. Single System Presentation – Config parameters • All instances will use the same server config file • The location of the file must be specified in the cluster config file • The "-c" option will not be allowed • Most of the config parameters will apply to every instance • For example, assume a two-node cluster with the config parameter "number of locks" set to 1000: both instances come up with 1000 locks each

  39. Single System Presentation – Config parameters (cont.) • A few config parameters will be instance specific • A new config block will be defined in the config file to override the global values on a per-instance basis (see the sketch below) • Example:
      ....
      tcp no delay = 1
      ....
      max online engines = 10
      ....
      [Instance 1]
      max online engines = 5
      [Instance 2]
      tcp no delay = 0
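  As a minimal sketch of the override semantics described above (illustrative Python, not ASE code, using the values from the example on this slide): each instance starts from the global settings and applies its own [Instance N] block on top.
      # Illustrative sketch (not ASE code) of per-instance overrides of global
      # config values, using the example values from the slide above.
      def effective_config(global_settings, instance_overrides, instance_name):
          cfg = dict(global_settings)                             # globals apply to every instance
          cfg.update(instance_overrides.get(instance_name, {}))   # per-instance block wins
          return cfg

      global_settings = {"tcp no delay": 1, "max online engines": 10}
      instance_overrides = {
          "Instance 1": {"max online engines": 5},
          "Instance 2": {"tcp no delay": 0},
      }

      print(effective_config(global_settings, instance_overrides, "Instance 1"))
      # -> {'tcp no delay': 1, 'max online engines': 5}
      print(effective_config(global_settings, instance_overrides, "Instance 2"))
      # -> {'tcp no delay': 0, 'max online engines': 10}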

  40. Single System Presentation – Config parameters (cont.) • Config parameters that are currently dynamic will continue to remain so • Two types of static config parameters: • those that require just an instance reboot (example: "max online engines") • those that require a cluster reboot (example: "enable java")

  41. Single System Presentation – Database administration • Dump and Load • one backup server for the entire cluster • Space and Log Management • Single set of database devices and single log as in SMP server (minimal administration impact)

  42. Single System Presentation • No change to existing DDL and DML • Some stored procedures can have a cluster-wide scope (example: sp_who) • A new set option will be added to indicate the desired scope: cluster-wide or instance specific • Provides the ability to control the scope of execution for the applicable stored procedures

  43. Single System Presentation – RPC and DTM handling • Within a cluster, node-to-node RPCs will not be supported • Transparent handling of Distributed transactions and Replication

  44. Single System Presentation – Error log and tempdb • Each instance will use an instance-specific error log • Each instance can be configured to use an instance-specific tempdb • Worktables and #temp tables will be created in this instance-specific tempdb • The system-wide tempdb will be shared across all the nodes and will be used for persistent tempdb tables

  45. Single System Presentation – Tempdb (cont.) • The instance-specific tempdb is not recovered on a failover, which implies that #temp tables will be lost on a failover • Persistent temp tables will be logged and recovered in the event of a failover and will be available on all instances • The overall architecture is shown on the next slide

  46. Single System Presentation – Tempdb (cont.) [Diagram: instance 1 and instance 2 each have an instance-specific tempdb; the system tempdb is shared by instance 1 and instance 2]

  47. Single System Presentation – Trace flags • Trace flags that are needed across all the instances must be specified in the cluster config file • The "-T" option applies only to the instance using it, i.e., the trace flag will not be set on other instances • dbcc traceon() will be instance specific • A new dbcc option will be introduced to propagate a trace flag to all the instances in a cluster

  48. Single System Presentation – Load balancing • Critical aspects of load balancing: • Connection redirection • Client-side load balancing

  49. Workload Management – Connection redirection • Based on workload, a client connecting to an available instance may be redirected to a different available instance • Transparent to the application • New OCS capability (enabled by default) for clients that link with the new OCS libraries

  50. Workload Management – Client-side load balancing • The client randomly picks a query line from the list of query lines. Example:
      cluster1
          query tcp ether node1 4048
          query tcp ether node2 4048
          query tcp ether node3 4048
          query tcp ether node4 4048
   • Follows round robin once a query line is picked. Example: once node2 is picked, the client will try node2, followed by node3 and node4, wrapping up with node1 (see the sketch below) • The option will be turned off by default • Can be enabled in the OCS config file without any application change
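  As a minimal sketch of the selection order described above (illustrative Python, not Open Client code): pick a random starting query line, then proceed round robin, wrapping around to the remaining entries.
      import random

      # Illustrative sketch (not Open Client code) of the connection-attempt
      # order: random initial pick, then round robin with wrap-around.
      query_lines = ["node1:4048", "node2:4048", "node3:4048", "node4:4048"]

      def attempt_order(lines):
          start = random.randrange(len(lines))   # random initial pick
          return [lines[(start + i) % len(lines)] for i in range(len(lines))]

      # If node2:4048 is picked first, the order is node2, node3, node4, node1,
      # matching the example on this slide.
      print(attempt_order(query_lines))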
