Large-Scale SQL Server Deployments for DBAs
Unisys
Agenda
  • Optimization
    • Failover Cluster
    • Resources
    • Indexes
  • I/O Configuration
    • Data Management
  • Enterprise Class Systems Infrastructure
    • Intel Platform Architecture
    • Storage Architecture
    • Microsoft Systems Architecture
SQL Server Failover Clustering
  • Hot standby solution
  • Best high-availability configuration
    • Redundant system
    • Shared access to the database files
    • Recovery in seconds
    • Automatic failure detection
    • Automatic failover
    • Minimal client application awareness
  • Built on the server cluster technology of Windows Clustering
Windows Clustering
  • Software Components
    • Cluster name
    • Cluster IP address
    • Cluster Administrator account
    • Cluster resource
    • Cluster group
    • Microsoft Distributed Transaction Coordinator (MS DTC)
  • Hardware Components
    • Cluster node
    • Heartbeat
    • External network
    • Shared cluster disk array
    • Quorum drive
  • Virtual Server
How Failover Clustering Works
  • Operating-system checks
    • Heartbeat checks availability of nodes and virtual servers.
  • SQL Server checks
    • LooksAlive check runs every five seconds.
    • IsAlive check runs SELECT @@SERVERNAME query.
  • Failover to another node
    • Windows Clustering attempts restart on same node or fails over to another node.
    • SQL Server service starts.
    • Brings master online.
    • Database recovery proceeds.
    • End users and applications must reconnect.
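
As an illustration (not the cluster service's literal internals), the IsAlive poll amounts to connecting to the virtual server and running a trivial query; you can issue the same check by hand, e.g. through osql against the virtual server name:

    -- Hedged sketch: the kind of query the IsAlive check issues.
    -- Against a clustered instance, @@SERVERNAME reports the virtual
    -- server name, so the result is stable across failovers.
    SELECT @@SERVERNAME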
Enhancements to Failover Clustering
  • SQL Server Setup installs/uninstalls a cluster.
  • Service packs can be applied directly to virtual servers.
  • SQL Server supports multiple instances and multiple network addresses.
  • Failover and failback occur to or from any node in a cluster.
  • SQL Server 2000 on Windows 2000 Datacenter Server supports four server nodes in a cluster.
  • All nodes have local copies of SQL Server tools and executables.
  • Rerunning Setup updates Failover Cluster configurations.
  • SQL Server Service Manager or SQL Server Enterprise Manager now start/stop SQL Server Services.
SQL Server 2000 Failover Clustering

[Diagram: Node A and Node B, each running MSCS, joined by a private heartbeat link and the public network, with shared access to a disk array; a SQL Server 2000 virtual server runs on top of the cluster.]

Virtual Server Instance
  • System & User Databases
  • SQL Server & SQL Server Agent Services
  • Executables
  • Network Connection
Four-Node Clusters
  • SQL Server 2000 fully supports 4-node failover clustering
    • Can assume at most one node failure and assign resources appropriately, e.g. leave one node out of four idle for 25% redundancy (an improvement over the 50% redundancy of a 2-node Windows 2000 Advanced Server cluster)
    • When running Active/Active/Active/Active for critical workloads, ensure that each node has the power and resources (especially memory) to handle the worst case
SQL Server 2000 Cluster Storage Considerations
  • Fibre Channel connected to SAN is the preferred approach
  • Network Attached Storage (NAS)
    • Not supported for clusters
    • See Knowledge Base article Q304261:

“Because of the risks of network errors compromising database integrity, together with possible performance implications that may result from the use of network file shares to store databases, Microsoft recommends that you store database files either on local disk subsystems or on Storage Area Networks (SANs).”

SQL Server 2000 Cluster Storage Considerations
  • Basic Disks are required by MSCS (Microsoft Cluster Service)
    • Dynamic Disk & Mount Points are not natively supported (Veritas required)
    • Software RAID is not supported
    • Balance the limited supply of drive letters against the placement of:
      • Log files
      • Data files
      • Filegroups
      • tempdb
Cluster Failover Memory & CPU Configuration Planning

[Diagram: two glasses of water. If you poured the contents of one glass into the other, would it all fit? Ask the same of each node's resources.]

  • Memory Resources
  • CPU Resources

64-bit dynamically manages memory.
Resources
  • Memory
    • Very Large Memory Support
    • AWE
    • AWE & I/O
    • Settings
  • Processor Allocation
    • User Mode Scheduler
    • Settings
    • Affinity
Windows 2000 Very Large Memory Support
  • Real Memory - 4GB Limit & /3GB switch
  • Physical Addressing Extension (PAE)
  • Address Windowing Extensions (AWE)
  • Large Memory Enabled (LME)
  • AWE I/O Implications

Much simpler in 64-bit.

Memory Management 101

[Diagram: each process's 4GB virtual address space maps onto physical memory; the Windows 2000 kernel occupies the top 2GB (or 1GB with /3GB) of every address space.]
Process Address Spaces

[Diagram: three SQL Server instances, each with its own 4GB virtual address space mapped onto the same physical memory; the Windows 2000 kernel occupies 2GB (or 1GB) of each.]
Virtual Memory And AWE
  • Windows 2000, a 32-bit operating system, can only address 2^32 bytes (4GB)…so how do we support more memory?
  • The next few slides look at how AWE enables SQL Server to use 64GB
AWE Memory Model
  • AWE Window resides in real memory (below 4GB)
  • Data pages mapped to reserved bufferpool above 4GB
  • Applications call AWE API to get data pages
    • Data pages referenced or mapped
    • Allows huge amounts of data to be cached
    • Look for Page Life Expectancy counter in perfmon
    • AWE provides a 64GB address space to SQL Server
  • Largest benefits with systems that have large amounts of frequently referenced pages (“hot” pages)
    • Note: SQL Server-based applications that scan large tables once and/or produce large intermediate result sets may not benefit from a much larger bufferpool
AWE Memory Allocation

[Diagram: 1) physical memory above 4GB is allocated; 2) it is mapped into the AWE memory window inside the SQL Server instance's virtual address space, below the Windows 2000 kernel region.]
AWE Memory Reallocation

[Diagram: to reach a different set of buffer pages, 1) an existing mapping in the AWE memory window is removed, and 2) a new mapping is created to other physical pages above 4GB.]
SQL Server Use of AWE Memory
  • Using AWE means that SQL Server 2000 memory is no longer dynamic (i.e. it can no longer be relinquished if the system is under stress)
    • Caution when using AWE and SQL Server with other workloads – configure memory appropriately or SQL Server will allocate total available memory (minus 128MB)
    • Caution when running in a cluster - allow enough memory for failover on each node
AWE and I/O
  • Applications can only read/write to AWE Memory Window
  • Two methods of achieving this:
    • With LME hardware (the hardware can access mapped pages anywhere in memory)
    • Without LME hardware (the hardware can only access mapped pages below 4GB)
AWE I/O With LME (Required for Datacenter Certification)

[Diagram: with Large Memory Enabled hardware, the network and disk adapters perform direct I/O against mapped pages anywhere in physical memory, including pages above 4GB.]
AWE I/O Without LME

[Diagram: without LME hardware, adapters can only reach pages below 4GB, so I/O against AWE-mapped pages is double-buffered through the AWE memory window.]
Large Memory Settings
  • To grab 4GB or less:
    • No /PAE needed in boot.ini
    • /3GB available to grab 3 of the 4GB of real memory
    • AWE can be enabled but does nothing for SQL Server
  • To grab >4GB up to 16GB:
    • /PAE enabled in boot.ini
    • /3GB enabled in boot.ini
    • AWE enabled
  • To grab >16GB:
    • /PAE enabled
    • AWE enabled
    • /3GB disabled
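
For example, a boot.ini entry for the >4GB-to-16GB case might look like the following (the ARC disk path is a hypothetical placeholder; only the switches matter here). For more than 16GB, the same line would carry /PAE without /3GB:

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Datacenter Server" /PAE /3GB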
Memory Configuration Settings
  • Min Server Memory (MB)
    • SQL Server will maintain this value once it has been committed
    • Not automatically allocated on startup, unless AWE is enabled
  • Max Server Memory (MB)
    • Becomes the SQL Server target memory unless (Available Bytes – 5MB) is less than Max Server Memory
    • Buffer pages will continue to be committed until the target is reached
    • Under memory pressure SQL Server will release buffer pool memory until Min Server Memory is reached
    • Set to the same value as min when AWE is enabled
  • AWE Enabled
    • Enables the AWE window for SQL Server
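
A minimal sketch of these settings for an AWE-enabled instance (the 14336MB value is illustrative, not a recommendation; 'awe enabled' takes effect only after the service restarts):

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'awe enabled', 1
    -- With AWE, memory is static: fix min and max to the same value
    EXEC sp_configure 'min server memory (MB)', 14336
    EXEC sp_configure 'max server memory (MB)', 14336
    RECONFIGURE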
Memory Configuration Settings (cont.)
  • Locks
    • Sets the maximum number of locks
    • When set to 0, 2% of memory is allocated to the lock pool initially
    • The dynamic lock pool does not exceed 40% of the total server memory allocation
  • Set Working Set Size
    • Reserves physical memory for SQL Server instead of letting it be swapped out; set min and max server memory to the same value when enabling this option
  • Min Memory Per Query (KB)
    • Sets the minimum memory that must be granted by the Memory Grant Manager to avoid the query waiting
  • Query Wait (s)
    • Time the Memory Grant Manager waits to acquire min memory per query before the query times out
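
A hedged example of the query-memory pair (the values are illustrative assumptions, not recommendations):

    -- Grant each query at least 1MB before it starts
    EXEC sp_configure 'min memory per query', 1024
    -- Time out a query after it waits 60 seconds for that grant
    EXEC sp_configure 'query wait', 60
    RECONFIGURE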
Resources
  • Memory
    • Very Large Memory Support
    • AWE
    • AWE & I/O
    • Settings
  • Processor Allocation
    • User Mode Scheduler
    • Settings
    • Affinity
User Mode Scheduler

[Diagram: one UMS scheduler per CPU (CPU 0, CPU 1, CPU 2, … CPU n), each with its own UMS work queue and worker pool. Work requests arrive over the network and are assigned to workers in a pool; workers in "runnable" status are assigned to the CPU by the UMS scheduler; query results are returned over the network.]
Context Switching
  • No real multiprocessing, just time slicing
  • SQL Server User Mode Scheduler
    • Per processor
    • Scheduler assigns processor
  • Performance Factors
    • Perfmon: Context Switches/sec (10-15K is bad)
    • CPUs have L2 cache
    • CPU Affinity mask
    • Lightweight pooling
Related SQL Server Configuration Settings
  • Max worker threads
    • Sets maximum workers to service work requests
    • Assigned evenly across CPUs available to UMS
  • Lightweight pooling
    • When set, workers are created as NT fibers
    • One thread per CPU manages subset of fibers
  • Priority boost
    • If set, workers execute at NT priority 14
    • By default, workers execute at priority 7-8
  • Max degree of parallelism
    • Number of processors considered for parallel query execution plan
  • Cost threshold for Parallelism
    • Minimum execution cost before optimizer creates parallel execution plan
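
As an illustration, limiting parallelism on a large SMP box might look like this (the values are assumptions, not recommendations; both are advanced options):

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    -- Consider at most 8 CPUs for any one parallel plan
    EXEC sp_configure 'max degree of parallelism', 8
    -- Only parallelize queries whose estimated cost exceeds 15
    EXEC sp_configure 'cost threshold for parallelism', 15
    RECONFIGURE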
SQL Server Affinity Masks
  • CPU Affinity
    • Reduce context switching
    • Increase cache hits
  • New SQL Server 2000 SP1 Features!
    • I/O Affinity
    • Connection Affinity (VIA only)
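
A sketch of CPU affinity on a hypothetical 8-way server, binding SQL Server workers to the first four processors:

    -- 15 = 0x0000000F: CPUs 0-3; 'affinity mask' is an advanced option
    EXEC sp_configure 'affinity mask', 15
    RECONFIGURE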
IO Affinity

[Diagram: a 32-CPU system; CPUs grouped under third-level caches (TLCs), joined by crossbar intra-connects to memory storage units (MSUs) and to I/O modules attached to a Storage Area Network.]

Example: set I/O affinity to processor mask 0x0000000F. SQL Server disk I/O is then issued only by the affinitized processors. (An extreme high-end feature.)

Index Overview
  • Clustered and Non-Clustered Indexes
  • Covering Indexes
  • Unique Indexes
  • Indexed Views
  • Index Maintenance
  • Recommendations
Clustered Indexes

[Diagram: a clustered index B-tree with a root page, non-leaf pages, and leaf pages holding the data in key ranges A-C, D-E, F-G, H-J, K-M, N-O, P-Q, R-S, T-V, W-Z.]

Example: SELECT * FROM table WHERE lastname LIKE 'S%'
  • Assume 100 names start with 'S' and 100+ rows per page
  • The clustered index issues about 4 page reads in this example
  • Contrast with a non-clustered, non-covering index on a heap table
Clustered Indexes: Choosing a Clustered Key
  • Clusterkey is row locator in all non-clustered indexes
    • Increase potential for covering queries
    • Size consideration – long composite clusterkeys may not be appropriate
    • Small clusterkeys (smallint or INT) save space in non-clustered index:
      • INT = 4 byte row locator stored in non-clustered index entry
      • RID = 6 byte row locator stored in non-clustered index entry
  • Update frequency
    • Clusterkey columns must be stable
      • Updates are expensive
      • Cost increases with number of non-clustered indexes
  • A continuously ascending clusterkey value is the most efficient model for high-volume insert:

      -- Remaining columns were elided in the original; these are illustrative placeholders
      CREATE TABLE orders
        (Order_ID INT IDENTITY PRIMARY KEY CLUSTERED,
         Order_Date DATETIME NOT NULL,
         Customer_ID INT NOT NULL)

    • No hot spot contention
    • No page splitting
    • Suitable as a default clustered index
Non-clustered Indexes

[Diagram: a non-clustered index over a heap. Root page entries (ANDERSON 4:300, JOHNSON 4:301, SIMONSON 4:302) point to branch pages; branch entries point to leaf pages 2:100-2:107 (ANDERSON, BENSON, JOHNSEN, JOHNSON, KRANSTEN, MILLER, SIMONSON, TANNER), whose entries carry row locators (RIDs) into the heap's data pages.]

Example: UPDATE CUSTOMER SET CUSTOMER_COMMENT = '.......' WHERE CUSTOMER_NAME = 'KRANSTEN'
If the update moves the row, a forwarding pointer is left at the old row location; this avoids an index update when the RID changes.
Example: Non-clustered Index Carrying the Clustering Key

[Diagram: a non-clustered index B-tree (root page, non-leaf pages, leaf pages) whose leaf entries store the clustering key as the row locator.]

Note: non-clustered indexes are affected by all clustering key operations, including reindex, drop/create, etc.
Example: Drop the Clustering Key, Changing NC Key Values to RIDs

[Diagram: the same non-clustered index B-tree, with leaf entries now holding RIDs that point into the data pages.]

Note: change the clustering key and all non-clustered indexes change automatically.
Non-clustered Indexes: Composite Index Considerations
  • Leading column density

      CREATE UNIQUE INDEX IX_PRODUCT_TYPE_BRAND_NAME
      ON PRODUCT
        (PRODUCT_TYPE,
         PRODUCT_BRAND,
         PRODUCT_NAME)

  • It is important that statistics exist on all columns of the index
    • sp_createstats 'indexonly'
  • Using the left-most subset of columns is not absolutely necessary on non-clustered indexes:
    • With low density on an interior column (and statistics on it), the optimizer can use an index scan whose WHERE argument filters rows prior to the bookmark lookup:

SELECT * FROM PRODUCT WHERE PRODUCT_BRAND = 'XYZ'

StmtText
----------------------------------------------------------------------------------------
|--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([orders].[dbo].[PRODUCT]))
     |--Index Scan(OBJECT:(IX_PRODUCT_TYPE_BRAND_NAME), WHERE:(PRODUCT_BRAND='XYZ'))

If the leading column is used alone in the search argument, its density is crucial.
If the columns are used together (as the search argument), leading column density is less important than the composite "All Density".

Covering Indexes
  • Materializes query result from a non-clustered index
  • All necessary data must be in the index
  • Data pages are not touched (no bookmark lookup)
  • References to clustered key columns are available and used from the non-clustered index
  • A composite index is useful even if the first column is not referenced

SELECT SUPPLIER_ID, ORDER_ID, COUNT(*)
FROM ORDER_ITEM
WHERE YEAR(SHIP_DATE) = 1993
GROUP BY SUPPLIER_ID, ORDER_ID
ORDER BY SUPPLIER_ID, ORDER_ID

StmtText
----------------------------------------------------------------------------------------------
|--Compute Scalar(DEFINE:([Expr1002]=Convert([Expr1006])))
     |--Stream Aggregate(GROUP BY:(SUPPLIER_ID, ORDER_ID) DEFINE:([Expr1006]=Count(*)))
          |--Sort(ORDER BY:(SUPPLIER_ID ASC, ORDER_ID ASC))
               |--Index Scan(OBJECT:(IX_ORDER_ITEM_SHIP_DATE), WHERE:(datepart(year, Convert(SHIP_DATE))=1993))
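
An index definition consistent with the plan above (hypothetical; the source does not show it) would carry every referenced column so no bookmark lookup is needed:

    CREATE INDEX IX_ORDER_ITEM_SHIP_DATE
    ON ORDER_ITEM (SHIP_DATE, SUPPLIER_ID, ORDER_ID)

Because YEAR(SHIP_DATE) wraps the column in a function, the plan is an index scan rather than a seek, but the scan still never touches the data pages.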

Covering Indexes: Considerations
  • Queries covered by indexes often found during trivial plan phase
    • Reduced compilation time
  • Multiple indexes can be used
    • Index rows retrieved from non-clustered indexes
    • Joined on row locator (clusterkey or RID)
    • Not a common access method
  • Clustered keys always available (exist in row locator)
  • Can reduce some lock contention
    • Data page / row are not accessed; not locked
  • Avoid exaggerating non-clustered key lengths to force covering queries
    • Longer key increases index size
    • Volatile columns cause lock contention in index leaf
Data Management Overview
  • I/O strategies
  • Capacity Planning Guidelines
  • RAID
  • File System
  • Database and Object Placement Strategies
  • SAN
  • Monitoring
Understanding your Disks
  • Check the numbers
  • Newer drives differ in speed
  • Throughput numbers from hardware vendors assume a perfect world
Workload Comparison
  • OLTP environment
    • Short transactions
    • Random read and write I/O
    • 50-60% read, 40-50% write is a good starting point for the workload breakdown
  • DSS, OLAP environment
    • Mostly long transactions
    • Mostly sequential I/O, typically reads
    • 75-80% read, 20-25% write is a good starting point
Capacity Planning Approach
  • Minimal approach
    • Size of database / size of disk
    • Table sizing tool in the W2K resource kit and http://toolbox
    • Not a good approach for performance
  • Guesstimate
    • Estimated data throughput / size of disk
    • Compare with the minimal approach and take the larger number of disks of the two
  • Sampled approach
    • Take a small sample workload and monitor it on a system with tempdb, log, and data files placed on separate spindles. Use Perfmon to capture the I/O statistics, then calculate the spindle requirements under real production load.
Capacity Planning: OLTP
  • Random reads/writes
    • Plan on mostly random reads and writes, at about 80 I/Os per second per drive
  • Turn on the controller cache and match it to the read/write ratio; for a read-intensive application, perhaps 75% read / 25% write
  • A larger number of smaller drives = better performance
  • Put the log on its own controller with the cache set to 100% write (except in replication environments)
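
A worked example of the math: a workload generating 2,000 random I/Os per second would need roughly 2,000 / 80 = 25 spindles, before accounting for RAID write penalties.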
Capacity Planning: Data Warehouse
  • Sequential reads
    • Plan on 120 I/Os per second per drive due to scans
  • Set the controller for heavy read-through cache
  • In general, spread data and indexes across many drives; don't be a hero and guess where to place objects
  • Consider striping tempdb separately
  • The log is less important
RAID Introduction

[Diagram: block layouts for RAID 0 (stripe: Data 1-16 across four disks), RAID 1 (mirror: Data 1-4 duplicated on two disks), RAID 5 (stripe with rotating parity), and RAID 10 (mirrored stripes).]
Which RAID to Choose
  • Using sequential read/write operations:
    • RAID 0 - 1 read / 1 write (no redundancy)
    • RAID 1 / RAID 10 - 1 split read / 2 writes
    • RAID 5 - 2 reads / 2 writes

[Chart: relative sequential throughput (MB/sec), highest to lowest: RAID 0, RAID 10, RAID 1, RAID 5.]
RAID 5 vs. RAID 10 Performance

[Chart: Iometer benchmark results comparing RAID 5 and RAID 10 (Iometer version 1998.10.08, Copyright 1997-1998 Intel Corporation).]
RAID Recommendations
  • RAID 10 is best
    • Very high read-write performance
    • Higher theoretical fault tolerance
  • RAID 5 as last resort
    • Highest usable capacity
    • Moderate read performance
    • Poor write performance
  • Stripe Configuration
    • 64K Stripe Unit
    • 256K Stripe Depth

[Diagram: a four-disk stripe of Data 1-16, illustrating the stripe unit (the chunk written to one disk) and the stripe width (one pass across all disks).]

RAID Implementation in VLDB
  • Hardware or software RAID
    • Performance considerations
    • Cluster support
  • Battery backup for the controller cache is a must
  • Some controllers are optimized for a particular number of disks
  • IDE RAID
    • Good sequential performance
    • Lacks scalability
File System Topics
  • NTFS vs. FAT32
    • NTFS is required for cluster support
    • FAT32 has a file size limit
  • Allocation unit size
    • Stripe size must be a multiple of 8K (SQL Server I/O), the 4K block (NT I/O), and 512 bytes (disk I/O)
    • 64K is optimum for most SQL Server implementations
    • A larger stripe size (128K, 256K) may be used for data warehouse environments
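
As a point of reference, a 64K stripe unit holds exactly one SQL Server extent (8 pages x 8KB), so a single extent read is satisfied by one spindle.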
I/O Load Balancing

[Diagram: the system bus fanning out to several PCI buses and the network.]

  • Understand theoretical maximums versus actual, sustainable throughput capability in all bus architectures
  • Factor in network I/O requirements
  • Distribute the spindles in a RAID set across the available channels
  • Do not saturate one PCI bus and underutilize another – balance your workload. Monitor!
  • Understand your bus architecture – know the point of diminishing returns on spindles per bus
I/O Utilization: Object Placement
  • Where to place my objects?
    • The log should be on its own mirrored drive or drives
    • OS bits, SQL Server bits, and the pagefile on separate drives
  • Use multiple filegroups (see the sketch after this list)
    • Large, heavily hit tables and their indexes should be placed in separate filegroups, for both performance and management reasons
    • Place table (data), index, and tempdb on separate filegroups and spindles to promote sequential I/O during table/index scans, e.g. index creation (using the SORT_IN_TEMPDB option)
    • Multiple data files can be used for tempdb if usage is high
    • Database backups should go to separate spindles due to the highly sequential nature of backup and restore operations; the improvement over shared spindles can be significant in non-SAN environments, e.g. SCSI subsystems with a limited number of spindles
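
A minimal sketch of such a layout (the names, drive letters, and the assumption that each letter maps to its own spindle set are all hypothetical):

    -- Separate filegroups for data and indexes; log on its own drive
    CREATE DATABASE Sales
    ON PRIMARY
      (NAME = Sales_sys,  FILENAME = 'E:\SQLData\Sales_sys.mdf'),
    FILEGROUP DataFG
      (NAME = Sales_data, FILENAME = 'F:\SQLData\Sales_data.ndf'),
    FILEGROUP IndexFG
      (NAME = Sales_ix,   FILENAME = 'G:\SQLData\Sales_ix.ndf')
    LOG ON
      (NAME = Sales_log,  FILENAME = 'L:\SQLLog\Sales_log.ldf')

    -- Heavily hit table on DataFG, its non-clustered index on IndexFG
    USE Sales
    CREATE TABLE dbo.Orders
      (Order_ID INT IDENTITY PRIMARY KEY CLUSTERED,
       Customer_ID INT NOT NULL) ON DataFG
    CREATE INDEX IX_Orders_Customer ON dbo.Orders (Customer_ID) ON IndexFG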
Storage Area Network
  • SAN advantages
    • A large cache and a large number of spindles often simplify the storage architecture design
    • High availability, reliability, and often a rich feature set for VLDB environments
  • SAN watch-outs
    • Storage design is still necessary; a SAN is not the solution for everything
    • Performance information may not be easily accessible
    • Investment in skilled SAN administration is a requirement
I/O Monitoring
  • Performance monitor counters
    • Processor & SQL Server process: % Privileged Time vs. % Processor Time
    • PhysicalDisk: % Disk Time, Avg. Disk Queue Length, Current Disk Queue Length, Avg. Disk Sec/Read, Avg. Disk Sec/Write
    • Avoid logging to the disks being monitored
  • Vendor tools
    • SAN environments often do not provide accurate statistics to the OS; use vendor monitoring tools to obtain comparable information
  • SQL Server data file I/O statistics
    • Transaction log bottleneck: query the virtual table ::fn_virtualfilestats

        SELECT * FROM ::fn_virtualfilestats(<database id>, -1)

      • If IoStallMS / (NumberReads + NumberWrites) >= 20ms, a log I/O bottleneck may exist
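
A runnable version of that check for the current database (using a variable for the database id keeps the call valid on SQL Server 2000):

    DECLARE @db INT
    SET @db = DB_ID()
    -- Average stall per I/O, in ms, for each file of the current database
    SELECT FileId,
           IoStallMS / (NumberReads + NumberWrites) AS AvgStallMS
    FROM ::fn_virtualfilestats(@db, -1)
    WHERE NumberReads + NumberWrites > 0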
Enterprise Class Systems Infrastructure
  • Enterprise Wintel Architectures
  • Enterprise Storage Architectures
  • Microsoft Systems Architecture
Profusion Architecture
  • Current Intel Xeon Processor MP speeds
    • 2GHz, 1.90GHz, 1.60GHz, 1.50GHz, 1.40GHz
  • Currently supported Profusion products
    • 700 MHz, 900 MHz
  • Stops at 8 processors
ES7000 Scaling Up – All 32 CPUs

[Diagram: 32 CPUs grouped under fourth-level caches (FLCs), connected through crossbar intra-connects to memory storage units (MSUs) and I/O modules, forming a single 32-way system.]
ES7000 Partitioning

[Diagram: the same 32-CPU system divided into eight 4-way partitions (1-8), each with its own third-level cache (TLC), CPUs, and I/O.]

ES7000 – Common Usage Model

[Diagram: the 32-CPU system divided into four 8-way partitions (1-4), each spanning two TLC sub-pods with dedicated I/O.]

Unisys ES7000 – 32-bit

[Diagram: Intel processors grouped behind shared fourth-level caches, joined by high-performance crossbars to centralized memory and to I/O modules with 12 x 66 MHz PCI slots each; configurable as a single-domain 16-way or a dual-domain 32-way.]

Unisys ES7000 – 32-bit
  • Highest performer of the Xeon MP based ES7000s
  • Best availability and manageability features with Server Sentinel
  • For large databases and mission-critical applications
  • Can be configured with 8 to 32 Xeon MP processors and up to 8 partitions
  • More than twice the I/O bandwidth of the Orion 200 model
    • Up to 64 x 66 MHz and 32 x 33 MHz PCI slots
Unisys ES7000 – 32-bit
  • Up to 8 cellular sub-pods
    • 4-way Intel Xeon MP processors (1.4 GHz or 1.6 GHz, 64-bit bus at 400 MHz FSB)
    • 32MB module of shared cache
  • Up to 4 crossbars
    • High-performance, non-blocking
  • Up to 64GB of centralized memory
    • Contains the cache coherency mechanism
Enterprise Wintel Architecture
  • Windows Datacenter Program
    • High end systems
      • 32 Processor
      • 64GB Memory
    • Higher level of support
      • Joint support queue
  • Unisys Server Sentinel
    • System health monitoring & Self-healing
    • Automated operations & Problem reporting
    • Hardware and software inventory
    • Remote management
  • Unisys Application Sentinel (SQL Server)
    • Self-Optimization for SQL Server environments
      • Analyzes metrics and compares to known good baseline
      • Chooses setting/resource changes to relieve bottlenecks
    • Self-Healing for SQL Server environments
      • Predicts halt/hang condition and triggers graceful failover
    • Configurable automation level
Pilgrim’s Pride

A $2.2B poultry producer located in Pittsburg, Texas, and the 3rd largest in the US. Its 2TB database is the largest SAP installation in the world running on SQL Server.

Business Problem:

Required additional reliable, scalable capacity for a database server that was running out of headroom on a commodity 8-way server. The current user count of 2,600 is expected to double by 2004 through acquisitions and regular business expansion.

Solution:

Two 16-way ES7000s running Windows 2000 Datacenter Server and SQL Server 2000, each with 32GB of memory. Each system is configured with a 12-way DB server and a 4-way application server clustered to failover to the other.

Enterprise Class Systems Infrastructure
  • Enterprise Wintel Architectures
  • Enterprise Storage Architectures
  • Microsoft Systems Architecture
High-End Storage Arrays

Architecture – Design Alternatives (Yesterday)

  • Bus (1.6 GB/s)
    • Advantages: good performance; simple, low cost
    • Challenges: bus contention; limited scaling
  • Switch (10.6 + 5.3 = 15.9 GB/s)
    • Advantages: good performance
    • Challenges: switch contention; limited scaling; complex, costly
High-End Storage Arrays

Architecture – Design Alternatives (Today)

  • Bus and switch designs as on the previous slide
  • Direct Matrix (64 + 6.4 = 70.4 GB/s), a breakthrough design:
    • Simple, cost-effective
    • More reliable
    • More scalable
    • Future-proof growth
Symmetrix Direct Matrix

Architecture – Interconnect and Cache

  • Matrix interconnect
    • Pathing: 128 direct point-to-point dedicated connections across the matrix backplane
  • Dynamic global cache
    • Parallelism: 32 independent regions, 32 concurrent I/Os
  • More performance: more IOPs and bandwidth; more burst performance; more business continuity performance
  • More functionality: full software compatibility; assured network interoperability
  • More availability: simpler design; eliminates bus limitations; eliminates switch limitations
  • Better economics: scale high / scale low; more efficient use of hardware; precisely target configurations
Symmetrix Direct Matrix

Architectural Specifications

[Specification table not reproduced in the transcript.]
Synchronous Mirroring
  • Most Durable Form of Remote Mirroring
  • Confirmation Returned as Part of I/O Completion
  • Synchronous vs. Asynchronous, and Adaptive Mode
EMC SRDF

[Diagram: source and target arrays connected by SRDF links.]

  • The architecture uses delta technology (track tables)
  • Fully independent of host operating systems, DBMS, and filesystems
  • Data is mirrored remotely
  • Bi-directional source-to-target(s) architecture
  • Mirrored volumes are logically synchronized
  • Added protection against drive, link, and server failures

SRDF Modes of Operation: Synchronous Mode

[Diagram: source and target arrays connected by SRDF links.]

  1. An I/O write is received from the host/server into the cache of the source.
  2. The I/O is transmitted to the cache of the target.
  3. A receipt acknowledgment is provided by the target back to the cache of the source.
  4. Ending status is presented to the host/server.

SRDF and Clustering

GeoSpan for MSCS provides cluster support across a geography.

[Diagram: primary location and "failover" location joined by a private interconnect, a heartbeat connector, and SRDF links.]

Old Mutual Life Assurance Company

Old Mutual, a Fortune Global 500 company, is a world-class international financial services company that manages more than $234 billion in funds for millions of customers worldwide. Old Mutual has been aggressively expanding its operations in life assurance, asset management, banking and short-term general insurance.

Business Challenge:

Promote business continuity by streamlining an overgrown server environment and providing total disaster recovery and backup to prevent downtime and service interruptions.

Solution:

Unisys ES7000 servers, Microsoft Windows 2000 Advanced Server and SQL Server 2000, and EMC Symmetrix, SRDF, GeoSpan, and PowerPath products. Outage time went from 54 hours per month to near zero.

Using TimeFinder for Backups
  • Quiesce SQL Server
  • Ensure All Updates Completed to Production Disk Volume
  • Split Off a Point-In-Time BCV Mirror of Logical Volume
  • Resume Production Application Processing While …
  • Performing Backups from the BCV Mirror

[Diagram: a point-in-time BCV mirror is split from the production volume; backups run from the BCV to tape.]

Storage Management

Functionality – Storage Management Made Simple, Automated, Open

  • Intelligent supervision: ControlCenter StorageScope, Database Tuner, Automated Resource Manager, SAN Manager, Common Array Manager; EMCLink
  • Information safety: EMC Data Manager (EDM), ControlCenter Replication Manager, ISV backup applications
  • Infrastructure services: PowerPath, Celerra HighRoad, GeoSpan
  • Device management: ControlCenter Symmetrix Manager, SRDF/TimeFinder Manager, Symmetrix Data Mobility Manager (SDMM)
  • Performance management: ControlCenter Symmetrix Optimizer, Workload Analyzer
  • Replication and recovery: TimeFinder, SRDF, Consistency Groups

The original slide's legend distinguishes multivendor storage management from EMC platform storage management.

Enterprise Class Systems Infrastructure
  • Enterprise Wintel Architectures
  • Enterprise Storage Architectures
  • Microsoft Systems Architecture
The Unisys Microsoft Systems Architecture is a best-of-breed, Windows-based infrastructure producing faster implementations with predictable costs, reduced risk and faster time to benefit.
Unisys MSA EDC Program

Fully redundant data centers supporting “Unisys-unique” capabilities

  • Disaster recovery
  • Enhanced systems management
  • Secure application tier
  • Large OLTP databases
  • Multi-terabyte business intelligence hosting
  • Server consolidation

Site A - EMC Lab

Site B - Unisys Lab

A Summary of Today’s Session
  • SQL Server 2000
    • SQL Server Failover Clustering
    • Resource management
      • Memory
      • CPUs
    • Indexing
    • I/O configuration and strategies
  • Enterprise Class System Infrastructure
    • Enterprise Class Servers
      • Unisys ES7000
    • Enterprise Class Storage
      • EMC Symmetrix
    • Unisys Services and MSA
      • Infrastructure engineered for your most demanding requirements