
SQL Server 2000 Performance Improvements

Walter Wong

Technology Specialist Microsoft Hong Kong Limited



Current Customer Requirements And Future Performance Trends

  • Increasing user populations

  • Increasing transaction rates

  • Increasing database disk I/O

  • Increasing data and query complexity

  • Increasing application network I/O

  • Stricter response time requirements

  • Need to “Scale-Up” on SMP platforms

  • Need to “Scale-Out” on data tier


Performance Challenge: Increasing Disk I/O

  • Drive larger/faster I/O subsystems

  • More efficient use of large memory

  • Support for TB+ size databases

  • Optimize for OLTP I/O profiles

  • Optimize for DSS I/O profiles

  • Handle text/image data efficiently



SQL Server 2000 Solutions

  • Improved parallel scan processing

  • Merry-go-round scans

  • More scalable Readahead

  • Dynamic asynchronous I/O

  • Large memory support (AWE)

  • Scalable checkpoint processing

  • Efficient “Text in Row” format


Improved Parallel Scan

[Diagram: CPU1 and CPU2 each drive their own SDES through the Parallel Page Supplier critical section, pulling batches of 8K pages while Readahead I/O requests are issued in parallel.]

  • Less time spent in parallel page supplier critical section

  • Readahead I/O now issued outside the critical section

  • Architecture feeds 16 pages at a time (fewer calls to PPS)


Merry-Go-Round Scans

[Diagram. Without merry-go-round scans: User 1 reaches 25% scanned, then 50% scanned, while User 2 starts its own scan and reaches 25% – this can result in disk thrashing during large table scans! With merry-go-round scans: User 1 reaches 25% scanned, User 2 starts scanning at that point, User 1 completes its scan, User 2 reaches 75% scanned, then scans the remaining 25%.]

  • Reduces disk thrashing

  • Improves cache behavior

  • Affects unordered scans

  • No synchronization


Large Memory Support

[Diagram: buffer pages below 3 GB are mapped into the virtual address space; pages above 3 GB remain unmapped until needed.]

  • AWE allows a 36-bit physical address space via PAE (up to 64 GB)

  • Pages mapped/unmapped between virtual and physical memory

  • Maps are an order of magnitude less costly than copies

  • Enabled via the “awe enabled” parameter (static, non-paging)

  • Virtual address space still limited to 3 GB


Scalable Checkpoint

[Diagram of the buffer pool’s dirty-page map: the checkpoint starts, the checkpoint continues …while the Lazywriter flushes pages in parallel, and the checkpoint completes.]

Results in fewer checkpoint I/Os on busy systems!


Text In Row Format

SQL 7.0 Text Format:

  • Text stored “off row” in tree

  • Text pointer jump to root

[Diagram: row with Col1, Col2, Col3 and a pointer to a tree root held on separate text pages.]

Text In Row Format:

  • Text stored “in row” if it fits

  • Otherwise, root stored “in row”

  • Reduced I/O for text retrieval

  • Enabled/disabled by table option (see the sketch below)

  • Optional “inline limit” (256 default)

  • Increased I/O for non-text scans

[Diagram: row with Col1, Col2, Col3 and the text (or its tree root) stored in the row; overflow still goes to text pages.]
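
As a sketch of the table option above (the table name is hypothetical), “text in row” is enabled per table with sp_tableoption; the value can be ON/OFF or an explicit inline limit in bytes:

  -- Enable with the default 256-byte inline limit (table name is illustrative)
  EXEC sp_tableoption 'dbo.Documents', 'text in row', 'ON'

  -- Or specify a larger inline limit, e.g. 1000 bytes
  EXEC sp_tableoption 'dbo.Documents', 'text in row', '1000'

  -- Disable again
  EXEC sp_tableoption 'dbo.Documents', 'text in row', 'OFF'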


Performance Challenge: Processor and SMP Scale-Up

  • More efficient use of CPU resources

  • Reduction of critical code paths

  • Elimination of unnecessary processing

  • Improve overall SMP scaling

  • Improve processor affinity

  • Reduce cross processor cache traffic/contention



SQL Server 2000 Solutions

  • Server side cursor caching

  • Parameter caching

  • Enhanced intra-query parallelism

  • Full parallel index creation

  • Partitioned free buffer lists



1. Server Side Cursor Caching

  • Cursor compile plan

    • Saves allocation of object

  • Cursor execution plan

    • Representation of cursor is cached

  • Benefits:

    • Workloads dominated by cursor operations

    • 10-15% reduction in CPU consumption for cursor operations

  • No application changes necessary!
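
For illustration, a minimal sketch of the kind of cursor-dominated T-SQL workload this helps (the table and column names are hypothetical); the cursor’s compile and execution plans are now cached by the server without any change to code like this:

  DECLARE @OrderId int
  DECLARE order_cur CURSOR FAST_FORWARD FOR
      SELECT OrderId FROM dbo.Orders WHERE Status = 'OPEN'
  OPEN order_cur
  FETCH NEXT FROM order_cur INTO @OrderId
  WHILE @@FETCH_STATUS = 0
  BEGIN
      -- per-row processing goes here
      FETCH NEXT FROM order_cur INTO @OrderId
  END
  CLOSE order_cur
  DEALLOCATE order_cur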



2. Parameter Caching

  • Type checks and default parameter values cached on first execution

  • Subsequent executions use cached information and default values

  • Re-cached if plan recompile/schema change

  • Benefits:

    • Heavy stored procedure use with many parameters/defaults

    • 6% improvement over SQL Server 7.0 on SAP SD workload

  • No application changes necessary (if using prepare/execute to call stored procedure)
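
A minimal sketch of the pattern that benefits, with hypothetical procedure, table, and column names; the parameter type checks and the default value are cached after the first execution:

  -- Defaults and parameter type checks are cached with the plan and reused
  CREATE PROCEDURE dbo.usp_GetOrders
      @CustomerId int,
      @Status     varchar(10) = 'OPEN'
  AS
  BEGIN
      SELECT OrderId, OrderDate, Status
      FROM dbo.Orders
      WHERE CustomerId = @CustomerId
        AND Status = @Status
  END
  GO

  -- Callers that omit @Status pick up the cached default:
  EXEC dbo.usp_GetOrders @CustomerId = 42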



3. Intra-Query Parallelism

  • Integrated bitmap filtering

  • Multi-threaded access to spools

  • Reduced parallel sort stalls

  • Elapsed time costing

  • Insert/Update/Delete integration

  • Results:

    • Improved TPC-H DSS workload by 40% over SQL Server 7.0


4. Parallel Index Creation

  • DOP Determined on Execution

  • Partition Functions Defined Per CPU

  • Per CPU Table Scans Initiated

  • Filter Applied to Scans

  • Filtered Results Merged

[Diagram: the table is scanned in parallel by CPU 1 through CPU 4; each scan feeds a per-CPU partition function and filter, and the filtered results are merged into the index.]

Near linear scaling!


5. Partitioned Free Buffer Lists

[Diagram: CPU1 through CPU4 each own a free list; all share the hash table keyed on (PageId, DbId) and the buffer array, which contains buffer descriptors that point to 8K pages.]

  • Results in Free List per CPU

  • Enhances cache locality of free list descriptors and buffers

  • Distributed Lazywriting balances population of free lists

  • Improves Scale-Up on SMP platforms

  • Architecture in place for further NUMA-ish enhancements


Performance Challenge: Increasing Network I/O

  • Larger and more complex app architectures (2-tier, N-tier, hybrid)

  • App tier network traffic increasing (eCommerce driven)

  • Data tier network traffic increasing

  • Stricter response time requirements

  • Increasing CPU consumption for network I/O processing



SQL Server 2000 Solutions

  • Prepare/unprepare caching to reduce network roundtrips

  • Metadata caching to reduce data on wire

  • Support for high-speed System Area Networks (SAN)



1. Prepare/Unprepare Caching

  • Prepare/unprepare cached on client

    • Prepare piggybacked on first exec

    • Unprepare piggybacked on next prepare

  • Removes N+2 roundtrip scenario

    • Stored procedure help

  • Overall benefit inversely related to increase in execs of prepared plan

  • Must bind before exec in application to see benefit


Prepare/Unprepare Event Classes

  • Exec Prepared SQL – Indicates when a prepared SQL statement or statements have been executed by ODBC, OLEDB, or DB-Library.

  • Prepare SQL – Indicates when an SQL statement or statements have been prepared for use by ODBC, OLEDB, or DB-Library.

  • SQL:BatchCompleted – A Transact-SQL batch has completed.

  • SQL:BatchStarting – A Transact-SQL batch has started.

  • SQL:StmtCompleted – A Transact-SQL statement has completed.

  • SQL:StmtStarting – A Transact-SQL statement has started.

  • Unprepare SQL – Indicates when a prepared SQL statement or statements have been unprepared by ODBC, OLEDB, or DB-Library.



2. Metadata Caching

  • Metadata cached on client between subsequent calls on connection

  • Coordinated between client/server

    • Metadata only sent if schema change

  • Data reduction inversely related to # of columns (~10 bytes per column)

  • Bonus: Saves CPU required to process metadata on both client and server!

    • Reduces cost of select with > 50 columns by over 10%

  • No application changes necessary!



3. SAN Support

  • Socket Netlib supports WSD (DC only)

  • Native VI Netlibs (Servernet and Giganet)

  • High bandwidth, low latency, reliable

  • Bonus:

    • Low system CPU overhead

  • Benefits:

    • Network intensive client/server apps

    • Scale-Out data tier interconnects

    • Reduced kernel time by 1/3 in SAP workload - 20%+ increase in users!


Performance Challenge: Data and Query Complexity

  • Efficient query processing of ever increasing data set sizes

  • Increasing query complexity for business intelligence and data warehousing systems

  • Understanding of the “nature of the data”

  • Efficient execution of multi-user workloads with mixed query complexities



SQL Server 2000 Solutions

  • Improved statistics collection and representation

  • Even more efficient plans

  • Advanced index enhancements

  • Improved multi-user robustness



1. Improved Statistics

  • Similar Bucket Merge Algorithm (SBM) builds maxdiff histogram for statistics

    • Dynamic bucket boundaries based on variances

    • Effectively handles both low and high data skew

    • Captures density and distribution in one pass

    • Improves cardinality estimates by creating more granular statistics

  • Optimizer still understands SQL 7.0 statistics

    • Necessary in case where database is upgraded

    • Will update to new format automatically over time if “auto update statistics” enabled

    • To update all stats in db use sp_updatestats
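
The maintenance commands named above can be run directly; the database and table names below are placeholders:

  -- Update all statistics in the current database (rebuilds them in the new format)
  EXEC sp_updatestats

  -- Or refresh statistics for a single table with a full scan
  UPDATE STATISTICS dbo.Orders WITH FULLSCAN

  -- Keep automatic statistics maintenance on so upgraded 7.0 statistics
  -- are converted over time
  ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON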



2. More Efficient Plans

  • More granular and accurate statistics means improved cardinality estimates

    • Better understanding of “nature of data”

    • Higher quality input for optimizer

  • Modular and flexible QP design has allowed “fine tuning” for unique and complex queries

  • Even more “trivial plan” cases supported



3. Indexed Views

  • Index on view may contain join, aggregation, or combination

  • Once created, contents of view persist

  • Optimizer may use index even if not directly referenced by query

  • Benefits:

    • Decision Support Systems

    • Data Warehouses / DataMarts

    • Business Intelligence Systems

  • Integration with OLAP Services

  • Consider cost of index maintenance!
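
A minimal sketch with hypothetical table and column names (and assuming the required SET options, such as QUOTED_IDENTIFIER and ARITHABORT, are on): the view must be schema-bound, and its first index must be a unique clustered index, after which its contents persist:

  CREATE VIEW dbo.vSalesByProduct
  WITH SCHEMABINDING
  AS
      SELECT ProductId,
             COUNT_BIG(*)   AS RowCnt,     -- COUNT_BIG(*) is required with GROUP BY
             SUM(LineTotal) AS TotalSales  -- assumes LineTotal is NOT NULL
      FROM dbo.SalesOrderDetail
      GROUP BY ProductId
  GO

  CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
      ON dbo.vSalesByProduct (ProductId)
  GO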



4. Multi-User Robustness

  • Complex or long running queries require resources (memory/CPU)

  • SQL Server 2000 implements more intelligent scheduling of queries

  • Prior to execution, QP will examine the “cost” of query relative to other current query activity

  • QP may choose to delay execution momentarily until adequate resources are available


Scale-Up Results! SAP SD Industry Standard Benchmark

50% more users on the same 8-way platform!


Performance Challenge: Data Tier Scale-Out

  • Eliminate single server bottleneck

  • Provide scalable performance/capacity

  • Application transparency to tables

  • Intelligent distributed query

  • Distributed transaction coordination

  • Integration with cluster services for high availability


SQL Server 2000 Solutions: Building Blocks for Scale-Out

  • Distributed Transaction Coordinator

  • OLEDB/Distributed Query

  • Distributed Partitioned Views

  • Instead Of Triggers

  • System Area Network support

  • Windows Cluster Services

  • IIS/ASP/ADO/COM+

  • Windows 2000 Cluster Services



Distributed Partitioned View

  • UNION ALL view over local table and one or more remote “member” tables

  • Member tables have identical structure with unique partitioning column

  • Disjoint partitioning intervals enforced with CHECK constraints

  • Participating member servers use OLEDB linked server support

  • DTC maintains transactional integrity

  • QP optimizes distributed plans
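
A minimal sketch with hypothetical server, database, and table names: each member table carries a CHECK constraint on the partitioning column, and the view is a UNION ALL over the local member and the linked-server members:

  -- On each member server (partitioning ranges must be disjoint):
  CREATE TABLE dbo.Customers_1 (
      CustomerId int NOT NULL PRIMARY KEY
          CHECK (CustomerId BETWEEN 1 AND 1000000),
      Name varchar(100) NOT NULL
  )

  -- The partitioned view, defined on every participating server
  -- (Server2 and Server3 are linked servers):
  CREATE VIEW dbo.Customers AS
      SELECT * FROM dbo.Customers_1
      UNION ALL
      SELECT * FROM Server2.SalesDB.dbo.Customers_2
      UNION ALL
      SELECT * FROM Server3.SalesDB.dbo.Customers_3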


Scale-Out Results!

  • 227,079 tpmC @ $19.12 per tpmC

  • 12 Compaq Proliant 8500 servers

  • 96 CPUs, 48 GB memory, 18.7 TB disk

“Enough throughput to handle all the e-commerce orders placed on the web last year in two days!”



SQL Server 2000 features for high-end servers

  • More memory

  • Faster networking

  • DSS workloads

  • Multi-instance SQL Server

  • Distributed Partitioning


1. More Memory - AWE

[Diagram: pages below 3 GB are mapped into the virtual address space; pages above 3 GB remain unmapped until needed.]

  • AWE/PAE allows a 36-bit physical address space (up to 64 GB)

  • Pages mapped/unmapped between virtual and physical memory

  • Maps cost an order of magnitude fewer CPU cycles than I/Os

  • Mapping latency is nearly zero

  • Virtual address space still limited to 3 GB


1. More Memory – AWE cont’d

  • More memory means…

    • Reduced disk I/O rate → fewer disks

    • Less I/O → fewer I/O stack CPU cycles

    • Lower latency → reduced lock contention

  • Considerations (see the sketch below)

    • Enable with sp_configure “awe enabled”

    • AWE memory is truly non-pageable

    • Use “max server memory” as appropriate

    • Multi-instance/clusters need care
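
A sketch of the configuration calls named above (the memory value is illustrative; “awe enabled” only takes effect after the service is restarted):

  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE

  EXEC sp_configure 'awe enabled', 1            -- requires a service restart
  EXEC sp_configure 'max server memory', 6144   -- MB; leave headroom for the OS and other instances
  RECONFIGURE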


2. Faster networking - SAN

  • 3-tier apps change the network equation

    • App servers make short, frequent calls to SQL Server

    • High network traffic → CPU cycles

    • Reduces CPU available for “real” work

  • New network technology…

    • Apps can interact directly with NIC

    • TCP/IP software stack bypassed

    • Vast savings in CPU cycle cost

    • More CPU for “real” work (SQL!)



3. DSS workloads

  • Parallel disk scans improvements

    • Reduce internal friction

    • Aggressive, self-tuning I/O scheduling

    • “max async I/O” → GONE in SQL 2000

    • Higher utilization of more disks

  • Parallel index build improvements

    • Partitioning of build across all CPUs

    • Merry-go-round I/O efficiencies

    • Higher utilization of more CPUs

    • Faster index creation times!



4. Multi-instance SQL Server

  • Server consolidation solution

    • Mix differing apps on single system

    • Take advantage of more CPUs

    • Take advantage of more memory (PAE)

  • Tools to make life simple…

    • Automatic memory governor

    • sp_configure “max server memory”

    • sp_configure “affinity mask”
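
For illustration, the two settings listed above as they would be run against each instance (the values are examples only):

  -- 'max server memory' and 'affinity mask' are advanced options
  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE

  EXEC sp_configure 'max server memory', 2048   -- MB reserved for this instance
  EXEC sp_configure 'affinity mask', 15         -- 0x0F: bind this instance to CPUs 0-3
  RECONFIGURE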



5. Distributed partitioned views

  • Server scale-out solution

    • Transparency provided by SQL

    • Shared-nothing → very linear scaling

    • Take advantage of more hardware

  • Works best when …

    • User requests partition readily

    • Usually routes to “home” partition

    • Reliable, hi-speed network between nodes

    • Not for complex queries that span cluster



Tips for SQL 2000

  • Data placement

    • Spread all your data everywhere

    • Let file groups/proportional fill do the work

    • Except the log – separate device, cached

  • Use fiber scheduling feature

    • User mode context switching

    • Best when CPU usage is high (80-90%)

    • Especially on 4 or more way SMP

    • Not for heterogeneous queries or extended SP environments
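
Fiber scheduling is switched on through the “lightweight pooling” configuration option, which is not named on the slide; a minimal sketch (advanced option, applied after a restart):

  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE

  EXEC sp_configure 'lightweight pooling', 1   -- fiber mode; takes effect after restart
  RECONFIGURE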



Where can I get MSDN?

  • Visit MSDN Online at msdn.microsoft.com

  • Register for the MSDN Flash email newsletter at msdn.microsoft.com/resources/msdnflash.asp

  • Join the MSDN Online Membership at msdn.microsoft.com/community

  • Become an MSDN CD Subscriber at msdn.microsoft.com/subscriptions

  • Attend more MSDN events!



More Resources

www.microsoft.com/dna

www.microsoft.com/sql



Questions?

