
Automatic Tuning of the MULTIPROGRAMMING LEVEL in Sybase SQL Anywhere


Presentation Transcript


  1. Automatic Tuning of the MULTIPROGRAMMING LEVEL in Sybase SQL Anywhere
  Mohammed Abouzour, Kenneth Salem, Peter Bumbulis
  Presentation by Mohammed Abouzour
  SMDB 2010

  2. Introduction to Problem
  • Database servers have two main architectures:
    • Worker-per-connection (Oracle, Sybase ASE): a worker is dedicated to each connection for the life of the connection.
    • Worker-per-request (Oracle, Microsoft SQL Server, Sybase ASE, Sybase SQL Anywhere): a worker pool and a request queue; each worker picks up and completes one request at a time, and there is no guarantee that the same worker will service the same connection. A minimal sketch of this architecture follows below.
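
A minimal sketch of the worker-per-request idea, assuming a Python thread pool sharing one request queue; the handle function, pool size, and request format are illustrative, not SQL Anywhere internals:

```python
import queue
import threading

# One shared request queue; any idle worker serves any connection's next request.
request_queue = queue.Queue()

def handle(conn_id, request):
    # Stand-in for actual query processing.
    print(f"handled {request!r} for connection {conn_id}")

def worker():
    while True:
        conn_id, request = request_queue.get()   # block until a request arrives
        handle(conn_id, request)
        request_queue.task_done()

POOL_SIZE = 4  # the size of the worker pool, i.e. the multiprogramming level (MPL)
for _ in range(POOL_SIZE):
    threading.Thread(target=worker, daemon=True).start()

# Requests from many connections funnel into the one queue.
for conn_id in range(10):
    request_queue.put((conn_id, "SELECT 1"))
request_queue.join()
```

The pool size in this sketch is exactly the multiprogramming level that the rest of the talk tunes automatically.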

  3. Worker-Per-Request Architecture
  • Efficient in resource use and in handling a large number of connections (1)
  • How should the size of the worker pool be chosen?
  • A large worker pool:
    • Increases the concurrency level of the server
    • Increases contention on server resources
    • Increases the working set size of the server
  • A smaller worker pool:
    • Underutilizes hardware resources
    • Limits the concurrency level of the workload
    • Risks a server hang when no workers are available to handle outstanding requests

  4. Our Solution
  • Dynamically adjust the size of the worker pool (the multiprogramming level, or MPL, of the server)
  • Benefits of a dynamic MPL:
    • One less parameter for DBAs to worry about (one of the design goals of Sybase SQL Anywhere)
    • Improved server performance for different workloads
    • Better handling of changes in the workload transaction mix
  • We used throughput as the signal for detecting degraded server performance

  5. Outline
  • Introduction to Problem
  • Controller Design and Tuning Algorithms
  • Experimental Study
  • Results
  • Future Work

  6. Controller design

  7. Controller design
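
The controller-design slides above contain only their titles in this transcript (the originals presumably show a diagram). Based on the rest of the deck (throughput measured over a control interval, a tuning algorithm choosing the next MPL), a plausible sketch of the feedback loop is shown below; set_worker_pool_size, measure_throughput, and all constants are illustrative placeholders, not SQL Anywhere APIs:

```python
import random
import time

def set_worker_pool_size(mpl):
    print(f"resizing worker pool to {mpl} workers")   # hypothetical hook

def measure_throughput():
    return random.uniform(800, 1200)                  # stand-in for requests per interval

def control_loop(tuning_algorithm, initial_mpl=20, interval_seconds=30, steps=5):
    mpl = initial_mpl
    prev = None                                       # (MPL, throughput) from the last interval
    set_worker_pool_size(mpl)
    for _ in range(steps):                            # would run forever in a real server
        time.sleep(interval_seconds)
        current = (mpl, measure_throughput())         # sample taken at the current MPL
        if prev is not None:
            mpl = tuning_algorithm(prev, current)     # e.g. Hill Climbing, sketched below
            set_worker_pool_size(mpl)
        prev = current
```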

  8. Tuning Algorithms
  • The tuning algorithms are based on a paper by Heiss and Wagner (2):
    • Incremental Steps (IS) algorithm
    • Parabola Approximation (PA) algorithm
  • The original algorithms were studied in simulation
  • Our contributions relative to the original algorithms (3):
    • We extended the IS algorithm
    • We used a commercial database system and an industry-standard benchmark (TPC-C)
    • We studied the behaviour of the two algorithms under different workloads

  9. Notation

  10. Hill Climbing Algorithm
  • The original IS algorithm did not handle a throughput curve that flattens out
  • We modified the IS algorithm as follows (a sketch follows below):
    • Follow the throughput curve up as long as the percentage change in throughput is greater than C times the percentage change in the number of threads
    • Follow the throughput curve down as long as throughput keeps increasing
  • Built-in bias to keep the number of threads as low as possible
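
A minimal sketch of the modified rule described above, assuming the C = 0.4 and Delta = 10 values from the TPC-C configuration slide and a simple up/down direction flag; the function name, state handling, and bounds are assumptions, not the product code:

```python
def hill_climbing_step(prev, current, direction, C=0.4, delta=10,
                       mpl_min=1, mpl_max=500):
    """Return (next_mpl, next_direction); prev/current are (MPL, throughput) pairs."""
    (x0, y0), (x1, y1) = prev, current
    dx = abs(x1 - x0) / x0          # relative change in number of workers
    dy = (y1 - y0) / y0             # relative change in throughput
    if direction > 0:
        # Climbing: add workers only while the relative throughput gain
        # exceeds C times the relative increase in workers.
        if dy > C * dx:
            return min(x1 + delta, mpl_max), +1
        return max(x1 - delta, mpl_min), -1      # curve flattened: turn around
    # Descending: remove workers as long as throughput keeps increasing,
    # which is the built-in bias toward a low number of threads.
    if dy > 0:
        return max(x1 - delta, mpl_min), -1
    return min(x1 + delta, mpl_max), +1
```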

  11. Global Parabola Approximation Algorithm
  • Model the throughput curve as a parabolic function
  • Fit it using the two points from the last two measurements plus the origin
  • Only adjust toward the peak if the computed parabola is concave down; otherwise just make a random adjustment (sketched below)
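
A sketch of one Parabola Approximation step as described above, assuming the fitted curve has the form y = a*x^2 + b*x (so it passes through the origin) and that the "random adjustment" is a small random perturbation; names and bounds are illustrative:

```python
import random

def parabola_step(prev, current, mpl_min=1, mpl_max=500):
    """prev/current are (MPL, throughput) pairs; returns the next MPL."""
    (x1, y1), (x2, y2) = prev, current
    if x1 == x2:                       # degenerate fit: fall back to a random move
        return _random_adjustment(x2, mpl_min, mpl_max)
    # y = a*x**2 + b*x passes through the origin, so y/x = a*x + b is linear.
    u1, u2 = y1 / x1, y2 / x2
    a = (u1 - u2) / (x1 - x2)
    b = u1 - a * x1
    if a < 0:                          # concave down: peak at -b / (2a)
        peak = -b / (2 * a)
        return int(min(max(round(peak), mpl_min), mpl_max))
    return _random_adjustment(x2, mpl_min, mpl_max)

def _random_adjustment(mpl, mpl_min, mpl_max, spread=10):
    # The slide only says "random adjustment"; a small perturbation is assumed here.
    return min(max(mpl + random.randint(-spread, spread), mpl_min), mpl_max)
```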

  12. Experimental Study
  • Used the TPC-C workload for our benchmark
  • Used two workload variations:
    • TPCC1: standard TPC-C transaction mix
    • TPCC2: modified TPC-C transaction mix (lighter on I/O, with a higher CPU component)
  • Two buffer pool configurations:
    • BIGMEM: 1.0 GB
    • SMALLMEM: 0.5 GB
  • Three workload configurations:
    • TPCC1-BIGMEM
    • TPCC1-SMALLMEM
    • TPCC2-BIGMEM

  13. Experimental Study: Methodology
  • Workload characterization experiments:
    • Ran each workload under different fixed MPL settings and generated a throughput curve for each workload
  • Auto-tuning experiments:
    • Ran each workload and allowed the auto-tuning algorithms to adjust the MPL dynamically
  • Changing-workload experiments:
    • Started with one workload and switched to a different workload halfway through the experiment

  14. Workload characteristics (throughput curves: TPCC1-BIGMEM, TPCC1-SMALLMEM)

  15. TPCC1-BIGMEM (results: Global Parabola Approximation vs. Hill Climbing)

  16. TPCC1-SMALLMEM (results: Global Parabola Approximation vs. Hill Climbing)

  17. TPCC1-BIGMEM to TPCC1-SMALLMEM (results: Global Parabola Approximation vs. Hill Climbing)

  18. Lessons Learned
  • Not all database workloads have a hill-shaped throughput curve
  • Both algorithms were able to react appropriately to workload changes
  • The Hill Climbing algorithm:
    • Makes smoother adjustments (more desirable)
    • Takes longer to reach optimal values
    • Can get stuck in a local optimum
  • The Global Parabola Approximation algorithm:
    • Takes variable step sizes and hence reacts faster to changes
    • Is very sensitive to normal minor workload fluctuations

  19. Practical Experience with SQL Anywhere 12
  • A hybrid controller:
    • Uses the Hill Climbing algorithm for a number of steps while collecting data
    • Then feeds the collected data to the Parabola Approximation algorithm
  • We reduced the control interval length
  • Hill Climbing modifications:
    • Dynamic step sizes to speed up the algorithm
    • A smaller C parameter to make the algorithm more aggressive
  • Parabola Approximation modifications:
    • Use the least-squares method to fit the curve to the collected data (sketched below)
  • Results:
    • 90-95% of the tpmC of a hand-tuned configuration
    • In another internal CPU-bound benchmark, performance improved by 32%
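
The least-squares fitting mentioned above can be sketched with numpy: fit a quadratic to all (MPL, throughput) samples gathered during the Hill Climbing phase and move to its peak when the fit is concave down. The function name, bounds, and sample data are illustrative assumptions, not the shipped implementation.

```python
import numpy as np

def fit_and_suggest_mpl(mpl_samples, throughput_samples, mpl_min=1, mpl_max=500):
    # Least-squares quadratic fit over the collected (MPL, throughput) samples.
    a, b, c = np.polyfit(mpl_samples, throughput_samples, deg=2)
    if a < 0:                       # concave down: peak at -b / (2a)
        peak = -b / (2 * a)
        return int(np.clip(round(peak), mpl_min, mpl_max))
    return None                     # no usable fit; keep hill climbing

# Example: noisy samples with a peak somewhere around MPL ~ 60.
mpls = [10, 20, 30, 40, 50, 60, 70]
tps  = [400, 700, 900, 1000, 1040, 1050, 1030]
print(fit_and_suggest_mpl(mpls, tps))
```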

  20. Conclusion and Future Work
  • Dynamically adjusting the MPL:
    • Improves server performance for some workloads compared to a fixed MPL
    • Handles changes in workload conditions effectively
    • Brings us one step closer to a self-managing database system
  • Algorithm parameters can affect performance:
    • Control interval length
    • Hill Climbing: step sizes and the C parameter
  • The algorithms did not behave well for the following workloads:
    • Bursty workloads: busy periods alternating with idle periods
    • Long-running requests, e.g. creating an index
  • Future work: use response time to help the auto-tuning algorithm

  21. References
  (1) B. Schroeder, M. Harchol-Balter, A. Iyengar, E. Nahum, and A. Wierman, "How to determine a good multi-programming level for external scheduling," in Proceedings of the 22nd International Conference on Data Engineering. Washington, DC, USA: IEEE Computer Society, 2006, p. 60.
  (2) H.-U. Heiss and R. Wagner, "Adaptive load control in transaction processing systems," in VLDB '91: Proceedings of the 17th International Conference on Very Large Data Bases. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1991, pp. 47-54.
  (3) M. Abouzour, "Automatically tuning database server multiprogramming level," Master's thesis, University of Waterloo, 2007.

  22. Questions

  23. TPC-C Configuration
  • 150 warehouses
  • 10 clients per warehouse
  • 500 connections
  • C = 0.4, Delta = 10
  • Gamma = 0.333, Delta = 10
  • Windows 2003 32-bit
  • Dual 1.80 GHz Intel Xeon
  • 48 spindles, 10K RPM SCSI drives
