
System Performance Measurement and Analysis with Web-based Presentation of Results



  1. System Performance Measurement and Analysis with Web-based Presentation of Results Phil Cannata Sun Microsystems, Inc.

  2. Overview • iPlanet Directory Server • iDS Performance Engineering Group • Methodology: • Goal is presentation of credible results • Data collection and analysis tools • Web-based presentation of results • Slide Show Demo

  3. iPlanet Directory Server (iDS) • Read-mostly hierarchical DBMS for identity services, network resources and other enterprise-wide information resources • Scalable • High availability • High performance compared to RDBMSs • Bottom line – performance is critical

  4. iPlanet Directory Server (iDS) [Architecture diagram: an application issues LDAP search and update operations against iDS; entries are served from the iDS entry cache, then the SleepyCat DB cache (entries and indices), then the OS system cache, and finally the disks, which hold the database entries, indices, transaction logs, and replication logs, each behind a fast write cache (RAID?); index files such as id2entry, entryDN, and cn map DN/CN values to entry ids; annotated questions: algorithms? OS? memory size? threading? multiple CPUs? locking? indexing? network/WAN?]

  5. iPlanet Directory Server (iDS)

  6. iDS Performance Engineering: Mission • Develop performance evaluations and characterizations of iPlanet Directory Products • Work with developers to help resolve iDS performance issues • Work with deployment engineers to develop tuning guides and customer configuration planning aids

  7. Successes • Helped to keep iDS performance in a leadership position • Located and found solutions for some major performance problems (see next slide) • Helped develop credible/realistic statements about product performance • Gained enough knowledge to start working on discrete event simulation models of performance behavior

  8. Some Specific Performance Problems • Uncovered performance problems in a new indexing scheme: • the scheme was removed from the release • Discovered a quadratic growth-rate problem with the new hashing scheme for the entry cache (see here) • Located problems with the virtual attribute feature (see here) • Discovered transaction checkpointing bugs • Found replication performance on NT to be far worse than on Solaris

  9. Methodology • Produce baseline performance analysis and characterization of directory products: • independent • non-anecdotal • repeatable

  10. Methodology • Independent results: [Organization diagram: Performance Engineering operates independently of Development (Austin, Grenoble, Santa Clara), Project Management, Deployment Engineering, and QA and Release Engineering]

  11. Methodology • Non-anecdotal results:

  12. Methodology • Reproducible results: • Each experiment uses an isolated network • Each experiment starts with a known copy of the OS level, iDS configuration and TCP/IP tuning • on Solaris, use Flash Archive, then rebuild the filesystems and unarchive the database • on NT, use Imagecast, then unarchive the database • Problems were seen when this was not done: • 20% variability in results • files on disk affected performance • also see here
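
  The reproducibility requirement above can be enforced as a pre-run gate. Here is a minimal Python sketch of such a check; the file paths and baseline values are placeholders (the slides do not give the actual checks), and the original workflow relied on Flash Archive / Imagecast images rather than a script like this.

  ```python
  #!/usr/bin/env python3
  """Hypothetical pre-run gate: refuse to start an experiment unless the host
  matches a recorded baseline (OS level, iDS configuration). Paths and
  baseline values are placeholders, not taken from the original slides."""
  import hashlib
  import subprocess
  import sys

  BASELINE = {
      "os_release": "5.8",                         # e.g. Solaris 8 -- assumption
      "dse_ldif_sha1": "<checksum of the known-good dse.ldif>",
  }
  DSE_LDIF = "/opt/iplanet/slapd-test/config/dse.ldif"   # hypothetical path

  def sha1_of(path: str) -> str:
      with open(path, "rb") as f:
          return hashlib.sha1(f.read()).hexdigest()

  def main() -> int:
      os_release = subprocess.run(["uname", "-r"], capture_output=True,
                                  text=True).stdout.strip()
      problems = []
      if os_release != BASELINE["os_release"]:
          problems.append(f"OS level {os_release} != {BASELINE['os_release']}")
      if sha1_of(DSE_LDIF) != BASELINE["dse_ldif_sha1"]:
          problems.append("iDS configuration differs from baseline")
      for p in problems:
          print("BASELINE MISMATCH:", p)
      return 1 if problems else 0

  if __name__ == "__main__":
      sys.exit(main())
  ```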

  13. Methodology • Use a common shell script to collect detailed data on the system under test: • operational data: • cpu(s) • disk drives • memory • network • environmental data: • hardware, file system • operating system, iDS configuration
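
  The slides describe this collector as a common shell script; the Python sketch below only shows its shape. The tool options, sampling interval, and output layout are assumptions.

  ```python
  #!/usr/bin/env python3
  """Sketch of a per-run collector: one-shot environmental snapshots plus
  continuously sampled operational data. Commands and intervals are
  illustrative, not the original script's."""
  import datetime
  import pathlib
  import subprocess

  OUTDIR = pathlib.Path("run-" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
  OUTDIR.mkdir()

  # Environmental data: captured once per experiment.
  SNAPSHOTS = {"uname.txt": ["uname", "-a"],
               "prtconf.txt": ["prtconf"],
               "ifconfig.txt": ["ifconfig", "-a"]}
  # Operational data: sampled for the life of the run (10 s interval assumed).
  SAMPLERS = {"vmstat.txt": ["vmstat", "10"],
              "iostat.txt": ["iostat", "-xn", "10"]}

  for name, cmd in SNAPSHOTS.items():
      (OUTDIR / name).write_bytes(subprocess.run(cmd, capture_output=True).stdout)

  procs = []
  for name, cmd in SAMPLERS.items():
      procs.append(subprocess.Popen(cmd, stdout=open(OUTDIR / name, "wb")))

  # ... start the test drivers here; when they finish, stop the samplers ...
  for p in procs:
      p.terminate()
  ```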

  14. Data Collection Tools • Standard test drivers: • Common scripts to submit streams of requests to server(s) • add • delete • modify • search • Each script collects summary data
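
  For illustration, one of those driver scripts might look like the following for the search stream, written here in Python with the ldap3 library rather than the original shell tooling; the host, base DN, entry naming scheme, and request count are assumptions.

  ```python
  """Sketch of a search-stream test driver that also records per-request
  latency. Server address, base DN, and entry names are placeholders."""
  import random
  import time
  from ldap3 import Server, Connection

  HOST = "ldap://testbox:389"          # hypothetical system under test
  BASE = "dc=example,dc=com"           # hypothetical suffix
  N_REQUESTS = 10_000

  conn = Connection(Server(HOST), auto_bind=True)   # anonymous bind

  latencies = []
  for _ in range(N_REQUESTS):
      uid = f"user{random.randrange(100_000)}"      # assumes uid=user<N> entries
      t0 = time.perf_counter()
      conn.search(BASE, f"(uid={uid})", attributes=["cn"])
      latencies.append(time.perf_counter() - t0)

  # Summary data, as each original script reported at the end of a run.
  latencies.sort()
  print(f"searches/sec : {N_REQUESTS / sum(latencies):.0f}")
  print(f"median (ms)  : {1000 * latencies[len(latencies) // 2]:.2f}")
  print(f"95th pct (ms): {1000 * latencies[int(0.95 * len(latencies))]:.2f}")
  ```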

  15. Data Collection Tools • System utilities: • Unix (Solaris) • Wham Software and Engineering: • DRM (Distributed Resource Monitor) • dstat • System utilities (iostat, vmstat, prtmem, etc.) • Windows NT/2000 • perfmon see here

  16. Typical Lab Configuration

  17. Presentation of Results • Problems: • Collect lots of data: need means for storing, summarizing and viewing data summaries • Need graphical presentations • Results need to be viewable in many locations: • Austin, Santa Clara, Fresno, Grenoble • Need to make comparisons between different systems, releases, parameter settings, etc.

  18. Presentation of Results [Chart: reads from the database disk, Tom vs. Herb; blue = adds with no replication, red = adds with replication]

  19. Web-Based Solution • Graphical • Interactive • Geographically distributed users • Off-the-shelf solutions were available • Central repository of data, with a browser-based interface via a web server

  20. Additional Issues • Now REALLY need a uniform way of collecting and archiving data • Compare performance of new releases with older releases • Evaluate impact of new features • Quantify improvements (or lack thereof) • Data will now be reusable

  21. Additional Issues • Want to build up a legacy of data and interpretations • Want to be a source of expertise for performance analysis in iPlanet Directory Products

  22. Methodology • Produce baseline performance analysis and characterization of directory products: • independent • repeatable • non-anecdotal • uniform • reusable • legacy

  23. Web Presentation Project • Had testing and data collection procedures in place (sort of) • Decided to use iDS to archive results for each experiment: • summary record (can be queried) • Environmental data • Server data (WHAM) • Client data (WHAM)
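
  A sketch of what writing and later querying one of those summary records might look like, again in Python with ldap3 purely for illustration; the repository DIT layout, attribute names, and credentials are assumptions (the slides only say the repository is itself an iDS instance with a queryable summary record per experiment).

  ```python
  """Sketch of storing a queryable summary record for one experiment in the
  results repository. The DIT layout, attribute names, and credentials are
  hypothetical; custom attribute types would have to be added to the
  repository's schema."""
  from ldap3 import Server, Connection

  REPO = "ldap://results-repo:389"                 # hypothetical repository host
  conn = Connection(Server(REPO), user="cn=Directory Manager",
                    password="secret", auto_bind=True)   # placeholder credentials

  run_dn = "cn=run-2002-03-01-a,ou=results,o=perf"       # hypothetical DIT layout
  conn.add(run_dn, ["top", "extensibleObject"], {
      "description": "adds with replication, Solaris 8, 2 CPU",
      "searchesPerSecond": "4200",                 # illustrative numbers only
      "serverDataFile": "wham/server/run-2002-03-01-a.tar",
  })

  # Later comparisons query the summary records, e.g. all replication runs:
  conn.search("ou=results,o=perf", "(description=*replication*)",
              attributes=["searchesPerSecond"])
  ```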

  24. Web Presentation Project • Graphing and Analysis package: • Mathematica • Web presentation: • WebMathematica http://www.wolfram.com/products/webmathematica/

  25. Typical Lab Configuration

  26. Web Pages and Reports

  27. Slide Show Demo:

  28. A Pretty Story – Web Page Development

  29. A Pretty Story – Web Page Development [Diagram: Tom, Herb, and Mike contribute new repository entries and new data descriptions, which flow to Tom]

  30. A Pretty Story – Web Page Development [Same diagram, now with an invalid entry among the new repository entries]

  31. A Pretty Story – Web Page Development [Toolchain diagram: Adobe GoLive, Mathematica Server Page, web server]

  32. A Not So Pretty Story – Web Page Development [Toolchain diagram: Adobe GoLive, Mathematica Server Page, web server; marked "Mature Audiences Only"]

  33. A Not So Pretty Story – Web Page Development [Workflow diagram: a web page template in Adobe GoLive plus an awk script over the hrefs on the current page build the web page; entries in the results repository that are not yet on the web page are fetched and formatted into a file of new entries, links are saved, and entries already on the web page are confirmed to still exist in the results repository; the result is published as a Mathematica Server Page through the web server]
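
  The build step on this slide (template, awk over the current page's hrefs, repository cross-check) could be sketched like this in Python; the file names, the href format, and the template placeholder are assumptions.

  ```python
  """Sketch of the page-rebuild step: link every run known to the results
  repository, and flag links whose runs have disappeared. File names, the
  href format, and the <!--RUNS--> placeholder are hypothetical."""
  import pathlib
  import re

  template = pathlib.Path("results-template.html").read_text()
  current = pathlib.Path("results.html").read_text()

  # hrefs already on the current page (done with awk in the original flow).
  linked = set(re.findall(r'href="report\.msp\?run=([^"]+)"', current))
  # new-entries.txt: run names exported from a results-repository query.
  repo_runs = set(pathlib.Path("new-entries.txt").read_text().split())

  # Entries on the web page that are no longer in the results repository.
  for run in sorted(linked - repo_runs):
      print("stale link, please review:", run)

  # Rebuild the page from the template with one link per known run.
  rows = "\n".join(f'<li><a href="report.msp?run={run}">{run}</a></li>'
                   for run in sorted(repo_runs))
  pathlib.Path("results.html").write_text(template.replace("<!--RUNS-->", rows))
  ```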

  34. A Little Nicer Story – Server Side Processing

  35. A Little Nicer Story – Server Side Processing [Flow diagram: the client's web browser gets and submits the query form; the Apache web server checks the requested page and serves static HTML pages directly; dynamic requests go to the Tomcat servlet container, which allocates a new thread from the kernel pool and runs the webMathematica servlet service; a Mathematica kernel processes the .msp query form page, pulls all data and DRM data from the results repository, parses it, and generates the report, which is served back with HTTP headers added]
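
  Purely as a schematic of the dynamic-versus-static split in the flow above (the real stack was Apache, Tomcat, and the webMathematica servlet driving a Mathematica kernel, not Python), a toy handler might look like this:

  ```python
  """Toy illustration of the request routing on this slide: static pages are
  served from disk, while .msp requests trigger report generation against the
  results repository (stubbed out here). Not the actual
  Apache/Tomcat/webMathematica stack."""
  from http.server import BaseHTTPRequestHandler, HTTPServer
  import pathlib

  class ReportHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          if ".msp" in self.path:
              # Dynamic path: a kernel would query the results repository,
              # parse the data, and generate the report here.
              body = b"<html><body>generated report goes here</body></html>"
          else:
              # Static path: serve the already-built HTML page as-is.
              body = pathlib.Path("." + self.path).read_bytes()
          self.send_response(200)
          self.send_header("Content-Type", "text/html")
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      HTTPServer(("", 8080), ReportHandler).serve_forever()
  ```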

  36. Performance Engineering Web Presentation [Tom vs. Herb comparison] Start: Nov. – Dec. 2001. First Results: March 1, 2002. There was a need to retrofit old data, but it was not too bad.

  37. First Results

  38. Performance Engineering Web Presentation – March 1, 2002 Notice the repeatability of Tom's data. [Charts of system time, user time, and I/O wait time: Tom (Sept.) vs Herb (Dec.), Tom (Sept.) vs Tom (Dec.), Tom (Dec.) vs Herb (Dec.)]

  39. Performance Engineering Web Presentation – Tracking Down the Problem • Differences in methodology (reboot; the LDIF files were different) • A dual network card causing an FTP transfer-rate difference of 20% • Don't trust information from cfgadm about disk manufacturers' model numbers; trust the FRU number (March 14).

  40. Performance Engineering Web Presentation – March 1, 2002 [Charts of kilobytes written to disk and read from disk: Tom (Sept.) vs Herb (Dec.), Tom (Sept.) vs Tom (Dec.), Tom (Dec.) vs Herb (Dec.)]

  41. Performance Engineering Web Presentation – Tracking Down the Problem Nothing yet.

  42. Performance Engineering Web Presentation – March 1, 2002 Once again, notice the repeatability. [Charts of incoming packets, incoming kBytes, outgoing packets, and outgoing kBytes: Tom (Sept.) vs Tom (Dec.)]

  43. Performance Engineering Web Presentation – Tracking Down the Problem Nothing yet.

  44. Performance Engineering System: • Data Collection • Data Storage • Data Analysis • Presentation of results

  45. Future Directions • Need to simplify user interface • graphs on demand • better summary presentations • more automatic operation • Return “live” Mathematica notebooks • Need to integrate results from Windows systems, Linux systems, etc.

  46. Summary • Have put in place a system for analyzing and characterizing the performance of iDS: • data driven • reproducible experimental results • archived data • Web-based, interactive presentation of graphical results

  47. Summary • Have used this system (and its predecessors) to analyze and fix performance problems • Have quantified performance improvements in successive product releases • Have contributed to development of tuning guides and configuration planning aids

  48. Summary • Have demonstrated value of consistent, ongoing, and rigorous performance evaluation project
