
Performance Measurement of a Prosspero Based Trouble Ticket System






Presentation Transcript


1. Performance Measurement of a Prosspero Based Trouble Ticket System
• Nagendra Nagarajayya • Staff Engineer • Sun Microsystems, Inc.
• Vincent Perrot • OSS/J Architect • Sun Microsystems, Inc.

2. Business Problem
• Everyone knows the OSS/J promise:
Significantly reduce integration and maintenance costs
Significantly improve business agility
Put Service Providers back in the driver's seat
A safe and robust foundation
• What is less well known: how does my OSS perform once OSS/J APIs are adopted?
No methodology to measure performance
No benchmarks

3. Why a Methodology/Specification?
• Measure the performance of OSS/J applications in a standard and predictable way
• Establish bases for comparison
• Metrics and measures: operations per second; cost in $ per operation
• Measurement reflects the performance of the different OSS components
• Model the environment constraints

4. Working Towards a Solution: Trouble Ticket Scenario
• [Diagram: provider/consumer clients drive a customer-facing workload and a network-facing workload through the OSS/J APIs, communication layers, and application server; the estimated performance model is compared against actual performance]

5. Benchmark Design: Load Generation
• Uses the open source business delegate named "ossj-clients", providing:
Support for generic operations like create, update, etc.
Support for all profiles: Java (JavaEE/JMS), XML over JMS, Web Services (WS)
• Multi-threaded, to generate load and scale
• Support for customization: sequences of operations read from property files; implementations of entities read from property files
• A sketch of such a driver follows below.
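The following is a minimal sketch of what such a property-driven, multi-threaded driver can look like; the class name, property keys, and invoke() dispatch are illustrative assumptions, not the actual ossj-clients API.

```java
// Hypothetical sketch of a multi-threaded load driver in the spirit of
// ossj-clients: the operation sequence comes from a property file and each
// client thread replays it against the trouble ticket system under test.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TTLoadDriver {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        props.load(new FileInputStream("tt-benchmark.properties"));

        int clients = Integer.parseInt(props.getProperty("clients", "100"));
        // e.g. operation.sequence=create,getByKey,update,close
        String[] sequence = props.getProperty("operation.sequence", "create").split(",");

        ExecutorService pool = Executors.newFixedThreadPool(clients);
        for (int i = 0; i < clients; i++) {
            pool.submit(() -> {
                for (String op : sequence) {
                    invoke(op); // dispatch to the JVT/XML-JMS/WS profile under test
                }
            });
        }
        pool.shutdown();
    }

    private static void invoke(String op) {
        // Placeholder: call the profile-specific business delegate here.
        System.out.println(Thread.currentThread().getName() + " -> " + op);
    }
}
```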

6. Benchmark Design: Load Models
• Typical customer-facing load (more create operations): Create Ticket (18%), Update Ticket (28%), GetByKey (36%), Cancel Ticket (8%), Close Ticket (10%), GetAll (0%)
• Typical network-facing load (more update operations): Update Ticket (40%), Create Ticket (10%), GetByKey (30%), Cancel Ticket (10%), Close Ticket (12%), plus GetAll operations
• The sketch below shows one way to draw operations according to such a mix.
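A minimal sketch of weighted operation selection for the customer-facing mix, assuming a simple cumulative-weight draw; the LoadModel class is illustrative and not part of the TTPerf specification:

```java
// Picks the next operation according to the customer-facing load model
// (Create 18%, Update 28%, GetByKey 36%, Cancel 8%, Close 10%).
// The weights come from the slide above and sum to 100.
import java.util.Random;

public class LoadModel {
    private static final String[] OPS =
        { "create", "update", "getByKey", "cancel", "close" };
    private static final int[] WEIGHTS = { 18, 28, 36, 8, 10 };

    private final Random rng = new Random();

    public String nextOperation() {
        int r = rng.nextInt(100); // uniform draw in 0..99
        int cumulative = 0;
        for (int i = 0; i < OPS.length; i++) {
            cumulative += WEIGHTS[i];
            if (r < cumulative) {
                return OPS[i];
            }
        }
        return OPS[OPS.length - 1]; // unreachable when weights sum to 100
    }
}
```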

7. Benchmark Design: Data (Entities)
• A create trouble ticket operation uses the following attributes: TroubleState, TroubleStatus, TroubledObject, PreferredPriority, TroubleDetectionTime, TroubleDescription, Originator, TroubleFound, TroubleType
• A populated example follows below.
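A minimal sketch of a ticket populated with these attributes, using a plain data holder as a stand-in for the JSR 91 value type; the field types and sample values are assumptions:

```java
// Simplified stand-in for the OSS/J trouble ticket value: a plain data
// holder carrying the attributes the benchmark populates on create.
// The real JSR 91 API defines its own types (e.g. state/status
// enumerations and managed-entity keys).
import java.util.Date;

public class TroubleTicketData {
    String troubleState;        // e.g. "OPEN_ACTIVE"
    String troubleStatus;       // e.g. "SCREENED"
    String troubledObject;      // the resource or service the ticket is about
    int    preferredPriority;   // requested handling priority
    Date   troubleDetectionTime;
    String troubleDescription;
    String originator;          // who reported the trouble
    String troubleFound;        // trouble-found code
    String troubleType;         // classification of the trouble

    static TroubleTicketData sample() {
        TroubleTicketData t = new TroubleTicketData();
        t.troubleState = "OPEN_ACTIVE";
        t.troubleStatus = "SCREENED";
        t.troubledObject = "circuit-4711";
        t.preferredPriority = 2;
        t.troubleDetectionTime = new Date();
        t.troubleDescription = "No dial tone reported by customer";
        t.originator = "benchmark-client-01";
        t.troubleFound = "FACILITY";
        t.troubleType = "LOSS_OF_SERVICE";
        return t;
    }
}
```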

8. Scenario 1: 100 clients / 1 hour
• Metric: number of operations via the OSS/J interface
• The TT system is pre-loaded with 10,000 tickets
• Create TT clients start and create tickets for 5 minutes
• GetByKey and Update clients start after 5 minutes; the primary keys of created tickets are used, and the Benchmark Monitor communicates the keys
• The GetAll operation starts after 15 minutes
• Cancel clients start after 20 minutes and cancel pre-created tickets in this version
• Close clients start after 25 minutes and close pre-created tickets in this version
• The scheduling sketch below shows one way to stage these phases.
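A minimal sketch of the phase timing, assuming a single ScheduledExecutorService; the start* methods are placeholders for launching the corresponding client threads:

```java
// Stages Scenario 1's client phases: GetByKey/Update at 5 minutes,
// GetAll at 15, Cancel at 20, Close at 25, with a 1-hour total run.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Scenario1 {
    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newScheduledThreadPool(1);

        startCreateClients(); // create tickets from t=0
        timer.schedule(Scenario1::startGetByKeyAndUpdateClients, 5, TimeUnit.MINUTES);
        timer.schedule(Scenario1::startGetAllClient, 15, TimeUnit.MINUTES);
        timer.schedule(Scenario1::startCancelClients, 20, TimeUnit.MINUTES);
        timer.schedule(Scenario1::startCloseClients, 25, TimeUnit.MINUTES);
        timer.schedule(timer::shutdown, 60, TimeUnit.MINUTES); // end of run
    }

    static void startCreateClients() { /* launch create threads */ }
    static void startGetByKeyAndUpdateClients() { /* uses keys from the monitor */ }
    static void startGetAllClient() { /* ... */ }
    static void startCancelClients() { /* cancels pre-created tickets */ }
    static void startCloseClients() { /* closes pre-created tickets */ }
}
```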

9. Model: Expected Metric
• The number of trouble ticket operations is calculated as:
TT Ops = (Create TT ops + (Create TT ops - Cancel ops) + (Create TT ops - Close TT ops) + (GetByKey ops - Create TT ops) + (Update ops - Create TT ops)) / total types of operation
• Example: assuming 100K tickets are created in 1 hour in the customer-facing workload, this yields ~89K TT ops:
Create TT (18%) = 100,000
getByKey (36%) = 36 × 100,000 / 18 = 200,000 (achieved metric should be within ±5%)
cancelTicket (8%) = 5,000 (voluntarily limited to 5,000; ±5%)
closeTicket (10%) = 5,000 (voluntarily limited to 5,000; ±5%)
updateTicket (28%) = 28 × 100,000 / 18 ≈ 155,000 (±5%)
TT Ops = (100,000 + (100,000 - 5,000) + (100,000 - 5,000) + (200,000 - 100,000) + (155,000 - 100,000)) / 5 = 89,000
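As a worked check, the sketch below recomputes the expected metric from the example figures (a hypothetical helper class, not part of the specification):

```java
// Recomputes the expected-metric example from the slide above, using the
// customer-facing load model with 100,000 creates as the driver.
public class ExpectedMetric {
    public static void main(String[] args) {
        long create = 100_000;              // the 18% share drives the model
        long getByKey = 36 * create / 18;   // 200,000
        long update = 28 * create / 18;     // ~155,555 (slide rounds to 155,000)
        long cancel = 5_000;                // voluntarily limited
        long close = 5_000;                 // voluntarily limited

        long ttOps = (create
                + (create - cancel)
                + (create - close)
                + (getByKey - create)
                + (update - create)) / 5;   // 5 types of operation

        System.out.println("Expected TT Ops = " + ttOps); // ~89,000
    }
}
```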

10. Calculate Achieved and Cost Metric
• Achieved metric: TT Ops = sum of tickets operated / number of operation types; should be within ±5% of the modeled expected metric
• For example:
TTs created = 93,937
Retrieved by GetByKey = 34,474 (not within ±5%)
UpdateTicket = 153,430 (limited to +5%)
CancelTicket = 4,948 (voluntarily limited to 5,000)
CloseTicket = 4,948 (voluntarily limited to 5,000)
TT Ops = (93,937 + 153,430 + 34,474 + 4,948 + 4,948) / 5 = 58,347
• Cost metric (cost of operations): modeled metric vs. actual achieved metric
$ per TT Ops/sec = $ cost of hardware and software / (total operations / number of profiles)
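A worked check of the achieved and cost metrics from the reported figures (a hypothetical helper class; the ~$20k system cost comes from the report slide later in the deck):

```java
// Recomputes the achieved metric (total operations over the 5 operation
// types) and the cost metric from the reported benchmark run.
public class AchievedMetric {
    public static void main(String[] args) {
        long create = 93_937;
        long update = 153_430;
        long getByKey = 34_474;
        long cancel = 4_948;
        long close = 4_948;

        // Achieved metric: sum of tickets operated / number of operation types.
        long achievedOps = (create + update + getByKey + cancel + close) / 5; // 58,347

        // Cost metric: system cost divided by the achieved hourly metric.
        double systemCost = 20_000.0; // ~$20k system, per the report
        double dollarsPerTtOp = systemCost / achievedOps; // ~$0.34

        System.out.printf("Achieved TT Ops = %d, $ per TT Op/hr = $%.2f%n",
                achievedOps, dollarsPerTtOp);
    }
}
```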

11. Monitoring Requirements
• A real OSS environment has monitoring enabled, so monitoring is a requirement in the benchmark, in order to measure its cost
• System monitoring must be enabled, e.g. CPU usage through vmstat
• The application must be monitored: container, TT component, JVM
• The messaging system (topics and queues): number of messages (in/out), rate, and size
• The network: number of packets (in/out), size of packets
• Storage: read/write requests per second, %busy, %wait, response time
• A JVM-side monitoring sketch follows below.
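A minimal sketch of JVM-side monitoring through the standard JMX MXBeans; OS, network, and storage counters would come from external tools such as vmstat, and the 10-second sampling interval is an arbitrary choice:

```java
// Samples system load and heap usage from the standard platform MXBeans.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;

public class JvmMonitor {
    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("load=%.2f heapUsed=%dMB heapMax=%dMB%n",
                    os.getSystemLoadAverage(),
                    heap.getUsed() >> 20,
                    heap.getMax() >> 20);
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}
```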

12. Reporting Requirements (2)
• Reports are to be generated automatically, with tools, to standardize reporting
• Some of the sections:
Diagrams of measured and priced systems
Measured configuration: hardware, software, network, storage
Metrics: expected, achieved
Commercial Off-The-Shelf software
Tuning options
Driver section
Pricing

13. Report: Results from an actual benchmark, premioss-tt v2.23 on a Sun T2000
• Diagrams of measured and priced systems: the measured system is a T2000
• Measured configuration: Sun T2000 (2 P, 2 cores, 2 threads), 1 GHz, 16 GB of memory, 2 internal hard disks
• OS: Sun Solaris 10
• OSS/J TT component: FROX premioss-tt v2.23
• Middleware: Application server: Sun JES FY05Q4; JMS server: Sun JES FY05Q4; Database server: none

14. Report (2): Results from an actual benchmark
• [Diagram: measured and priced systems]

15. Reporting Requirements (3): Results from an actual benchmark
• ACHIEVED METRIC
JVT TT Create ops = 93,937
JVT TT Update ops = 153,430
JVT TT GetByKey ops = 34,474
JVT TT Cancel ops = 4,948
JVT TT Close ops = 4,948
Total JVT TT ops = 54,389
Achieved metric ops = 58,347
• COST METRIC
Achieved $ per TT Ops/hr = $0.34 (assuming the cost of the system is about $20k)
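The cost figure follows directly from the stated assumptions: $20,000 / 58,347 achieved TT ops per hour ≈ $0.34 per TT op per hour.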

16. Conclusion
• The first version of the benchmark specification achieves measuring the cost of operating a TT system
• More is needed:
Specifying the ticket life cycle with state transitions
Response times
The sequence of operations with expected behavior
Linking the Inventory API: customers, products, and services
Additional scenarios

17. Recap – TTPerf specification 1.1
• Measure the current behavior of your certified TT implementation
• Improve and compare the performance of different OSS/J TT profiles
• Improve performance of:
The hardware (change the system, CPU, memory, or disk, for example)
The OS, the Java virtual machine, or the middleware stack
Your application itself, or another certified implementation
• Finally, measure in terms of operations and cost: $ TT ops / sec

18. More Information
• TTPerf specification: http://ossj-ttperf.dev.java.net
• TTPerf results: http://ossj-ttperf.dev.java.net
• TTPerf case study: http://www.tmforum.org/browse.aspx?catID=2212&linkID=33039
• TTPerf project: open source (CDDL license), http://ossj-ttperf.dev.java.net
• Generic OSSJ client code: https://ossj-clients.dev.java.net
• OSS Trouble Ticket API: http://tmforum.org/ossj/downloads/jsr091

19. TTPerf 1.1
• Nagendra Nagarajayya • nagendra.nagarajayya@sun.com
• Vincent Perrot • vincent.perrot@sun.com
