
Detecting Performance Design and Deployment Antipatterns in Enterprise Systems


Presentation Transcript


  1. Detecting Performance Design and Deployment Antipatterns in Enterprise Systems. Trevor Parsons, Performance Engineering Laboratory, University College Dublin, Ireland.

  2. Presentation Outline
  • Description of purpose
  • Performance issues in Enterprise Systems
  • Limitations in current performance tools
  • Performance design and deployment antipatterns
  • Goal statement
  • Solution overview and architecture
    • What new understanding, knowledge, methods, or technologies (i.e. contributions) will this research generate?
  • Methodology
    • What experiments, prototypes, or studies will be done to achieve the stated goal?
  • Evaluation
    • How will the contributions be evaluated?

  3. Performance Issues in Enterprise Systems
  • Enterprise frameworks provide services to help developers meet performance goals, however…
    • Performance is not guaranteed
    • Performance issues may still arise
  • Poor performance is often caused by poor design, not poor code
    • Example: high inter-component communication -> poor performance
  • Complex distributed enterprise systems are not properly understood
    • Applications tend to be large
    • Complex execution environment (e.g. application servers)
    • Performance implications of design decisions may be unknown
    • Parts of the application may be outsourced
    • Commercial Off-the-Shelf (COTS) components
    • Time-to-market constraints
  • Performance is often a major issue for enterprise systems

  4. Limitations of Current Tools
  • Generate massive amounts of data
    • Multi-user systems (1000s of users)
    • May overwhelm developers trying to locate bottlenecks (e.g. profilers)
  • Give no reason for, or solution to, performance issues
  • Do not detect potential bottlenecks
    • These may occur if system properties change (e.g. an increase in workload)

  5. Performance Design and Deployment Antipatterns
  • Design patterns document recurring solutions to standard software development problems
  • Antipatterns document common mistakes made during software development, and suggest solutions/refactorings
  • Performance patterns/antipatterns focus solely on performance concerns
  • Performance antipattern discovery helps developers to locate and treat performance issues within the system
    • Helps developers form a sense of performance intuition

  6. Antipattern Hierarchy
  • Antipatterns
    • Antipatterns associated with other software quality attributes
    • Performance antipatterns
      • EJB performance antipatterns
        • EJB performance design antipatterns
        • EJB performance deployment antipatterns
        • EJB performance programming errors
      • .NET performance antipatterns
      • …

  7. Solution Overview
  Automatically detect and assess performance design and deployment antipatterns in component-based enterprise systems:
  • Monitoring: obtains performance information from a running application
  • Analysis: uses analysis techniques (e.g. data mining) to make sense of large volumes of data
  • Detection: a rule engine; antipatterns are described in terms of rules
  • Assessment: performance models are applied to rank antipatterns in terms of their performance impact on the system
  • Presentation: creates UML models of the system augmented with performance data, highlighting discovered antipatterns. Diagrams are created at different levels of abstraction (following the MDA paradigm).
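The five pipeline stages above can be illustrated with a deliberately tiny sketch. All names, data, and the toy detection rule here are invented for illustration; they are not the proposal's actual tooling.

```python
# Illustrative sketch of the monitor -> analyse -> detect -> assess
# pipeline; data and thresholds are made up for the example.
from collections import Counter

def monitor():
    """Stand-in for the monitoring stage: ordered call paths per request."""
    return [
        ["Facade.place", "Order.create", "Order.setItem", "Order.setQty"],
        ["Facade.place", "Order.create", "Order.setItem", "Order.setQty"],
        ["Report.run", "Order.findAll"],
    ]

def analyse(call_paths):
    """Data reduction: collapse identical paths and count occurrences."""
    return Counter(tuple(p) for p in call_paths)

def detect(path_counts, min_calls=3):
    """Toy rule: many calls on one component within one request
    suggests fine-grained communication."""
    findings = []
    for path, count in path_counts.items():
        per_component = Counter(m.split(".")[0] for m in path)
        for comp, n in per_component.items():
            if n >= min_calls:
                findings.append((comp, n, count))
    return findings

def assess(findings):
    """Rank findings by (calls per request) x (request frequency)."""
    return sorted(findings, key=lambda f: f[1] * f[2], reverse=True)

ranked = assess(detect(analyse(monitor())))
print(ranked)  # [('Order', 3, 2)]
```

The point of the sketch is the shape of the pipeline: each stage consumes the previous stage's output and reduces or enriches it, so the developer only sees a small ranked list rather than raw traces.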

  8. Architecture
  • Monitoring: Call Path Tracer (JVMTI), JMX MEJB Monitor, Custom Monitoring Tool, XML Parser
  • Analysis: Determine System Structure, Data Mining, Statistical Analysis, Custom Analysis
  • Detection: Rule Engine, Rules
  • Assessment & Ranking: Micro Performance Models
  • Presentation

  9. Problem -> Solution -> Improved Performance Design

  10. New Knowledge, Methods, Technologies
  • Technologies
    • Monitoring: non-intrusive call path tracer for J2EE
      • Collects the chain of component methods called for each user request
      • Maintains the order of calls across all tiers (web, business and database)
  • Methods
    • Analysis: data mining
      • Association rule mining (patterns of interest)
      • Clustering (data reduction)
      • Sequential rule mining (patterns of interest)
      • Statistical analysis (data reduction)
    • Detection: apply expert systems (a rule engine) to analyse performance design
    • Assessment: micro performance models
  • Knowledge
    • Understanding of enterprise systems (from a performance perspective)
    • Automatic identification of design and deployment issues (current tools focus on errors)
    • Identification of potential problems
    • Data reduction and pattern identification
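Association rule mining, listed above as one of the analysis methods, reduces to two measures over "transactions" (here, the sets of methods invoked in a request): support and confidence. A minimal worked example, with invented method names and data:

```python
# Support/confidence for an association rule over call-path data.
# Transactions and method names are invented for illustration.
transactions = [
    {"getName", "getAddress", "getAge"},
    {"getName", "getAddress"},
    {"getName", "getID"},
    {"getAddress", "getAge"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Of the transactions containing the antecedent, the fraction
    that also contain the consequent."""
    return support(antecedent | consequent) / support(antecedent)

# Rule: getName => getAddress
print(support({"getName", "getAddress"}))       # 0.5
print(confidence({"getName"}, {"getAddress"}))  # ~0.667
```

High-confidence rules over remote component methods are exactly the raw material the detection stage's rules (slide 20) consume.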

  11. Methodology
  • Prototype & evaluation
  • Current status: currently implementing the tool
    • Monitoring: COMPAS J2EE (on SourceForge)
    • Analysis: currently being implemented
    • Detection: prototype has been implemented
    • Micro performance models: future work
  • Access to a number of very large real enterprise applications (with 100s of software components)

  12. Evaluation
  Criteria:
  • Accuracy
    • How many antipatterns can we detect?
    • How many false positives/negatives?
  • Data reduction
    • Measure the volume of data produced
  • Extensible design
    • Can custom monitoring/analysis/antipatterns/models be added?
  • Performance
    • Who profiles the profiler?
    • Does the prototype perform reasonably well?
  • Scalability
    • Do the above scale?

  13. Future Work
  • Complete the implementation
  • Test on real systems!
  • Find real problems!
  • Create and apply performance models to problems

  14. Questions
  Publications:
  • Trevor Parsons, John Murphy. "Data Mining for Performance Antipatterns in Component Based Systems Using Run-Time and Static Analysis." Transactions on Automatic Control & Control Science, Vol. 49 (63), No. 3, May 2004, pp. 113-118. ISSN 1224-600X.
  • Trevor Parsons, John Murphy. "A Framework for Automatically Detecting and Assessing Performance Antipatterns in Component Based Systems Using Run-Time Analysis." The 9th International Workshop on Component Oriented Programming, part of the 18th European Conference on Object Oriented Programming. June 2004, Oslo, Norway.
  • Trevor Parsons. "A Framework for Detecting, Assessing and Visualizing Performance Antipatterns in Component Based Systems." First place at the OOPSLA ACM SIGPLAN Student Research Competition at the 19th Annual ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications. October 2004, Vancouver, Canada. (Poster and two-page abstract.)

  15. COMPAS J2EE Monitoring Tool
  • Non-intrusive end-to-end monitoring infrastructure
  • No need to change application source code
  • Completely portable across application servers/databases
  • Can trace requests across the different tiers and maintains the order of calls
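The core idea behind non-intrusive call-path tracing can be sketched in a few lines: wrap each component method with a proxy that records the ordered call chain, without touching the component's own body. This is only a conceptual analogue; COMPAS itself hooks into the J2EE container rather than using decorators, and all names below are invented.

```python
# Conceptual sketch of proxy-based call-path tracing; component and
# method names are illustrative only.
import functools

call_path = []  # ordered chain of component.method calls for one request

def traced(component_name):
    """Wrap a method so each invocation is appended to the call path."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            call_path.append(f"{component_name}.{fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@traced("OrderBean")
def create():
    fill()  # nested component call is also recorded, preserving order

@traced("OrderBean")
def fill():
    pass

create()
print(call_path)  # ['OrderBean.create', 'OrderBean.fill']
```

Because the wrapper records calls in invocation order, the resulting paths preserve the cross-tier ordering that the analysis stage depends on.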

  16. Antipattern Categories
  • Inter-component communication antipatterns
    • Fine Grained Remote Calls
    • Aggressive Loading
    • EJB Home Far Away
    • …
  • Pooling antipatterns
    • Incorrect Stateless Session Pool Size
    • Incorrect Stateful Session Cache
    • …
  • Stateful antipatterns
    • Bloated Stateful Sessions
    • Bloated Stateless Sessions
    • Thin Sessions
    • Sessions A-Plenty
    • Eager Iterator
    • …
  • Other antipatterns
    • Incorrect Transaction Size
    • …

  17. COMPAS J2EE

  18. Advantages of Approach
  • Antipatterns are clearly defined (categorised) and documented
    • The better understood antipatterns become, the less likely developers are to make such mistakes, and the easier it is to recognise the antipatterns when they exist
    • Examples/test results are included in each antipattern description
  • Automatic detection and assessment
    • Takes the onus away from developers having to sift through large volumes of performance data
    • Makes sense of large amounts of performance data rather than merely presenting it to the user
    • Developers may gain performance intuition
    • Highlights potential bottlenecks/antipatterns
  • Developers can reason about their performance design and deployment settings
    • Previous tools analyse performance programming errors rather than design errors
    • Allows developers to assess their design (is my design optimal?)
  • New detection approach uses runtime (dynamic) analysis
    • Traditional design recovery/reverse engineering techniques use static analysis
    • Source code is often unavailable (COTS components, outsourcing)
    • Fits well with modern software development processes, which require a running implementation of the system at each iteration (e.g. XP)
    • Run-time performance data is needed to reason about performance (performance antipatterns have dynamic properties)

  19. Sample Rules and Facts
  Antipattern: Stateless Sessions-a-Plenty
  • If A is a session bean
  • and A has no transactional methods
  • and A has a small number of methods (e.g. fewer than 3)
  • and A does not require security settings
  • and A does not have a relationship with any entities or the database
  Data required (facts):
  • Component types
  • Component methods
  • Transactions
  • Security settings
  • Component relationships (from call paths)
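The rule above can be read as a plain conjunction of predicates over monitored facts. A hypothetical sketch (the fact schema is invented for illustration; the proposal's rule engine would express this declaratively rather than as a function):

```python
# Stateless Sessions-a-Plenty as a predicate over an invented fact
# schema; a real rule engine would state this declaratively.
def sessions_a_plenty(bean):
    """True when a session bean is too thin to justify its existence."""
    return (bean["type"] == "SessionBean"
            and not bean["transactional_methods"]
            and len(bean["methods"]) < 3
            and not bean["security_settings"]
            and not bean["relationships"])

suspect = {
    "type": "SessionBean",
    "transactional_methods": [],
    "methods": ["getName", "getID"],
    "security_settings": [],
    "relationships": [],
}
print(sessions_a_plenty(suspect))  # True
```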

  20. Sample Rules and Facts
  Example association rule (not to be confused with the rules in the rule engine):
  employee.getName => employee.getAddress, employee.getAge, employee.getID (confidence 90%, support 20%)
  Antipattern: Fine Grained Remote Calls
  This antipattern describes a situation where fine-grained remote calls are being used. Fine-grained calls can often be eliminated using a session façade.
  Rule describing the antipattern (loaded into the rule engine):
  • If A is an association rule
  • and A has two or more calls
  • and A has more than X% confidence
  • and A's method calls belong to remote components
  => the rule engine fires: Fine Grained Remote Calls antipattern detected!
  Data required (facts):
  • Association rules (from call paths)
  • Component types
  • Component methods
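The detection rule on this slide can likewise be sketched as a check over mined association rules. Field names and the confidence threshold are illustrative assumptions, not the proposal's actual rule syntax:

```python
# Fine Grained Remote Calls check over a mined association rule;
# the rule dictionary layout and threshold are invented for the example.
def fine_grained_remote_calls(rule, remote_components, min_confidence=0.8):
    """Fire when a confident rule groups 2+ calls on remote components."""
    calls = [rule["antecedent"]] + rule["consequents"]
    return (len(calls) >= 2
            and rule["confidence"] >= min_confidence
            and all(c.split(".")[0] in remote_components for c in calls))

# The slide's example rule: employee.getName => getAddress, getAge, getID
rule = {
    "antecedent": "Employee.getName",
    "consequents": ["Employee.getAddress", "Employee.getAge",
                    "Employee.getID"],
    "confidence": 0.9,
    "support": 0.2,
}
print(fine_grained_remote_calls(rule, {"Employee"}))  # True
```

When this predicate fires, the suggested refactoring from the slide applies: collapse the fine-grained calls behind a coarse-grained session façade method.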
