
Performance Engineering

Software Performance Engineering. Prof. Jerry Breecher. An ACM “Queue” podcast interview with a performance analyst; sample help-wanted ads – what the market looks for today in a Performance Engineer/Analyst.


Presentation Transcript


1. Performance Engineering: Software Performance Engineering. Prof. Jerry Breecher.

2. What's In This Document?
• An ACM “Queue” podcast interview with a performance analyst.
• Sample help-wanted ads: what does the market look for today in a Performance Engineer/Analyst?
• One aspect of Performance Engineering: building performance into a product.

3. Several Views Of What A Performance Analyst Does
See http://queue.acm.org/ and, under Browse Topics, click on Performance. There are a number of great, practice-focused articles there.
Five common issues when a performance problem exists:
• The product wasn't developed with a performance test harness.
• When a problem develops, no one takes responsibility.
• The developers on site don't use the tools that are available to solve the problem.
• After developing a list of possible causes, no one eliminates the unlikely ones; people don't know how to determine what matters.
• People often don't have the patience to sift through the data.
A list of useful tools:
• dtrace: instruments existing code by placing probes at known points.
• VTune: finds out what code is being executed.
• strace (Linux): prints out all the system calls a process executes.
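The tool list above has a ready analog in Python's standard library: cProfile plays roughly the role VTune does here, showing which code actually executes and where the time goes. A minimal sketch (the workload functions are invented for illustration):

```python
# A minimal Python analog of the profilers above (cProfile in place of VTune):
# it records which functions ran and how much time they consumed.
# hot_path / cold_path are made-up workload functions for illustration.
import cProfile
import pstats
import io

def hot_path(n):
    # Deliberately expensive: the profiler should surface this function.
    return sum(i * i for i in range(n))

def cold_path():
    return "rarely matters"

def workload():
    for _ in range(50):
        hot_path(10_000)
    cold_path()

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out).sort_stats("cumulative")
stats.print_stats(5)          # top 5 entries: hot_path should dominate
print(out.getvalue())
```

The point of the slide survives the translation: without some such instrument, the "sift through the data" step never even starts.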

4. Bob Wescott's Rules Of Performance
Great book, and the price is right; gleaned from many years of experience: Bob Wescott, "The Every Computer Performance Book", ISBN-13: 978-1482657753, http://www.treewhimsy.com/TECPB/Book.html
• The less a company knows about the work their system did in the last five minutes, the more deeply screwed up they are. <If you can't measure it, you can't manage it.>
• What you fail to plan for, you are condemned to endure. <Bad things WILL happen.>
• If they don't trust you, your results are worthless. <Be clear how much you trust your numbers.>
• Always preserve and protect the raw performance data. <Massage it too much and the data will lose its meaning. Be able to get it back.>
• The meters should make sense to you at all times, not just when it is convenient. <Know your tools: what they can and cannot do, and what they give you.>
• If the response time is improving under increased load, then something is broken. <If results don't fit your model, something is broken.>
• If you have to model, build the least accurate model that will do the job. <And I say, always have a model.>
• You'll do this again. Always take time to make things easier for your future self. <Write down everything you do.>
• Ignore everything to the right of the decimal point. <Significant figures!>
• Never offer more than two possible solutions or discuss more than three. <KISS>

5. Sample Want Ads: Performance Testing
• Conduct reviews of application designs and of business and functional requirements
• Implement test plans and cases based on technical specifications
• Design and execute automated and manual scripted test cases
• Document, maintain and monitor software problem reports
• Work with team members to resolve product performance issues
• Utilize multiple test tools to drive load and characterize system performance
• Execute tests and report performance/scalability test results
Skills/Requirements
• 4-6 years post-graduation experience in QA testing of client and server applications
• Demonstrated experience with MS SQL Server databases
• Experience with running UI automated test scripts; familiarity with SilkTest preferred
• Exposure to multi-threading and network programming
• Undergraduate degree from a top-tier computer science/engineering university

6. Sample Want Ads: Performance Debugging
• Use home-grown and commercial tools to measure, analyze, and characterize performance, robustness, and scalability of the EdgeSuite Platform
• Serve as a technical point of escalation to operations and customer care
• Debug complex service issues: service incidents, complex customer setups, field trials, performance issues, and availability issues
• Enable specific capabilities on our operational networks that are outside the capabilities of our Operations group
• Work across all technical areas in the company to enable innovative new solutions that span multiple technologies and services, often to meet specific customer needs
Skills/Requirements
• Familiarity with data analysis
• Experience in network operation and monitoring
• In-depth knowledge of networking principles and implementation, including the TCP/IP, UDP, DNS, HTTP, and SSL protocols, a plus
• Thorough understanding of distributed systems
• Experience with principles of software development and design

7. Sample Want Ads: Customer Performance
• Ever stay up all night trying to squeeze 3 more fps out of your overclocked GPU?
• Crash your bike because you were too busy thinking of ways to speed up a nasty triply nested loop?
• Recompile your Linux kernel to extract that last ounce of performance?
We have a job waiting for you. Endeca is seeking an energetic and driven engineer to join our new System Analysis team.
• Engineers on this team will be responsible for exploring and understanding the behaviors and characteristics of Endeca's system.
• Members of this team will work with developers to tune system performance, provide technical guidance to architects building customer applications, and help our customers continue to achieve unprecedented levels of performance and scalability.
Skills/Requirements
• 3 years experience in software engineering
• Undergraduate or graduate degree in computer science, or equivalent depth of study in CS
• Familiarity with the process of software performance investigation and tuning
• Experience with Linux and Windows
• Experience with scripting languages
• Ability to grasp the complexities of large distributed systems
• Strong analytical and troubleshooting skills
• Very highly motivated, quick learner

8. Sample Want Ads: Performance Architect
The Performance Engineer will provide technical leadership to the organization in the areas of software frameworks and architecture, infrastructure architecture, middleware architecture and UI architecture. The Performance Engineer is expected to have versatile expertise in application performance (DB, middleware, UI, infrastructure). This engineer will collaborate with all teams within IT to implement an application performance measurement framework using end-to-end performance measurement and monitoring tools. Using data collected from these tools, the Performance Engineer will work with the architects to influence application and infrastructure design. This performance engineer must demonstrate skill versatility in the areas of application architecture, infrastructure architecture and application performance.
JOB RESPONSIBILITY
• Implement end-to-end performance measurement tools/frameworks.
• Build processes around tools to conduct application performance benchmarks.
• Design application benchmarks that will simulate application workloads.
• Design and implement capacity measurement tools, performance benchmarks and testing.
• Ability to wear many hats to help expedite multiple projects
Skills/Requirements
• Strong performance measurement skills using tools like LoadRunner and SilkRunner.
• Strong performance analysis skills with a thorough understanding of application bottlenecks and infrastructure bottlenecks (OS, storage, etc.).
• Strong skills using performance measurement/monitoring tools like BMC Patrol, BMC Perform/Predict, HP OpenView, MOM.
• Hands-on experience writing LoadRunner scripts and simulating performance benchmarks
• Experience with J2EE performance measurement tools is a plus

9. Sample Want Ads: Performance Architect
This individual will work with the systems architects and key stakeholders to develop a performance strategy for SSPG products and implement a methodology to measure fine-grained resource utilization. This individual will establish a set of benchmarks and a benchmark methodology that include all supported storage protocols, the control path, applications and solutions. Will also be an evangelist for performance within the group and ensure that performance is a core SSPG competency. The position requires strong hands-on development skills and a desire to work in a fast-paced collaborative environment. The candidate must have a strong knowledge of operating system technology, device drivers, multiprocessor systems, and contemporary software engineering principles.
Skills/Requirements
• BS in CS/CE plus 7-10 years experience, or equivalent.
• Proven experience with storage performance benchmarking and tuning, including hands-on experience with performance-related applications such as Intel VTune, SpecFS, and IOMeter
• Strong operating system knowledge base with a focus on Linux, Windows and embedded operating systems.
• Strong C/C++ programming and Linux scripting experience.
• Knowledge of any of the following protocols and technologies is a plus: iSCSI, TCP/IP, Fibre Channel, SAS, file systems, RAID and storage systems
• Design and development experience with embedded systems is desirable
• The candidate should possess excellent verbal and written communication skills.

10. Performance Engineering Motivation
Performance Engineering is the practice of applying Software Engineering principles to the product life cycle in order to assure the best performance for a product. The purpose is to know, at each stage of development, the performance attributes of the product being built.
This section is devoted to motivation, talking through a number of the guiding tenets of Performance Engineering.

11. Performance Engineering Motivation
Example: Read the unbiased, "true-to-life" example portrayed below and answer the questions posed.
A project is planned and scheduled under tight constraints; marketing feels that it is strategic to offer this product, and upper management inquires on a daily basis about the status of the project. Numerous short-cuts are taken in the design and implementation of the project. The product gets to alpha "on schedule", but it's discovered that the product is bug-ridden and performs at 1/10th the speed of the slowest competitor. When the product finally ships, it's 6 months behind schedule, never wins a benchmark, and serves only as a line item in the product catalog. Within a year, a project is launched to build it "right".
• Does this sound familiar? Does this ever happen in your life?
• Is there ever a situation where such an occurrence is acceptable?
• In the above example, what if the quality were OK but the performance remained terrible; would the scenario then be acceptable?
• Is it ever acceptable not to spec a product because there won't be enough time?

12. Performance Engineering Motivation
LOTS OF OTHER QUESTIONS RELATE TO THIS TOPIC:
• Can you get performance for free? Does it naturally fall out of a "good" design? Can you add performance at the end of a project?
• Are performance problems as {easy | hard} to fix as functional bugs? Is it easier to design in quality or performance?
• It's often stated that since performance is decided by algorithms rather than by coding methodology, it's primarily project leaders or high-level designers who need to worry about performance. Do you agree with this?
• What are the politics of performance estimation? What happens if you don't meet your performance goal? What will happen if you make your best guess up front and your product then comes in below that guess (after all, it was a guess, just like the ones we've been making in class)? Does putting an uncertainty on the number make it OK?
• Does it help to have management stress the importance of performance? When it comes to the crunch, does management emphasize performance, quality, or schedules?

13. Performance Engineering Motivation
FOLKLORE: Believe it or not, these are all comments/excuses I've heard!
• It leads to more development time.
• There will be maintenance problems (due to "tricky code").
• It's too difficult to build in performance.
• Performance problems are rare.
• Performance doesn't matter on this product since so few people will be using it.
• Performance can be solved with hardware, and hardware is (relatively) inexpensive.
• We can tune it later.
• Sam and Sally and Sarah didn't have to worry about performance, so it really isn't very important.
• Good performance is a natural byproduct of good design and coding techniques.
• If we move to hardware three times faster, the problem will disappear.

14. Performance Engineering Motivation
THE REALITY IS:
• MANY systems initially perform TERRIBLY because they weren't well designed.
• Problems are often due to fundamental architectural or design factors rather than inefficient code.
• Performance engineering is no more expensive than software engineering.
• Performance problems are visible and memorable.
• It's possible to avoid being surprised by the performance of the finished product.

15. Performance Engineering Motivation
THE BENEFITS OF PERFORMANCE ENGINEERING INCLUDE:
• Timely implementation allows for staff effectiveness, fire prevention rather than fire fighting, and no surprises.
• Good-performing systems result in:
• User satisfaction
• User productivity
• Development staff productivity
• Selling more systems and getting a bigger paycheck
• Performance can be "orders of magnitude" better with early, high-level optimization.

16. Performance Engineering Motivation
THE COSTS OF PERFORMANCE ENGINEERING:
• The critical-path time to deliver is minimized if modeling, analysis, and reporting are done up front. This is the "Software Engineering religion."
• Time is required of the design team; performance experts are part of that design team.
• Time for modifications: pay now or pay later.
• Cost of needed skills.
The way we do performance engineering today is analogous to the marksman who shoots first and, whatever he hits, calls the target.

17. Performance Engineering Introduction
"Little's Law ... utilization ... blah, blah! Then a miracle happens! An amazingly good-performing product results." YOU NEED TO BE A BIT MORE EXPLICIT ABOUT SOME OF THE DETAILS!
In this section we begin looking at some of the practical ways of doing Performance Engineering. Performance Engineering isn't magic or miraculous, but an organized mechanism for building in performance.
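The two results name-dropped above fit in a few lines, which is part of the joke: the hard work is getting the inputs, not the arithmetic. A sketch with invented numbers (the arrival rate, response time, and service demand below are illustrative, not from the slides):

```python
# Little's Law:     N = X * R  (jobs in system = throughput * response time)
# Utilization Law:  U = X * S  (utilization = throughput * service demand)
# All numbers are invented for illustration.

arrival_rate = 40.0      # requests per second (throughput, X)
response_time = 0.25     # seconds each request spends in the system (R)
service_demand = 0.015   # seconds of CPU each request needs (S)

jobs_in_system = arrival_rate * response_time    # Little's Law
cpu_utilization = arrival_rate * service_demand  # Utilization Law

print(f"average requests in flight: {jobs_in_system:.1f}")   # 10.0
print(f"CPU utilization: {cpu_utilization:.0%}")             # 60%
```

The "miracle" step is everything these three input numbers stand in for; the rest of the deck is about where they come from.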

18. KEY POINTS IN PERFORMANCE ENGINEERING
The trickle-down philosophy: by setting broad, verifiable performance targets at the beginning, in the Marketing Requirements, we can track those targets through the whole development lifecycle and verify along the way that the goals are being met.
The goal is to show how to incorporate performance information into the standard development life cycle. Developers already have within them the information needed to make performance predictions; they need only to understand how to express that information.

19. KEY POINTS IN PERFORMANCE ENGINEERING
We build on Software Engineering methodology: this methodology employs a number of documents and review mechanisms to ensure the completeness and quality of our software. These same techniques can be used to improve the performance of systems; neither quality nor performance is an add-on, so the procedures in place to improve quality can also be used to improve performance.
Performance is an intangible: it's easy to see and describe a function, but much harder to determine how fast it will go or what resources it will devour. Performance Engineering makes visible the performance expectations of a new product and quantifies whatever can be nailed down at any particular point in the development cycle.

20. KEY POINTS IN PERFORMANCE ENGINEERING: Bootstrapping

21. KEY POINTS IN PERFORMANCE ENGINEERING: VALIDATION AND VERIFICATION
Performance Engineering depends on a combination of verification and validation. For those of you who've forgotten this nuance, here's a brief review:
• Validation: showing at project completion that the performance meets the stated goals.
• Verification: showing at each stage of development that the projected performance will meet the previously stated goals.
The costs to VALIDATE performance:
• Establish performance goals.
• Establish performance tests.
• Schedule time for Performance Assurance to do their thing.
• Schedule time to fix the performance.
The costs to VERIFY performance:
• Establish performance goals.
• Establish performance tests.
• Schedule time for developers to conduct analysis and inspections.
• Schedule time for Performance Assurance, since no one will believe you've verified the performance.

22. KEY POINTS IN PERFORMANCE ENGINEERING: SETTING MEASURABLE PERFORMANCE OBJECTIVES
• Unambiguous: There should be no doubt as to what the goal means. It is no good saying "A will be the same as B" without saying what will be the same. Specify in terms of CPU time, I/Os, etc. Specify also the environment that will be used.
• Measurable: Every performance goal must have an associated measurement. The measurement must be defined as carefully as the goal, because it is the measurement that will tell you that you have reached your goal. Avoid vague goals without well-defined measurements; they will lead to unreasonable expectations being set for your design.
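One way to keep a goal both unambiguous and measurable is to pair the written target with the script that measures it. A hedged sketch in Python; the operation under test, the run count, and the 5 ms threshold are all hypothetical examples, not numbers from the slides:

```python
# Sketch: an unambiguous, measurable goal ("the median call completes in
# under 5 ms of wall-clock time on the test machine") expressed as a
# repeatable check. Function and threshold are hypothetical.
import time
import statistics

def operation_under_test():
    return sum(range(1000))

def measure(fn, runs=200):
    # Collect wall-clock samples and summarize with the median,
    # which is less noise-sensitive than the mean.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

GOAL_SECONDS = 0.005   # the stated, written-down goal

median = measure(operation_under_test)
print(f"median: {median * 1e6:.0f} us, goal: {GOAL_SECONDS * 1e6:.0f} us")
print("PASS" if median <= GOAL_SECONDS else "FAIL")
```

Because the measurement procedure is spelled out alongside the number, there is no later argument about what "the same as B" meant.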

23. KEY POINTS IN PERFORMANCE ENGINEERING: SETTING MEASURABLE PERFORMANCE OBJECTIVES
Metrics: There are an infinite number of ways to measure performance, many of them invalid, inaccurate, or just plain dumb. The problem lies in trying to state the performance of a complex system in simple terms. We will concentrate on:
• Finding the most common paths/functions.
• Determining metrics for those paths.
• Defining tests to evaluate these metrics.

24. PERFORMANCE ENGINEERING INSPECTIONS: What Are They?
Performance inspections are a technique, very similar to Software Engineering inspections, for analyzing performance issues during the preparation of specifications. The goal of inspections is to gather the information needed to complete the performance documentation. There's a mapping between:
REQUIREMENTS IN SPECS <===> QUESTIONS ON INSPECTIONS

25. PERFORMANCE ENGINEERING INSPECTIONS: Practical Aspects of Doing Inspections
• These inspections should be conducted in a formal way within one meeting. There may well be questions generated that can only be answered by more thorough research. Experience shows an inspection requires several hours, with a few more hours to resolve action items.
• Be careful: like any inspection, several people should be involved, including a dispassionate outsider.
• Be careful: it's very possible to get so mired in details that the whole performance business becomes an overwhelming burden.

26. PERFORMANCE ENGINEERING INSPECTIONS: Practical Aspects of Doing Inspections
• Whenever possible, make a guess. But clearly label your guess and talk about the assumptions going into it. Software developers have a way of being overly detail-conscious when it comes to gathering performance numbers.
• The specs themselves should contain answers to the questions posed here. When reviewing the document, those involved in the review should ensure that the questions are indeed answered.
• In each of the following sections are questions that might be asked during inspections. Many others are also possible, especially ones which delve into the details of the specific project.

28. PERFORMANCE ENGINEERING: MARKETING REQUIREMENTS DOCUMENTS
OVERALL GOALS AT THE REQUIREMENTS LEVEL:
• Determine the best and worst expectations for this product.
• State the performance needed to meet marketing needs: this can range from "we must beat the competition" to "get it out no matter how slow it is". (As we've discussed, the second approach will come back to haunt you.)
• Determine the "drop dead" point: the performance below which the project shouldn't be done.
• Determine a target at which we can aim later.

29. MARKETING REQUIREMENTS DOCUMENTS
WHAT PERFORMANCE ITEMS SHOULD BE IN THE REQUIREMENTS?
• Which metrics matter.
• The current competitors' products and the performance they achieve (or suffer).
• The current products you produce and the performance they achieve. NOTE: there is ALWAYS a comparable product against which the performance of a new product should be compared; NO ONE creates totally new product lines, companies merely extend existing ones.
• Overall performance goals.
• In order to be a viable product, what are the maximum resources that can be used?

30. MARKETING REQUIREMENTS DOCUMENTS
WHAT PERFORMANCE ITEMS SHOULD BE IN THE REQUIREMENTS?
Placement in the market:
• What are the expected/potential performance wins in the new product?
• What are the expected/potential performance pitfalls in the new product? At this point, there is little need for detail on how to combat the problems; identification is enough.
Stretching the limits:
• Where will the performance of your company, and of its competitors, be in 1 year / 2 years?
• Into what environment/market will this product be sold?
• What other applications will be run on the machine?
• What machine resources are available for this product?

31. MARKETING REQUIREMENTS DOCUMENTS
WHERE DOES THIS INFORMATION COME FROM?
• From inspections (see the next section). Input comes from marketing and from looking around.
• Determining expectations. Expectations are set based on:
• Marketing
• Observing the competition
• The baseline of the previous product
• The "field"
• Setting general performance goals. Goals should be determined by, and expressed in terms of:
• Customer satisfaction
• Sales
• Benchmarks
• How to gather statistics. This can also be seen as resolving general goals into metrics. A goal of "customers will be happy" is all fine and good, but it's difficult to measure. We need real, concrete metrics (we'll know we've succeeded when we achieve these metrics).

32. MARKETING REQUIREMENTS DOCUMENTS
QUESTIONS TO USE ON A REQUIREMENTS INSPECTION
1. What is the current performance of competitors' products?
2. What is the current performance of your existing products? (When none exist, use close cousins.)
3. Based on 1 and 2, what's the minimum performance we need in order to achieve parity? This can be answered by "as fast as Compaq", "20% better than today", etc.
4. If the number is answered qualitatively rather than quantitatively, how can a more solid number be obtained (and who will get it)?
5. In order to meet these minimum performance requirements, is it acceptable to use the entire machine's resources?

33. MARKETING REQUIREMENTS DOCUMENTS
QUESTIONS TO USE ON A REQUIREMENTS INSPECTION
• What performance problems/successes did the competition encounter when introducing the comparison product?
• What performance problems/successes did you encounter when introducing the comparison product?
These are "looking ahead" goals:
• To be a force in the market, what performance do we need?
• What performance increment would be required to open new markets?
There are other questions asking about environments:
• What fraction of a module can be used to produce this performance? (What other work must the machine carry on?)
• How will customers be using this product; what are typical scenarios?

34. PERFORMANCE ENGINEERING: PROJECT PLAN/SCHEDULE
Detailed schedules should include work items such as:
• Preparation of the performance components of specs.
• Analysis necessary to include performance components in the various documentation.
• Performance walkthroughs.
• Performance checkpoints: ensuring at each stage of the project that performance targets are being met.
• Final performance verification.
• Time for performance enhancement; we still don't know how to get it right the first time.

35. PERFORMANCE ENGINEERING: FUNCTIONAL SPECIFICATION
OVERALL GOALS AT THE FUNCTIONAL SPEC LEVEL INCLUDE:
The goal of a functional spec is to define the interfaces of a product (that is, to address environmental issues) and to describe how the user of the product will view that interface, without telling how the thing works. The performance portions of the spec have the same goal:
• We want to know who will call the function, and what the most common modes of use will be; we want to define the environment.
• Comparison with the MRD: knowing the goals at the MRD level, it's possible now to set limits in terms of definable resources such as I/O and CPU.
• We want to determine ways to assure that we've been successful.

36. FUNCTIONAL SPECIFICATION: SLIGHTLY MORE DETAILED GOALS
It is reasonable to expect the following performance information at this time:
1. Who will be calling this function? Approximately how many times per second will this function be called? Given the resource usage in item 2, what fraction of the system resources will be expended on this function?
TOTAL COST = COST PER REQUEST * TOTAL REQUESTS
Having done this, you can answer: if you can't win on all the functions you've defined, which ones are the most important (must wins!)? Which situations provide big wins?
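The TOTAL COST formula above turns into a back-of-the-envelope calculation once you have a per-request estimate and a call rate. All numbers below are invented for illustration:

```python
# TOTAL COST = COST PER REQUEST * TOTAL REQUESTS, then compare against
# the machine's capacity to get the fraction of system resources used.
# Both inputs are estimates invented for this sketch.

cost_per_request = 0.002     # seconds of CPU per call (estimate)
requests_per_second = 150    # how often higher layers call this function

total_cpu_per_second = cost_per_request * requests_per_second  # TOTAL COST
fraction_of_system = total_cpu_per_second / 1.0  # 1 CPU-second available/sec

print(f"CPU spent on this function: {total_cpu_per_second:.3f} s every second")
print(f"fraction of one CPU consumed: {fraction_of_system:.0%}")  # 30%
```

Even rough inputs answer the slide's question: a function eating 30% of a CPU is a "must win"; one eating 0.3% is not worth arguing about.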

37. FUNCTIONAL SPECIFICATION: SLIGHTLY MORE DETAILED GOALS
2. Set performance goals for CPU, I/O, and memory. Though there is still no detailed information on resource usage, it is time for informed guesses. This means we expect answers in milliseconds, furlongs, accesses/sec, etc. Ultimately, you can estimate the final performance!
3. Divide up the total project and estimate how many resources each part will take. The mechanism defined in this functional spec, or in all the functional specs addressing an MRD, must be able to deliver the performance promised in the MRD!
4. How will success in meeting these goals be measured? A description of the necessary tools should be at the same level of detail as the functional spec itself.

38. FUNCTIONAL SPECIFICATION: WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
Your functional spec will normally defend the decisions made: why was one algorithm chosen over another, why store particular data in this spot, etc. You should also include performance factors, defending decisions based on performance criteria.
REMEMBER: the philosophy here is to make estimates; no hard numbers make any sense at this point.

39. WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
1. What is (are) the most frequently used time-lines described in this spec? What really matters is the small amount of code that is frequently traveled; all other code can be ignored. Techniques for determining this are discussed later.
How do you gather this data? The best method is intuition. Sure, it's possible to go off and make lots of detailed measurements, but at this phase of the project such detail may not be possible. It's probably adequate to follow arguments such as: "This routine is used by every system call, therefore it is frequently used" or "This routine is called when opening a direct queue, so it happens less often". This item is designed simply to single out those routines meriting further investigation; we'll get more numerical later on.
The remaining questions apply only to the often-used time-lines identified in item 1; all other time-lines can be ignored.

40. WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
2. What "lower level" functions will be called by these time-lines? ("Lower level" means functions called by the mechanism you are designing.) When determining resource numbers, make sure you include the cost of calling routines at layers below those defined by this spec. If you don't know, guess.
a. Estimate the CPU usage for the called functions.
b. Estimate the disk usage for the called functions.
c. Estimate the number of suspends for the called functions, and include the cost of doing suspends/reschedules in the CPU usage.

41. WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
3. Specific resource-usage numbers for CPU, memory and I/O. These numbers should be estimated for the most common time-lines in the most common environments. Where numbers are available from previous revs or from the competition, they should be included. For the high-usage time-lines described in your spec, estimate:
a. CPU usage
b. Disk usage
c. Suspends/reschedules
Based on the answers to questions 2 and 3, you can determine the total cost of executing your new high-usage functions.

42. WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
4. How many times per second will these time-lines be called by higher-level functions? This is an environment question; you may have figured this out already when you identified in question 1 that certain functions were "high-usage".
5. Based on #4, and the sum of #2 + #3, what fraction of the total system resources (utilization) is used by these time-lines?
6. What fraction of the resources called out in the MRD will be used by these time-lines?
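The bookkeeping in items 2 through 6 amounts to a few multiplications and a sum: the per-call cost of a time-line is its own cost plus the cost of every lower-level function it calls, and multiplying by the call rate gives the utilization it imposes. A sketch with invented estimates (the function names and costs are hypothetical):

```python
# Per-call cost of a time-line = its own cost + the cost of every
# lower-level function it calls (item 2); multiplied by calls/second
# (item 4), this gives the utilization it imposes (item 5).
# All names and numbers are invented for illustration.

lower_level_costs = {           # seconds of CPU per call (item 2 estimates)
    "disk_read": 0.004,
    "lock_acquire": 0.0005,
    "suspend_reschedule": 0.0015,
}
own_cpu_cost = 0.002            # the time-line's own code (item 3)
calls_per_second = 80           # item 4: how often higher layers call it

cost_per_call = own_cpu_cost + sum(lower_level_costs.values())
utilization = cost_per_call * calls_per_second        # item 5

print(f"cost per call: {cost_per_call * 1000:.1f} ms")   # 8.0 ms
print(f"utilization: {utilization:.0%} of one CPU")      # 64%
```

Comparing the final figure against the resource budget in the MRD answers item 6 directly.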

43. WHAT PERFORMANCE ITEMS SHOULD BE IN THE FUNCTIONAL SPEC?
7. Checkpointing: when you add up all the times in your most commonly used time-lines, do you get a number consistent with what you estimated in the MRD?
8. What are the metrics (what will you measure) in order to assure the performance given above? (Do NOT describe how to measure at this point.)
9. Describe in general terms how you expect to measure that these goals have been met. A description of the necessary methodology should be at the same level of detail as the functional spec itself.

  44. WHERE DOES THIS INFORMATION COME FROM? PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION Lots of information has been requested here in order to meet the ultimate goal of determining the total resource usage of your product. Here are some of the places where you can find help in preparing numbers: • The MRD. • Previously known performance: previous products (how fast did this system call run in the last rev?) and how fast the competition can do this operation. • Benchmarks of system performance. • Intuition. • The philosophy which says all the performance and resources must come from one pie; you can only cut it so many ways; pies and resources are both finite.

  45. QUESTIONS TO USE ON A FUNCTIONAL SPEC WALKTHROUGH PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION The philosophy here is to make guesses - no hard numbers make any sense at this point. The use of "time-lines" is explained in the unit on Design Strategies. 0. What other algorithms were looked at; why was this one determined to be the best for performance reasons? 1. What is (are) the most frequently used time-lines described in this spec? THE REMAINING QUESTIONS apply only to these often-used time-lines; all other time-lines can be ignored.

  46. QUESTIONS TO USE ON A FUNCTIONAL SPEC WALKTHROUGH PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION 2. What lower-level functions will be called by these time-lines? • Estimate the time for CPU usage for the called functions. • Estimate the time for disk usage for the called functions. • Estimate the time spent in interrupts resulting both directly and indirectly from this function. • Estimate the number of suspends for the called functions, and include the time for doing suspends/reschedules. • Estimate the amount of time a lock will be held by this function, and thus the percentage contention on the lock; include this contention in your time-line. 3. For the high-usage time-lines themselves, as described in your spec, estimate: • CPU usage • Disk usage • Suspends/reschedules
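The lock-contention estimate asked for in question 2 can be approximated from two of the numbers you already have to guess: how long the lock is held per call and how often it is acquired. The figures below are illustrative assumptions, and the uniform-arrivals simplification is mine, not the slides':

```python
# Hedged sketch: fraction of time a lock is held, and thus (assuming
# uniform, independent arrivals) the rough probability that a caller
# finds it busy. All numbers are made-up example values.

def lock_contention_fraction(hold_us_per_call, calls_per_sec):
    """Fraction of wall-clock time the lock is held."""
    return hold_us_per_call * calls_per_sec / 1_000_000

c = lock_contention_fraction(hold_us_per_call=25.0, calls_per_sec=4_000)
print(f"lock held {c:.0%} of the time")  # 25 us * 4000/s = 10%
```

A contention fraction that comes out large is exactly the kind of red flag the walkthrough is meant to surface before any code is written.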

  47. QUESTIONS TO USE ON A FUNCTIONAL SPEC WALKTHROUGH PERFORMANCE ENGINEERING FUNCTIONAL SPECIFICATION 4. How many times per second will these time-lines be called by higher-level functions? 5. Based on #4, and the sum of #2 + #3, what fraction of the total system resources (utilization) is used by this time-line? 6. What fraction of the resources called out in the MRD will be used by these time-lines? 7. When you add up all the resources, do they equal what was specified in the MRD? 8. What are the metrics (what will you measure) to assure the performance given above? (Do NOT describe the details of measurement at this point; remember, this is the functional level.)

  48. OVERALL GOALS AT THE DESIGN SPEC LEVEL INCLUDE: PERFORMANCE ENGINEERING DESIGN SPECIFICATION This is where you should be able to make detailed estimates, and where you have a real chance to ensure that the numbers you've been guessing are real. At this point in the design, you should be able to make very concrete assumptions. Again, you can roll the detailed numbers back into the functional spec and the requirements. Will the product perform as required? Now you know.

  49. QUESTIONS TO USE ON A DESIGN SPEC INSPECTION PERFORMANCE ENGINEERING DESIGN SPECIFICATION REMEMBER - the philosophy here is to get numbers. These numbers should be as accurate as possible, but the code isn't written yet, so the data can only be a best guess. NOTE ALSO - the methodology is the same as that used at the functional spec level. 0. What metrics matter? 1a. Are the most-used time-lines the same as they were in the functional spec? If not, or if none were defined, what are they? 1b. What are the low-level library routines that are important in this design? Identify those routines that have a large fan-in.

  50. QUESTIONS TO USE ON A DESIGN SPEC INSPECTION PERFORMANCE ENGINEERING DESIGN SPECIFICATION THE FOLLOWING QUESTIONS APPLY ONLY TO THE HEAVILY USED PATHS. 2. Determine the low-level functions, in other components, called by your time-lines. These are routines subsidiary to those in the spec. What are the costs of using these functions? As before, these costs include: • CPU usage • Disk usage • Suspends/reschedules • Other. 3. Also calculate the CPU costs in your own routines; this means estimating the total lines of code you'll run. Do these calculations for both library routines and often-used time-lines, though the library-routine work is meant mainly to raise red flags.
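Question 3's "total lines of code you'll run" translates into a CPU time only once you assume a cost per line and a clock rate. A minimal sketch, where both constants are guesses to be replaced with measured values from a previous rev or a benchmark:

```python
# Hedged sketch: rough CPU time from an instruction-path-length guess.
# cycles_per_line and cpu_ghz are assumed placeholders; calibrate them
# against a benchmark or a previous rev before trusting the result.

def cpu_us_from_loc(lines_executed, cycles_per_line=5.0, cpu_ghz=2.0):
    """Estimated CPU time in microseconds for a path of the given length."""
    cycles = lines_executed * cycles_per_line
    return cycles / (cpu_ghz * 1_000)  # cycles / (cycles per microsecond)

est = cpu_us_from_loc(lines_executed=3_000)
print(f"~{est:.1f} us per call")  # 3000 lines * 5 cycles / 2 GHz = 7.5 us
```

Even this crude an estimate is enough to catch an order-of-magnitude problem at inspection time, which is the stated purpose of the design spec review.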
