
Type of Workloads



  1. Type of Workloads Chapter 4 M. Keshtgary Spring 91

  2. Types of Workloads • Test workload – denotes any workload used in a performance study • Real workload – one observed on a system while it is being used • Cannot be repeated (easily) • May not even exist (e.g., a proposed system) • Is generally not suitable for use as a test workload • Synthetic workload – has characteristics similar to the real workload • Can be applied in a repeated manner • Relatively easy to port; relatively easy to modify without affecting operation • No large real-world data files; no sensitive data • May have built-in measurement capabilities • Benchmark == Workload • Benchmarking is the process of comparing 2+ systems using workloads

  3. Test Workloads for Computer Systems • Addition instructions • Instruction mixes • Kernels • Synthetic programs • Application benchmarks

  4. Addition Instructions • Early computers had the CPU as their most expensive component • System performance == Processor performance • CPUs supported few operations; the most frequent one was addition • The computer with the faster addition instruction was considered to perform better • Run many addition operations as the test workload (see the timing sketch below) • Problems • Processors execute more operations than just addition • Some are more complicated than others
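
A minimal sketch of the idea behind an addition-only test workload: time a long loop of additions and report the average time per addition. This is illustrative only; the iteration count and the use of Python are assumptions, and in an interpreted language the loop overhead dominates, so a real study would use a carefully written low-level loop.

    # Sketch of an addition-only test workload (illustrative assumptions:
    # iteration count, Python). Loop overhead is included in the measurement.
    import time

    N = 10_000_000                     # number of additions to perform
    acc = 0
    start = time.perf_counter()
    for _ in range(N):
        acc += 1                       # the "addition instruction" under test
    elapsed = time.perf_counter() - start

    print(f"total: {elapsed:.3f} s, "
          f"avg per addition: {elapsed / N * 1e9:.1f} ns (incl. loop overhead)")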

  5. Instruction Mixes • The number and complexity of instructions increased • Additions were no longer sufficient • Could measure instructions individually, but they are used in different amounts • => Measure the relative frequencies of various instructions on real systems • Use them as weighting factors to get an average instruction time • Instruction mix – a specification of various instructions coupled with their usage frequency • Use the average instruction time to compare different processors • Often use the inverse of the average instruction time (see the formula below) • MIPS – Million Instructions Per Second • MFLOPS – Millions of Floating-Point Operations Per Second • Gibson mix: developed by Jack C. Gibson in 1959 for IBM 704 systems
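
In symbols (the notation f_i and t_i is introduced here, not on the slide): if f_i is the fraction of executed instructions belonging to class i and t_i is the time taken by one instruction of that class, then

    \bar{t} = \sum_i f_i \, t_i, \qquad \text{MIPS} = \frac{1}{\bar{t}} \quad (\bar{t} \text{ in microseconds})

A numeric sketch that uses the Gibson mix as the weights follows the next slide.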

  6. Example: Gibson Instruction Mix (1959; IBM 650 and IBM 704 systems) • Load and Store 31.2% • Fixed-Point Add/Sub 6.1% • Compares 3.8% • Branches 16.6% • Float Add/Sub 6.9% • Float Multiply 3.8% • Float Divide 1.5% • Fixed-Point Multiply 0.6% • Fixed-Point Divide 0.2% • Shifting 4.4% • Logical And/Or 1.6% • Instructions not using registers 5.3% • Indexing 18.0% • Total 100%
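
A minimal worked sketch of using a mix as weighting factors: the fractions below are the Gibson percentages from the slide above, while the per-class instruction times are invented for illustration (a real study would measure them on the machine under test).

    # Weighted-average instruction time from the Gibson mix.
    # Fractions come from the slide; the microsecond timings are
    # hypothetical illustrative values, not measurements of any real machine.
    fraction = {                    # percent of all executed instructions
        "load/store": 31.2, "fixed add/sub": 6.1, "compare": 3.8,
        "branch": 16.6, "float add/sub": 6.9, "float multiply": 3.8,
        "float divide": 1.5, "fixed multiply": 0.6, "fixed divide": 0.2,
        "shift": 4.4, "logical and/or": 1.6, "no-register": 5.3,
        "indexing": 18.0,
    }
    hypothetical_time_us = {        # assumed time per instruction (microseconds)
        "load/store": 0.6, "fixed add/sub": 0.4, "compare": 0.4,
        "branch": 0.5, "float add/sub": 1.2, "float multiply": 2.0,
        "float divide": 4.0, "fixed multiply": 1.5, "fixed divide": 3.0,
        "shift": 0.5, "logical and/or": 0.4, "no-register": 0.3,
        "indexing": 0.5,
    }

    avg_us = sum(fraction[k] / 100 * hypothetical_time_us[k] for k in fraction)
    print(f"average instruction time: {avg_us:.3f} us")
    print(f"implied rating: {1 / avg_us:.2f} MIPS")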

  7. Problems with Instruction Mixes • In modern systems, instruction time is variable, depending on • Addressing modes, cache hit rates, pipelining • Interference from other devices during processor-memory access • Distribution of zeros in the multiplier • How often a conditional branch is taken • A mix only represents the speed of the processor • The bottleneck may be in other parts of the system

  8. Kernels • Pipelining, caching, address translation, … made computer instruction times highly variable • Therefore we cannot use individual instructions in isolation • Instead, it became more appropriate to consider a set of instructions that constitutes a higher-level function, a service provided by the processor • Since most of the initial kernels did not use input/output (I/O) devices and concentrated solely on processor performance, this class of kernels could be called processing kernels • Kernel = the most frequent function • Commonly used kernels: Tree Searching, Matrix Inversion, and Sorting (a sorting-kernel sketch follows below) • Disadvantages • Do not make use of I/O devices
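
As an illustration of a processing kernel, the sketch below times a sorting kernel (one of the kernels named on the slide) in isolation; the data size and the use of Python's built-in sorted() are assumptions made for the example.

    # Sketch of a processing-kernel measurement: time a sorting kernel only,
    # with no I/O involved. Data size and timing method are illustrative choices.
    import random
    import time

    def sorting_kernel(data):
        # the higher-level "service" being measured: sorting a list
        return sorted(data)

    data = [random.random() for _ in range(1_000_000)]
    start = time.process_time()        # CPU time only; ignores I/O and waiting
    sorting_kernel(data)
    cpu_seconds = time.process_time() - start
    print(f"sorting kernel used {cpu_seconds:.3f} s of CPU time")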

  9. Synthetic Programs • As computer systems proliferated, operating systems emerged and applications changed • Applications were no longer processing-only; I/O became important too • Use simple exerciser loops • Make a number of service calls or I/O requests • Compute the average CPU time and elapsed time for each service call • Easy to port and distribute (Fortran, Pascal) • The first exerciser loop was written by Buchholz (1969), who called it a synthetic program • May have built-in measurement capabilities (see the sketch below)
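
A minimal sketch of an exerciser loop in the spirit described above: issue a fixed number of I/O requests and report the average CPU time and elapsed time per call, with the measurement built in. The temporary file, record size, and call count are assumptions, not part of any particular synthetic benchmark.

    # Sketch of a synthetic exerciser loop with built-in measurement:
    # make repeated I/O requests, then report average CPU and elapsed
    # time per call. File, record size, and call count are illustrative.
    import os
    import tempfile
    import time

    CALLS = 1000
    RECORD = b"x" * 4096                   # one 4 KB record per request

    fd, path = tempfile.mkstemp()
    os.close(fd)

    cpu0, wall0 = time.process_time(), time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(CALLS):
            f.write(RECORD)                # the "service call" being exercised
            f.flush()
            os.fsync(f.fileno())
    cpu = time.process_time() - cpu0
    wall = time.perf_counter() - wall0
    os.remove(path)

    print(f"avg CPU time per call:     {cpu / CALLS * 1e6:.1f} us")
    print(f"avg elapsed time per call: {wall / CALLS * 1e6:.1f} us")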

  10. Synthetic Programs • Advantages • Quickly developed and given to different vendors • No real data files • Easily modified and ported to different systems • Have built-in measurement capabilities • Measurement process is automated • Repeated easily on successive versions of the operating system • Disadvantages • Too small • Do not make representative memory or disk references • Mechanisms for page faults and disk cache may not be adequately exercised • CPU-I/O overlap may not be representative • Not suitable for multi-user environments because the loops may create synchronizations, which may result in better or worse performance

  11. Application Workloads • For special-purpose systems, one may be able to run representative applications as the measure of performance • E.g.: airline reservation • E.g.: banking • Make use of the entire system (I/O, etc.) • Issues may include • Input parameters • Multiuser effects • Only applicable when specific applications are targeted • For a particular industry: Debit-Credit for banks (a transaction sketch follows below)
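
For the banking example, a debit-credit style transaction might look like the sketch below. This is only in the spirit of the classic Debit-Credit benchmark mentioned on the slide, not its official definition; the SQLite schema, table names, and amounts are assumptions.

    # Sketch of one debit-credit style application transaction (illustrative;
    # not the official Debit-Credit benchmark). Schema and values are assumed.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);
        CREATE TABLE history (account_id INTEGER, delta REAL);
        INSERT INTO account VALUES (1, 100.0);
    """)

    def debit_credit(account_id, delta):
        # one application-level transaction: update a balance, log the change
        with db:
            db.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                       (delta, account_id))
            db.execute("INSERT INTO history VALUES (?, ?)", (account_id, delta))

    debit_credit(1, -25.0)
    print(db.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0])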

  12. Benchmarks • Benchmark = workload • Kernels, synthetic programs, and application-level workloads are all called benchmarks • Instruction mixes are not called benchmarks • Some authors restrict the term benchmark to a set of programs taken from real workloads • Benchmarking is the process of comparing the performance of two or more systems by measurements • The workloads used in the measurements are called benchmarks

  13. SPEC • Systems Performance Evaluation Cooperative (SPEC) (http://www.spec.org) • Non-profit, founded in 1988 by leading HW and SW vendors • Aim: ensure that the marketplace has a fair and useful set of metrics to differentiate candidate systems • Product: "fair, impartial and meaningful benchmarks for computers" • Initially focused on CPUs: SPEC89, SPEC92, SPEC95, SPEC CPU 2000, SPEC CPU 2006 • Now, many suites are available • Results are published on the SPEC web site

  14. SPEC (cont’d) • Benchmarks aim to test "real-life" situations • E.g., SPECweb2005 tests web server performance by performing various types of parallel HTTP requests • E.g., SPEC CPU tests CPU performance by measuring the run time of several programs such as the compiler gcc
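
To illustrate the idea of exercising a web server with parallel HTTP requests, one could use a loop like the sketch below. This is not the SPECweb2005 harness or workload; the URL, worker count, and request count are placeholder assumptions.

    # Sketch of parallel HTTP request generation, illustrating the general idea
    # behind web-server benchmarks (NOT the SPECweb2005 harness).
    # URL, worker count, and request count are placeholder assumptions.
    from concurrent.futures import ThreadPoolExecutor
    import time
    import urllib.request

    URL = "http://localhost:8080/"         # placeholder server under test
    REQUESTS = 200

    def fetch(_):
        with urllib.request.urlopen(URL) as resp:
            return resp.status

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=16) as pool:
        statuses = list(pool.map(fetch, range(REQUESTS)))
    elapsed = time.perf_counter() - start

    print(f"{REQUESTS} requests in {elapsed:.2f} s "
          f"({REQUESTS / elapsed:.1f} req/s), ok={statuses.count(200)}")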
