Network Performance Analysis Strategies

Dr Shamala Subramaniam

Dept. Communication Technology & Networks

Faculty of Computer Science & IT, UPM

e-mail : [email protected]

Overview of Performance Evaluation
  • Intro & Objective
  • The Art of Performance Evaluation
  • Professional Organizations, Journals, and Conferences
  • Performance Projects
  • Common Mistakes and How to Avoid Them
  • Selection of Techniques and Metrics
Intro & Objective
  • Performance is a key criterion in the design, procurement, and use of computer systems.
  • Performance must always be weighed against cost.
  • Thus, computer systems professionals need a basic knowledge of performance evaluation techniques.
Intro & Objective
  • Objective:
    • Select appropriate evaluation techniques, performance metrics and workloads for a system.
    • Conduct performance measurements correctly.
    • Use proper statistical techniques to compare several alternatives.
    • Design measurement and simulation experiments to provide the most information with the least effort.
    • Perform simulations correctly.
Modeling
  • Model – a term used to describe almost any attempt to specify a system under study.
  • Everyday connotation – a physical replica of the system.
  • Scientific connotation – a model is a portrayal of the interrelationships of the parts of a system in precise terms. The portrayal can be interpreted in terms of some system attributes and is sufficiently detailed to permit study under a variety of circumstances and to enable the system's future behavior to be predicted.
Usage of Models
  • Performance evaluation of a transaction processing system (Salsburg, 1988)
  • A study of the generation and control of forest fires in California (Parks, 1964)
  • The determination of the optimum labor along a continuous assembly line in a factory (Killbridge and Webster, 1966)
  • An analysis of ship boilers (Tysso, 1979)
A Taxonomy of Models
  • Predictability
    • Deterministic – all data and relationships are given with certainty, e.g., the efficiency of an engine as a function of temperature, load, and fuel consumption.
    • Stochastic – at least some of the variables take values that vary in an unpredictable or random fashion, e.g., financial planning.
  • Solvability
    • Analytical – the model is simple enough to be solved mathematically.
    • Simulation – used when the model is complicated or an appropriate equation cannot be found.
A Taxonomy of Models
  • Variability
    • Whether time is incorporated into the model:
      • Static – a specific point in time (e.g., a financial model)
      • Dynamic – any time value (e.g., a food cycle)
  • Granularity
    • The granularity of the model's treatment of time:
      • Discrete-event – the system changes state at clearly identifiable events (e.g., packet arrivals)
      • Continuous – it is impossible to distinguish specific events taking place (e.g., the trajectory of a missile)
The Art of Performance Modeling
  • There are three ways to compare the performance of two systems.
  • Table 1.1

    System    Workload 1    Workload 2    Average
    A         20            10            15
    B         10            20            15

The Art of Performance Modeling (cont.)
  • Table 1.2 – System B as the base

    System    Workload 1    Workload 2    Average
    A         2             0.5           1.25
    B         1             1             1

The Art of Performance Modeling (cont.)

  • Table 1.3 – System A as the base

    System    Workload 1    Workload 2    Average
    A         1             1             1
    B         2             0.5           1.25
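To make the ratio game concrete, the following minimal Python sketch recomputes Tables 1.2 and 1.3 from the raw numbers in Table 1.1; it only illustrates how the choice of base system changes which system appears better "on average".

```python
# Raw results from Table 1.1 (Workload 1, Workload 2) for systems A and B.
raw = {"A": [20, 10], "B": [10, 20]}

def normalized_averages(base):
    """Average of each system's results after normalizing to the base system."""
    return {
        sys: sum(x / b for x, b in zip(values, raw[base])) / len(values)
        for sys, values in raw.items()
    }

# Table 1.2: with B as the base, A averages 1.25 and B averages 1.0.
print(normalized_averages("B"))   # {'A': 1.25, 'B': 1.0}
# Table 1.3: with A as the base, B averages 1.25 and A averages 1.0.
print(normalized_averages("A"))   # {'A': 1.0, 'B': 1.25}
```

Depending on which system is chosen as the base, the averaged ratios favor the other system, which is exactly why both the raw results and the averaging method must be reported.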

Performance Projects

I hear and I forget. I see and I remember. I do and I understand. – Chinese Proverb

Performance Projects
  • The best way to learn a subject is to apply the concepts to a real system.
  • The project should encompass:
    • Select a computer subsystem, e.g., network congestion control, security, a database, or an operating system.
    • Perform some measurements.
    • Analyze the collected data.
    • Simulate and analytically model the subsystem.
    • Predict its performance.
    • Validate the model.
Professional Organizations, Journals and Conferences
  • ACM SIGMETRICS – the Association for Computing Machinery's special interest group on performance evaluation.
  • IEEE Computer Society – The Institute of Electrical and Electronics Engineers (IEEE) Computer Society.
  • IASTED – The International Association of Science and Technology for Development.
Common Mistakes and How to Avoid Them
  • No Goals
  • Biased Goals
  • Unsystematic Approach
  • Analysis Without Understanding the Problem
  • Incorrect Performance Metrics
  • Unrepresentative Workloads
  • Wrong Evaluation Techniques
  • Overlooking Important Parameters
  • Ignoring Significant Factors
Common Mistakes and How to Avoid Them
  • Inappropriate Experimental Design
  • Inappropriate Level of Detail
  • No Analysis
  • Erroneous Analysis
  • No Sensitivity Analysis
  • Ignoring Errors in Input
  • Improper Treatment of Outliers
  • Assuming No Change in the Future
  • Ignoring Variability
Common Mistakes and How to Avoid Them
  • Too Complex Analysis
  • Improper Presentation of Results
  • Ignoring Social Aspects
  • Omitting Assumptions and Limitations.
A Systematic Approach
  • State Goals and Define the System
  • List Services and Outcomes
  • Select Metrics
  • List Parameters
  • Select Factors to Study
  • Select Evaluation Technique
  • Select Workload
  • Design Experiments
  • Analyze and Interpret Data
  • Present Results
Overview
  • Key steps in a performance evaluation study:
    • Selecting an evaluation technique
    • Selecting a metric
  • Performance metrics
  • Problem of specifying performance requirements
Selecting an evaluation technique
  • Three techniques
    • Analytical modeling
    • Simulation
    • Measurement
Criteria for selection: Life-cycle stage
  • Measurements are possible only if something similar to the proposed system already exists.
  • For a new concept, analytical modeling and simulation are the only techniques from which to choose.
  • Analytical modeling and simulation are more convincing if they are based on previous measurements.
Criteria for selection: Time required
  • In most situations, results are required yesterday; in that case, analytical modeling is probably the only choice.
  • Simulations take a long time.
  • Measurements generally take longer than analytical modeling.
  • If anything can go wrong, it will; this applies especially to measurements.
  • So the time required for measurements varies widely.
Criteria for selection: Availability of tools
  • Tools include modeling skills, simulation languages, and measurement instruments.
  • Many performance analysts are skilled in modeling and would not touch a real system at any cost.
  • Others are not as proficient in queuing theory and prefer to measure or simulate.
  • Lack of knowledge of the simulation languages and techniques keeps many analysts away from simulations.
Criteria for selection: Level of accuracy
  • Analytical modeling requires so many simplifications and assumptions that the accuracy of its results can be questionable.
  • Simulations can incorporate more details and require fewer assumptions than analytical modeling, so they are often closer to reality.
Criteria for selection: Level of accuracy (cont.)
  • Measurements may not give accurate results simply because many of the environmental parameters, such as the system configuration, the type of workload, and the time of measurement, may be unique to the experiment.
  • So, the accuracy of results can vary from very high to none with measurement techniques.
  • Note that the level of accuracy and the correctness of conclusions are not identical.
Criteria for selection: Trade-off evaluation
  • The goal of a performance study is often to compare different alternatives or to find the optimal parameter value.
  • Analytical models generally provide the best insights into the effects of various parameters and their interactions.
Criteria for selection: Trade-off evaluation
  • With simulations it is possible to search the space of parameter values for the optimal combination.
  • Measurement is the least desirable technique in this respect.
Criteria for selection: Cost
  • Measurement requires real equipment, instruments, and time; it is the most costly of the three techniques.
  • Cost is often the reason for simulating complex systems.
  • Analytical modeling requires only paper and pencils and is the cheapest technique.
  • The technique can also be chosen based on the cost allocated to the project.
Criteria for selection: Saleability
  • Convincing others is important.
  • It is easiest to convince others with real measurements.
  • Most people are skeptical of analytical results, because they do not understand the techniques.
Criteria for selection: Saleability (cont.)
  • So validation with another technique is important.
    • Do not trust the results of a simulation model until they have been validated by analytical modeling or measurements.
    • Do not trust the results of an analytical model until they have been validated by a simulation model or measurements.
    • Do not trust the results of a measurement until they have been validated by simulation or analytical modeling.
Selecting performance metrics
  • For each performance study, a set of performance criteria or metrics must be chosen.
  • This set can be prepared by listing the services offered by the system.
  • The outcomes can be classified into three categories:
    • The system may perform the service correctly
    • It may perform it incorrectly
    • It may refuse to perform the service
Selecting performance metrics (cont.)
  • Example: A gateway in a computer network offers the service of forwarding packets to specified destinations on heterogeneous networks. When presented with a packet:
    • It may forward the packet correctly
    • It may forward it to the wrong destination
    • It may be down
  • Similarly, a database may answer a query correctly, incorrectly, or it may be down.
Selecting metrics: correct response
  • If the system performs the service correctly, its performance is measured
    • By the time taken to perform the service.
    • The rate at which the service is performed
    • And the resources consumed while performing the service.
  • These three metrics, relating time, rate, and resource for successful performance, are also called the responsiveness, productivity, and utilization metrics.
Selecting metrics: correct response
  • For example, the responsiveness of a network gateway is measured by response time: the time interval between arrival of a packet and its successful delivery
  • The gateway’s productivity is measured with throughput: the number of packets forwarded per unit time.
  • The utilization gives the indication of the percentage of time the resources of the gateway are busy for the given load level.
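As a rough illustration of these three gateway metrics, here is a minimal Python sketch; the packet timestamps, the one-second observation window, and the assumption that the gateway is busy for the whole of each packet's response time are all hypothetical.

```python
# Hypothetical arrival and successful-delivery times (seconds) at a gateway.
arrivals   = [0.00, 0.10, 0.25, 0.40]
deliveries = [0.05, 0.18, 0.31, 0.52]
window     = 1.0   # observation period in seconds (assumed)

# Responsiveness: response time = delivery time - arrival time.
response_times = [d - a for a, d in zip(arrivals, deliveries)]
mean_response_time = sum(response_times) / len(response_times)

# Productivity: throughput = packets forwarded per unit time.
throughput = len(deliveries) / window

# Utilization: fraction of the window the gateway is busy
# (crudely assumed here to be the sum of per-packet service times).
utilization = sum(response_times) / window

print(mean_response_time, throughput, utilization)   # 0.0775 4.0 0.31
```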
Selecting metrics: incorrect response
  • If the system performs the service incorrectly, its performance is measured
    • By classifying errors / packet loss
    • Determining the probabilities of each class of errors.
  • For example, in case of gateway
    • We may want to find the probability of single-bit errors, two-bit errors, and so on.
    • Also, we may want to determine the probability of a packet being partially delivered.
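A minimal sketch of this classification, using invented error counts for a gateway (the error classes and counts are illustrative only):

```python
from collections import Counter

total_packets = 1_000_000                 # packets presented to the gateway
errors = Counter({
    "single-bit error":      120,
    "two-bit error":           4,
    "partially delivered":    30,
    "wrong destination":       2,
})

# Probability of each class of error, estimated as (count / total packets).
error_probability = {cls: n / total_packets for cls, n in errors.items()}
for cls, p in sorted(error_probability.items()):
    print(f"{cls}: {p:.1e}")
```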
The possible outcomes of a service request

[Figure: For each request for service i, the outcome is classified as done correctly (measured by time/response time, rate/throughput, and resource/utilization), done incorrectly (error of class j, measured by its probability and the time between errors), or cannot do (event k, measured by the duration of the event and the time between events).]
Metrics
  • Most systems offer more than one service, and the number of metrics grows proportionately.
  • For many metrics the mean value is important.
  • Variability is also important.
  • For computer systems shared by many users, two types of metrics need to be considered: individual and global.
    • Individual metrics reflect the utility of each user.
    • Global metrics reflect the system-wide utility.
  • Resource utilization, reliability, and availability are global metrics.
Metrics
  • Normally, the decision that optimizes an individual metric is different from the one that optimizes the system-wide metric.
    • For example, in computer networks the performance is measured by throughput (packets per second). If the total number of packets allowed in the system is constant, increasing the number of packets from one source may increase its throughput, but it may also decrease someone else's throughput.
  • So both the system-wide throughput and its distribution among individual users must be studied (as sketched below).
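A minimal sketch of looking at both views at once, with invented per-source packet counts over a one-second window:

```python
# Hypothetical packets successfully delivered per source in a one-second window.
delivered = {"source_1": 900, "source_2": 60, "source_3": 40}
window = 1.0   # seconds

global_throughput = sum(delivered.values()) / window                      # global metric
per_user_throughput = {src: n / window for src, n in delivered.items()}   # individual metrics

print(global_throughput)      # 1000.0 packets/s in total
print(per_user_throughput)    # but distributed very unevenly among the sources
```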
Selection of Metrics
  • Completeness: The set of metrics included in the study should be complete.
    • All possible outcomes should be reflected in the set of performance metrics.
    • For example, in a study comparing different protocols on a computer network, one protocol was chosen as the best until it was found that it led to the highest number of disconnections.
    • The probability of disconnection was then added to the set of performance metrics.
Commonly used performance metrics: response time

  • Response time is defined as the interval between a user's request and the system's response.
  • This definition is simplistic, since neither the request nor the response is instantaneous.

[Figure: Response time for an instantaneous request and response, measured from the user's request to the system's response.]
Throughput
  • Throughput is defined as the rate (requests per unit of time) at which the requests can be serviced by the system.
    • For networks, throughput is measured in packets per second or bits per second.
[Figure: Throughput versus load (marking the knee, the usable capacity, and the knee capacity) and response time versus load.]
Throughput…
  • The throughput of the system initially increases as the load on the system increases.
  • Beyond a certain load, the throughput stops increasing.
  • In most cases it then starts decreasing.
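The usable capacity and the knee can be read off measured load–throughput data. The sketch below uses invented measurements and a deliberately crude rule (marginal throughput gain below an arbitrary threshold) just to illustrate the idea; it is not a formal definition of the knee.

```python
# Hypothetical offered load vs. delivered throughput (packets/s).
load       = [10, 20, 30, 40, 50, 60, 70]
throughput = [10, 20, 29, 35, 38, 39, 37]

# Usable capacity: the maximum throughput actually achieved.
usable_capacity = max(throughput)

# Crude knee estimate: the last point before the marginal throughput gain
# per unit of added load drops below an arbitrary threshold (0.5 here).
knee_throughput = None
for i in range(1, len(load)):
    gain = (throughput[i] - throughput[i - 1]) / (load[i] - load[i - 1])
    if gain < 0.5:
        knee_throughput = throughput[i - 1]
        break

print(usable_capacity, knee_throughput)   # 39 35
```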
[Figure: Efficiency versus number of processors.]
Efficiency
  • The ratio of maximum achievable throughput (usable capacity) to nominal capacity is called the efficiency.
    • For example, if the maximum throughput of a 100 Mbps LAN is only 85 Mbps, then its efficiency is 85 percent.
    • The ratio of the performance of an n-processor system to that of a one-processor system is its efficiency.
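A minimal sketch of the efficiency calculation, using the slide's own LAN figures (100 Mbps nominal capacity, 85 Mbps maximum achievable throughput):

```python
nominal_capacity = 100.0   # Mbps (bandwidth of the LAN)
usable_capacity  = 85.0    # Mbps (maximum achievable throughput)

efficiency = usable_capacity / nominal_capacity
print(f"Efficiency: {efficiency:.0%}")   # Efficiency: 85%
```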
Utilization
  • The utilization of the resource is measured as the fraction of time the resource is busy servicing requests.
    • It is the ratio of busy time to total elapsed time over a given period.
  • The period during which a resource is not being used is called the idle time.
  • System managers are often interested in balancing the load.
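A minimal sketch of utilization and idle time, assuming a hypothetical record of busy intervals over a 10-second observation period:

```python
# Hypothetical (start, end) busy intervals of a resource, in seconds.
busy_intervals = [(0.0, 2.5), (3.0, 4.0), (6.0, 9.5)]
elapsed = 10.0                      # total observation period

busy_time   = sum(end - start for start, end in busy_intervals)
utilization = busy_time / elapsed   # fraction of time the resource is busy
idle_time   = elapsed - busy_time

print(utilization, idle_time)       # 0.7 3.0
```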

Reliability

  • The reliability of the system is measured by the probability of errors or by the mean time between errors.
Availability
  • The availability of a system is defined as the fraction of time the system is available to service users' requests.
  • The time during which the system is not available is called downtime.
  • The time during which the system is available is called uptime.
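A minimal sketch combining the reliability and availability definitions above, with an invented outage log for a 30-day period (the mean-time-between-errors figure is one simple estimate, not the only possible one):

```python
total_hours = 30 * 24                 # observation period (hours)
outages     = [1.5, 0.5, 2.0]         # duration of each outage (hours), hypothetical

downtime     = sum(outages)
uptime       = total_hours - downtime
availability = uptime / total_hours            # fraction of time the system is up
mtbe         = uptime / len(outages)           # mean time between errors (simple estimate)

print(round(availability, 4), round(mtbe, 1))  # 0.9944 238.7
```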

Cost/Performance ratio

  • Cost/performance ratio is commonly used as a metric for comparing two or more systems.
  • Cost includes hardware and software licensing, installation, and maintenance over a given number of years.
  • Performance is measured in terms of throughput under a given response-time constraint.
  • For example two transaction processing systems may be compared in terms of dollars per TPS.
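A minimal sketch of a dollars-per-TPS comparison, with invented cost and throughput figures for two transaction processing systems:

```python
# (total cost in dollars over the study period, throughput in TPS meeting the
#  response-time constraint) -- both figures are hypothetical.
systems = {
    "System A": (500_000, 400),
    "System B": (350_000, 250),
}

for name, (cost, tps) in systems.items():
    print(f"{name}: {cost / tps:.0f} $/TPS")
# System A: 1250 $/TPS
# System B: 1400 $/TPS  -> System A gives more throughput per dollar.
```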
Utility classification of performance metrics
  • Higher is better or HB: System users or system managers prefer higher values of such metrics.
    • System throughput is an example of an HB metric.
  • Lower is better or LB: System users or system managers prefer lower values of such metrics.
    • System response time is an example of an LB metric.
  • Nominal is best or NB: Both high and low values are undesirable.
Types of metrics

[Figure: Utility versus metric value for (a) lower is better, (b) higher is better, and (c) nominal is best.]
Setting of performance requirements
  • The main problem faced by a performance analyst is to specify the performance requirements for a system to be acquired or designed.
  • General method: the performance requirements are specified with the help of requirement statements.

Setting of performance requirements (cont.)
  • Consider the following requirement statements:
    • The system should be both processing and memory efficient. It should not create excessive overhead.
    • There should be an extremely low probability that the network will duplicate a packet, deliver a packet to wrong destination, or change the data in a packet.
Setting of performance requirements (cont.)
  • What all these requirement statements lack can be summarized in one word: SMART.
  • That is, requirements must be Specific, Measurable, Acceptable, Realizable, and Thorough.
  • One should not use words such as "low probability" and "rare".
  • Measurability requires that one be able to verify that a given system meets the requirements.
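As a sketch of what "measurable" buys you, the snippet below replaces words like "extremely low probability" with explicit numeric limits and checks measured estimates against them; every threshold and measured value here is an invented example, not a figure from the slides.

```python
# Illustrative numeric limits that make the earlier requirement statements specific.
requirements = {
    "duplicate packet":       1e-7,   # max acceptable probability per packet
    "wrong destination":      1e-8,
    "data changed in packet": 1e-9,
}

# Probabilities estimated from a (hypothetical) measurement study.
measured = {
    "duplicate packet":       3e-8,
    "wrong destination":      2e-8,
    "data changed in packet": 1e-10,
}

for metric, limit in requirements.items():
    verdict = "meets" if measured[metric] <= limit else "violates"
    print(f"{metric}: {verdict} the requirement ({measured[metric]:.0e} vs limit {limit:.0e})")
```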