
3.6 Multi-service queueing systems. Textbook: Chapter 11.



  1. 3.6 Textbook: Chapter 11 Multi-service queueing systems

  2. Multi-service queueing systems – 1. • There is more than one type (class, service, stream) of customers. • Customers of the same type (class, service, stream) belong to a specific chain; a queueing system is a node in a queueing network. • In reversible systems the departure process is of the same type as the arrival process, in the investigated cases: Poisson processes. In this case several queueing systems may be combined into a network of queueing systems. • In classical queueing systems we have non-sharing: a customer is either waiting or being served, and when being served it has a server alone. • There are queueing systems with different sharing strategies: customers may share servers with other customers, so that no one is waiting but everyone is always served at some rate, which may be smaller than the requested rate. • By requiring reversibility and usage of all servers whenever possible we get the processor sharing (PS) and generalised processor sharing (GPS) strategies.

  3. Multi-service queueing systems – 2. In multi-service queueing systems and in queueing networks customers share the available capacity in some way, and therefore they are served all the time. However, they may obtain less capacity than requested, which results in an increased sojourn time. The sojourn time is not split up into separate waiting time and service time. For multi-service queueing systems and for queueing networks the definitions below are used:

  4. Multi-service queueing systems – 2. In multi-service queueing systems and in queueing networks the waiting time is defined as the total sojourn time, including the service time s. E.g. if in the Internet the bandwidth is smaller than requested, then the mean transfer time Wj will be bigger than sj, and the increase is defined as the mean virtual waiting time: wj = Wj − sj. In a similar way the mean virtual queue length Lj is defined as: Lj = Aj · wj / sj, where Aj is the offered traffic of type j.
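These definitions can be illustrated with a small numeric sketch; the values of sj, Wj and Aj below are made up for illustration, not taken from the textbook:

```python
# Virtual waiting time and virtual queue length for a multi-service system.
# All numeric values are illustrative assumptions.

def virtual_waiting_time(W_j, s_j):
    """Mean virtual waiting time: increase of sojourn time over service time."""
    return W_j - s_j

def virtual_queue_length(A_j, W_j, s_j):
    """Mean virtual queue length L_j = A_j * w_j / s_j (a Little's-law form)."""
    return A_j * virtual_waiting_time(W_j, s_j) / s_j

s = 2.0   # requested mean service (transfer) time
W = 3.0   # observed mean sojourn time
A = 4.0   # offered traffic (erlang)

w = virtual_waiting_time(W, s)      # 3.0 - 2.0 = 1.0
L = virtual_queue_length(A, W, s)   # 4.0 * 1.0 / 2.0 = 2.0
print(w, L)
```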

  5. Multi-service queueing systems – 3. Example. In general there are N PCT1 input processes with λj arrival and μj departure intensities (j = 1 … N). (The exact value of the μj departure intensities requires further consideration!)

  6. Multi-service queueing systems – 4. PS: Processor Sharing. PR: Preemptive Resume.

  7. 3.7 Queueing networks • Introduction to queueing networks • Symmetric queueing systems, Jackson’s theorem • Closed networks: single chain • Closed networks: several chains • Other questions Textbook: Chapter 12

  8. Queueing networks, introduction – 1. • A queueing network consists of nodes (groups of servers with the same function) and jobs (requests) travelling from node to node. • In queueing networks we define the queue length in a node as the total number of jobs in the node, including delayed and served jobs. • A queueing network might be: • open – the number of jobs may change, e.g. M/M/n • closed – the number of jobs is fixed, e.g. Palm's machine repair model. • The departure process from one node is the arrival process at another node, so special attention should be paid to the departure process.

  9. Queueing networks, introduction – 2. Four nodes. Four open chains. State: (i1, i2, i3, i4), where ik gives the number of jobs in node k, and pj,k is the probability that a job having left node j is directed to node k.

  10. Symmetric systems • The queueing system is symmetric if both the arrival and the departure processes are Poisson. • The four models: • M/M/n: state probabilities • M/G/∞*: state probabilities (Poisson!) • M/G/1–PS*: state probabilities • M/G/1–LCFS-PR*: state probabilities. * immediate service! Reversibility. PS = Processor Sharing, PR = Preemptive Resume

  11. Jackson's theorem – 1a. Jackson's theorem: Consider an open queueing network with K nodes satisfying the following conditions: each node is an M/M/n queueing system (node k having nk servers and service intensity μk); customers arrive from outside to node k according to a Poisson process with intensity Λk; a customer leaving node j goes to node k with probability pj,k, or leaves the network with probability 1 − Σk pj,k.

  12. Jackson's theorem – 1b. Then the state probabilities have product form: p(i1, i2, …, iK) = p1(i1) · p2(i2) · … · pK(iK), where pk(ik) are the state probabilities of node k considered in isolation as an M/M/nk system offered the traffic λk/μk, and the arrival intensities λk are obtained from the flow balance (traffic) equations: λk = Λk + Σj λj · pj,k.

  13. Jackson's theorem – 1c. The key point of Jackson's theorem is that each node can be considered independently of all other nodes, and that the state probabilities are as for Erlang's delay system (Erlang's C formula). This simplifies the calculation of the state space probabilities significantly. Jackson's first model thus only deals with open queueing networks. Example: an open queueing network consisting of two M/M/1 systems in series.
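For the two-M/M/1-in-series example, the product form can be checked numerically; the arrival and service rates below are illustrative assumptions:

```python
# Product-form state probabilities for two M/M/1 queues in series.
# lam, mu1, mu2 are made-up rates for illustration.

def mm1_prob(rho, i):
    """State probability p(i) = (1 - rho) * rho**i of an M/M/1 queue, rho < 1."""
    return (1.0 - rho) * rho ** i

lam, mu1, mu2 = 1.0, 2.0, 4.0      # assumed rates
rho1, rho2 = lam / mu1, lam / mu2  # offered load at each node

# By Jackson's theorem the joint state probability factorizes:
def p(i, j):
    return mm1_prob(rho1, i) * mm1_prob(rho2, j)

# Sanity check: probabilities over a large truncated state space sum to ~1.
total = sum(p(i, j) for i in range(200) for j in range(200))
print(round(total, 6))
```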

  14. Jackson's theorem – 2. • In Jackson's second model the arrival intensity from outside may depend on the current number of customers in the network; • furthermore, μk can depend on the number of customers at node k. In this way we can model queueing networks which are either closed, open, or mixed. In all three cases the state probabilities have product form.

  15. Independence assumption. Kleinrock's independence assumption: If we consider a real-life data network, then the packets will have the same constant length, and therefore the same service time on all links and nodes of equal speed. The theory of queueing networks assumes that a packet (a customer) samples a new service time in every node. This is a necessary assumption for the product form. This assumption was first investigated by Kleinrock (1964 [74]), and it turns out to be a good approximation in practice.

  16. Single open chain. Open system. One has to find the state probabilities p(i1, i2, …, iK), where ik is the number of requests in node k. Steps: 1. solution of the flow balance (traffic) equations; 2. using the μk's one may get the offered traffics Ak; 3. by considering Erlang's delay system one may get the state probabilities for each node.
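Steps 1 and 2 above can be sketched in code: the traffic equations λk = Λk + Σj λj·pj,k are solved by fixed-point iteration and the offered traffics Ak = λk/μk follow. The 3-node routing matrix and rates are made-up illustration values:

```python
# Open-chain traffic equations solved by fixed-point iteration.
# Lambda, P and mu are illustrative assumptions, not a textbook example.

Lambda = [1.0, 0.0, 0.0]          # external Poisson arrival intensities
P = [[0.0, 0.5, 0.5],             # P[j][k]: routing probability node j -> node k
     [0.0, 0.0, 0.0],
     [0.2, 0.0, 0.0]]
mu = [4.0, 2.0, 3.0]              # service intensities

lam = Lambda[:]                   # initial guess
for _ in range(1000):             # iterate lambda <- Lambda + lambda * P
    lam = [Lambda[k] + sum(lam[j] * P[j][k] for j in range(3))
           for k in range(3)]

A = [lam[k] / mu[k] for k in range(3)]   # offered traffic per node
print([round(x, 4) for x in lam], [round(a, 4) for a in A])
```

The iteration converges because the routing matrix is substochastic (every job eventually leaves the network).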

  17. Single closed-chain – 1. Convolution algorithm for closed networks. We only know the relative load at each node, not the absolute load, i.e. c·Λj is obtained, but c is unknown. We can obtain the relative state probabilities. Finally, by normalizing, we get the normalized state probabilities. Large system → complex computation. Steps:

  18. Single closed-chain – 2. Convolution algorithm for closed networks
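A minimal sketch of the convolution algorithm for a closed network of single-server nodes; the relative loads are illustrative assumptions:

```python
# Convolution algorithm for a closed network with S jobs and K single-server
# (M/M/1-type) nodes. For node k the relative state probabilities are
# q_k(i) = alpha_k**i; nodes are combined by convolution, and the final
# entry g[S] is the (relative) normalization constant.

S = 4                         # circulating jobs
alpha = [1.0, 0.5, 0.25]      # relative offered loads (only ratios matter)

def convolve(a, b, S):
    """Convolution of two state vectors, truncated at S jobs."""
    return [sum(a[i] * b[s - i] for i in range(s + 1)) for s in range(S + 1)]

q = [[a ** i for i in range(S + 1)] for a in alpha]

g = q[0]
for qk in q[1:]:
    g = convolve(g, qk, S)

# Example use of the normalization constant: probability that node 0
# holds all S jobs (the other nodes are empty).
p_node0_full = alpha[0] ** S * q[1][0] * q[2][0] / g[S]
print(round(g[S], 6), round(p_node0_full, 6))
```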

  19. Single closed-chain – 3.

  20. Single closed-chain – 4. CPU: M/M/1 node. Terminals: M/G/1–IS* node. Example 12.5.1: λ1 = λ, λ2 = λ. See details in the Textbook. *IS = Immediate Service – there is always a free terminal for the new task.

  21. Single closed-chain – 5. Assumptions: S constant; S circulating jobs. The CPU and the I/O channels serve each job several times. A departing job is immediately replaced by a new one. Exponential holding times, s = 1/μ. Example 12.5.2: S = 4, K = 3 (CPU + 2 I/O). See details in the Textbook.

  22. Single closed-chain – 6. MVA (Mean Value Algorithm). Remember! K nodes, S jobs (in a single chain), αk = λk·sk relative traffics. Recursion according to the number x of jobs. At node k there are Lk(S) jobs on the average.
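The MVA recursion can be sketched for single-server queueing nodes; the visit ratios and service times below are illustrative assumptions, not Example 12.5.2:

```python
# Mean Value Analysis (MVA) for a closed single-chain network of K
# single-server queueing nodes. v and s are made-up illustration values.

K, S = 3, 4
v = [1.0, 0.5, 0.5]         # visit ratios (relative arrival rates)
s = [0.25, 0.5, 1.0 / 3.0]  # mean service times per visit

L = [0.0] * K               # mean queue lengths with 0 jobs in the network
for x in range(1, S + 1):
    # Sojourn time at node k when x jobs circulate (arrival theorem):
    # an arriving job sees the queue lengths of the network with x-1 jobs.
    W = [s[k] * (1.0 + L[k]) for k in range(K)]
    lam = x / sum(v[k] * W[k] for k in range(K))   # chain throughput
    L = [lam * v[k] * W[k] for k in range(K)]      # Little's law per node

print(round(lam, 4), [round(l, 4) for l in L])
```

By construction the mean queue lengths sum to S, which is a useful sanity check of any MVA implementation.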

  23. Single closed-chain – 7. Average sojourn time; immediate service. PS = Processor Sharing, PR = Preemptive Resume.

  24. Single closed-chain – 8. There is no waiting in the system if S = 1. Example for nk = 1, but the method may be generalized to any nk.

  25. Single closed-chain – 9. Example 12.5.3 (= 12.5.2, but with MVA). K = 3, S = 4. Recursion formulae; relative λk values. See details in the Textbook.

  26. BCMP queueing networks. Queueing networks with more than one type of customers also have product form state probabilities. (Generalization of Jackson's second model. BCMP: Baskett, Chandy, Muntz and Palacios, 1975.) Necessary conditions: BCMP networks can be evaluated with the multi-dimensional convolution algorithm and the multi-dimensional MVA algorithm.

  27. Mixed queueing networks. Mixed queueing networks (open & closed) are calculated by first calculating the traffic load in each node from the open chains. This traffic must be carried in order to enter statistical equilibrium. The capacity of the nodes is reduced by this traffic, and the closed queueing network is calculated with the reduced capacity. So the main problem is to calculate closed networks. For this we have several algorithms, among which the most important ones are the convolution algorithm and the MVA (Mean Value Algorithm).

  28. Complexity

  29. 4. Traffic measurement • Principles and methods • Theory of sampling • Continuous measurement • Scanning. Textbook: Chapter 13

  30. Introduction • Traffic measurement is the gathering of data about the traffic of real or fictitious systems which serve requests arriving in a random way. • A minimum of technical and administrative effort should result in a maximum of information and benefit. • A measurement during a limited time interval corresponds to the registration of a certain realization of the traffic process. • Margin of error → confidence interval. • For practical purposes it is in general sufficient to know the mean value and the variance.

  31. Principles, methods – 1. What do we measure? Events: arrival of requests, number of jobs, number of lost requests, etc. Time intervals: holding times, waiting times, execution times of jobs, etc. How do we measure? Continuous measurement: the measuring point is active; it activates the measuring equipment at the instant of the event. Scanning: the measuring point is passive; the measuring equipment itself tests from time to time whether any changes have taken place.

  32. Principles, methods – 2. • Continuous measurement – examples: • electro-mechanical counters (e.g. number of copies) • x-y plotters (earthquake surveillance) • Ampere-hour meters • water meters • Scanning – examples: • call-charging impulses • measuring of carried traffic by repeated scans

  33. Principles, methods – 3. Busy state of servers: actual vs. scanned.

  34. Principles, methods – 4. Registration of changes

  35. Theory of sampling – 1. A sample of n IID* observations with unknown finite mean value m1 and finite variance σ² is available (population parameters). The average value and the corrected empirical variance of the sample: x̄ = (1/n)·Σ Xi and s² = (1/(n−1))·Σ (Xi − x̄)². These are functions of random variables, so they are random variables themselves (sampling distribution) and represent estimates of the mean value and variance of the unknown population. *IID = Independent and Identically Distributed

  36. Theory of sampling – 2. The accuracy of an estimate of a sample parameter is described by means of a confidence interval: x̄ ± t(1−α/2, n−1) · s/√n. This is valid if the samples are independent. Independence is fulfilled if measurements take place on different days, but samples taken by scanning within a limited time interval are not independent.
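A small sketch of such a confidence interval; for large n the t-percentile may be replaced by the Normal percentile (as noted on the next slide), and the data values are made up for illustration:

```python
# Confidence interval for a sample mean, using the Normal approximation
# to the t-percentile. The data set is an illustrative assumption.
from statistics import NormalDist, mean, stdev

data = [4.2, 3.8, 5.1, 4.7, 4.0, 4.5, 3.9, 4.8, 4.4, 4.6]
n = len(data)
m = mean(data)              # sample mean
s = stdev(data)             # corrected empirical standard deviation (n-1 divisor)

alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96, Normal approximation
half = z * s / n ** 0.5                   # half-width of the interval
print(f"{m:.3f} +/- {half:.3f}")
```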

  37. Theory of sampling – 3. Percentiles of the t-distribution with n degrees of freedom. A specific value of α corresponds to a probability mass of α/2 in each tail of the t-distribution. When n is large, we may use the percentiles of the Normal distribution.

  38. Theory of sampling – 4. Example 13.2.1: mean value of the sample; corrected empirical variance.

  39. Continuous measurement – 1. Time intervals marked by full lines are measured in cases a. and b., respectively. Relationships valid for stochastic sums may in principle be applied if the measuring period is unlimited. Application in practice is possible with precaution!

  40. Stochastic sum – 1. From the mathematical point of view the evaluation of the measurement consists of calculating the sum of a random number of random variables. Service without congestion; the arrival process and the holding times are independent. The number of requests arriving in the given period of length T is a random variable N. Ti gives the holding time of the i-th incoming request; the Ti are identically distributed. Total traffic in T: ST = T1 + T2 + … + TN. See 2.3.3 of the Textbook.

  41. Stochastic sum – 2. The problem in graphical form: a stochastic sum may be interpreted as a series/parallel combination of random variables. Ti and N are stochastically independent.

  42. Stochastic sum – 3. For a given i-th branch: mean value m1, variance σ², second non-central moment m2 = σ² + m1². For all branches together: E{ST} = E{N}·m1 and Var{ST} = E{N}·σ² + Var{N}·m1², where the first variance term arises since the holding time is an r.v. and the second since the number of holding times is an r.v.
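The stochastic-sum moment formulas E{S} = E{N}·m1 and Var{S} = E{N}·σ² + Var{N}·m1² can be verified by exact enumeration with small discrete distributions (the distributions below are illustrative assumptions):

```python
# Numeric check of the stochastic-sum moments using exact enumeration.
# pN and pT are made-up discrete distributions for illustration.
from itertools import product

pN = {0: 0.2, 1: 0.5, 2: 0.3}          # distribution of N (number of terms)
pT = {1.0: 0.5, 3.0: 0.5}              # distribution of each holding time T

mN = sum(n * p for n, p in pN.items())               # E{N}
vN = sum(n * n * p for n, p in pN.items()) - mN ** 2 # Var{N}
mT = sum(t * p for t, p in pT.items())               # m1
vT = sum(t * t * p for t, p in pT.items()) - mT ** 2 # sigma^2

# Exact distribution of S = T1 + ... + TN by enumerating all outcomes.
pS = {}
for n, pn in pN.items():
    for ts in product(pT, repeat=n):
        s = sum(ts)
        w = pn
        for t in ts:
            w *= pT[t]
        pS[s] = pS.get(s, 0.0) + w

mS = sum(s * p for s, p in pS.items())
vS = sum(s * s * p for s, p in pS.items()) - mS ** 2

print(mS, mN * mT)                     # both equal E{N}*m1
print(vS, mN * vT + vN * mT ** 2)      # both equal E{N}*sigma^2 + Var{N}*m1^2
```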

  43. Continuous measurement – 2. Applying the relationships of the "stochastic sum": • The amount of traffic (holding times) and the traffic intensity (arrival of requests) should be independent. This is fulfilled if congestion is negligible. • Assumption: Poisson input process. The number of requests in an interval of length T is Poisson distributed with mean λT. Thus for the required service time, i.e. the traffic: E{ST} = λT·m1 and Var{ST} = λT·m2, where m1 and m2 are the first and second non-central moments of the holding time, and ε = m2/m1² is Palm's form factor.

  44. Continuous measurement – 3. The distribution of the stochastic sum ST is a compound Poisson distribution; ST represents the amount of traffic. The average value of the number of occupied servers represents the traffic intensity = the amount of traffic per time unit, if the time unit equals the average holding time. Valid for any holding time distribution!

  45. Continuous measurement – 4. Relative accuracy of the measurement: one factor is independent of the holding time distribution, another depends on the holding time distribution (see the following Figure). The measurement of smaller intensities is more precise!

  46. Result of holding time measurements Figure 1.18: Frequency function for holding times of trunks in a local switching centre.

  47. Scanning measurement – 1. Constant scanning interval (h) The continuously distributed holding time is approximated with a discretely distributed holding time. The continuous time intervals may overlap. Estimation is therefore more difficult.

  48. Scanning measurement – 2. If the real holding times have a distribution function F(t), then it can be shown that one will observe the following discrete distribution: It can also be shown that for any distribution function F(t) one will always observe the correct mean value:
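The claim that scanning reproduces the correct mean value can be illustrated by a seeded simulation sketch: a holding time starting at a uniformly random phase within a scan interval h is observed as (number of scan instants it covers)·h. The parameters below are illustrative assumptions:

```python
# Simulation of the scanning principle for exponential holding times.
# h and mean_hold are made-up illustration values.
import math, random

random.seed(42)
h = 1.0            # constant scanning interval
mean_hold = 3.0    # true mean of the (exponential) holding time

n = 200_000
total = 0.0
for _ in range(n):
    x = random.expovariate(1.0 / mean_hold)      # holding time
    u = random.uniform(0.0, h)                   # phase to the first scan instant
    scans = max(0, math.floor((x - u) / h) + 1)  # scan instants inside (0, x]
    total += scans * h                           # observed (discretized) duration

estimate = total / n
print(round(estimate, 3))   # close to the true mean holding time
```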

  49. Scanning measurement – 3. For exponential holding times the so-called Westerberg distribution is observed:
