
Modern Queueing Theory, and Internet Address Allocation

Presentation Transcript


  1. Modern Queueing Theory, and Internet Address Allocation. Clean Slate Workshop, March 21, 2007. Balaji Prabhakar

  2. 21st Century Queueing Theory, and Internet Address Allocation. Clean Slate Workshop, March 21, 2007. Balaji Prabhakar

  3. Overview • I will talk about two projects • Scalable queueing theory for Internet flows • Current theoretical models are limited: (i) they are packet-centric, (ii) they do not model the “feedback loop” of TCP dynamically, and (iii) they rely on strong assumptions which allow a detailed analysis but also make them fragile (assumption-dependent). • IPv6 address allocation • Addresses are currently allocated without taking into account a user’s potential growth rate. This leads to address fragmentation and complicates the address lookup problem. • I will describe new approaches to both problems.

  4. Overview of flow models project • Themes • Flows in a packet-switched Internet • View a flow as the minimum data unit • Algorithms for enabling the cheap recognition of flows • Processing flows, not packets, at high speed • Flow-level models • Current theoretical models have two drawbacks: (i) they are packet-centric, (ii) they do not model the “feedback loop” of TCP dynamically • We study models which address the above drawbacks • There are some interesting and serious theoretical challenges • Scalable Network (Queueing) Theory • Classical queueing theory makes strong assumptions which allow detailed analysis • However, it is too fragile (assumption-dependent) and scales poorly • We propose a new approach

  5. Overview of Project • People • PIs: Amin Saberi and Balaji Prabhakar • Students: Mohsen Bayati, Abdul Kabbani, Yi Lu • Visitors/collaborators: Ashvin Lakshmikantha (UIUC), Maury Bramson (Minnesota), Devavrat Shah (MIT), John Tsitsiklis (MIT) • Industry collaborators (flow detection, processing) • Flavio Bonomi and Rong Pan, Cisco Systems

  6. What is a flow-level model? • First, what is a packet-level model? • Minimum data unit: packet; i.e. decisions are made per packet • Classical (20th century) queueing theory is cast in this setup • M/M/1 queue, Jackson networks, Kelly networks, etc. • Used extensively from the late 80s onwards • To model packet radio networks and switches • E.g. Tassiulas and Ephremides; McKeown, Anantharam and Walrand • Very successful for determining maximum-throughput algorithms and obtaining delay/backlog bounds • More recently used by Gupta and Kumar for determining how throughput scales with network size in ad hoc wireless networks
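
A minimal packet-level sketch (not from the talk): simulate an M/M/1 queue with the Lindley recursion and compare the empirical mean sojourn time against the classical formula 1/(mu - lam). The arrival and service rates below are illustrative.

    import random

    def mm1_mean_sojourn(lam=0.8, mu=1.0, n_packets=200_000, seed=1):
        """Packet-level M/M/1 queue: Poisson(lam) arrivals, exponential(mu)
        service, FIFO.  Returns the empirical mean sojourn (delay) time."""
        rng = random.Random(seed)
        t_arrival = 0.0   # arrival time of the current packet
        t_free = 0.0      # time at which the server next becomes free
        total = 0.0
        for _ in range(n_packets):
            t_arrival += rng.expovariate(lam)      # next Poisson arrival
            start = max(t_arrival, t_free)         # wait if the server is busy
            t_free = start + rng.expovariate(mu)   # exponential service
            total += t_free - t_arrival            # sojourn = waiting + service
        return total / n_packets

    if __name__ == "__main__":
        print("simulated mean sojourn:", mm1_mean_sojourn())   # close to 5.0
        print("theory 1/(mu - lam)   :", 1.0 / (1.0 - 0.8))    # exactly 5.0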

  7. Switch scheduling • [Figure: a 3×3 crossbar switch and its queueing model] • Crossbar constraints • each input can connect to at most one output • each output can connect to at most one input

  8. Scheduling algorithms • [Figure: example queue occupancies served by three matching policies] • Max Weight Matching: stable (Tassiulas-Ephremides 92, McKeown et al. 96, Dai-Prabhakar 00) • Max Size Matching: not stable (McKeown-Anantharam-Walrand 96) • Practical Maximal Matchings: not stable
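
As a rough illustration of the three policies (the queue values below are made up, not the figure's), the sketch schedules one crossbar time slot from a matrix of virtual-output-queue occupancies; SciPy's assignment solver stands in for the max-weight and max-size matching computations.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Illustrative 3x3 virtual-output-queue occupancies Q[i][j]:
    # packets at input i destined to output j (not the figure's values).
    Q = np.array([[19,  3,  0],
                  [ 0, 21,  1],
                  [ 4,  0, 18]])

    def max_weight_matching(Q):
        """Serve the matching that maximizes total queue length (stable)."""
        rows, cols = linear_sum_assignment(Q, maximize=True)
        return [(i, j) for i, j in zip(rows, cols) if Q[i, j] > 0]

    def max_size_matching(Q):
        """Serve a matching covering the most non-empty queues (not stable)."""
        rows, cols = linear_sum_assignment((Q > 0).astype(int), maximize=True)
        return [(i, j) for i, j in zip(rows, cols) if Q[i, j] > 0]

    def greedy_maximal_matching(Q):
        """Practical maximal matching: greedily pick the longest remaining queue."""
        match, used_in, used_out = [], set(), set()
        for i, j in sorted(np.ndindex(*Q.shape), key=lambda ij: -Q[ij]):
            if Q[i, j] > 0 and i not in used_in and j not in used_out:
                match.append((i, j)); used_in.add(i); used_out.add(j)
        return match

    print("MWM:", max_weight_matching(Q))
    print("MSM:", max_size_matching(Q))
    print("GMM:", greedy_maximal_matching(Q))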

  9. Limitations of packet-level models • Open loop: Fails to incorporate end-to-end behavior • Scheduling and drop decisions on current packets affect future arrivals • Sees all packets as equal • Is unable to model short/long flows, short/long RTTs, etc • Fails to capture the flow-level bandwidth sharing that results from TCP/AQM interactions

  10. Flow-level models • This is more realistic: flows, of differing sizes, arrive at random times and are transferred through the network by the congestion management algorithms and transport protocols • Flow completion (transfer) time is the main quantity of interest: what is its mean? its variance? how does it depend on flow sizes? on network topology? on round-trip time? etc.

  11. A flow-level model of a 2-link network • [Figure: two links, L1 and L2, shared by several routes] • This type of model was introduced in the Internet context by Bonald and Massoulié • Active research area… • Bramson, Gromoll, Kelly, Shah, Williams, etc.

  12. Bandwidth sharing: Link capacity = 1 • Proportional fair sharing • Route 1 flows: 1/7 • Route 2 flow: 4/7 • Route 3 flows: 2/7 • Max-min fair sharing • Route 1 flows: 1/5 • Route 2 flow: 2/5 • Route 3 flows: 1/5
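
For concreteness, here is one standard way to compute a max-min fair allocation, by progressive filling (water-filling). The two-link, three-flow example at the end is an assumed topology, not necessarily the one behind the slide's numbers.

    def max_min_fair(routes, capacity):
        """Progressive filling: raise all unfrozen flows' rates at the same
        speed; when a link saturates, freeze the flows crossing it.
        routes[f] is the set of links used by flow f, capacity[l] the
        capacity of link l."""
        rate = {f: 0.0 for f in routes}
        frozen = set()
        while len(frozen) < len(routes):
            slack = {}
            for link, cap in capacity.items():
                users = [f for f in routes if link in routes[f]]
                active = [f for f in users if f not in frozen]
                if active:
                    slack[link] = (cap - sum(rate[f] for f in users)) / len(active)
            if not slack:                      # no link constrains the remaining flows
                break
            inc = min(slack.values())
            bottlenecks = {l for l, s in slack.items() if s == inc}
            for f in routes:
                if f not in frozen:
                    rate[f] += inc
                    if routes[f] & bottlenecks:
                        frozen.add(f)
        return rate

    # Two-link example: flow "a" crosses both links, "b" only L1, "c" only L2.
    routes = {"a": {"L1", "L2"}, "b": {"L1"}, "c": {"L2"}}
    print(max_min_fair(routes, {"L1": 1.0, "L2": 1.0}))

In this small example every flow ends up with rate 1/2; proportional fairness would instead give the two-link flow 1/3 and each single-link flow 2/3. With more flows sharing the links, the per-flow shares shrink toward fractions like those quoted on the slide.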

  13. More formally… • The general picture • Routes on left, links on right • Adjacency graph captures the network topology • Bandwidth sharing due to the combination of TCP/AQM • The queueing model • Flows arrive according to Poisson processes on routes • Flow sizes are arbitrarily distributed (possibly heavy-tailed) • Questions • Throughput: How many flows can be processed per unit time? • Delays: What is the flow delay due to a particular scheme? • What effect does network topology have? • What happens if we give more bandwidth to short flows? • What is the effect of scaling network size and/or link capacities?
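
The flavour of these questions already shows up in the single-bottleneck special case, sketched below with assumed parameters: flows arrive as a Poisson process, sizes are exponential, and all active flows share the link equally (processor sharing, a stand-in for TCP/AQM bandwidth sharing). The empirical mean flow completion time can be checked against the M/G/1-PS value E[S]/(1 - rho).

    import random

    def flow_level_ps(lam=0.5, mean_size=1.0, capacity=1.0,
                      n_flows=50_000, seed=7):
        """Flow-level model of a single bottleneck link: flows arrive as a
        Poisson(lam) process, sizes are exponential with mean mean_size, and
        all active flows share `capacity` equally (processor sharing).
        Returns the empirical mean flow completion time."""
        rng = random.Random(seed)
        t = 0.0
        active = {}                     # flow id -> [arrival time, remaining size]
        next_arrival = rng.expovariate(lam)
        arrived = finished = 0
        total_fct = 0.0
        while finished < n_flows:
            rate = capacity / len(active) if active else 0.0
            t_done = (t + min(rem for _, rem in active.values()) / rate
                      if active else float("inf"))
            t_arr = next_arrival if arrived < n_flows else float("inf")
            t_next = min(t_arr, t_done)
            for fid in list(active):                 # drain flows until next event
                active[fid][1] -= rate * (t_next - t)
                if active[fid][1] <= 1e-12:          # flow completed
                    total_fct += t_next - active[fid][0]
                    finished += 1
                    del active[fid]
            t = t_next
            if t >= t_arr:                           # admit the next flow
                active[arrived] = [t, rng.expovariate(1.0 / mean_size)]
                arrived += 1
                next_arrival = t + rng.expovariate(lam)
        return total_fct / n_flows

    if __name__ == "__main__":
        rho = 0.5 * 1.0 / 1.0                        # lam * mean_size / capacity
        print("simulated mean FCT:", flow_level_ps())
        print("M/G/1-PS formula  :", 1.0 / (1.0 - rho))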

  14. Need new theory • Classical QT: As in the books of Kleinrock and Kelly • Consider a network of queues, and assume • Poisson arrivals, exponential services, random routing • These assumptions give networks with good properties • Reversibility • Decomposability (independence or “product-form”) • Hence obtain • Throughputs, delays, backlogs, etc. • Almost all classical results are some variation of the above • Our situation: none of the above holds! • Main reason: a job (flow) requires service from multiple servers (links) simultaneously; this leads to loss of independence • This type of network was studied briefly in the earlier literature as “Whittle networks”

  15. The struggle for independence • Our approach: don’t consider small networks; let network sizes go to infinity • Then, amazingly, you get all the good properties, especially independence • The key feature needed: correlation decay • In these arguments, an “infinite-sized network” is one which has about 100 or so nodes; i.e., infinity is not too far away • Have applied this approach to the so-called “supermarket model” used to study load balancing • [Figure: an allocator dispatching jobs to parallel queues]
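
A minimal rendering of that supermarket model, with parameters assumed for illustration: each arriving job samples d queues uniformly at random and joins the shortest. Already at d = 2 the queue-length tail collapses from geometric (the d = 1 case) to roughly doubly exponential, which is the kind of large-system behaviour the correlation-decay arguments make rigorous.

    import random

    def supermarket(n_queues=50, lam=0.9, d=2, n_jobs=50_000, seed=3):
        """Supermarket model: jobs arrive as a Poisson(n_queues * lam) stream,
        each samples d queues uniformly at random and joins the shortest;
        every queue serves at exponential rate 1.  Returns the fraction of
        queues holding at least k jobs (k = 1..6), sampled at arrival
        instants (a valid time average by PASTA, up to the empty start)."""
        rng = random.Random(seed)
        q = [0] * n_queues
        tail = [0] * 7
        samples = arrivals = 0
        while arrivals < n_jobs:
            busy = sum(1 for x in q if x > 0)
            # embedded jump chain: the next event is an arrival with this probability
            if rng.random() < n_queues * lam / (n_queues * lam + busy):
                for k in range(1, 7):                 # record the tail before joining
                    tail[k] += sum(1 for x in q if x >= k)
                samples += n_queues
                i = min(rng.sample(range(n_queues), d), key=lambda j: q[j])
                q[i] += 1
                arrivals += 1
            else:                                     # departure from a busy queue
                i = rng.choice([j for j, x in enumerate(q) if x > 0])
                q[i] -= 1
        return [round(tail[k] / samples, 4) for k in range(1, 7)]

    if __name__ == "__main__":
        print("d=1:", supermarket(d=1))   # tail roughly lam**k (geometric)
        print("d=2:", supermarket(d=2))   # roughly lam**(2**k - 1): far shorter queues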

  16. Further work • The main idea is: consider the thermodynamic limit of large networks • The problem simplifies and becomes meaningful because correlations are local and die in the limit • Now, what type of a large network should we consider? • Internet: Power law graphs • Ad hoc networks: Geometric random graphs • Other random graphs, e.g. Bernoulli random graph • In this context we can answer the questions we previously asked about how bandwidth sharing algorithms, topology, flow size, RTT, etc, affect performance • There is a huge simulation component to ensure fidelity of models, understand accuracy with scaling, etc

  17. IPv6 Address Allocation

  18. Address Allocation • Address allocation sets the foundation of the network hierarchy • Therefore, it is important to have the right address allocation scheme • The goal of this work is to answer the question: • How should addresses be allocated for future networks with a clean slate design? • Collaborators: Mei Wang, Ashish Goel • This is essentially the work of Mei Wang • She has been working with (at) Cisco and with CNNIC to apply her growth-based allocation scheme; neat software is being developed, etc. • See her poster this afternoon for more details

  19. How are Addresses Allocated? • IANA: Internet Assigned Numbers Authority • RIR: Regional Internet Registries (ARIN, RIPE, APNIC, AfriNIC, LACNIC) • ISP/LIR: Local Internet Registries (ISPs) • EU, EU(ISP): End Users

  20. Definition – Address Collision • [Figure: a growing allocation A collides with the neighbouring allocation B]

  21. Definitions – Address Fragmentation • [Figure: allocations of A and B shown contiguous (not fragmented) vs. interleaved (fragmented)]

  22. Asia Pacific (APNIC) IPv4 Allocation Data Statistics • Number of customers is 21 • Address space is 225 • Number of requests is 197

  23. Current Practice • [Figure: order in which blocks are placed along the address space under (a) sequential and (b) bisection allocation] • Addresses allocated as prefixes; two methods • Sequential: allocate in any (first) open range of suitable size • Bisection: same as above, but more systematic; a candidate for IPv6 allocation
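
A toy sketch of the two methods on an 8-bit address space (chosen for readability; alignment to prefix boundaries is ignored). Sequential takes the first free range that fits; bisection, in the simplified form used here, carves each block out of the middle of the largest free interval so that room is left on both sides of every allocation.

    def sequential_allocate(free, size):
        """Sequential: carve the block off the low end of the first (lowest)
        free interval that fits.  `free` is a sorted list of [lo, hi) tuples."""
        for idx, (lo, hi) in enumerate(free):
            if hi - lo >= size:
                if hi - lo == size:
                    free.pop(idx)
                else:
                    free[idx] = (lo + size, hi)
                return (lo, lo + size)
        raise MemoryError("address space exhausted")

    def bisection_allocate(free, size):
        """Bisection (simplified): carve the block out of the middle of the
        largest free interval, leaving headroom on both sides."""
        idx, (lo, hi) = max(enumerate(free), key=lambda t: t[1][1] - t[1][0])
        if hi - lo < size:
            raise MemoryError("address space exhausted")
        mid = lo + (hi - lo - size) // 2
        free[idx:idx + 1] = [iv for iv in ((lo, mid), (mid + size, hi)) if iv[1] > iv[0]]
        return (mid, mid + size)

    # Toy 8-bit address space, four equal-size requests under each method.
    for allocate in (sequential_allocate, bisection_allocate):
        free = [(0, 256)]
        print(allocate.__name__, [allocate(free, 16) for _ in range(4)])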

  24. GAP: Growth-based Address Partitioning • [Figure: existing allocations A_i with growth rates R_i and block parameters L_i, S_i, for 1 ≤ i ≤ n; a new allocation A_(n+1) with growth rate R_(n+1)] • Take growth rates into account; maximize time to collision. • “A Growth-Based Address Allocation Scheme for IPv6”, Mei Wang, Networking, 2005.
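
A much-simplified sketch of the GAP idea; the exact rule is in the paper, and the linear-growth model, field names, and numbers below are assumptions made for illustration. Each existing customer grows upward at its own rate, and the newcomer is placed in the free gap, and at the point inside it, that maximizes the time until the first collision; splitting a gap in proportion to the two growth rates equalizes the neighbours' collision times.

    def gap_place(allocations, r_new, space_end):
        """Simplified growth-based placement: each existing customer occupies
        [start, start + size) and grows upward at `rate`; the new customer
        (growth rate r_new) is placed in the free gap that maximizes the time
        until either neighbour collides with it.  `allocations` must be sorted
        by start address; prefix alignment is ignored."""
        best = None
        for i, a in enumerate(allocations):
            gap_lo = a["start"] + a["size"]
            gap_hi = (allocations[i + 1]["start"] if i + 1 < len(allocations)
                      else space_end)
            gap = gap_hi - gap_lo
            if gap <= 0:
                continue
            # splitting the gap in proportion to the growth rates equalizes the
            # two collision times at gap / (rate_below + r_new)
            t_collide = gap / (a["rate"] + r_new)
            if best is None or t_collide > best[0]:
                best = (t_collide, gap_lo + gap * a["rate"] / (a["rate"] + r_new))
        return best    # (time to first collision, start address for the newcomer)

    # Toy example: a fast- and a slow-growing customer in a 1000-address space.
    existing = [
        {"start":   0, "size": 50, "rate": 10.0},
        {"start": 600, "size": 20, "rate":  1.0},
    ]
    print(gap_place(existing, r_new=2.0, space_end=1000))
    # -> places the newcomer above the slow grower, at roughly address 747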

  25. Performance Results • Theoretical proofs: • Sequential is optimal when sizes are static (known in advance) • GAP is online optimal • Simulations • Experiments with real data • Use IPv4 data: what if we had allocated addresses using GAP? • Growth rates derived empirically; no additional input was needed • Larger gains can be achieved if customers provide growth-rate estimates

  26. Comparison: Asia Pacific (APNIC) Allocation Data

  27. Comparison: China (CNNIC) Allocation Data

  28. Software for Address Allocation • Being jointly developed with Cisco and CNNIC • Provides visualization of address allocations • Simulates real allocations with dynamic requests • A platform for analysis of different algorithms

  29. Current Status & Future Work • Increasing interest from parties in various layers: • Regional registries, country registries, ISPs, large corporations • Economic incentive model for accurate estimation of growth-rates • Non-prefix based address allocation (clean slate) • Prefix assignment equivalent to assigning subtrees • The simplest non-prefix assignment would assign a pair of subtrees, or equivalently, an edge on the hypercube • Provider Independent (PI) address allocation
