
Telecommunication Systems and Technologies


Presentation Transcript


  1. Telecommunication Systems and Technologies

    Prof. Dr.-Ing. Georg Carle, Christian Grothoff, Ph.D., Chair for Network Architectures and Services, Institut für Informatik, Technische Universität München, http://www.net.in.tum.de
  2. Providing Multiple Classes of Service

  3. Providing Multiple Classes of Service
    Traditional Internet approach: making the best of best-effort service; a one-size-fits-all service model.
    Alternative approach: multiple classes of service; partition traffic into classes and let the network treat different classes of traffic differently (analogy: VIP service vs. regular service).
    Granularity: differential service among multiple classes, not among individual connections.
    History: ToS bits in the IP header.
  4. Delay Distributions
    [Figure: probability density function of packet delay, marking the propagation delay, the mean delay, the maximum delay, and the delay jitter in between.]
  5. Multiple Classes of Service: Scenario
    [Figure: hosts H1-H4 connected via routers R1 and R2 over a shared 1.5 Mbps link; congestion occurs at R1's output interface queue.]
  6. Scenario 1: Mixed FTP and Audio
    Example: a 1 Mbps IP phone and FTP or NFS traffic share a 1.5 Mbps link between R1 and R2.
    Bursts of FTP or NFS traffic can congest the router and cause audio loss; we want to give priority to audio over FTP.
    Principle 1: packet marking is needed for the router to distinguish between different classes, and a new router policy to treat packets accordingly.
  7. Principles for QoS Guarantees (more)
    What if applications misbehave (audio sends at a higher rate than declared)?
    Policing: force source adherence to bandwidth allocations.
    Marking and policing at the network edge: similar to the ATM UNI (User Network Interface).
    [Figure: 1 Mbps phone on a 1.5 Mbps link, with packet marking and policing at R1.]
    Principle 2: provide protection (isolation) for one class from others.
  8. Principles for QoS Guarantees (more)
    Allocating fixed (non-sharable) bandwidth to a flow: inefficient use of bandwidth if the flow doesn't use its allocation.
    [Figure: 1 Mbps phone and data traffic on a 1.5 Mbps link between R1 and R2, split into a 1 Mbps and a 0.5 Mbps logical link.]
    Principle 3: while providing isolation, it is desirable to use resources as efficiently as possible.
  9. Scheduling and Policing Mechanisms
    Scheduling: choose the next packet to send on the link.
    FIFO (first in, first out) scheduling: send in order of arrival to the queue (real-world example?).
    Discard policy: if a packet arrives at a full queue, which packet is discarded?
    Tail drop: drop the arriving packet. Priority: drop/remove on a priority basis. Random: drop/remove randomly.
  10. Scheduling Policies: More
    Priority scheduling: transmit the highest-priority queued packet.
    Multiple classes with different priorities; a packet's class may depend on marking or other header info, e.g. IP source/destination addresses, port numbers, etc.
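    A minimal Python sketch of strict-priority scheduling (the class indices and packet labels are illustrative, not from the lecture): the scheduler always serves the highest-priority non-empty queue.
      from collections import deque

      class PriorityScheduler:
          """Strict priority: a lower class is served only when all higher classes are empty."""
          def __init__(self, num_classes):
              self.queues = [deque() for _ in range(num_classes)]  # index 0 = highest priority

          def enqueue(self, packet, cls):
              self.queues[cls].append(packet)

          def dequeue(self):
              for q in self.queues:          # scan from highest to lowest priority
                  if q:
                      return q.popleft()
              return None                    # nothing queued

      sched = PriorityScheduler(num_classes=2)
      sched.enqueue("ftp-1", cls=1)
      sched.enqueue("audio-1", cls=0)
      print(sched.dequeue())                 # audio-1: the higher-priority class is served first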
  11. Scheduling Policies: Still More
    Round-robin scheduling: multiple classes; cyclically scan the class queues, serving one packet from each class (if available).
  12. Scheduling Policies: Still More
    Weighted Fair Queuing (WFQ): generalized round robin; each class gets a weighted amount of service in each cycle.
    When all classes have queued packets, class i receives the bandwidth fraction wi / Σj wj.
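    A small Python sketch of the WFQ bandwidth share (the class names and weights are assumed for illustration; the 1.5 Mbps link is taken from the scenario above): when all classes are backlogged, class i gets wi / Σj wj of the link.
      weights = {"gold": 4, "silver": 2, "bronze": 1}   # assumed per-class weights w_i
      link_bps = 1_500_000                              # 1.5 Mbps link as in the scenario above

      total = sum(weights.values())                     # sum over j of w_j
      for cls, w in weights.items():
          share = w / total                             # guaranteed fraction when all classes are backlogged
          print(f"{cls}: {share:.0%} of the link = {share * link_bps / 1e6:.2f} Mbps")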
  13. Policing Mechanisms
    Goal: limit traffic so that it does not exceed declared parameters.
    Three commonly used criteria:
    (Long-term) average rate: how many packets can be sent per unit time (in the long run). Crucial question: what is the interval length? 100 packets per second and 6000 packets per minute have the same average!
    Peak rate: e.g. 6000 packets per minute (ppm) average; 1500 packets per second (pps) peak rate.
    (Maximum) burst size: maximum number of packets sent consecutively.
  14. Policing Mechanisms
    Token bucket: limit input to a specified burst size and average rate.
    The bucket can hold b tokens, which limits the maximum burst size; tokens are generated at rate r tokens/sec unless the bucket is full.
    Over an interval of length t, the number of packets admitted is less than or equal to (r·t + b).
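    A minimal Python sketch of a token-bucket policer under these definitions (rate r tokens/sec, depth b tokens; the concrete numbers are only illustrative). Because refilling is capped at b, at most r·t + b packets are admitted over any interval of length t.
      import time

      class TokenBucket:
          def __init__(self, r, b):
              self.r = r                        # token generation rate (tokens per second)
              self.b = b                        # bucket size = maximum burst size
              self.tokens = b                   # bucket starts full
              self.last = time.monotonic()

          def allow(self, n=1):
              now = time.monotonic()
              # add tokens accrued since the last call, but never beyond the bucket size b
              self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
              self.last = now
              if self.tokens >= n:
                  self.tokens -= n
                  return True                   # conforming: admit the packet(s)
              return False                      # non-conforming: drop, mark, or delay

      bucket = TokenBucket(r=100, b=20)         # illustrative: 100 tokens/s, bursts of up to 20
      print(bucket.allow())                     # True, since the bucket starts full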
  15. Policing Mechanisms (more)
    Token bucket and WFQ combined provide a guaranteed upper bound on delay, i.e. a QoS guarantee.
    [Figure: arriving traffic with token rate r and bucket size b feeds a WFQ scheduler with per-flow rate R; the maximum delay is Dmax = b/R.]
    Example: with b = 7500 bytes (60,000 bits) and R = 1 Mbps, Dmax = 60 ms.
  16. IETF Differentiated Services
    Want "qualitative" service classes: "behaves like a wire"; relative service distinction: Platinum, Gold, Silver.
    Scalability: simple functions in the network core, relatively complex functions at edge routers (or hosts); in contrast to IETF Integrated Services, whose signaling and per-flow router state are difficult to maintain with a large number of flows.
    Don't define service classes; provide functional components to build service classes.
  17. Diffserv Architecture
    [Figure: marking with token rate r and bucket size b at the edge, scheduling in the core.]
    Edge router: per-flow traffic management; marks packets according to class, and as in-profile or out-of-profile.
    Core router: per-class traffic management; buffering and scheduling based on the marking applied at the edge; preference given to in-profile packets.
  18. Edge-Router Packet Marking
    Profile: pre-negotiated rate A, bucket size B; packet marking at the edge is based on the per-flow profile of user packets.
    Possible usage of marking:
    Class-based marking: packets of different classes are marked differently.
    Intra-class marking: the conforming portion of a flow is marked differently from the non-conforming one.
  19. Classification and Conditioning
    The packet is marked in the Type of Service (TOS) octet in IPv4, and in the Traffic Class octet in IPv6.
    6 bits are used for the Differentiated Services Code Point (DSCP) and determine the PHB that the packet will receive.
    2 bits can be used for congestion notification: Explicit Congestion Notification (ECN), RFC 3168.
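    A short Python sketch of how the TOS / Traffic Class octet splits into DSCP and ECN as described above (the sample octet value is just an example):
      def split_tos(tos_byte):
          dscp = tos_byte >> 2      # upper 6 bits select the PHB
          ecn = tos_byte & 0x03     # lower 2 bits: ECN codepoint (RFC 3168)
          return dscp, ecn

      # Example octet: DSCP 46 (Expedited Forwarding) with ECN codepoint ECT(0)
      print(split_tos(0b10111010))  # (46, 2)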
  20. Classification and Conditioning
    It may be desirable to limit the traffic injection rate of some class: the user declares a traffic profile (e.g. rate, burst size); traffic is metered, and shaped or dropped if non-conforming.
  21. Forwarding (PHB)
    A PHB (Per-Hop Behavior) results in a different observable (measurable) forwarding performance behavior.
    A PHB does not specify what mechanisms to use to ensure the required performance behavior.
    Examples: class A gets x% of the outgoing link bandwidth over time intervals of a specified length; class A packets leave before packets from class B.
  22. Forwarding (PHB)
    PHBs being developed:
    Expedited Forwarding: the packet departure rate of a class equals or exceeds a specified rate; a logical link with a minimum guaranteed rate.
    Assured Forwarding: e.g. 4 classes of traffic; each class is guaranteed a minimum amount of bandwidth and of buffering; packets in each class have one of three possible drop preferences; in case of congestion, routers discard packets based on the drop preference values.
  23. The Evolution of IP Routers

  24. First-Generation IP Routers
    [Figure: a shared backplane connects the line interfaces (MAC, DMA) to a central CPU and shared buffer memory.]
  25. Second-Generation IP Routers
    [Figure: line cards with local buffer memory (MAC, DMA) attached to a shared backplane with a central CPU and buffer memory.]
  26. Third-Generation Switches/Routers
    [Figure: a switched backplane interconnects line cards (with local buffer memory and MACs) and a CPU card with CPU and memory.]
  27. Fourth-Generation Switches/Routers: Clustering and Multistage
    [Figure: many line cards (numbered 1-32) interconnected by a clustered, multistage switch fabric.]
  28. Forwarding Implementations

    Principles. Credits: Nick McKeown, Stanford University.
  29. Forwarding Implementation (1): Associative Lookups
    Associative memory or CAM.
    [Figure: the 48-bit search data is compared against all stored network addresses in parallel; a hit returns the matching entry's address (log2 N bits) and its associated data.]
    Advantages: simple.
    Disadvantages: relatively slow, high power consumption, small, expensive.
  30. Forwarding Implementation (2): Hashing
    [Figure: the 48-bit search data is hashed to a 16-bit memory address; the memory entry yields a hit signal and the associated data.]
  31. Lookups Using Hashing: An Example
    [Figure: the 48-bit search data is hashed with CRC-16 to a 16-bit index into a memory of N linked lists holding M entries in total; the selected list is searched linearly for the matching key and its associated data.]
  32. Lookups Using Hashing
    Advantages: simple; the expected lookup time can be small.
    Disadvantages: non-deterministic lookup time; inefficient use of memory.
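    A small Python sketch of exact-match lookup by hashing with chained buckets, in the spirit of the example above (the 16-bit index, the hash stand-in for CRC-16 and the sample MAC address are assumptions):
      NUM_BUCKETS = 1 << 16                           # 16-bit hash index

      def hash16(key):
          return hash(key) & (NUM_BUCKETS - 1)        # stand-in for CRC-16 of the 48-bit key

      buckets = [[] for _ in range(NUM_BUCKETS)]      # N linked lists

      def insert(mac, port):
          buckets[hash16(mac)].append((mac, port))

      def lookup(mac):
          for key, port in buckets[hash16(mac)]:      # walk the chain: lookup time is non-deterministic
              if key == mac:
                  return port
          return None                                 # miss

      insert(0x001122334455, port=3)
      print(lookup(0x001122334455))                   # 3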
  33. Forwarding Implementation: Trees and Tries
    [Figure: a balanced binary search tree over N entries has depth log2 N; a binary search trie branches on one bit (0/1) per level, e.g. storing the keys 010 and 111.]
    A trie (from "retrieval") is a multi-way tree structure useful for storing strings over an alphabet.
  34. Simple Tries and Exact Matching in Ethernet
    Each address in the forwarding table is of fixed length. E.g. using a simple binary search trie, it takes 48 steps to resolve a MAC address lookup.
    When the table is much smaller than 2^48 (!), this seems like a lot of steps.
    Instead of matching one bit per level, why not m bits per level?
  35. Trees and Tries: Multiway Tries
    [Figure: a 16-ary search trie matching 4 bits per level; each node stores (nibble, ptr) pairs, with ptr = 0 meaning no children; example keys 000011110000 and 111111111111.]
  36. IP Forwarding

  37. IP Routers: CIDR
    [Figure: the 32-bit address space (0 to 2^32 - 1) divided class-based into classes A, B, C, D, versus classless, where the prefix 128.9/16 covers a block of 2^16 addresses starting at 128.9.0.0 and containing 128.9.16.14; another example prefix is 142.12/19.]
  38. IP Routers: CIDR
    [Figure: nested prefixes 128.9/16, 128.9.16/20, 128.9.176/20, 128.9.19/24 and 128.9.25/24 within the address space 0 to 2^32 - 1; the address 128.9.16.14 falls into several of them.]
    Most specific route = "longest matching prefix".
  39. IP Routers: Metrics for Lookups
    Example lookup of destination 128.9.16.14 against the forwarding table:
    Prefix        Port
    65/24         3
    128.9/16      5
    128.9.16/20   2
    128.9.19/24   7
    128.9.25/24   10
    128.9.176/20  1
    142.12/19     3
    Metrics: lookup time, storage space, update time.
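    A Python sketch of the longest-matching-prefix rule over this table (a plain linear scan, purely illustrative; prefixes are written out in full dotted-quad form):
      import ipaddress

      table = {
          "65.0.0.0/24": 3, "128.9.0.0/16": 5, "128.9.16.0/20": 2,
          "128.9.19.0/24": 7, "128.9.25.0/24": 10, "128.9.176.0/20": 1,
          "142.12.0.0/19": 3,
      }

      def lookup(dst):
          addr = ipaddress.ip_address(dst)
          best_port, best_len = None, -1
          for prefix, port in table.items():
              net = ipaddress.ip_network(prefix, strict=False)   # mask out any host bits
              if addr in net and net.prefixlen > best_len:       # keep the most specific match
                  best_port, best_len = port, net.prefixlen
          return best_port

      print(lookup("128.9.16.14"))   # 2: 128.9.16/20 is the longest matching prefix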
  40. IP Router: Lookup
    IPv4 unicast destination-address-based lookup.
    [Figure: the forwarding engine extracts the destination address from the incoming packet's header, the next-hop computation consults the forwarding table (destination → next hop), and the next hop is returned.]
  41. Required Lookup Performance
    Gigabit Ethernet (84 B packets on the wire): 1.49 Mpps.
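    A back-of-the-envelope check of the 1.49 Mpps figure in Python: a minimum-size Ethernet frame occupies 84 bytes on the wire (64 B frame + 8 B preamble + 12 B inter-frame gap).
      link_bps = 1e9                       # Gigabit Ethernet
      bytes_on_wire = 84                   # minimum-size frame including preamble and inter-frame gap
      pps = link_bps / (bytes_on_wire * 8)
      print(f"{pps / 1e6:.2f} Mpps")       # ~1.49 Mpps, i.e. one lookup every ~672 ns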
  42. Size of the Routing Table
    [Figure: growth of the BGP routing table over time; about 10k new prefixes per year; exponential growth before CIDR. Source: http://www.telstra.net/ops/bgptable.html]
  43. Method: CAMs (Content-Addressable Memories)
    Associative memory with value/mask entries and a priority encoder:
    Value      Mask             Next Hop
    10.0.0.0   255.0.0.0        R1
    10.1.0.0   255.255.0.0      R2
    10.1.1.0   255.255.255.0    R3
    10.1.3.0   255.255.255.0    R4
    10.1.3.1   255.255.255.255  R4
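    A Python sketch of the value/mask matching a CAM performs, with the priority encoder emulated by ordering the entries most-specific first and returning the first hit (the lookup address 10.1.3.7 is just an example):
      import ipaddress

      def to_int(dotted):
          return int(ipaddress.IPv4Address(dotted))

      entries = [                                        # (value, mask, next hop), most specific first
          ("10.1.3.1", "255.255.255.255", "R4"),
          ("10.1.3.0", "255.255.255.0",   "R4"),
          ("10.1.1.0", "255.255.255.0",   "R3"),
          ("10.1.0.0", "255.255.0.0",     "R2"),
          ("10.0.0.0", "255.0.0.0",       "R1"),
      ]

      def cam_lookup(dst):
          d = to_int(dst)
          for value, mask, next_hop in entries:          # "parallel" compare + priority encoder
              if d & to_int(mask) == to_int(value):
                  return next_hop
          return None

      print(cam_lookup("10.1.3.7"))                      # R4, via the 10.1.3.0 / 255.255.255.0 entry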
  44. Method: Binary Tries
    Example prefixes: a) 00001, b) 00010, c) 00011, d) 001, e) 0101, f) 011, g) 100, h) 1010, i) 1100, j) 11110000.
    [Figure: binary trie branching on 0/1 at each level, with the prefixes a)-j) stored at the corresponding nodes.]
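    A minimal Python binary-trie sketch over the example prefixes a)-j): one trie level per prefix bit, and lookup keeps the longest prefix seen along the path (the lookup key is illustrative).
      prefixes = {"00001": "a", "00010": "b", "00011": "c", "001": "d", "0101": "e",
                  "011": "f", "100": "g", "1010": "h", "1100": "i", "11110000": "j"}

      def make_node():
          return {"0": None, "1": None, "prefix": None}

      root = make_node()
      for bits, name in prefixes.items():
          node = root
          for bit in bits:                     # descend one level per bit, creating nodes as needed
              if node[bit] is None:
                  node[bit] = make_node()
              node = node[bit]
          node["prefix"] = name                # mark the node where the prefix ends

      def lpm(key):
          node, best = root, None
          for bit in key:
              node = node[bit]
              if node is None:
                  break
              if node["prefix"] is not None:
                  best = node["prefix"]        # remember the longest match so far
          return best

      print(lpm("01010111"))                   # "e": 0101 is the longest matching prefix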
  45. Four-Way Tries
    Reduced number of memory accesses, but greater wasted space...
  46. Method: Patricia Tree
    Example prefixes: a) 00001, b) 00010, c) 00011, d) 001, e) 0101, f) 011, g) 100, h) 1010, i) 1100, j) 11110000.
    [Figure: Patricia tree over these prefixes; one-way branches are collapsed, e.g. a skip of 5 bits on the path to prefix j.]
    Advantages: general solution; extensible to wider fields.
    Disadvantages: many memory accesses; pointers take a lot of space (total storage for 40K entries is 2 MB).
  47. Method: Compacting Forwarding Tables
    Optimize the data structure to store 40,000 routing table entries in about 150-160 kBytes.
    Rely on the compacted data structure residing in the primary or secondary cache of a fast processor; achieves e.g. 2 Mpps on a Pentium.
    Advantages: good software solution for low speeds and small routing tables.
    Disadvantages: only 60% actually cached; scalability to larger tables; handling updates is complex.
  48. Forwarding Decisions

    Packet Classification
  49. Providing Value-Added Services: Some Examples
    Differentiated services: regard traffic from AS#33 as 'platinum-grade'.
    Access control lists: deny udp host 194.72.72.33 194.72.6.64 0.0.0.15 eq snmp.
    Committed access rate: rate-limit WWW traffic from sub-interface#739 to 10 Mbps.
    Policy-based routing: route all voice traffic through the ATM network.
    Peering arrangements: restrict the total amount of traffic of precedence 7 from MAC address N to 20 Mbps between 10 am and 5 pm.
    Accounting and billing: generate hourly reports of traffic from MAC address M.
  50. Flow Classification
    [Figure: the forwarding engine passes the incoming packet's header to flow classification, which consults a policy database of (predicate, action) rules and returns a flow index.]
  51. Trend Analysis: High-Performance Networking and High-Performance Computing

    Prof. Dr.-Ing. Georg Carle, carle@in.tum.de, http://www.net.in.tum.de/~carle, Chair for Network Architectures and Services, Technische Universität München
  52. Outline
    Background: sources of packet delay.
    Impact analysis: advances in network technology; advances in switching; advances by virtualisation and parallelization; advances in communication software.
    Questions on communication software, on virtualisation, and on cloud computing.
  53. Motivation
    Network technology advances: networking is an important driving factor for HPC data centers; recent advances: 40 Gbit/s, with Infiniband or Ethernet.
    Networking advances from future Internet concepts: innovative node concepts, e.g. OpenFlow switches; network virtualisation as a key concept of the future Internet.
    Cloud concepts for HPC: interconnection performance is a key factor; virtualisation approaches, including network virtualisation, are important; capability of transparently live-migrating virtual machines.
  54. Background: Sources of Packet Delay
    1. Processing delay: sending: prepare data for transmission; receiving: interrupt handling.
    2. Queueing delay: time waiting at the output link for transmission.
    3. Transmission delay: R = link bandwidth (bps), L = packet length (bits); time to push the bits onto the link = L/R.
    4. Propagation delay: d = length of the physical link, s = propagation speed in the medium (~2x10^8 m/sec); propagation delay = d/s.
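    A small Python sketch adding up the four delay components for a single hop; all numbers are illustrative assumptions (a 40 Gbit/s link and a short in-data-center cable), not values from the lecture.
      L = 1500 * 8          # packet length in bits
      R = 40e9              # link bandwidth: 40 Gbit/s
      d = 30.0              # physical link length in metres
      s = 2e8               # propagation speed in the medium (m/s)
      d_proc = 2e-6         # assumed processing delay (s)
      d_queue = 0.0         # assumed empty queue

      d_trans = L / R       # transmission delay = L/R
      d_prop = d / s        # propagation delay = d/s
      total = d_proc + d_queue + d_trans + d_prop
      print(f"transmission {d_trans*1e9:.0f} ns, propagation {d_prop*1e9:.0f} ns, total {total*1e6:.2f} us")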
  55. Impact Analysis: Advances in Network Technology
    Assessment: transmission delay becomes less important; distance becomes more important and matters for communication beyond the data center; network adapter latency becomes less important; low-latency communication software becomes important.
  56. Advances in Switching
    Example: OpenFlow switch architecture, Stanford University.
    [Figure: a controller PC talks to the OpenFlow switch over the OpenFlow protocol via an SSL secure channel (software); the flow table resides in hardware.]
    Concept: separation of switch fabric and switch control; allows for cheap switches, centrally controlled by a switch manager.
    Assessment: suitable for low-latency data center communication.
    The Stanford Clean Slate Program, http://cleanslate.stanford.edu
  57. Advances by Virtualisation and Parallelization
    Disruptive technologies: virtualisation & multicore architectures in the network.
    Drivers for virtualisation: 1) complexity: virtualisation allows hiding complexity if it is done right (problem: the right level of abstraction); 2) new management principles: network management is a driver for virtualisation.
    What about multicore and networking? In the future there may be dozens or hundreds of cores; need to expose parallel processing; affects the way protocols are designed; need to provide ways to access flow state.
  58. Advances in Communication Software
    Wire-speed packet capture and transmission on Linux:
    PF_RING: network socket for high-speed packet capturing.
    NAPI (New API): interrupt mitigation techniques for networking devices in the Linux kernel.
    TNAPI: multithreaded NAPI.
    Issues: how to make advances in packet capturing available to general-purpose applications? How to assess advances in communication software for other OSes?
  59. Questions on Network Technology
    How does reduced latency from transmission delay change the overall picture?
    What are the roles of the different network technologies: HPC-based (Infiniband), standards-based (Ethernet), future-Internet-based (OpenFlow switching)?
  60. Questions on Communication Software
    Software issues: which is the crucial software for low-latency communication: the network adapter driver, the protocol mechanisms, the protocol implementation, or the implementation of the user-space-to-kernel-space interface? Which advances in communication software can we expect: in the network adapter driver, protocol mechanisms, protocol implementation, or the operating-system-to-application interface?
    Multicore parallelisation: what is the impact of multicore, and what does it change? Should we change specific protocols for it? Do we need specific tools?
  61. Questions on Virtualisation
    Virtualisation: an important technique for data centers and for public or private clouds; possibility of live migration of virtual machines across multiple networks or data centers, for purposes such as performance optimisation or load balancing within and across data centers.
    Questions: what overhead (latency!) does virtualisation impose? Is there sufficient benefit in migrating virtual machines?