
Course Review: Part 1 EECS 122: Lecture 29



  1. Course Review: Part 1 EECS 122: Lecture 29 Department of Electrical Engineering and Computer Sciences University of California Berkeley

  2. Today • Cover the main points from the following topics • Router Lookup and Scheduling • QoS • Distributed Algorithms • Multicast • Overlay Networks • Error Correction • Physical Layer • Goal: Emphasize what is important • For topics before the midterm, see the slides from the earlier review

  3. Review: Switch Architectures [Figure: input interfaces and output interfaces connected by a backplane] • Input Queued • Faster switch • Two congestion points • Head-of-line (HOL) blocking • Output Queued • Slower switch • Backplane speedup is N • One point of congestion

  4. How a router manages traffic [Figure: input interfaces and output interfaces connected by a backplane] • Fast routing table lookups • Exact match and LPM • How to resolve router congestion? • Assume output queued • Focus on scheduling • Already covered packet dropping • End-to-end performance yields QoS

  5. LPM in IP Routers: "Patricia" trie • Example prefixes: a) 00001 b) 00010 c) 00011 d) 001 e) 0101 f) 011 g) 100 h) 1010 i) 1100 j) 11110000 [Figure: binary (Patricia) trie over these prefixes; single-child chains are path-compressed, e.g. a "Skip 5" edge on the path to prefix j] (slide: Nick McKeown)
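
A minimal sketch (my own illustration, not the lecture's reference code) of longest-prefix match using a plain binary trie built from the example prefixes above. A real Patricia trie additionally path-compresses single-child chains (the "Skip" edge in the figure); this sketch omits that for clarity.

```python
class TrieNode:
    def __init__(self):
        self.children = {}      # '0' or '1' -> TrieNode
        self.prefix = None      # prefix label stored at this node, if any

def insert(root, bits, label):
    node = root
    for b in bits:
        node = node.children.setdefault(b, TrieNode())
    node.prefix = label

def longest_prefix_match(root, addr_bits):
    node, best = root, None
    for b in addr_bits:
        if b not in node.children:
            break
        node = node.children[b]
        if node.prefix is not None:
            best = node.prefix  # remember the deepest matching prefix so far
    return best

root = TrieNode()
for label, bits in [("a", "00001"), ("b", "00010"), ("c", "00011"),
                    ("d", "001"),   ("e", "0101"),  ("f", "011"),
                    ("g", "100"),   ("h", "1010"),  ("i", "1100"),
                    ("j", "11110000")]:
    insert(root, bits, label)

print(longest_prefix_match(root, "00110"))   # 'd' (prefix 001)
print(longest_prefix_match(root, "10100"))   # 'h' (prefix 1010)
```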

  6. Fairness • Suppose there are N applications sharing the network • Each application j can express its satisfaction with network performance in terms of Uj(r), where r = (r1,…,rN) is the vector of rates allocated to the applications • The most common utility function is Uj(r) = rj for all j • If the user can't use more than a rate of C, then Uj(r) = C for all rj ≥ C • What is a fair allocation of the throughputs? • Fairness is vague, so there are many definitions • Maximize the sum of utilities: may penalize some apps • Max-min fair • Know how to compute this for applications that traverse multiple links (hw); a single-link sketch follows below
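
A minimal sketch (my own illustration) of max-min fair allocation on a single link of capacity C via progressive filling: repeatedly split the remaining capacity equally among the applications whose demands are not yet satisfied. The multi-link case from the homework generalizes this idea by freezing flows as their bottleneck links saturate.

```python
def max_min_fair(capacity, demands):
    alloc = [0.0] * len(demands)
    remaining = capacity
    unsat = set(range(len(demands)))          # applications not yet satisfied
    while unsat and remaining > 1e-12:
        share = remaining / len(unsat)        # equal share of what is left
        for i in list(unsat):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if alloc[i] >= demands[i] - 1e-12:
                unsat.remove(i)               # demand met: stop filling it
    return alloc

# Example: C = 10, demands 2, 4, 10 -> max-min fair allocation ~ [2, 4, 4]
print(max_min_fair(10, [2, 4, 10]))
```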

  7. Network Effects on Traffic from a Single Flow [Figure: cumulative bits vs. time at the source and at routers 1 through n; the vertical gap between curves is the bits in the network, the horizontal gap is delay, and the service function at router 1 is the arrival function at router 2]

  8. How to control delay? • Overprovision the network • Maybe one day for voice and video but not today • Circuit switch • Packet switch with more state than a datagram network • Try to be smart about managing limited resources • Per flow? • Per aggregate class?

  9. Scheduling • Scheduling, in conjunction with packet dropping, controls performance within a router • Scheduling mechanisms determine how the bandwidth of an output port is shared • Mainly used to manage delay • Scheduling is only effective if packet buffers are large enough to matter [Figure: model of router queues; arrival streams A1(t)…AM(t) feed per-flow queues with reserved rates R(f1)…R(fM), and a scheduling discipline produces the departure process D(t) from the aggregate arrivals A(t)]

  10. Generalized Processor Sharing • Each class j has a weight wj • If there is traffic to be served from class j at time t, we say that class j is backlogged at time t • B(t) is the set of backlogged classes at time t • W(t) is the total weight of the backlogged classes at time t • If there is even one backlogged class, the server operates at rate C • This is called being work conserving • Serve the classes in proportion to their weights • If class j is backlogged at time t, give it service rate Sj(t) = [wj / W(t)] · C (see the sketch below) • GPS has desirable properties
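
A minimal sketch (my own illustration) of the GPS service-rate rule Sj(t) = [wj / W(t)] · C. Note the work-conserving behavior: when only one class is backlogged, it receives the full link rate C.

```python
def gps_rates(weights, backlogged, C):
    """weights: dict class -> weight; backlogged: set of backlogged classes."""
    W = sum(weights[j] for j in backlogged)          # W(t)
    return {j: (weights[j] / W) * C if j in backlogged else 0.0
            for j in weights}

weights = {1: 1.0, 2: 1.0, 3: 2.0}
print(gps_rates(weights, backlogged={1, 2, 3}, C=10))  # {1: 2.5, 2: 2.5, 3: 5.0}
print(gps_rates(weights, backlogged={2}, C=10))        # work conserving: class 2 gets all 10
```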

  11. Generalized Processor Sharing: example [Figure: two classes with equal weights (w1 = w2); the resulting delays are 5 and 4, with backlogged sets B(2) = {1,2} and B(4) = {2}]

  12. Adjust weights to change delay [Figure: the same two classes with 3w1 = w2; the delays become 4 and 5, with backlogged sets B(2) = {1,2} and B(4) = {1}]

  13. Weighted Fair Queueing • Work conserving • Tracks GPS • Simulates, in real time, how the arrivals would be served in a GPS system • Serves entire packets (only one class at a time) • Let Fp be the time that the last bit of packet p departs the simulated GPS system • WFQ attempts to serve packets in order of Fp • Also called "Packetized Generalized Processor Sharing" (PGPS)
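
The sketch below is my own simplification, not the exact PGPS/WFQ algorithm from the lecture: instead of simulating GPS in real time to obtain Fp, it uses a self-clocked finish tag (finish[f] = max(finish[f], tag of the packet now in service) + length/weight) and always transmits the queued packet with the smallest tag. It illustrates the "serve in order of Fp" rule and the weighted sharing it produces.

```python
import heapq

class WFQ:
    def __init__(self, weights):
        self.weights = weights                  # flow -> weight
        self.finish = {f: 0.0 for f in weights} # last finish tag per flow
        self.queue = []                         # heap of (tag, seq, flow, length)
        self.seq = 0
        self.now_tag = 0.0                      # tag of the packet in service

    def enqueue(self, flow, length):
        tag = max(self.finish[flow], self.now_tag) + length / self.weights[flow]
        self.finish[flow] = tag
        heapq.heappush(self.queue, (tag, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        if not self.queue:
            return None
        tag, _, flow, length = heapq.heappop(self.queue)
        self.now_tag = tag
        return flow, length

wfq = WFQ({"A": 3.0, "B": 1.0})
for _ in range(2):
    for _ in range(3):
        wfq.enqueue("A", 1000)
    wfq.enqueue("B", 1000)
# Flow A has 3x the weight, so about three A packets are served per B packet.
while (pkt := wfq.dequeue()) is not None:
    print(pkt)
```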

  14. QoS • QoS building blocks: • Application describes its required level of QoS (token buckets) • Resource reservation (RSVP) • Per-hop (router) mechanisms • What efforts are underway to move towards QoS in the Internet? • IntServ • DiffServ

  15. The Token Bucket [Figure: tokens arrive at rate ρ (the average rate) into a bucket of size σ (the burstiness); one byte (or packet) is sent per token, and packets wait in a packet buffer until tokens are available] • Conforming traffic satisfies A(s,t) ≤ σ + ρ(t−s) (slide: Nick McKeown)
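
A minimal sketch (my own illustration) of the conformance test implied by the envelope A(s,t) ≤ σ + ρ(t−s): tokens accrue at rate ρ up to a depth of σ, and a packet conforms only if enough tokens are available when it arrives.

```python
class TokenBucket:
    def __init__(self, sigma, rho):
        self.sigma = sigma      # bucket depth in bytes (burstiness)
        self.rho = rho          # token fill rate in bytes/second (average rate)
        self.tokens = sigma     # start with a full bucket
        self.last = 0.0         # time of the last update

    def conforms(self, t, length):
        # Accumulate tokens since the last check, capped at the bucket depth.
        self.tokens = min(self.sigma, self.tokens + self.rho * (t - self.last))
        self.last = t
        if length <= self.tokens:
            self.tokens -= length
            return True         # within the (sigma, rho) envelope
        return False            # would violate the envelope: drop, mark, or delay

tb = TokenBucket(sigma=1500, rho=1000)       # 1500-byte burst, 1000 B/s average
print(tb.conforms(0.0, 1500))                # True: uses the initial burst
print(tb.conforms(0.5, 1000))                # False: only 500 tokens have accrued
print(tb.conforms(1.5, 1000))                # True: 500 + 1000 tokens by t = 1.5
```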

  16. How the user/flow can conform to the token bucket [Figure: a variable bit-rate source is shaped by a token bucket (rate ρ, size σ) before entering the network; cumulative-bytes vs. time plots show the shaped output satisfying A(s,t) ≤ σ + ρ(t−s)] (slide: Nick McKeown)

  17. Result • If all of the routers do per-flow WFQ, then explicit worst-case end-to-end delay bounds are possible for a variety of different weighting schemes • Special case of small packets (GPS): there are N flows, where flow j has token bucket (σj, ρj). Set the weight of flow j to ρj at each router. Then the worst-case delay for flow j is less than σj/ρj • The backlog clearing rate is at least ρj at each router
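
A small worked example, with made-up numbers, of the bound on this slide: with flow j's weight set to ρj at every router, its worst-case end-to-end delay is at most σj/ρj, independent of the number of hops.

```python
flows = {                       # hypothetical flows: name -> (sigma bytes, rho bytes/sec)
    "voice": (2_000, 64_000),
    "video": (100_000, 1_000_000),
}
for name, (sigma, rho) in flows.items():
    print(f"{name}: worst-case delay <= sigma/rho = {sigma / rho * 1000:.2f} ms")
# voice: <= 31.25 ms, video: <= 100.00 ms
```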

  18. Resource Reservation Protocol: RSVP • Establishes end-to-end reservations over a datagram network • Sources • Instead of sending to one receiver, send to a GROUP • RSVP was designed with multicast in mind • Send the TSPEC to the group • Receivers • Join a "group" associated with the sender • Initiate reservations in the network • Network • Establishes the group • Helps determine the path • Learns the TSPEC • Responds to receiver-based reservations

  19. RSVP Basic Operations • Two message types: PATH and RESV • Sender sends PATH message via the data delivery path • TSPEC – use token bucket • Set up the path state on each router, including the address of the previous hop • Receiver sends RESV message on the reverse path • Specify the reservation style and QoS desired • Queueing delay and bandwidth requirements • Source traffic characteristics (from PATH) • Set up the reservation state at each router

  20. Intserv Node Architecture • Kinds of service: Guaranteed, Controlled Load, Best Effort [Figure: in the control plane, routing messages feed a routing agent and RSVP messages feed an RSVP agent with admission control; these maintain the forwarding table and per-flow QoS table used by the data plane, where incoming data passes through route lookup, a classifier, and a scheduler before being sent out]

  21. Diffserv Architecture: it's all about the domain • Ingress (edge) routers • Police/shape traffic • Set the Differentiated Services Code Point (DSCP) in the Diffserv (DS) field • Core routers • Implement a Per-Hop Behavior (PHB) for each DSCP • Process packets based on DSCP • Kinds of service: Premium, Assured, Best Effort [Figure: two Diffserv domains, DS-1 and DS-2, each with ingress and egress edge routers surrounding core routers]

  22. Comparison

  23. Distributed Algorithms • Focus on the algorithms behind protocols • How to move from centralized to distributed algorithms • Synchronous and asynchronous computation • Why does the asynchronous Bellman-Ford converge? • What are the effects of a changing topology on algorithm design? • How can protocols be designed to protect against dishonest nodes?

  24. Synchronous vs. Asynchronous Algorithms • Synchronous algorithms can be described in terms of global iterations; the time taken for a given iteration is the time taken by the slowest processor to complete that iteration: time driven • E.g. TDM or SONET • Asynchronous algorithms execute at a processor based on received messages and internal state: event driven • E.g. IP protocols, which must run over heterogeneous systems • Study how this applies to Bellman-Ford

  25. Synchronization Penalty [Figure: a small six-node example network and synchronous iteration timelines for nodes 1, 5, and 6, showing long idle periods in each slot] • Slot size is affected by the slow (1,6) link • The penalty can be huge!

  26. Local Synchronization • Send update k after you've heard update k−1 from all neighbors [Figure: the same example network with iteration timelines for nodes 3, 4, and 5, including idle periods]

  27. Asynchronous Computation • No notion of "slot size" [Figure: the same example network with event-driven timelines for nodes 1, 5, and 6; node 1 performs many more updates than the others] • What paths will this compute?
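
The asynchronous Bellman-Ford that these slides refer to can be sketched as below. The topology, costs, and update order here are my own made-up example; asynchrony is modeled crudely by letting a randomly chosen node recompute its estimate from whatever its neighbors currently advertise, which is enough to see that the estimates still converge to shortest paths.

```python
import random

# Undirected link costs for a small hypothetical topology.
links = {(1, 2): 3, (2, 3): 2, (3, 4): 1, (1, 4): 4, (4, 5): 1, (1, 5): 6}
cost = {}
for (u, v), c in links.items():
    cost[(u, v)] = cost[(v, u)] = c
nodes = {u for edge in links for u in edge}
neighbors = {u: [v for v in nodes if (u, v) in cost] for u in nodes}

DEST = 5
dist = {u: (0 if u == DEST else float("inf")) for u in nodes}

# Event-driven loop: a random node updates d(v) = min_u [cost(v,u) + d(u)].
random.seed(0)
for _ in range(200):
    u = random.choice(list(nodes - {DEST}))
    dist[u] = min(cost[(u, v)] + dist[v] for v in neighbors[u])

print(dist)   # converges to the shortest-path distances to node 5:
              # node 1 -> 5, node 2 -> 4, node 3 -> 2, node 4 -> 1
```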

  28. Soft State • State with a time-out • Example: a host joins a group by sending a "join" message to a "host manager". The manager adds the host to the group for the next T seconds. If the host wants to stay in the group, it must send a refresh message within T seconds to the manager; otherwise it is dropped • Advantage: the manager is robust to host failure • Disadvantage: too many messages • Most Internet protocols use this way of communicating • Trades off simplicity of correctness against complexity of communication (see the sketch below)
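
A minimal sketch (my own illustration) of the soft-state pattern described above: the manager keeps each member only until its entry times out, so members must refresh periodically and no explicit "leave" message or failure detection is needed.

```python
import time

class SoftStateGroup:
    def __init__(self, timeout_s):
        self.timeout = timeout_s
        self.expiry = {}                 # member -> time at which its entry expires

    def join_or_refresh(self, member):
        # A join and a refresh are the same message: reset the timer.
        self.expiry[member] = time.time() + self.timeout

    def members(self):
        now = time.time()
        # Purge entries whose refresh never arrived.
        self.expiry = {m: t for m, t in self.expiry.items() if t > now}
        return set(self.expiry)

group = SoftStateGroup(timeout_s=30)
group.join_or_refresh("hostA")
group.join_or_refresh("hostB")
print(group.members())   # {'hostA', 'hostB'}; a host that stops refreshing
                         # silently disappears from the group after 30 s
```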

  29. Trustworthiness • Three levels • Honest: always in conformance with the protocol • Selfish: may lie to get better performance out of the protocol (BGP) • Malicious: unpredictable • Internet protocols (for the most part) assume honest protocol agents and an unreliable infrastructure • The infrastructure has gotten more reliable, and agents have gotten less honest…

  30. The Multicast Service Model [Figure: source S sends [G, data] into the network; receivers R0, R1, …, Rn−1 each join group G, and the network delivers a copy of [G, data] to every member]

  31. Multicast and Layering • Multicast can be implemented at different layers • data link layer • e.g. Ethernet multicast • network layer • e.g. IP multicast • application layer • e.g. as an overlay network like Kazaa • Which layer is best?

  32. Routing: Approaches • Kinds of Trees • Shared Tree • Source Specific Trees • Tree Computation Methods • Intradomain Update methods • Build on unicast Link State: MOSPF • Build on unicast Distance Vector: DVMRP • Protocol Independent: PIM • Interdomain routing: BGMP • This is still evolving…

  33. MBONE • What to do if most of the routers in the Internet are not multicast enabled? • Tunnel between multicast-enabled routers [Figure: islands of IP-multicast (IPM) routers connected by tunnels over unicast IP] • Creates an "overlay" network, but both operate at layer 3… • This is how multicast was first deployed

  34. Core Based Trees (CBT) • Pick a "rendezvous point" for the group, called the core • Shared tree • Unicast packets to the core and bounce them back to the multicast group • Tree construction is receiver-based • Joins can be tunneled if required • One tree per group; only nodes on the tree are involved • Reduces routing table state from O(S × G) to O(G)

  35. PIM • Popular intradomain method • UUNET streaming uses this • Recognizes that most groups are very sparse • Why have all of the routers participate in keeping state? • Two modes • Dense mode: flood and prune • Sparse mode: core-based shared tree approach with a twist

  36. Scalable Reliable Multicast (SRM) • Randomize NACKs (repair requests) • All traffic, including repair requests and repairs, is multicast • A repair can be sent by any node that heard the request • A node suppresses its repair request if another node has just sent a request for the same data item • A node suppresses a repair if another node has just sent the repair (see the sketch below)
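
A minimal sketch (my own simplification) of SRM's randomized suppression: each receiver that detects the loss picks a random request timer and suppresses its own request if someone else's fires first. Real SRM also scales the timer with the receiver's distance from the source and handles requests that cross in flight; this sketch ignores both.

```python
import random

def simulate_loss(receivers, max_delay=1.0):
    # Each receiver schedules its (multicast) repair request after a random delay.
    timers = {r: random.uniform(0, max_delay) for r in receivers}
    first = min(timers, key=timers.get)           # the first timer to fire
    sent, suppressed = [first], []
    for r in receivers:
        if r != first:
            # Assumes r hears first's multicast request before its own timer fires.
            suppressed.append(r)
    return sent, suppressed

random.seed(1)
sent, suppressed = simulate_loss(["R1", "R2", "R3", "R4"])
print("request sent by:", sent)                   # typically a single requester
print("suppressed:", suppressed)
```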

  37. Application Layer Multicast • Provide multicast functionality above IP unicast • Gateway nodes could be the hosts themselves or multicast gateways in the network • Advantages • No multicast dial-tone needed • Performance can be optimized to the application • Loss, priorities, etc. • More control over the topology of the tree • Easier to monitor and control groups • Disadvantages • Scale • Performance if implemented only on the hosts (not gateways)

  38. Overlay Network [Figure: overlay node A mapped onto node A' in the underlying network] • A network defined over another set of networks • The overlay addresses its own nodes • Links on one layer are network segments of lower layers • Requires lower-layer routing to be utilized • The overlaying mechanism is called tunneling

  39. Kinds of Overlay Networks • Three kinds of overlays • Only hosts: peer-to-peer networks (P2P) • Example: Gnutella, Napster • Only gateway nodes: infrastructure overlays • Content Distribution Networks (CDNs) • Example: Akamai • Host and gateway nodes: • Virtual Private Networks • Overlay node structure • Regular: Chord, Pastry • Ad hoc: Gnutella

  40. Two Kinds of Overlay Functions • The overlay provides access to distributed resources • The overlay facilitates communication among other client applications • Two kinds of client connectivity • Direct: P2P • Not direct: Akamai • Overlay network operations • Select virtual edges (fast or slow timescales) • Overlay routing protocol • Edge mapping • Resource location

  41. Content Addressable Network (CAN) • Associate to each node and each item a unique id in a d-dimensional space • Properties • Routing table size O(d) • Guarantee that a file is found in at most d·n^(1/d) steps, where n is the total number of nodes (see the worked numbers below) • Virtual topology related to the actual topology using "binning"
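
A small worked example, with made-up numbers, of the scaling claims on this slide: per-node routing state grows like O(d) (2d coordinate neighbors), while the lookup path length is at most about d·n^(1/d) steps.

```python
n = 1_000_000                      # hypothetical number of nodes
for d in (2, 4, 8):
    steps = d * n ** (1 / d)       # d * n^(1/d) bound from the slide
    print(f"d={d}: ~{2*d} routing-table neighbors, <= ~{steps:.0f} lookup steps")
# d=2: ~4 neighbors, ~2000 steps; d=4: ~8 neighbors, ~126 steps;
# d=8: ~16 neighbors, ~45 steps
```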

  42. Bottom line on Overlays • Overlays are an irreversible trend in networking • Overlays add new functions to the network infrastructure much faster than • trying to integrate them into the router • relying on an infrastructure service provider to deploy the function • Disadvantages • Overlay nodes can create performance bottlenecks • New end-to-end protocols may not work, since the overlay nodes don't understand them • Generally better to improve performance by building an "underlay" and to add functionality by building an overlay

  43. Link Functions • Construct frame with error detection code • Encode bit sequence into analog signal • Transmit bit sequence on a physical medium (modulation) • Receive analog signal • Convert analog signal to bit sequence • Recover errors through error correction and/or ARQ [Figure: two adaptors connected by a signal; an adaptor converts bits into a physical signal and the physical signal back into bits]

  44. Link Components [Figure: block diagram of link components (NRZI encoding)]

  45. Framing • Goal: send a block of bits (a frame) between nodes connected to the same physical medium • This service is provided by the data link layer • Use a special byte (bit sequence) to mark the beginning (and the end) of the frame • Problem: what happens if this sequence appears in the data payload?

  46. Approaches • Byte/bit oriented • Stuffing vs. frame count (see the byte-stuffing sketch below) • Clock-based (SONET) • Looks at the path, not just the link
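
A minimal sketch (my own illustration, loosely modeled on HDLC/PPP-style byte stuffing rather than a specific protocol from lecture) of how a flag byte can delimit frames even when the flag value appears inside the payload: the sender escapes it, the receiver removes the escapes.

```python
FLAG, ESC = 0x7E, 0x7D

def frame(payload: bytes) -> bytes:
    body = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            body += bytes([ESC, b ^ 0x20])   # escape flag/escape bytes in the payload
        else:
            body.append(b)
    return bytes([FLAG]) + bytes(body) + bytes([FLAG])

def unframe(data: bytes) -> bytes:
    assert data[0] == FLAG and data[-1] == FLAG
    out, i, body = bytearray(), 0, data[1:-1]
    while i < len(body):
        if body[i] == ESC:
            out.append(body[i + 1] ^ 0x20)   # undo the escape
            i += 2
        else:
            out.append(body[i])
            i += 1
    return bytes(out)

msg = bytes([0x01, 0x7E, 0x02, 0x7D, 0x03])  # payload containing the flag byte
assert unframe(frame(msg)) == msg
print(frame(msg).hex())                      # '7e017d5e027d5d037e'
```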

  47. Link Layer Reliable Transmission • Prevalent approach • Detect errors using error detection codes • Use retransmission methods (e.g. Go-Back-N) • Alternative approach • Use error correction codes (e.g. perfect parity codes) • Algorithmic challenges: achieve high link utilization and low overhead

  48. Error detection • The frame consists of a header and a payload • If all n-bit strings are valid payload strings, error detection must occur in the header bits • Break the header into two parts • The error-detecting-code part contains bits that add redundancy [Figure: an n-bit payload with a header split into a frame header (FH) and a k-bit error-detecting code (EDC)]

  49. Error Detecting Codes • Goals: • Reduce overhead, i.e., reduce the number of redundancy bits • Increase the number and the type of bit error patterns that can be detected • Examples: • Even Parity • Rectangular Codes • Cyclic Redundancy Check (CRC)

  50. Hamming Distance • Given codewords A and B, the Hamming distance between them is the number of bits in A that need to be flipped to turn it into B • Example: H(011101, 000000) = 4 • If all codewords are at least Hamming distance d apart, then up to d−1 bit errors can be detected (see the sketch below)
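
A minimal sketch (my own illustration) of the definitions on this slide: the Hamming distance between equal-length codewords, plus a small even-parity code whose minimum distance of 2 lets it detect any single-bit error.

```python
def hamming(a: str, b: str) -> int:
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))   # count differing bit positions

print(hamming("011101", "000000"))             # 4, matching the slide's example

# Even-parity code over 3 data bits: every codeword has an even number of 1s,
# so the minimum distance is 2 and any single-bit error is detected.
codewords = ["0000", "0011", "0101", "0110", "1001", "1010", "1100", "1111"]
d_min = min(hamming(a, b) for a in codewords for b in codewords if a != b)
print(d_min, "-> detects up to", d_min - 1, "bit error(s)")
```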
