
Extreme Networking Achieving Nonstop Network Operation Under Extreme Operating Conditions


Presentation Transcript


  1. Extreme Networking: Achieving Nonstop Network Operation Under Extreme Operating Conditions
  Fred Kuhns, fredk@cs.wustl.edu, http://www.arl.wustl.edu/arl
  Jon Turner, jst@cs.wustl.edu, http://www.arl.wustl.edu/arl

  2. Motivation
  • Internet subject to extreme traffic conditions.
    • correlated user behavior; selfish and/or malicious users
  • Growing reliance on data networks.
    • higher expectations for reliability and performance
  • Design networks for worst-case traffic conditions.
    • practice constructive paranoia
    • provide carefully regulated, reserved-bandwidth services
    • better queueing mechanisms for traffic isolation
    • network mechanisms to protect web sites from DDoS attacks
    • plan for continuous upgrading of network infrastructure
      • extensible routers that can adapt to new threats as they appear
  • Technology progress is making extreme defenses practical, without sacrificing performance.

  3. Extreme Network Services
  • Lightweight Flow Setup (LFS)
    • one-way unicast flow with reserved bandwidth, soft state
    • no complex signaling, wire-speed setup, easy to deploy
  • Network Access Service (NAS)
    • provides controlled access to LFS
    • registration/authentication of hosts and users
    • resource usage data collection for monitoring and accounting
  • Reserved Tree Service (RTS)
    • configured, semi-private network infrastructure for information service providers
    • reserved bandwidth, separate queues for traffic isolation
    • paced upstream forwarding with source-based queues for isolation and DoS protection

  4. Can We Afford Per-Flow Processing?
  • If it adds value, absolutely.
  • Per-flow state
    • at $50/MB (fast SRAM), 200 B of flow state = 1 cent
    • at $1/MB (DRAM), 10 KB of flow state = 1 cent
    • if used for 2000 hours (an average of <5% duty over 5 years), costs 1 mcent per hour to cover the cost of both
  • Per-flow processing
    • to enable an average of 10 instructions/byte on OC-192, need 12.5 GIPS
    • 10 i/b enough for header processing
    • 100 i/b enough for DES encryption
    • at $200/GIPS, a 10 Mb/s flow will cost 125 mcents/hour
    • by 2010, expect to do 100 inst./byte for 12.5 mcents/hour
  • (The arithmetic is sketched below.)
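These figures follow from simple unit-cost arithmetic. A minimal Python sketch, assuming the slide's prices ($50/MB SRAM, $1/MB DRAM, $200/GIPS), an OC-192 rate of roughly 10 Gb/s, and the same 2000-hour amortization for state and processing:

```python
# Back-of-the-envelope cost model reproducing the slide's numbers.
# All unit prices are the slide's assumptions, not market data.

SRAM_PER_MB = 50.0      # $/MB, fast SRAM
DRAM_PER_MB = 1.0       # $/MB, DRAM
PRICE_PER_GIPS = 200.0  # $ per giga-instructions/second
HOURS_USED = 2000       # avg. <5% duty cycle over 5 years

# Per-flow state: 200 B of SRAM plus 10 KB of DRAM, ~1 cent each.
state_cents = (200 / 1e6) * SRAM_PER_MB * 100 + (10e3 / 1e6) * DRAM_PER_MB * 100
state_mcents_per_hour = state_cents * 1000 / HOURS_USED        # -> 1.0

# Link-level budget: 10 instructions/byte at OC-192 (~10 Gb/s).
gips_needed = 10 * (10e9 / 8) / 1e9                            # -> 12.5

# Per-flow processing: 10 inst/byte on a 10 Mb/s flow, amortized.
flow_dollars = 10 * (10e6 / 8) / 1e9 * PRICE_PER_GIPS          # -> $2.50
flow_mcents_per_hour = flow_dollars * 100 * 1000 / HOURS_USED  # -> 125.0

print(state_mcents_per_hour, gips_needed, flow_mcents_per_hour)
```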

  5. Resource Reservation in the Internet?
  • Bandwidth reservation can provide dramatically better performance for some applications.
  • Obstacles to resource reservation in the Internet.
    • distaste for signaling protocols
    • perceived complexity of IntServ+RSVP
    • requires end-to-end deployment
    • little motivation for service providers
  • How to get resource reservation into the Internet?
    • keep it simple
    • focus on top priorities - one-way unicast flows
    • avoid complex signaling - leverage hardware routing mechanisms
    • make it useful when only partially deployed
    • provide motivation for ISPs to deploy it

  6. Lightweight Flow Setup
  • Implicit, one-way, unicast flow reservation.
    • to set up a flow, just send packets - no advance signaling
    • specify flow rate(s) in the packet header (using an IP option)
    • flow detected and route selection triggered as needed
    • route for the flow pinned until the flow is released or times out
    • prefer routes with ample unreserved bandwidth
  • Stable rate reservation.
    • allocated independently by routers along the path
    • congested links forward packets as datagrams
    • reservation requests honored as bandwidth is released by other flows
  • Transient rate reservation.
    • routers allocate bandwidth fairly among competing flows
    • direct feedback of bottleneck bandwidth to senders

  7. IP Option for LFS
  [Figure: option layout - code, length, op, and flags fields, followed by two rate fields (allocated rate, requested rate); field widths of 8 and 4 bits.]
  • op identifies the flow setup operation: release state; reserve stable rate; reserve transient rate; status report; status request; ignore.
  • Stable rate fraction updated by routers on the path.
    • may trigger usage-based accounting
  • Status request flags trigger a status report.
  • Allocated rate stored at the last-hop router for status generation.
  • Floating-point rates with 4-bit mantissa, 4-bit exponent.
    • specify rates from 64 Kb/s to 4 Gb/s at ~6% "granularity" (a rate-encoding sketch follows)
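The 4-bit mantissa / 4-bit exponent format is enough to pin down a plausible codec. A sketch assuming an implicit leading 1, a 64 Kb/s base unit, and exponent-high/mantissa-low nibble order - the normalization and nibble order are assumptions; the slide gives only the field sizes and the 64 Kb/s-4 Gb/s, ~6% figures:

```python
import math

BASE = 64_000  # 64 Kb/s, the smallest representable rate on the slide

def decode_rate(byte_val: int) -> float:
    """8-bit rate field -> bits/s. High nibble = exponent, low nibble =
    mantissa with an implicit leading 1 (both layout choices assumed)."""
    e = (byte_val >> 4) & 0xF
    m = byte_val & 0xF
    return (1 + m / 16) * (1 << e) * BASE

def encode_rate(bps: float) -> int:
    """Round a rate in bits/s to the nearest representable code."""
    bps = min(max(bps, BASE), (1 + 15 / 16) * (1 << 15) * BASE)
    e = min(15, int(math.log2(bps / BASE)))
    m = min(15, round((bps / ((1 << e) * BASE) - 1) * 16))
    return (e << 4) | m

# Adjacent mantissa steps are 1/16 apart, i.e. ~6% granularity, and the
# code space spans 64 Kb/s (0x00) to ~4 Gb/s (0xFF).
code = encode_rate(10e6)                    # request ~10 Mb/s
print(hex(code), decode_rate(code) / 1e6)   # -> 0x74, ~10.24 Mb/s
```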

  8. Implementing LFS - Input Side
  [Figure: input port with an access table, flow table, route table, and flow processors.]
  • If a flow table entry is present, use the stored next hop.
  • If there is no flow table entry, look up the route and create an entry.
    • store the selected next hop in the flow table entry
  • At an access router:
    • check privileges and record usage in the access table
    • if flow setup is not enabled, forward the packet as a datagram
  (A sketch of this decision logic follows.)
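A minimal sketch of the input-side path in Python. The table layouts, flow key, and function names are illustrative; the real path is implemented in hardware and the slide gives no field-level detail:

```python
flow_table = {}   # flow key -> pinned next hop

def input_side(pkt, route_lookup, access_table=None):
    """Forwarding decision for one packet. `route_lookup` stands in for
    the longest-prefix route table; `access_table` is present only on
    access routers and maps a source host to its privileges."""
    if access_table is not None:
        priv = access_table.get(pkt["src"])
        if priv is None or not priv.get("lfs_enabled"):
            return route_lookup(pkt["dst"])          # plain datagram path
        priv["bytes"] = priv.get("bytes", 0) + pkt["len"]  # usage record

    key = (pkt["src"], pkt["dst"], pkt["proto"])     # illustrative flow key
    if key not in flow_table:                        # first packet of flow:
        flow_table[key] = route_lookup(pkt["dst"])   # select & pin a route
    return flow_table[key]

hop = input_side({"src": "h1", "dst": "10.1.2.3", "proto": 6, "len": 1500},
                 route_lookup=lambda dst: "portA",
                 access_table={"h1": {"lfs_enabled": True}})
print(hop)  # -> "portA"; the route is now pinned in flow_table
```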

  9. Implementing LFS - Output Side
  [Figure: output port with a flow table and per-flow queues served by flow processors.]
  • If a flow table entry is present, use it to find the queue; otherwise create an entry and allocate a queue.
  • If a stable rate is specified, update the entry.
    • keep a list of unsatisfied reservation requests to process as bandwidth becomes available
  • If a transient rate is specified, update the fair share and pacing rate.
  (A sketch of this bookkeeping follows.)
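A sketch of the output-side bookkeeping with plain Python containers. The grant policy, the fair-share formula, and all names are assumptions drawn only from the bullets above:

```python
from collections import deque

LINK_CAPACITY = 100e6   # bits/s (illustrative)

queues = {}             # flow key -> {"rate": granted stable rate, "pace": ...}
waiting = deque()       # (key, shortfall) reservation requests not yet met
reserved = 0.0          # total stable-rate bandwidth currently granted

def output_side(key, stable_rate=None, transient_rate=None):
    """Update per-flow queue state from a packet's LFS rate fields."""
    global reserved
    q = queues.setdefault(key, {"rate": 0.0, "pace": 0.0})  # allocate queue
    if stable_rate is not None and stable_rate > q["rate"]:
        grant = min(stable_rate - q["rate"], LINK_CAPACITY - reserved)
        reserved += grant
        q["rate"] += grant
        if q["rate"] < stable_rate:            # remember the shortfall and
            waiting.append((key, stable_rate - q["rate"]))  # retry later
    if transient_rate is not None:
        unreserved = LINK_CAPACITY - reserved  # split fairly among queues
        q["pace"] = min(transient_rate, unreserved / max(1, len(queues)))
    return q

def release(key):
    """On flow release/timeout, return bandwidth and retry waiting requests."""
    global reserved
    q = queues.pop(key, None)
    if q:
        reserved -= q["rate"]
    while waiting and reserved < LINK_CAPACITY:
        k, shortfall = waiting.popleft()
        if k in queues:
            output_side(k, stable_rate=queues[k]["rate"] + shortfall)
```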

  10. Example Application
  [Figure: a web site on a private LAN reaching its users through an edge router and an ISP network.]
  • Web site specifies a stable rate in outgoing streaming media packets.
  • Uses feedback to adjust the sending rate if necessary.
  • Note: no action required by receivers.

  11. Regulating LFS Usage
  • Regulate LFS use to ensure availability to users.
    • user-specific privileges (limit rates, # of reserved flows, ...)
  • Record usage for monitoring and accounting.
    • record reservation periods, rates, # of bytes delivered
  • User privilege and usage information stored in a host/user database.
  • Regulation & monitoring at network access points.
    • for fixed access, just use the physical interface
    • for roaming access to an ISP or corporate network:
      • registration protocol executed when a host connects to the network
      • IP tunnel for data transfers between host and access point
      • all data to/from the host passes through that point

  12. Reserved Tree Service
  [Figure: example tree - a web site with a 100 Mb/s link to its entry-exit point; reserved pipes of 70, 15, and 10 Mb/s branch downstream toward users, with datagram forwarding on the last hops and paced upstream traffic.]
  • Reserved tree branches out to the locations where users are.
  • Downstream packets are forwarded on-tree and share the reserved bandwidth pipes.
    • last hops use datagram forwarding
  • Upstream packets are paced and kept in source-based queues.

  13. Extreme Router Architecture
  [Figure: control processor attached to a scalable switch fabric; input and output port processors each hold flow/route lookup and distributed queueing control.]
  • Control processor: system management, route table configuration, setup for non-LFS flows.
  • Port processors: look up the route, or the state for reserved flows.
  • Distributed queueing: traffic isolation; protects reserved flows.

  14. Improving Datagram Service
  • Bandwidth hogging.
    • a single user can take more than a fair share of link bandwidth
    • other users' packets are delayed
  • Synchronization of TCP flows.
    • large queues and large delays
  [Figure: a shared output queue contrasted with per-source aggregate queues; plot of sending rate vs. queue length for 1000 flows at an avg. rate of 10 Mb/s, 10 Kbits per packet, 100 ms RTT - annotations show >6.5 sec and >500 MB of queue.]
  • Aggregate queues based on source prefix.
    • avoids using up queues
    • limits bandwidth use from a single subnet
  • Deficit round-robin service.
  • Discard policy.
    • longest queue, with hysteresis
    • discard from the front
  • Provides traffic isolation.
    • each queue gets a fair share
    • small delays for "nice" flows
  (A DRR sketch follows.)
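A minimal deficit round-robin sketch with "drop from the front of the longest queue" overflow handling. The quantum, the buffer size, and the omission of the hysteresis band are all simplifications for illustration:

```python
from collections import deque

QUANTUM = 1500          # bytes of credit added per queue per round
BUFFER_LIMIT = 64_000   # shared buffer, bytes (illustrative)

queues = {}             # source prefix -> deque of packet sizes
deficits = {}
buffered = 0

def enqueue(prefix, size):
    """Per-source-prefix aggregate queueing; on overflow, drop from the
    front of the longest queue (hysteresis omitted here)."""
    global buffered
    queues.setdefault(prefix, deque())
    deficits.setdefault(prefix, 0)
    queues[prefix].append(size)
    buffered += size
    while buffered > BUFFER_LIMIT:
        victim = max(queues, key=lambda p: sum(queues[p]))  # longest queue
        buffered -= queues[victim].popleft()                # discard front

def drr_round():
    """One DRR pass: each queue sends while its head fits its deficit,
    so every queue gets a fair byte share regardless of packet sizes."""
    global buffered
    sent = []
    for prefix, q in queues.items():
        deficits[prefix] += QUANTUM
        while q and q[0] <= deficits[prefix]:
            size = q.popleft()
            deficits[prefix] -= size
            buffered -= size
            sent.append((prefix, size))
        if not q:
            deficits[prefix] = 0    # idle queues don't bank credit
    return sent

enqueue("10.1/16", 1000); enqueue("10.2/16", 9000); enqueue("10.1/16", 400)
print(drr_round())  # small packets from "nice" sources go out promptly
```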

  15. Super-Scalable Packet Scheduling
  [Figure: three timing wheels with fast-forward bit vectors feeding an output list.]
  • Scalability of QoS packet schedulers is constrained by the need to maintain a sorted list of queues.
  • Use approximate radix sorting, with compensation - O(1).
    • timing wheels with increasing granularity and range
    • approximate sorting produces inter-packet timing errors
    • observe the errors & compensate when the next packet is scheduled
  • Fast-forward bits used to skip over empty buckets.
  • The scheduler puts no limit on the number of queues.
  (A timing-wheel sketch follows.)
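A sketch of hierarchical timing wheels with a fast-forward bitmap, in Python. The slot count, the 8x granularity step between wheels, and the omission of the error-compensation term are illustrative choices, not the hardware's parameters:

```python
SLOTS = 8        # buckets per wheel (illustrative; hardware would use more)
WHEELS = 3       # each wheel is 8x coarser than the one before

wheels = [[[] for _ in range(SLOTS)] for _ in range(WHEELS)]
occupied = [0] * WHEELS   # fast-forward bitmap: bit i set <=> slot i non-empty

def schedule(queue_id, ticks_from_now):
    """Bucket a queue by its ideal departure time. Coarser wheels trade
    timing precision for range - the source of the inter-packet errors
    that the real scheduler measures and compensates for."""
    d = max(0, ticks_from_now)
    for w in range(WHEELS):
        if d < (SLOTS << (3 * w)) or w == WHEELS - 1:  # fits wheel w's range
            slot = min(SLOTS - 1, d >> (3 * w))        # radix-style bucketing
            wheels[w][slot].append(queue_id)
            occupied[w] |= 1 << slot
            return

def next_nonempty():
    """Jump straight to the next non-empty bucket via the bitmaps - O(1)
    word operations instead of scanning idle slots."""
    for w in range(WHEELS):
        if occupied[w]:
            slot = (occupied[w] & -occupied[w]).bit_length() - 1  # lowest set bit
            bucket, wheels[w][slot] = wheels[w][slot], []
            occupied[w] &= ~(1 << slot)
            return w, slot, bucket
    return None

schedule("q1", 3)       # lands on fine-grain wheel 0, slot 3
schedule("q2", 40)      # lands on coarser wheel 1, slot 5 (ticks 40..47)
print(next_nonempty())  # -> (0, 3, ['q1'])
```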

  16. Distributed Queueing
  [Figure: input and output ports exchanging bandwidth updates through the switch fabric.]
  • Distributed queueing regulates the flow of traffic through the fabric.
    • ensures reserved flows receive their assigned bandwidth
    • allocates unreserved bandwidth fairly to datagram traffic
  • Periodic broadcast of bandwidth assignments.
    • per-flow guarantees, without broadcasting per-flow information
    • switch fabric "repackages" the data so each port receives only the relevant information
    • update period limited to use <5% of switch bandwidth
    • adds <100 KB to each input's buffer space in a 1K-port router
  (The overhead arithmetic is sketched below.)
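The <5% and <100 KB budgets are mutually consistent under simple assumptions. A sketch assuming 10 Gb/s ports and 32-bit assignment entries - both of those numbers are assumptions; only the 1K ports, <5%, and <100 KB figures come from the slide:

```python
PORTS = 1024       # 1K-port router (from the slide)
LINK = 10e9        # bits/s per port (assumed OC-192-class)
ENTRY_BITS = 32    # bits per (input, output) bandwidth assignment (assumed)

# After the fabric repackages the broadcast, each input still needs one
# assignment per output port every update period.
update_bits = PORTS * ENTRY_BITS

def min_period(max_overhead=0.05):
    """Shortest update period keeping control traffic under the budget."""
    return update_bits / (LINK * max_overhead)

period = min_period()              # -> ~65.5 microseconds
stale_buffer = LINK * period / 8   # bytes an input can accumulate while
                                   # acting on one period's stale assignments
print(f"{period * 1e6:.0f} us, {stale_buffer / 1e3:.0f} KB")  # ~66 us, ~82 KB
```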

  17. Prototype Extreme Router
  [Figure: prototype built around an ATM switch core. Each of the six ports pairs a Field Programmable Port Extender (FPX) with a Smart Port Card (SPC). The FPX combines a reprogrammable application device and a network interface device with 4 MB of SRAM and 128 MB of SDRAM; the SPC is an embedded-processor card with a Pentium, cache, North Bridge, 64 MB of memory, a system FPGA, and an APIC network interface. Input and output port processors (IPP/OPP) connect to the line via transmission interfaces (TI), and a control processor oversees the system.]

  18. Summary
  • Growing reliance on data networks creates higher expectations - reliability, consistent performance.
  • Design for the worst case - constructive paranoia.
  • Technology progress is making extreme defenses practical, without sacrificing performance.
  • Extensible, rapidly reconfigurable routers are essential.
    • reconfigurable hardware, embedded processors
  • Project will develop & evaluate technologies for extreme networking.
  • Things that haven't worked.
    • PI's lumbar region
    • otherwise, too early to say
