Network Algorithms, Lecture 1: Intro and Principles


Presentation Transcript


  1. Network Algorithms, Lecture 1: Intro and Principles George Varghese, UCSD

  2. Course Goal • Network Algorithms: Algorithms for key bottlenecks in a router (lookups, switching, QoS) • How to attack new bottlenecks: Changing world -> measurement, security • Algorithmics: Models and Principles. System thinking + Hardware + Algorithms

  3. Different Backgrounds • Balaji: Stochastic Processes, randomized algorithms, QCN, DCTCP, 2 yr at Cisco (Nuova) • Tom: Former Cisco VP and Fellow. Architect of Cisco Catalyst and Nexus • Me: Algorithmic background. DRR, IP lookups. 1 year at Cisco (NetSift) • All worked on router bottlenecks. Want to reconcile our viewpoints. Fun for you

  4. Course Grading • 4 Homeworks: 20% • Posted Wednesdays, due Thursdays by 2pm • Course Project: 80% • Teams of 2 students • Ideally, an original idea related to the course leading to a paper; or a detailed review of paper(s) on a topic • 1 page abstract due by Mon, Apr 18th • Final project presentation + 5 page report • Office hours: Balaji Tuesday 10-11, George Wednesday 10-11 • TA: Mohammed (introduction)

  5. Course Outline • Lectures 1 to 2: Principles and Models • Lectures 3 to 5: Lookups • Lectures 6 to 9: Switching • Lectures 10 to 11: QoS • Later lectures: measurement, security, data center switching fabrics, congestion control, etc

  6. Plan for Rest of Lecture • Part 1: Warm up example of algorithmic thinking • Part 2: Ten Principles for Network Algorithms

  7. Warm Up: Scenting an Attack • Observation: Code buried in, say, a URL shows an unusually high frequency of uncommon characters. • Goal: Flag such packets for further observation.

  8. Strawman Solution • Strawman: Increment each character's count and flag the packet if any count is over its threshold. • Problem: Requires a second pass over the count array after the packet ends.

  9. Oliver Asks for Less • Idea: Relax the specification to alarm if a count is over its threshold with respect to the current length rather than the total length. • Problem: Corner cases for small lengths.

  10. Oliver Uses Algorithmic Thinking • Idea: Keep track of the maximum relativized count and compare it to the length L once at the end. • Problem: Still 2 reads and 1 write to memory per character; only 1 read is free (parallelism, wide memories).
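The single-pass idea can be sketched as follows. This is an illustrative assumption, not the lecture's exact code: the function name, the per-character threshold fractions, and the flagging rule are all made up for the sketch.

```python
# Illustrative sketch: flag a packet when some suspicious character c
# satisfies count[c] > threshold[c] * L, where L is the packet length.
# Instead of a second pass over all 256 counters at the end, we track the
# running maximum of count[c] / threshold[c] and compare it to L once.

def scent_attack(packet: bytes, threshold: dict) -> bool:
    count = [0] * 256
    max_relative = 0.0
    for c in packet:
        count[c] += 1
        if c in threshold:                     # only suspicious characters
            max_relative = max(max_relative, count[c] / threshold[c])
    return max_relative > len(packet)          # single comparison at the end
```

A packet that is half `a` against a 25% threshold for `a` is flagged; a packet where `a` appears once is not.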

  11. Relax Specification: Approximate! • Idea: Round thresholds up to powers of 2 so the division becomes a shift. We tolerate false positives anyway. • Problem: Counts must be initialized for every packet.
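A tiny illustration of the rounding trick; the shift value below is a made-up example, not from the lecture.

```python
# If each 1/threshold[c] is rounded up to a power of two, the division in
# count[c] / threshold[c] becomes a left shift: count[c] << shift[c].
# Rounding up only makes the check more sensitive, adding false positives
# we tolerate anyway.
shift = {ord("a"): 2}                   # hypothetical: threshold 1/4 -> shift 2
count_a = 3
relative = count_a << shift[ord("a")]   # 3 / (1/4) = 12, with no division
```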

  12. Lazy Initialization • Idea: Use a generation number and lazy count initialization. • Problem: Generation number wraparound (handled by scrubbing).
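A hedged sketch of the generation-number trick; the class shape, field names, and generation width are assumptions. Each counter remembers the packet generation that last wrote it, so stale counters are treated as zero and no per-packet clearing loop is needed; a wrap of the generation number forces a full scrub.

```python
GEN_BITS = 8                        # hypothetical generation-number width

class LazyCounters:
    """Per-character counters initialized lazily via generation numbers."""

    def __init__(self, size=256):
        self.count = [0] * size
        self.gen = [0] * size       # generation that last wrote each counter
        self.current = 0

    def new_packet(self):
        self.current = (self.current + 1) % (1 << GEN_BITS)
        if self.current == 0:       # generation number wrapped:
            self.gen = [0] * len(self.gen)        # scrub everything
            self.count = [0] * len(self.count)
            self.current = 1

    def increment(self, c):
        if self.gen[c] != self.current:   # stale counter: treat as zero
            self.count[c] = 0
            self.gen[c] = self.current
        self.count[c] += 1
        return self.count[c]
```

Calling `new_packet()` logically resets all 256 counters in O(1): the actual reset of each counter happens lazily, on its first increment in the new generation.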

  13. Part 2, Ten Principles for Network Algorithms

  14. Summary • P1: Relax Specification for efficiency • P2: Utilize degrees of freedom • P3: Shift Computation in Time • P4: Avoid Waste seen in a holistic view • P5: Add State for efficiency • P6: Exploit hardware parallelism and memories • P7: Pass information (e.g., hints) in interfaces • P8: Use finite universe methods (bucket sort, etc.) • P9: Use algorithmic thinking • P10: Optimize the Expected Case

  15. P1: Relax Specifications • Router output queue scheduling (example packets of 1500, 800, and 50 bytes) • IntServ spec: serve the smallest first, which requires sorting • DRR: throughput-fair instead; a modified round robin, O(1) per packet
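A minimal sketch of the DRR idea using the slide's packet sizes; the quantum value and function shape are assumptions for illustration.

```python
from collections import deque

def drr_round(queues, deficits, quantum=500):
    """One deficit round robin round; returns packet lengths sent, in order.

    Each backlogged queue banks `quantum` bytes of deficit per round and may
    send any head packets that fit, giving throughput fairness in O(1) per
    packet without sorting packets by size.
    """
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0              # idle queues do not bank credit
            continue
        deficits[i] += quantum
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()
            deficits[i] -= pkt
            sent.append(pkt)
    return sent
```

With one queue holding 1500- and 800-byte packets and another holding a 50-byte packet, the small packet goes out in the first round, while the 1500-byte packet waits until its queue has accumulated enough deficit.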

  16. Forms of Relaxing Specification • Probabilistic: use randomization (Ethernet backoff, RED) • Approximate: RED, TCP round-trip weights • Shift computation in space: path MTU discovery (figure: the source calculates the minimum segment size across links of MTU 4000 and 1500 through routers R1 and R2 to endpoint E, instead of asking the routers to fragment)

  17. P2: Utilize Degrees of Freedom • Have you used all the information (Polya)? • TCP: reduce the rate on a drop; later, on seeing the ECN bit set • DCTCP: use the number of ECN bits set in a window (figure: a router at a 10G-to-1G bottleneck sets the ECN bit, which is passed back to calculate the send rate)

  18. P2 in IP Lookups • Have you used all the knobs under your control? • Unibit trie: one bit at a time; looking up P7 = 1000000* takes 5 reads • Multibit trie: many bits per read, 2 reads
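The read-count difference can be illustrated with a toy fixed-stride trie over bit strings. This is a hedged sketch, not a real lookup engine: nested dicts stand in for trie nodes in memory, the builder uses controlled prefix expansion, all names are assumptions, and it is only exercised on a single prefix.

```python
def build_trie(prefix_bits, name, stride):
    """Toy fixed-stride trie for one prefix, via controlled prefix expansion."""
    root = {}
    pad = (-len(prefix_bits)) % stride        # expand to a stride multiple
    for ext in range(1 << pad):
        full = prefix_bits + format(ext, f"0{pad}b") if pad else prefix_bits
        node = root
        for i in range(0, len(full), stride):
            node = node.setdefault(full[i:i + stride], {})
        node["label"] = name
    return root

def lookup(root, addr, stride):
    """Return (matched label, number of node reads)."""
    node, best, reads = root, None, 0
    for i in range(0, len(addr), stride):
        chunk = addr[i:i + stride]
        if chunk not in node:
            break
        node = node[chunk]
        reads += 1
        best = node.get("label", best)
    return best, reads
```

For P7 = 1000000*, the unibit trie (stride 1) pays one node read per bit, while a stride-4 multibit trie answers in 2 reads, at the cost of storing the expanded prefixes.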

  19. P2 in Algorithms (keyword search) • Boyer-Moore: to find "truths" in "We hold these truths", compare the last character of the pattern first and skip ahead on a mismatch
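A sketch of the last-character-first idea, using the Horspool simplification of Boyer-Moore (only the bad-character rule); the pattern and text come from the slide.

```python
def horspool_search(text: str, pattern: str) -> int:
    """Return the index of pattern in text, or -1; compares last char first."""
    m = len(pattern)
    # Bad-character table: how far the window may shift when the text
    # character aligned with the pattern's last position is c.
    skip = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i + m <= len(text):
        if text[i:i + m] == pattern:
            return i
        i += skip.get(text[i + m - 1], m)   # often skips m characters at once
    return -1
```

When the text character under the pattern's last position occurs nowhere in the pattern, the whole window of `len(pattern)` characters is skipped in one step.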

  20. P3: Shift Computation in Time • Precomputation: slow update, fast lookup (figure: a precomputed prefix for 100010, given P7 = 100000* and P6 = 10000*) • Lazy evaluation: counter initialization • Expense sharing (batching): timing wheels

  21. P4: Avoid Waste • Look for waste when viewing the system holistically • e.g., avoid unnecessary packet copies

  22. P5: Add State • Those who do not learn from history . . . • BIC, QCN: remember the past high rate and start the search there

  23. P6: Leverage Hardware • Exploit parallelism (memory and processing) • Leverage different memories: SRAM, DRAM, eDRAM • Example: a large set of counters; keep rarely touched bits in slow memory and frequently touched bits in fast memory
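One way to picture the counter example. This is a hedged sketch of the idea, with an assumed 4-bit fast width and made-up names: keep only the low-order, frequently touched bits of each counter in fast (SRAM-like) memory, and spill to a full-width slow (DRAM-like) copy only on overflow.

```python
SRAM_BITS = 4                       # hypothetical width of the fast counters

class HybridCounters:
    """Counters split across fast (low-order bits) and slow (full) memory."""

    def __init__(self, n):
        self.fast = [0] * n         # touched on every increment
        self.slow = [0] * n         # touched only on overflow

    def increment(self, i):
        self.fast[i] += 1
        if self.fast[i] == (1 << SRAM_BITS):   # overflow: flush to slow copy
            self.slow[i] += self.fast[i]
            self.fast[i] = 0

    def read(self, i):
        return self.slow[i] + self.fast[i]
```

Slow memory is touched only once every 16 increments here, so the frequent path stays entirely in the small fast memory.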

  24. P7: Pass Information Across Layers • Do not hide information • Pass "hints" (e.g., bypassing the kernel in VIA by adding an index to a list)

  25. P8: Finite Universe Techniques • Bitmaps: save space and time via hardware parallelism • Bucket sorting (figure: a timer queue with entries 11, 12, 13, 14); but what about skipping over empty elements?

  26. P8: Finite Universe Techniques • Timer wheel (figure: current time = 10, buckets 11-14): the cost of skipping empty buckets is shared with incrementing the current time (P2)
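A minimal timing-wheel sketch; the slot count and method names are assumptions. Timers are bucket-sorted by expiry slot, and visiting an empty bucket costs nothing extra because the per-tick clock increment already walks the wheel.

```python
class TimerWheel:
    """Bucket-sorted timers over a finite horizon of time slots."""

    def __init__(self, slots=16):
        self.slots = slots
        self.wheel = [[] for _ in range(slots)]   # one bucket per time slot
        self.now = 0

    def schedule(self, delay, name):
        assert 0 < delay < self.slots             # within the wheel's horizon
        self.wheel[(self.now + delay) % self.slots].append(name)

    def tick(self):
        """Advance the clock one unit and return the timers that expired."""
        self.now += 1
        slot = self.now % self.slots
        expired = self.wheel[slot]                # empty-bucket skip is free:
        self.wheel[slot] = []                     # the tick walks the wheel
        return expired
```

Scheduling and expiry are O(1) per timer; the only per-tick work is looking at the one bucket the clock just reached.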

  27. P9: Use Algorithmic Ideas • Not blindly reusing algorithms, but reusing their ideas • Binary search on hash tables of prefix lengths (prefixes 01*, 101*, 11100000*, 10000000*; candidate lengths 1, 2, 3, 4, 8): start the search in the middle of the lengths, not at either end
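The idea can be sketched as below, using the slide's prefixes. This is a simplified, hedged version of binary search on hash tables of prefix lengths: here a marker is placed at every shorter length of each prefix, with its precomputed best match stored alongside (real schemes place far fewer markers), and all names and labels are assumptions.

```python
def best_real_match(prefixes, s):
    """Longest real prefix matching string s (precomputed for markers)."""
    best = None
    for bits, name in prefixes.items():
        if s.startswith(bits) and (best is None or len(bits) > len(best[0])):
            best = (bits, name)
    return best[1] if best else None

def build_tables(prefixes, max_len=8):
    """One hash table per prefix length, holding real prefixes and markers."""
    tables = {l: {} for l in range(1, max_len + 1)}
    for bits, name in prefixes.items():
        tables[len(bits)][bits] = name
        for l in range(1, len(bits)):        # simplified marker placement
            tables[l].setdefault(bits[:l], best_real_match(prefixes, bits[:l]))
    return tables

def lpm(tables, addr, max_len=8):
    """Longest-prefix match via binary search over prefix lengths."""
    lo, hi, best = 1, max_len, None
    while lo <= hi:
        mid = (lo + hi) // 2
        entry = tables[mid].get(addr[:mid], "MISS")
        if entry == "MISS":
            hi = mid - 1        # nothing at this length: search shorter
        else:
            if entry is not None:
                best = entry    # real prefix, or a marker's stored best match
            lo = mid + 1        # a hit says: search longer lengths
    return best
```

A hit (prefix or marker) sends the search toward longer lengths, a miss toward shorter ones, so only O(log W) of the W hash tables are probed instead of one per length.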

  28. P10: Optimize Expected Case • Design mostly for the worst case, but sometimes for the expected case • The optimization should not be caught by testing! • Example (figure: a router with a route cache holding destination D, while a packet to uncached destination M misses): instead of maintaining separate HW threads for cached and uncached lookups, drop the uncached packets!

  29. Summary • P1: Relax Specification for efficiency • P2: Utilize degrees of freedom • P3: Shift Computation in Time • P4: Avoid Waste seen in a holistic view • P5: Add State for efficiency • P6: Exploit hardware parallelism and memories • P7: Pass information (e.g., hints) in interfaces • P8: Use finite universe methods (bucket sort, etc.) • P9: Use algorithmic thinking • P10: Optimize the Expected Case

  30. Summing up • Course: router bottlenecks/network algorithms • Not just a recipe book but a way of thinking • Will see principles in action in later lectures • Next lecture: hardware models by Tom
