
Optimizing the ‘One Big Switch’ Abstraction in Software Defined Networks


Presentation Transcript


  1. Optimizing the ‘One Big Switch’ Abstraction in Software Defined Networks Nanxi Kang Princeton University in collaboration with Zhenming Liu, Jennifer Rexford, David Walker

  2. Software Defined Network • Decouple data and control plane • A logically centralized control plane (controller) • Standard protocol, e.g., OpenFlow [diagram: the controller compiles network policies and programs rules into the switches]

  3. Existing control platform • Decouple data and control plane • A logically centralized control plane (controller) • Standard protocol, e.g., OpenFlow ✔ Flexible policies ✖ Easy management

  4. ‘One Big Switch’ Abstraction • Endpoint policy E, e.g., ACL, load balancer: "From H1, dstIP = 0* => go to H2", "From H1, dstIP = 1* => go to H3" • Routing policy R, e.g., shortest-path routing • Automatic rule placement compiles E and R onto the physical network [diagram: hosts H1, H2, H3 attached to one big switch vs. the real topology]
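To make the two policies concrete, here is a minimal Python sketch of what a programmer might write against the one big switch; the EndpointRule class, the switch names S1-S3, and the path choices are illustrative assumptions, not taken from the talk.

```python
from dataclasses import dataclass

@dataclass
class EndpointRule:
    """One entry of the endpoint policy E, written against the 'one big switch'."""
    ingress: str      # where the packet enters, e.g. "H1"
    dst_prefix: str   # dstIP prefix; '*' marks wildcard bits
    action: str       # e.g. "go to H2", "permit", "deny"

# Endpoint policy E, as on the slide.
E = [
    EndpointRule("H1", "0*", "go to H2"),
    EndpointRule("H1", "1*", "go to H3"),
]

# Routing policy R: which physical path each host pair uses,
# e.g. produced by shortest-path routing (the paths here are made up).
R = {
    ("H1", "H2"): ["S1", "S2"],
    ("H1", "H3"): ["S1", "S3"],
}

# Automatic rule placement (the subject of the talk) maps E and R onto
# concrete rule tables for S1, S2, S3 without exceeding their capacities.
```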

  5. Challenges of Rule Placement • The endpoint policy may need more than 10k rules • Switch TCAMs hold only about 1k-2k rules • Automatic rule placement must therefore split the policy across the switches given by the routing policy [diagram: endpoint policy E and routing policy R compiled onto capacity-limited switches]

  6. Past work • Nicira: install endpoint policies on ingress switches and encapsulate packets to the destination; only applies when the ingress switches are software switches • DIFANE • Palette

  7. Contributions • Design a new rule placement algorithm • Realize high-level network policies • Stay within rule capacity of switches • Handle policy updates incrementally • Evaluation on real and synthetic policies


  9. Problem Statement • Inputs: topology, endpoint policy E, routing policy R, and per-switch rule capacities (e.g., 0.5k-1k rules each) • Goal: an automatic rule placement that stays within every switch's capacity and minimizes the total number of installed rules
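As a rough sketch of the problem instance, assuming a placement is simply a mapping from switch to its installed rules; the switch names and exact capacities are illustrative.

```python
# Per-switch rule capacities (TCAM sizes); the numbers echo the slide's example.
capacity = {"S1": 500, "S2": 1000, "S3": 1000, "S4": 500}

def within_capacity(placement, capacity):
    """Constraint: a placement maps each switch to its list of installed rules."""
    return all(len(rules) <= capacity[sw] for sw, rules in placement.items())

def total_rules(placement):
    """Objective: minimize the total number of installed rules."""
    return sum(len(rules) for rules in placement.values())

# A trivially feasible (empty) placement, just to show the shape of the data.
placement = {sw: [] for sw in capacity}
assert within_capacity(placement, capacity)
print(total_rules(placement))   # 0
```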

  10. Algorithm Flow [diagram: three steps: (1) decompose the network into paths, (2) divide rule space across the paths, (3) solve rule placement on each path]


  12. Single Path • Routing policy is trivial [diagram: a single path of switches with rule capacities C1, C2, C3]

  13. Endpoint policy R1: (srcIP = 0*, dstIP = 00), permit R2: (srcIP = 01, dstIP = 1*), permit R3: (srcIP = **, dstIP = 11), deny R4: (srcIP = 11, dstIP = **), permit R5: (srcIP = 10, dstIP = 0*), permit R6: (srcIP = **, dstIP = **), deny
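The running example can be written down as a prioritized match-action list (first match wins); the tuple encoding and the lookup helper are one convenient illustration, not the paper's data structure.

```python
# Prioritized endpoint policy over 2-bit addresses: (name, srcIP, dstIP, action);
# '*' marks a wildcard bit.
policy = [
    ("R1", "0*", "00", "permit"),
    ("R2", "01", "1*", "permit"),
    ("R3", "**", "11", "deny"),
    ("R4", "11", "**", "permit"),
    ("R5", "10", "0*", "permit"),
    ("R6", "**", "**", "deny"),   # default: deny everything else
]

def matches(pattern, value):
    """A bit string matches a pattern if they agree on every non-wildcard bit."""
    return all(p in ("*", v) for p, v in zip(pattern, value))

def lookup(policy, src, dst):
    """First-match semantics, as in a TCAM."""
    for name, sp, dp, action in policy:
        if matches(sp, src) and matches(dp, dst):
            return name, action

print(lookup(policy, "01", "00"))   # ('R1', 'permit')
print(lookup(policy, "10", "11"))   # ('R3', 'deny')
```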

  14. Map rule to rectangle • Each rule is an axis-aligned rectangle in the (srcIP, dstIP) flow space: R1: (0*, 00), P; R2: (01, 1*), P; R3: (**, 11), D; R4: (11, **), P; R5: (10, 0*), P; R6: (**, **), D [diagram: 4x4 srcIP x dstIP grid with R1 drawn as a rectangle]

  15. Map rule to rectangle [diagram: the same 4x4 grid with the rectangles for R1-R5 drawn in (R6 covers the whole space); the first switch has rule capacity C1 = 4]
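A minimal sketch of the rule-to-rectangle mapping, assuming prefix-style wildcard patterns over the 2-bit example addresses; the helper names are illustrative.

```python
def pattern_to_range(pattern):
    """Map a prefix pattern such as '0*' to the integer interval [lo, hi] it covers."""
    n = len(pattern)
    fixed = pattern.rstrip("*")              # wildcards only at the end (prefix rules)
    lo = int(fixed.ljust(n, "0"), 2) if fixed else 0
    hi = int(fixed.ljust(n, "1"), 2) if fixed else (1 << n) - 1
    return lo, hi

def rule_to_rectangle(src_pattern, dst_pattern):
    """A rule is the product of its srcIP interval and its dstIP interval."""
    return pattern_to_range(src_pattern), pattern_to_range(dst_pattern)

# R1: (srcIP = 0*, dstIP = 00) covers srcIP in [0, 1] and dstIP in [0, 0].
print(rule_to_rectangle("0*", "00"))   # ((0, 1), (0, 0))
# R6: (**, **) covers the whole 4 x 4 flow space.
print(rule_to_rectangle("**", "**"))   # ((0, 3), (0, 3))
```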

  16. Pick a rectangle for every switch [diagram: the flow space covered by the rectangles of R1-R5, from which each switch's rectangle is chosen]

  17. Select a rectangle • Overlapped rules (intersect q): R2, R3, R4, R6 • Internal rules (entirely inside q): R2, R3 • Feasibility: #overlapped rules ≤ C1 [diagram: a candidate rectangle q in the flow space; C1 = 4]
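A sketch of the two counts the selection step needs, reusing the rectangle encoding above; the candidate q below (srcIP = **, dstIP = 1*) is chosen because it reproduces the counts on the slide, and the feasibility test checks that the overlapped rules fit the first switch's capacity C1.

```python
def intersects(a, b):
    """Two rectangles ((slo, shi), (dlo, dhi)) overlap iff both intervals overlap."""
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

def inside(inner, outer):
    """inner lies entirely within outer."""
    return all(olo <= ilo and ihi <= ohi
               for (ilo, ihi), (olo, ohi) in zip(inner, outer))

# The example rules as rectangles over the 4 x 4 (srcIP, dstIP) space.
rules = {
    "R1": ((0, 1), (0, 0)), "R2": ((1, 1), (2, 3)), "R3": ((0, 3), (3, 3)),
    "R4": ((3, 3), (0, 3)), "R5": ((2, 2), (0, 1)), "R6": ((0, 3), (0, 3)),
}

q = ((0, 3), (2, 3))                   # candidate rectangle: srcIP = **, dstIP = 1*
overlapped = [r for r, rect in rules.items() if intersects(rect, q)]
internal = [r for r, rect in rules.items() if inside(rect, q)]
C1 = 4
print(overlapped)                      # ['R2', 'R3', 'R4', 'R6']
print(internal)                        # ['R2', 'R3']
print(len(overlapped) <= C1)           # True: q is feasible for the first switch
```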

  18. Install rules in the first switch [diagram: the rules that overlap q (R2, R3, and R'4, where R'4 is R4 clipped to q) are projected onto q and installed in the first switch; C1 = 4]

  19. Rewrite policy • Forward everything in q: a high-priority rule at later switches forwards packets in q and skips the original policy (the first switch already handled them) • The remaining rules cover traffic outside q [diagram: the flow space with q carved out and the remaining rules]
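A simplified sketch of the split-and-rewrite step under the same rectangle encoding; the real algorithm also has to handle rule priorities and the first switch's default forwarding for traffic outside q, which this sketch elides.

```python
def intersects(a, b):
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

def inside(inner, outer):
    return all(olo <= ilo and ihi <= ohi for (ilo, ihi), (olo, ohi) in zip(inner, outer))

def clip(rect, q):
    """Project a rule's rectangle onto q (intersect the intervals per dimension)."""
    return tuple((max(lo, qlo), min(hi, qhi)) for (lo, hi), (qlo, qhi) in zip(rect, q))

def split_at_q(policy, q):
    """policy: ordered list of (name, rectangle, action), highest priority first.
    Returns the rules for the first switch and the rewritten downstream policy."""
    first_switch = [(n, clip(r, q), a) for n, r, a in policy if intersects(r, q)]
    # Downstream: a high-priority rule forwards everything in q (already handled),
    # then the original rules minus those that lie entirely inside q.
    rewritten = [("q", q, "forward")] + \
                [(n, r, a) for n, r, a in policy if not inside(r, q)]
    return first_switch, rewritten

# The running policy as rectangles, and q = (srcIP = **, dstIP = 1*):
policy = [("R1", ((0, 1), (0, 0)), "permit"), ("R2", ((1, 1), (2, 3)), "permit"),
          ("R3", ((0, 3), (3, 3)), "deny"),   ("R4", ((3, 3), (0, 3)), "permit"),
          ("R5", ((2, 2), (0, 1)), "permit"), ("R6", ((0, 3), (0, 3)), "deny")]
first, rest = split_at_q(policy, ((0, 3), (2, 3)))
print(len(first), len(rest))   # 4 rules fit the first switch; 5 remain downstream
```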

  20. Overhead of rules • #Installed rules ≥ |Endpoint policy| • Non-internal rules won't be deleted from the rewritten policy • Objective in picking rectangles: maximize (#internal rules) / (#overlapped rules)

  21. Algorithm Flow [diagram: the three-step flow, revisited before generalizing from a single path to the whole network]

  22. Topology = {Paths} • Routing policy • Implemented by installing forwarding rules on switches • It also defines the set of paths {Paths} through the network [diagram: the paths between hosts H1, H2, H3]

  23. Project endpoint policy to paths • To enforce the endpoint policy, project it onto each path • Each projection only handles the packets that use that path • Solve rule placement for each path independently [diagram: endpoint policy E split into projections E1-E4 over the paths between H1, H2, H3]
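One way to read the projection step, as a sketch: a path only carries packets matching some predicate, so only the endpoint rules that can overlap that predicate need to be placed on it. The pattern-based project helper and the example path predicate are illustrative assumptions.

```python
def project(policy, path_predicate):
    """Keep only the endpoint rules that can match packets using this path.
    policy: list of (name, src_pattern, dst_pattern, action);
    path_predicate: (src_pattern, dst_pattern) describing the path's traffic."""
    def overlap(p, q):
        # Two wildcard patterns overlap iff they agree on every non-wildcard bit.
        return all(a == b or "*" in (a, b) for a, b in zip(p, q))
    ps, pd = path_predicate
    return [r for r in policy if overlap(r[1], ps) and overlap(r[2], pd)]

policy = [("R1", "0*", "00", "permit"), ("R2", "01", "1*", "permit"),
          ("R3", "**", "11", "deny"),   ("R4", "11", "**", "permit"),
          ("R5", "10", "0*", "permit"), ("R6", "**", "**", "deny")]

# E.g. a path that carries only traffic with dstIP = 0* (illustrative predicate):
print([r[0] for r in project(policy, ("**", "0*"))])   # ['R1', 'R4', 'R5', 'R6']
```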

  24. What is the next step? ✔ Decomposition into paths ✔ Solve rule placement over paths • Remaining: divide rule space across paths • Estimate the rules needed by each path • Partition the rule space with a linear program (sketched below) [diagram: paths between H1, H2, H3 competing for shared switch capacity]
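A toy version of the partition step as a linear program (using scipy); the topology, per-path demand estimates, and capacities below are made up, and this is a simplified stand-in for the paper's formulation: each path must receive at least its estimated rule space, and no switch may exceed its capacity.

```python
from scipy.optimize import linprog

paths = {"P1": ["S1", "S2"], "P2": ["S2", "S3"]}   # two paths sharing switch S2
demand = {"P1": 5, "P2": 5}                        # estimated rules each path needs
capacity = {"S1": 4, "S2": 4, "S3": 4}             # rule capacity per switch

# One variable per (path, switch-on-that-path) pair: rules allocated there.
variables = [(p, s) for p, sws in paths.items() for s in sws]
idx = {v: i for i, v in enumerate(variables)}

c = [1.0] * len(variables)                         # minimize total allocated rule space

A_ub, b_ub = [], []
for p, sws in paths.items():                       # demand: sum over the path >= demand[p]
    row = [0.0] * len(variables)
    for s in sws:
        row[idx[(p, s)]] = -1.0                    # negate to express >= as <=
    A_ub.append(row); b_ub.append(-demand[p])
for s, cap in capacity.items():                    # capacity: sum over paths through s <= cap
    row = [0.0] * len(variables)
    for (p, sw), i in idx.items():
        if sw == s:
            row[i] = 1.0
    A_ub.append(row); b_ub.append(cap)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(variables))
print(res.status, dict(zip(variables, res.x.round(1))))
```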

  25. Algorithm Flow [diagram: the three-step flow annotated with Fail and Success outcomes]

  26. Roadmap • Design a new rule placement algorithm • Stay within rule capacity of switches • Minimize the total number of installed rules • Handle policy updates incrementally • Make small changes fast • Compute a new placement in the background • Evaluation on real and synthetic policies

  27. Insert a rule into a path [diagram: a new rule arrives for a path whose rules are already placed]

  28. Limited impact • Only a subset of the switches on the path needs to be updated [diagram: the new rule R installed at the affected switches along the path]

  29. Limited impact • Update only a subset of switches • Respect the original rectangle selection [diagram: the new rule R is clipped to R' within the affected switch's rectangle]
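A sketch of why the impact is limited, under the rectangle encoding used earlier: an inserted rule only touches the switches whose chosen rectangles it intersects, and it is clipped to each such rectangle. The real update must also respect rule priorities and capacities, which this sketch omits; the switch names and rectangles are illustrative.

```python
def intersects(a, b):
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

def clip(rect, q):
    return tuple((max(lo, qlo), min(hi, qhi)) for (lo, hi), (qlo, qhi) in zip(rect, q))

def insert_rule(new_rule, chosen_rect):
    """chosen_rect: switch -> the rectangle it was assigned along the path.
    Returns the per-switch updates needed for the new rule."""
    name, rect, action = new_rule
    return {sw: (name, clip(rect, q), action)
            for sw, q in chosen_rect.items() if intersects(rect, q)}

# A path of three switches, each owning one rectangle of the flow space.
chosen_rect = {"S1": ((0, 3), (2, 3)), "S2": ((0, 1), (0, 1)), "S3": ((2, 3), (0, 1))}
R = ("R_new", ((1, 1), (0, 3)), "deny")        # new rule spanning two rectangles
print(insert_rule(R, chosen_rect))             # only S1 and S2 need updates
```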

  30. Roadmap • Design a new rule placement algorithm • Stay within rule capacity of switches • Minimize the total number of installed rules • Handle policy updates incrementally • Evaluation on real and synthetic policies • ACLs (campus network), ClassBench • Shortest-path routing on GT-ITM topologies

  31. Path • Assume all switches have the same capacity • Find the minimum #rules/switch that gives a feasible rule placement • Overhead = #rules/switch × #switches

  32. Path • Assume all switches have the same capacity • Find the minimum #rules/switch that gives a feasible rule placement • Overhead = #rules/switch × #switches − |E|

  33. Path • Assume all switches have the same capacity • Find the minimum #rules/switch that gives a feasible rule placement • Overhead = #rules/switch × #switches − |E|, normalized by |E|
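A tiny worked example of this overhead metric; the numbers below are made up for illustration.

```python
def normalized_extra_rules(rules_per_switch, num_switches, policy_size):
    """(#rules/switch * #switches - |E|) / |E|: the 'normalized #extra rules'."""
    return (rules_per_switch * num_switches - policy_size) / policy_size

# Illustrative numbers: a 4-switch path, an endpoint policy of |E| = 1000 rules,
# and a minimum feasible capacity of 300 rules per switch.
print(normalized_extra_rules(300, 4, 1000))   # 0.2, i.e. 20% extra rules
```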

  34. #Extra installed rules vs. length [plot: normalized #extra rules against path length]


  36. Data set matters • Real ACL policies [plot: normalized #extra rules vs. path length; policies with many rule overlaps incur more extra rules than policies with few overlaps]

  37. Place rules on a graph • #Installed rules: use the rule space on switches efficiently • Unwanted traffic: drop unwanted traffic early • Computation time: compute the rule placement quickly


  39. Carry extra traffic along the path • Rules are installed along the path, so not all packets are handled by the first hop • Unwanted packets travel further before being dropped • Quantify the effect of carrying unwanted traffic, assuming traffic is uniformly distributed over the rules with drop action

  40. When unwanted traffic is dropped • An example single path [plot: unwanted traffic dropped against the fraction of the path travelled]

  41. When unwanted traffic is dropped • An example single path [plot: fraction of unwanted traffic dropped up to each switch, against the fraction of the path travelled]

  42. Aggregating all paths [plot: minimum #rules/switch for a feasible rule placement]

  43. Give a bit more rule space • Put more rules at the first several switches along the path

  44. Take-aways • Paths: low overhead in installing rules • Rule capacity is shared efficiently across paths • Most unwanted traffic is dropped at the edge • The algorithm is fast and easily parallelized: < 8 seconds to compute placement for all paths

  45. Summary • Contribution • An efficient rule placement algorithm • Support for incremental updates • Evaluation on real and synthetic data • Future work • Integrate with SDN controllers, e.g., Pyretic • Combine rule placement with rule caching

  46. Thanks!
