
DOT – Distributed OpenFlow Testbed



  1. DOT – Distributed OpenFlow Testbed

  2. Motivation
  • Mininet is currently the de-facto tool for emulating an OpenFlow-enabled network
  • However, the size of the network and the amount of traffic are limited by the hardware resources of a single machine
  • Our recent experiments with Mininet show that it can cause:
    • Flow serialization of otherwise parallel flows
    • Many flows co-existing and competing for switch resources, since transmission rates are limited by the CPU
  • The process for running parallel iperf servers and clients is not trivial

  3. Objective
  • Run large-scale emulations of OpenFlow-enabled networks
  • Avoid or reduce the flow serialization and contention introduced by the emulation environment
  • Enable emulation of large amounts of traffic

  4. DOT Emulation
  • The embedding algorithm partitions the logical network across multiple physical hosts
  • Intra-host virtual link: embedded inside a single host
  • Cross-host link: connects switches located on different hosts
  • A Gateway Switch (GS) is added to each active physical host to emulate the link delay of cross-host links
  • The network augmented with GSs is called the physical network
  • The SDN controller operates on the logical network

  5. Embedding of the Logical Network (diagram: the emulated network is partitioned across two physical machines, Physical Host 1 and Physical Host 2, with cross-host links between them)

  6. Embedding Cross-host Links (diagram: a cross-host link between virtual switches a and b is physically embedded as segments a'–a" and b'–b" connected through the gateway switches)

  7. SDN Controller's View (diagram: the SDN controller sees only the logical network)

  8. Software Stack of a DOT Node (diagram labels: Virtual Interface, Virtual Link, Physical Link, OpenFlow Switch)

  9. Gateway Switch
  • A DOT component; one gateway switch per active physical host
  • Attached to the physical NIC of the machine
  • Facilitates packet transfer between physical hosts
  • Enables emulation of delays on cross-host links
  • Oblivious to the forwarding protocol used in the emulated network
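
  A gateway switch can be realized as an ordinary Open vSwitch bridge with the host's physical NIC attached to it. A minimal sketch, assuming the bridge name gs0 and the NIC name eth1 (both placeholders, not names taken from DOT):

    # Create the gateway switch bridge on this physical host (one per active host)
    ovs-vsctl add-br gs0
    # Attach the physical NIC so this GS can exchange packets with the GSs on other hosts
    ovs-vsctl add-port gs0 eth1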

  10. Simulating Delay of Cross-host Links (diagram: link delay in the emulated network and its physical embedding; only the cross-host links are shown)
  • Only one of the two segments of a cross-host link simulates the delay
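
  Because only one segment carries the delay, the full one-way delay of a cross-host link can be applied with netem on the interface backing that segment. A hedged sketch, assuming the segment is backed by an interface named veth-a1 (a placeholder) and the emulated link delay is 5 ms:

    # Apply the entire cross-host link delay on a single segment
    tc qdisc add dev veth-a1 root netem delay 5ms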

  11. Simulating Delay (diagram)

  12. Simulating Delay
  • GS2 must forward the packet over a particular link even when the next hop (e.g., B->E and D->E) is the same
  • When a packet arrives at a Gateway Switch on its physical interface, the GS must identify the remote segment through which the packet was originally forwarded

  13. Solutions for Traffic Forwarding at the Gateway Switch
  • MAC rewriting
  • Tagging
  • Tunnel with tag

  14. Approach 1: MAC Rewrite
  • Each GS maintains the IP-to-MAC address mapping of all VMs
  • When a packet arrives at a GS through a logical link, the GS replaces:
    • The source MAC with the MAC of its receiving port, which lets the remote GS identify the segment through which the packet was forwarded
    • The destination MAC with the MAC of the destination physical host's NIC, which allows unicasting the packet through the physical switching fabric
  • When a GS receives a packet on its physical interface:
    • It checks the source MAC to identify the segment through which it should forward the packet
    • Before forwarding, it restores the source and destination MACs by inspecting the packet's IP address fields
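
  With Open vSwitch acting as the GS, the rewriting above can be expressed with mod_dl_src/mod_dl_dst actions. A rough sketch, assuming GS port 1 faces a logical segment, port 2 is the physical NIC, and all MAC and IP addresses are placeholders:

    # Outgoing: stamp the receiving port's MAC as source and the remote host's
    # physical NIC MAC as destination, then send out through the physical port
    ovs-ofctl add-flow gs0 "in_port=1,actions=mod_dl_src:02:00:00:00:00:01,mod_dl_dst:52:54:00:00:00:02,output:2"
    # Incoming: the source MAC identifies the remote segment; restore the real MACs
    # (taken from the IP-to-MAC map) before delivering the packet to segment port 3
    ovs-ofctl add-flow gs0 "in_port=2,dl_src=02:00:00:00:00:05,ip,nw_src=10.0.0.3,nw_dst=10.0.0.7,actions=mod_dl_src:00:00:00:00:00:03,mod_dl_dst:00:00:00:00:00:07,output:3"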

  15.–28. Approach 1: MAC Rewriting (step-by-step diagram walkthrough of MAC rewriting; diagram labels: SDN Controller, Controller's View, PB, PM1, PD, PM2, PC, PE)

  29. Approach 1: MAC Rewriting
  • Advantages:
    • Packet size remains the same
    • No change is required in the physical switching fabric
  • Limitations:
    • Every GS must maintain the IP-to-MAC address mappings of all VMs
    • Not scalable

  30. Approach 2: Tunnel with Tag
  • A unique id is assigned to each cross-host link
  • When a packet arrives at a GS through an internal logical link:
    • The GS encapsulates the packet with a tunneling protocol (e.g., GRE)
    • The destination address is the IP address of the destination physical host
    • A tag equal to the id of the cross-host link is attached to the packet (using the tunnel id field of GRE)
  • When a GS receives a packet from the physical interface:
    • It checks the tag (tunnel id) to identify the outgoing segment
    • It decapsulates the tunnel header and forwards the packet
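
  On Open vSwitch this maps naturally onto a GRE port whose key carries the cross-host link id. A hedged sketch for one direction, assuming hosts 10.10.0.1 and 10.10.0.2, cross-host link id 1, segment port 1, and GRE port number 10 (all placeholders):

    # On physical host 1: GRE port towards host 2; key=flow lets flow rules pick the tunnel id
    ovs-vsctl add-port gs0 gre-pm2 -- set interface gre-pm2 type=gre \
        options:remote_ip=10.10.0.2 options:key=flow
    # Encapsulate traffic arriving from segment port 1 with tunnel id 1 (cross-host link #1)
    ovs-ofctl add-flow gs0 "in_port=1,actions=set_tunnel:1,output:10"
    # Decapsulated traffic carrying tunnel id 1 goes out through the matching segment
    ovs-ofctl add-flow gs0 "in_port=10,tun_id=1,actions=output:1"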

  31.–35. Approach 2: Tunnel with Tag (step-by-step diagram walkthrough of encapsulation with a cross-host link id tag; diagram labels: SDN Controller, Cross-host link id #1 #2, Original Packet, Header for encapsulation, Controller's View, PB, PM1, PD, PM2, PC, PE)

  36. Approach 2: Tunnel with Tag
  • Advantages:
    • No change is required in the physical switching fabric
    • No GS needs to know the IP-to-MAC address mappings
    • The rule set in each GS is on the order of the number of cross-host links
    • Scalable solution
  • Limitations:
    • Lowers the MTU
  • Because of the scalability issue with MAC rewriting, we choose this solution
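
  The MTU reduction can be compensated inside the emulated hosts so that encapsulated packets still fit into the physical MTU. A sketch, assuming a VM interface named eth0 and a conservative value of 1400 bytes to leave room for the GRE and outer IP headers:

    # Lower the VM interface MTU to leave headroom for GRE encapsulation
    ip link set dev eth0 mtu 1400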

  37. Emulating Bandwidth
  • Configured for each logical link using the Linux tc command
  • The maximum bandwidth of a cross-host link is bounded by the physical switching capacity
  • The maximum bandwidth of an internal link is capped by the processing capability of the physical host
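
  One way to cap a logical link's bandwidth with tc is a token bucket filter on the interface backing the link. A hedged sketch, assuming the virtual interface vnet0 (a placeholder) and a 100 Mbit/s link:

    # Cap the emulated link at 100 Mbit/s using a token bucket filter
    tc qdisc add dev vnet0 root tbf rate 100mbit burst 32kbit latency 400ms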

  38. DOT: Summary
  • Emulates an OpenFlow network with specific link delays and bandwidths
  • Traffic forwarding:
    • Ordinary Open vSwitch instances forward traffic as instructed by the Floodlight controller
    • Gateway switches, also Open vSwitch instances, forward traffic based on pre-configured flow rules

  39. Technologies used so far
  • Open vSwitch, version 1.8
    • Rate limits are configured on each port
  • Floodlight controller, version 0.9
    • Custom modules added: Static Network Loader, ARP Resolver
  • Hypervisor: QEMU-KVM
  • Link delays are simulated using tc (Linux traffic control)
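
  The per-port rate limit mentioned above can be set with Open vSwitch ingress policing. A minimal sketch, assuming a port named vnet0 and a 100 Mbit/s limit (values are in kbit/s and are placeholders):

    # Police ingress traffic on the port to roughly 100 Mbit/s
    ovs-vsctl set interface vnet0 ingress_policing_rate=100000
    ovs-vsctl set interface vnet0 ingress_policing_burst=10000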
