
Scalable Network Virtualization in Software-Defined Networks


Presentation Transcript


  1. Scalable Network Virtualization in Software-Defined Networks Author: Dmitry Drutskoy, Eric Keller, Jennifer Rexford Publisher: IEEE Internet Computing 2013 Presenter: Yuen-Shuo Li Date: 2013/04/24

  2. Background – Software-defined networking (SDN) • SDN is an approach to networking in which control is decoupled from hardware and given to a software application called a controller. • The administrator can shape traffic from a centralized control console without touching individual switches, and can change any switch's rules whenever necessary. • Essentially, this allows the administrator to use less expensive, commodity switches and have more control over network traffic flow than ever before.

  3. Background – SDN Controller Application • In SDN, a logically centralized controller manages the collection of switches through a standard interface, enabling the software to control switches from a variety of vendors. • With the OpenFlow standard, the controller's interface to a hardware switch is effectively a flow table with a prioritized list of rules. • Each rule consists of a pattern that matches bits of incoming packets and actions that specify how to handle those packets, e.g., dropping the packet or sending it to the controller (a small sketch of this rule model follows below). • Vendors of SDN controllers include Big Switch Networks, HP, IBM, VMware, and Juniper.
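To make the pattern/action model concrete, here is a minimal Python sketch of a prioritized flow table. The header field names, priorities, and action strings are simplified assumptions for illustration, not the actual OpenFlow wire format.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    priority: int                                 # higher value wins when rules overlap
    match: dict = field(default_factory=dict)     # e.g. {"nw_dst": "10.0.0.2"}
    actions: list = field(default_factory=list)   # e.g. ["output:2"], ["drop"]

def lookup(flow_table, headers):
    """Return the actions of the highest-priority matching rule, or hand the
    packet to the controller if no rule matches (a table miss)."""
    for rule in sorted(flow_table, key=lambda r: r.priority, reverse=True):
        if all(headers.get(f) == v for f, v in rule.match.items()):
            return rule.actions
    return ["controller"]

table = [
    FlowRule(priority=10, match={"nw_dst": "10.0.0.2"}, actions=["output:2"]),
    FlowRule(priority=1,  match={},                     actions=["drop"]),
]
print(lookup(table, {"nw_dst": "10.0.0.2"}))   # ['output:2']
print(lookup(table, {"nw_dst": "10.0.0.9"}))   # ['drop'] (wildcard rule)
```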

  4. Background – OpenFlow OpenFlow is a protocol that allows a server to tell network switches where to send packets. With OpenFlow, the packet-moving decisions are centralized, so that the network can be programmed independently of the individual switches and data center gear. Several established companies including IBM, Google, and HP have either fully utilized, or announced their intention to support, the OpenFlow standard. By early 2012, Google's internal network ran entirely on OpenFlow.

  5. Background – OpenFlow Controller An OpenFlow controller is an application that manages flow control in an SDN environment. All communications between applications and devices have to go through the controller. The OpenFlow protocol connects controller software to network devices so that server software can tell switches where to send packets. The controller uses the OpenFlow protocol to configure network devices and choose the best path for application traffic (a controller-side sketch follows below). Because the network control plane is implemented in software, rather than in the firmware of hardware devices, network traffic can be managed more dynamically and at a much more granular level.
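As a rough, self-contained illustration of this controller-side role, the sketch below reacts to a table miss by computing a path over a toy topology and "installing" a rule on each switch along it. The topology, host locations, and the `flow_mod` print-out are all hypothetical stand-ins, not a real controller API.

```python
from collections import deque

# topology[switch] = {neighbor_switch: output port toward that neighbor}
topology = {
    "s1": {"s2": 2},
    "s2": {"s1": 1, "s3": 3},
    "s3": {"s2": 1},
}
host_location = {"10.0.0.2": ("s3", 4)}          # host attached to s3, port 4

def compute_path(src_switch, dst_switch):
    """Breadth-first search returning the list of switches from src to dst."""
    seen, queue = {src_switch}, deque([[src_switch]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst_switch:
            return path
        for nbr in topology[path[-1]]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])

def handle_packet_in(ingress_switch, headers):
    """On a table miss, install a forwarding rule on every switch on the path."""
    dst_switch, dst_port = host_location[headers["nw_dst"]]
    path = compute_path(ingress_switch, dst_switch)
    for i, sw in enumerate(path):
        out_port = topology[sw][path[i + 1]] if i + 1 < len(path) else dst_port
        print(f"flow_mod to {sw}: match nw_dst={headers['nw_dst']} -> output:{out_port}")

handle_packet_in("s1", {"nw_dst": "10.0.0.2"})
```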

  6. Background – OpenFlow Switch An OpenFlow switch consists of three parts: • Flow Table: Tells the switch how to process each data flow by associating an action with each flow table entry. • Secure Channel: Connects the switch to the controller, so commands and packets can be sent between the controller and the switch. • OpenFlow Protocol: Provides an open, standardized interface for the controller to communicate with the switch.

  7. Background – Network virtualization Network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to the software containers on a single system. Whether virtualization is internal or external depends on the implementation provided by vendors that support the technology.

  8. Background – Problem Network virtualization gives each “tenant” in a data center its own network topology and control over the flow of its traffic. SDN is a natural platform for network virtualization. Yet, supporting a large number of tenants with different topologies and controller applications raises scalability challenges.

  9. Background – Problem There are two main performance issues with virtualization in the context of SDN. The controller must interact with switches through an SSL channel and maintain a current view of the physical infrastructure (e.g., which switches are alive). With virtualization, any interaction between a tenant's controller application and the physical switches must go through a mapping between the virtual and physical networks.

  10. FlowN – Introduction To overcome these challenges, we present FlowN. The FlowN architecture is based on two key design decisions. FlowN enables tenants to write arbitrary controller software that has full control over the address space and can target an arbitrary virtual topology. However, we use a shared controller platform rather than running a separate controller for each tenant. We also use modern database technology to perform the mapping between the virtual and physical address spaces. This provides a scalable solution that is easily extensible as new functionality is needed.

  11. FlowN – Full Controller Virtualization Running a separate controller for each tenant seems like a natural way to support network virtualization. The virtualization system exchanges OpenFlow messages directly with the underlying switches, and exchanges OpenFlow messages with each tenant's controller.

  12. FlowN – Full Controller Virtualization Using the OpenFlow standard as the interface to the virtualization system has some advantages (e.g., tenants can select any controller platform), but introduces unnecessary overhead. Repeatedly marshalling and unmarshalling parameters in OpenFlow messages incurs extra latency. Running a complete instance of a controller for each tenant involves running a large code base, which consumes extra memory. Periodically checking the liveness of the separate controllers incurs additional overhead.

  13. FlowN – Container-Based Controller Virtualization Instead, FlowN is a modified NOX controller that can run multiple applications, each with its own address space, virtual topology, and event handlers. Rather than mapping OpenFlow protocol messages, FlowN maps between NOX API calls. In essence, FlowN is a special NOX application that runs its own event handlers, which in turn call tenant-specific event handlers.

  14. FlowN – Container-Based Controller Virtualization Each tenant's event handlers run within their own thread (a minimal dispatch sketch follows below). While we have not incorporated any strict resource limits, CPU scheduling does provide fairness among the threads.
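A minimal sketch of this container-based design, assuming a NOX-like event loop. The class and function names (`TenantApp`, `flown_packet_in`, `map_physical_to_virtual`) are illustrative stand-ins, not FlowN's actual code.

```python
import queue
import threading
import time

class TenantApp:
    """One tenant's controller logic: its own event queue and thread, so CPU
    scheduling provides fairness across tenants."""
    def __init__(self, tenant_id):
        self.tenant_id = tenant_id
        self.events = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            event = self.events.get()
            self.handle_packet_in(event)          # tenant-specific handler

    def handle_packet_in(self, event):
        print(f"tenant {self.tenant_id}: packet-in on {event['virtual_switch']}")

tenants = {1: TenantApp(1), 2: TenantApp(2)}

def map_physical_to_virtual(event):
    # stand-in for the database-backed lookup described on later slides
    return event["vlan_tag"], f"vs{event['switch']}"

def flown_packet_in(physical_event):
    """The shared FlowN application: translate the physical event into the
    tenant's virtual view and enqueue it for that tenant's thread, instead of
    re-marshalling OpenFlow messages to a separate controller process."""
    tenant_id, virtual_switch = map_physical_to_virtual(physical_event)
    tenants[tenant_id].events.put({"virtual_switch": virtual_switch})

flown_packet_in({"switch": 7, "vlan_tag": 1})
time.sleep(0.1)                                   # let the tenant thread run
```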

  15. FlowN – Virtual Network Mapping To provide each tenant with its own address space and topology, we need to perform a mapping between virtual and physical resources (a toy example of both directions follows below). • A virtual-to-physical mapping occurs when an application modifies the flow table, e.g., adding a new flow rule. The virtualization layer must alter the rules to uniquely identify the virtual link or virtual switch. • A physical-to-virtual mapping occurs when the physical switch sends a message to the controller, e.g., when a packet does not match any flow table rule.
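A toy sketch of the two mapping directions using plain dictionaries. The real system keeps these mappings in a database (next slides), and the use of a VLAN-style tenant tag here is an illustrative assumption.

```python
virtual_to_physical = {            # (tenant, virtual switch) -> physical switch
    (1, "vs1"): "s7",
    (2, "vs1"): "s7",              # two tenants may share one physical switch
}
physical_to_virtual = {(p, t): v for (t, v), p in virtual_to_physical.items()}

def map_flow_mod(tenant_id, virtual_switch, match):
    """Virtual-to-physical: rewrite a tenant's rule so it is installed on the
    right physical switch and only matches that tenant's (tagged) traffic."""
    physical_switch = virtual_to_physical[(tenant_id, virtual_switch)]
    match = dict(match, vlan_tag=tenant_id)     # uniquely identify the tenant
    return physical_switch, match

def map_packet_in(physical_switch, vlan_tag):
    """Physical-to-virtual: decide which tenant's handler sees the event."""
    virtual_switch = physical_to_virtual[(physical_switch, vlan_tag)]
    return vlan_tag, virtual_switch

print(map_flow_mod(1, "vs1", {"nw_dst": "10.0.0.2"}))   # ('s7', {...})
print(map_packet_in("s7", 2))                           # (2, 'vs1')
```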

  16. FlowN – Virtual Network Mapping These mappings are based on various combinations of input parameters and output parameters. Using a custom data structure to perform these mappings can easily become unwieldy, leading to software that is difficult to maintain and extend. This custom software would need to scale across multiple physical controllers. Depending on the complexity of the mappings, a single controller machine eventually hits a limit on the number of mappings per second that it can perform.

  17. FlowN – Mapping With a Database Instead of using an in-memory data structure with custom mapping code, FlowN uses modern database technology. Both the topology descriptions and the assignment to physical resources lend themselves directly to the relational model of a database.

  18. FlowN – Mapping With a Database Each virtual topology is uniquely identified by some key, and consists of a number of nodes, interfaces, and links. Nodes contain the corresponding interfaces, and links connect one interface to another.

  19. FlowN – Mapping With a Database FlowN stores mapping information in two tables (a schema sketch follows below). The first table stores the node assignments, mapping each virtual node to one physical node. The second table stores the path assignments, mapping each virtual link to a set of physical links, each with a hop count that increases in the direction of the path.
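A sketch of a relational schema along these lines, using SQLite for self-containment (the prototype used MySQL/InnoDB). Table and column names are assumptions for illustration, and interfaces are omitted for brevity.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- virtual topology: each node/link belongs to one tenant's topology
    CREATE TABLE virtual_node (id INTEGER PRIMARY KEY, topology_id INTEGER);
    CREATE TABLE virtual_link (id INTEGER PRIMARY KEY, topology_id INTEGER,
                               src_node INTEGER, dst_node INTEGER);

    -- table 1: each virtual node is assigned to exactly one physical node
    CREATE TABLE node_assignment (
        virtual_node INTEGER PRIMARY KEY REFERENCES virtual_node(id),
        physical_node INTEGER NOT NULL);

    -- table 2: each virtual link maps to a path of physical links,
    -- ordered by a hop count that increases along the path
    CREATE TABLE path_assignment (
        virtual_link INTEGER REFERENCES virtual_link(id),
        hop INTEGER,
        physical_link INTEGER NOT NULL,
        PRIMARY KEY (virtual_link, hop));
""")

# a virtual link mapped onto a two-hop physical path
conn.execute("INSERT INTO path_assignment VALUES (1, 1, 101), (1, 2, 205)")
for row in conn.execute("SELECT hop, physical_link FROM path_assignment "
                        "WHERE virtual_link = 1 ORDER BY hop"):
    print(row)          # (1, 101) then (2, 205)
```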

  20. FlowN – Mapping With a Database Because there are many more reads than writes in this database, we run a master database server that handles all writes, with multiple slave servers replicating the state. Since the mappings do not change often, caching can be used to optimize for mappings that occur frequently (see the read-through sketch below).
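A sketch of read-through caching for these rarely changing mappings. A plain dict stands in for memcached here, the table reuses the node_assignment schema from the previous sketch, and all names are illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node_assignment (virtual_node INTEGER PRIMARY KEY, "
           "physical_node INTEGER NOT NULL)")
db.execute("INSERT INTO node_assignment VALUES (1, 42)")

cache = {}

def physical_node_for(virtual_node):
    """Hit the (replicated, read-mostly) database only on a cache miss."""
    if virtual_node in cache:
        return cache[virtual_node]
    row = db.execute("SELECT physical_node FROM node_assignment "
                     "WHERE virtual_node = ?", (virtual_node,)).fetchone()
    if row:
        cache[virtual_node] = row[0]   # safe to cache: mappings change rarely
        return row[0]

print(physical_node_for(1))   # database read, then cached
print(physical_node_for(1))   # served from the cache
```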

  21. Experiment – Environment We built a prototype of FlowN by extending the Python NOX version 1.0 OpenFlow controller [4]. The embedder populates a MySQL version 14.14 database. We implement all schemes using the InnoDB engine, running a memcached instance. We run our prototype on a virtual machine running Ubuntu 10.04 LTS, given the full resources of three processors of an i5-2500 CPU @ 3.30GHz, 2 GB of memory, and an SSD drive (Crucial m4 SSD 64GB). We perform tests by simulating OpenFlow network operation on another VM (running on an isolated processor with its own memory space), using a modified cbench [10] to generate packets with the correct encapsulation tags.

  22. Experiment – Test We measure latency as the time between when cbench generates a packet-in event and when cbench receives a response to that event.
