Network Virtualization in Infrastructure-as-a-Service Cloud Computing - PowerPoint PPT Presentation

Presentation Transcript

  1. Network Virtualization in Infrastructure-as-a-Service Cloud Computing Renato Figueiredo Associate Professor Cloud and Autonomic Computing Center & Advanced Computing and Information Systems Lab University of Florida

  2. Introduction • What is cloud computing? • A definition, from NIST: “cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” • Why is virtualization at the core of clouds? • It provides the basis for on-demand, shared, configurable infrastructure: servers, network, storage

  3. What can IaaS do for me? • Infrastructure-as-a-Service • Pay-as-you-go; scale up and down • EC2: Linux server US$0.08/hour; storage US$0.10/GB/month • Commercial clouds: • Amazon EC2, Google Compute Engine, Microsoft Azure, … • Startups, small/medium businesses • “Cloud-bursting” – offload to a cloud provider • Science clouds: • High-performance/high-throughput computing • Federated/collaborative environments • Decouple provider configuration from user configuration

  4. Background • Virtualization technologies – IaaS foundation • Virtual machines (Xen, VMware, KVM) paved the way to IaaS • Computing environment decoupled from physical infrastructure • Pay-as-you-go for computing cycles • Virtual networks are key to enable next-generation distributed, on-demand, flexible IaaS • VMs must communicate seamlessly – regardless of where they are provisioned • Traffic isolation; security, resource control

  5. Presentation Outline • Virtualization – a broad perspective • Network virtualization in cloud computing • Use cases • Core abstractions • Core primitives • Layers • Overview of technologies and techniques • IaaS: Amazon VPC; OpenStack Quantum • Evolving standards: OpenFlow • Inter-cloud: ViNe; IPOP / GroupVPN • Demonstration

  6. Virtual machine landscape • Many alternatives in virtual machine software • VMware, Xen, KVM, Hyper-V • Many alternatives in IaaS cloud software and services that provision virtual machines • Commercial services: Amazon EC2 (Xen), Microsoft Azure (Hyper-V), Google Compute Engine (KVM), … • OSS: OpenStack, OpenNebula, Eucalyptus, Nimbus • Standards: not there yet, but efforts underway • 5-10 years ago VMs were mysterious to most; today anyone with a credit card can deploy one • Many VMs? Across multiple providers?

  7. Why virtual networks? • Example use cases: • Cloud-bursting: • Primarily use private enterprise LAN/cluster • Run additional worker VMs on a cloud provider • Extending the LAN to all VMs – seamless scheduling, data transfers • Federated “Inter-cloud” environments: • Multiple private LANs/clusters across various institutions inter-connected • Virtual machines can be deployed on different sites and form a distributed virtual private cluster

  8. [Diagram] Many virtual resources, each independently configured: a Virtual Machine Monitor (VMware, VirtualBox, KVM, Hyper-V, Xen) provides isolation and multiplexing of virtual machines on a resource provider's infrastructure.

  9. “Classic” Virtual Machines • “A virtual machine is taken to be an efficient, isolated, duplicate copy of the real machine”2 • “A statistically dominant subset of the virtual processor’s instructions is executed directly by the real processor”2 • “…transforms the single machine interface into the illusion of many”3 • “Any program run under the VM has an effect identical with that demonstrated if the program had been run in the original machine directly”2 • Key abstractions: interpret instructions (CPU, I/O devices), and store/recall data (registers, memory) 2 “Formal Requirements for Virtualizable Third-Generation Architectures”, G. Popek and R. Goldberg, Communications of the ACM, 17(7), July 1974 3 “Survey of Virtual Machine Research”, R. Goldberg, IEEE Computer, June 1974

  10. [Diagram] Virtualized machines and networks: a virtual infrastructure (VMs V1, V2, V3 connected through a VMM + VN) mapped onto a physical infrastructure spanning Domains A, B, and C across a WAN.

  11. Virtual Networks • Single infrastructure, many virtual networks • E.g. one per user or project • Each isolated and independently configured • E.g. address allocation, protocols used • Multiplexing physical network resources • Network interfaces, links, switches, routers

  12. [Diagram] Interpreter abstraction and VMMs: an x86 instruction stream is presented to a virtual CPU; “unprivileged” instructions are natively interpreted by the physical CPU, while privileged instructions are trapped and emulated.

  13. [Diagram] Network abstractions: SEND(link, out-buffer) and RECV(link, in-buffer) between network devices connected by a link. A network device is an interpreter plus storage – a specialized physical machine: NIC, switch, router, …

  14. Virtualization: core primitives • Intercept events of interest: • VM: trap on “privileged” instructions • VN: intercept message sent or received • Emulate behavior of event in the context of the virtualized resource: • VM: emulate the behavior of the intercepted instruction in the context of the virtual machine issuing it • VN: emulate the behavior of SEND/RECV in the context of the virtual network it is bound to
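
The intercept/emulate pattern for a virtual network can be sketched in a few lines. This is an illustrative toy (all class and field names are made up, not from the talk): a virtual NIC intercepts a guest's SEND and "emulates" it by tagging the frame with its virtual-network id before placing it on the shared physical link.

```python
class PhysicalLink:
    """The shared substrate: one wire multiplexing many virtual networks."""
    def __init__(self):
        self.wire = []                      # frames actually "on the wire"

    def send(self, frame):
        self.wire.append(frame)

class VirtualNIC:
    """Endpoint of one virtual network, bound to the shared link."""
    def __init__(self, vn_id, phys_link):
        self.vn_id = vn_id
        self.phys = phys_link

    def send(self, payload):
        # Intercept: the guest believes this is a plain SEND.
        # Emulate: re-issue the event in the context of this VN,
        # carrying the VN id so the fabric can demultiplex it.
        self.phys.send({"vn": self.vn_id, "data": payload})

link = PhysicalLink()
nic_a = VirtualNIC(vn_id=1, phys_link=link)
nic_b = VirtualNIC(vn_id=2, phys_link=link)
nic_a.send(b"hello")
nic_b.send(b"world")
# Two isolated VNs are multiplexed on one link; each frame carries its
# VN context, mirroring the VM case of trapping and emulating per-VM.
print(link.wire)
```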

  15. [Diagram] Examples: VMM and VPN. VMM side: an application in the VM O/S issues WRITE “B” to offset “X” in virtual disk “VD”; the VM Monitor traps it and emulates it in the context of VD as a write of “B” to offset “Y” in physical disk “PD”. VPN side: the VM sends Ethernet frame “F” to link “L” via the virtual NIC; the send is intercepted and emulated in the context of the VPN by encrypting “F”, encapsulating it in a TCP packet, and sending it to physical link “P” via the physical NIC.

  16. [Diagram] Network virtualization – where? Two options: virtualized endpoints (software on the (virtual) machines at the edge) or a virtualized fabric (the network devices in between).

  17. [Diagram] Example – VPN (Virtual Private Network): on SEND(link, msg), VPN software at the (virtual) machine encrypts the message, encapsulates it, and issues SEND(phys. link, encapsulated msg) across the Internet; the user has no control over the network fabric.
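
The endpoint VPN datapath – intercept, encrypt, encapsulate, send over the uncontrolled fabric – can be sketched as follows. This is illustrative only: the XOR "cipher" is a placeholder for real encryption (e.g. IPsec or TLS), and all names are made up.

```python
import base64
import json

KEY = 0x5A

def toy_encrypt(frame: bytes) -> bytes:
    # Stand-in for a real cipher; XOR is its own inverse.
    return bytes(b ^ KEY for b in frame)

def vpn_send(eth_frame: bytes, phys_send):
    # Intercept the L2 frame the guest tried to send.
    encrypted = toy_encrypt(eth_frame)
    # Encapsulate: the encrypted frame becomes the payload of an
    # ordinary packet that the shared Internet will happily carry.
    outer = json.dumps({"proto": "toy-vpn",
                        "payload": base64.b64encode(encrypted).decode()})
    phys_send(outer.encode())

wire = []                                   # stands in for the physical link
vpn_send(b"ETH|src|dst|data", wire.append)
```

The receiving VPN endpoint would reverse the two steps: strip the outer packet, decrypt, and deliver the inner Ethernet frame to the virtual NIC.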

  18. [Diagram] Example – VLAN (Virtual LAN): here the network fabric is under control; a VLAN switch receives a frame on port A, matches its VLAN tag, and sends it on port B.

  19. Network virtualization - which layer? • Networked systems are layered • Key layers: • Data-Link (OSI Layer 2): handles messaging between devices connected by a link (e.g. Ethernet) • Network (OSI Layer 3): handles messaging across multiple links (e.g. IP network) • “End-to-End” (OSI Layers 4 and up): transport, session, presentation, application • Network virtualization technologies focus on data-link and network layers • VLAN: Layer-2 • VPN: Layer-3 (or Layer-2)

  20. Virtualization layers • Will focus on typical Ethernet / IP networks • Layer-2 virtualization • VN supports all protocols layered on data link • Not only IP but also other protocols • Simpler integration • E.g. ARP crosses layers 2 and 3 • Downside: broadcast traffic if VN spans beyond LAN • Layer-3 virtualization • VN supports all protocols layered on IP • TCP, UDP, DHCP, … • Sufficient to handle many environments/applications • Downside: tied to IP • Innovative non-IP network protocols will not work

  21. Outline • Virtualization – a broad perspective • Network virtualization in cloud computing • Use cases • Core abstractions • Core primitives • Layers • Overview of technologies and techniques • IaaS: Amazon VPC; OpenStack Quantum • Evolving standards: OpenFlow • Inter-cloud: ViNe; IPOP / GroupVPN • Demonstration

  22. Technologies and Techniques • Amazon VPC: • Virtual private network extending from enterprise to resources at a major IaaS commercial cloud • OpenFlow: • Open switching specification allowing programmable network devices through a forwarding instruction set • OpenStack Quantum: • Virtual private networking within a private cloud offered by a major open-source IaaS stack • ViNe: • Inter-cloud, high-performance user-level managed virtual network • IP-over-P2P (IPOP) • Peer-to-peer, inter-cloud, self-organizing virtual network

  23. Amazon Virtual Private Cloud • Service interface and Web console • Available for all Amazon EC2 customers • Layer-3 virtual network within EC2 infrastructure • Extensible through hardware VPN • Typical use cases: • Multi-tier applications: • Public-facing Web server, private database, application server • Extending datacenter on demand • Cloud-bursting

  24. Amazon Virtual Private Cloud • From Amazon's description of the service: • “[VPC] lets you provision a private, isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define.” • “You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.” • “Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.”
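
The provisioning steps Amazon describes – pick an address range, carve subnets, configure routing – map onto a few calls. A hedged sketch using the AWS CLI (the command names are real, but the IDs shown are placeholders and output handling is omitted):

```shell
# Create a VPC with a user-chosen private address range
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Carve a subnet out of the VPC (vpc-xxxxxxxx is the ID returned above)
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24

# Create a route table for the VPC; routes and gateways are then
# attached to it to shape what the subnet can reach
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
```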

  25. [Diagram] Amazon Virtual Private Cloud: a private VPC LAN inside the public/private EC2 infrastructure, isolated from other tenants; IP namespace management and a hardware IPsec VPN router connect the user's site to the VPC over the Internet.

  26. [Screenshot] VPC Web console

  27. Amazon Virtual Private Cloud • Cost model: • No additional charge for using VPC itself, but when used with a VPN there is a charge per VPN connection-hour • Inbound/outbound traffic charged as usual • Inter-operability: • Custom AWS APIs • Uses IPsec for VPN connections • Requires an IPsec VPN hardware box on the user's side • No connectivity with other cloud providers

  28. OpenFlow • Towards an open platform foundation supporting “Software-Defined Networks” (SDN) • Interface standardized by the Open Networking Foundation (ONF) • Board members (as of 6/12): Google, Microsoft, Facebook, Yahoo!, Deutsche Telekom, NTT, Verizon • Dozens of members: Citrix, Huawei, Orange, IBM, Dell, HP, Oracle, Goldman Sachs, … • Current specification (as of 6/12): OpenFlow Switch 1.3.0

  29. [Diagram] Recall our VLAN example: the same picture, with the VLAN switch replaced by an OpenFlow switch under control – RECV on port A, process through the OpenFlow pipeline, SEND on port B.

  30. [Diagram] OpenFlow Switch and Controller: packets enter at an OpenFlow ingress port and traverse a pipeline of flow tables (plus a group table) to an OpenFlow output port; the controller, connected over a secure channel via the OpenFlow protocol, adds, updates, and deletes flow-table entries and receives packets on a table miss.

  31. OpenFlow • Every packet that arrives on an OpenFlow port is processed through the flow pipeline • Processing may involve multiple tables • The processing rules for each table are programmed by the controller through the OpenFlow API • If no matching entry is found, the packet is forwarded to the controller for processing • Main components of a flow entry in the table: • Match fields (e.g. Ethernet MAC src, IPv4 dest) • Priority – determines which match applies • Instructions – update the action set (applied at output)
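
The lookup just described – match fields, priority tie-breaking, and table-miss forwarding to the controller – can be modeled in a few lines. This is a hypothetical single-table sketch, not the OpenFlow wire format:

```python
def lookup(flow_table, packet):
    """Return the action of the highest-priority matching flow entry,
    or hand the packet to the controller on a table miss."""
    hits = [entry for entry in flow_table
            if all(packet.get(k) == v for k, v in entry["match"].items())]
    if not hits:
        return "send-to-controller"          # table miss
    return max(hits, key=lambda entry: entry["priority"])["action"]

# Two entries programmed by the controller; the more specific one
# carries a higher priority so it wins when both match.
table = [
    {"priority": 10, "match": {"eth_dst": "aa:bb"}, "action": "output:2"},
    {"priority": 20, "match": {"eth_dst": "aa:bb", "ipv4_dst": "10.0.0.5"},
     "action": "output:3"},
]

print(lookup(table, {"eth_dst": "aa:bb", "ipv4_dst": "10.0.0.5"}))  # output:3
print(lookup(table, {"eth_dst": "cc:dd"}))  # send-to-controller
```

On a miss, a real controller would typically install a new flow entry so that subsequent packets of the same flow stay on the fast datapath.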

  32. OpenFlow • Provides basic primitives for virtualization • Packets are intercepted • High-throughput datapath: flow tables • Packets not matched in flow table sent to controller • Slower control path • Can use event to program flow table entries • Supports layer-2, layer-3 matching and actions • E.g. VLAN behavior • Implementations in hardware (e.g. Ethernet switches) and software (e.g. VMMs) • Enables management of network “slices”

  33. [Diagram] OpenFlow Virtual Networks: OpenFlow-enabled physical and VMM switches, each programmed by a controller, tag traffic so that VMs on different physical hosts form virtual network slices over the shared physical switch.

  34. OpenFlow • Initial use cases include virtualization within a single administrative domain • Multiple virtual network slices for data center tenants • All devices OpenFlow-enabled and managed by same entity • Enables programmable software virtual network switches/routers to use interoperable APIs and mechanisms for packet capture/processing • OpenFlow standardizes the protocol upon which data path is controlled; controller itself is not prescribed • WAN virtual networks must deal with shared links that cannot be programmed by single controller

  35. OpenStack Quantum • Service to establish connectivity among virtual NICs managed by an OpenStack cloud • From the project Wiki: • “Give cloud tenants an API to build rich networking topologies, and configure advanced network policies in the cloud.” • “Let anyone build advanced network services (open and closed source) that plug into Openstack tenant networks.” • Quantum plug-in – manages configuration of virtual switches (at the VMM) and physical switches • A plug-in may use OpenFlow to manage switches

  36. OpenStack Quantum • Quantum specifies service APIs • Simple APIs for creating and managing virtual networks • Add, update, remove networks and ports • Plug/unplug attachments • Technology-agnostic • Layer-2 networking • Exposes tenant-facing APIs • Enables rich network topologies • Leverages emerging network virtualization technologies
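
The API surface listed above – create networks, create ports on them, plug and unplug attachments – can be modeled with a toy class. This is an illustrative sketch of the API's shape, not the real Quantum code (all names are made up):

```python
class ToyQuantum:
    """Toy model of the Quantum resource API: networks, ports, attachments."""
    def __init__(self):
        self.networks = {}
        self.ports = {}
        self._next = 0

    def _uid(self, prefix):
        self._next += 1
        return f"{prefix}-{self._next}"

    def create_network(self, name):
        net_id = self._uid("net")
        self.networks[net_id] = {"name": name}
        return net_id

    def create_port(self, net_id):
        # A port is a virtual switch attachment point on one network.
        port_id = self._uid("port")
        self.ports[port_id] = {"network": net_id, "attachment": None}
        return port_id

    def plug(self, port_id, vif_id):
        # Plug a VM's virtual NIC (VIF) into the port.
        self.ports[port_id]["attachment"] = vif_id

    def unplug(self, port_id):
        self.ports[port_id]["attachment"] = None

api = ToyQuantum()
net = api.create_network("tenant-a")
port = api.create_port(net)
api.plug(port, "vm1-eth0")
```

In the real system, a plug-in (e.g. one driving Open vSwitch or OpenFlow) translates these logical operations into switch configuration.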

  37. [Diagram] OpenStack Quantum architecture: Nova and tenants call the Quantum API; a common plug-in interface dispatches to a Quantum plug-in (e.g. an OpenFlow plug-in), whose agents configure the network switch and the hypervisor switch (e.g. an agent using Open vSwitch).

  38. [Diagram] Quantum and OpenFlow: a user requests a VLAN'd virtual cluster; the OpenStack services respond – Nova creates the VMs and Quantum creates the virtual network – while OpenFlow controllers program the physical hosts and the physical switch.

  39. Inter-cloud • The techniques overviewed so far focus on single-cloud virtual networking • Or providing connectivity back to customer • Inter-cloud virtual networks enable federation of multiple IaaS providers • Use cases: • Allow collaborative projects to share resources across administrative domains • Allow users to deploy virtualized resources across multiple providers

  40. Inter-cloud Virtual Networks • Challenges - shared environment • Lack of control of networking resources in Internet infrastructure • Cannot program routers, switches • Public networks – privacy is important • Often, lack privileged access to underlying resources • May be “root” within a VM, but lacking hypervisor privileges • Approach: Virtual Private Networks • End-to-end; tunneling over shared infrastructure

  41. Related Work • Several VPN technologies exist: • Enterprise VPNs (e.g. Cisco); open-source (e.g. OpenVPN); consumer/gaming/SMB (e.g. Hamachi) • Not easily applicable to federating cloud resources • Proprietary code; difficulty in configuration/management • Research work in the context of Grid computing: • VNET (Northwestern University), VIOLIN (Purdue University), Private Virtual Cluster (INRIA) • Will overview two approaches: • ViNe (Tsugawa, Fortes @ UF) • IPOP / GroupVPN (my project at UF)

  42. ViNe • Focus: • A virtual network architecture that allows VNs to be deployed across multiple administrative domains and offer full connectivity among hosts, independently of connectivity limitations • Organization analogous to the Internet's: • ViNe routers (VRs) are used by nodes as gateways to overlays, just as Internet routers are used as gateways to route Internet messages • Managed virtual networks: • VRs are dynamically reconfigurable • Manipulating the operating parameters of VRs enables the management of VNs

  43. ViNe Architecture • Dedicated resources in each broadcast domain (LAN) for VN processing – ViNe Routers (VRs) • No VN software needed on nodes (platform independence) • VNs can be managed by controlling/reconfiguring VRs • VRs transparently address connectivity problems for nodes • VR = a computer running ViNe software • Easy deployment • Proven mechanisms can be incorporated in physical routers and firewalls • In OpenFlow-enabled networks, flows can be directed to VRs for L3 processing • Overlay routing infrastructure (VRs) decoupled from the management infrastructure

  44. Connectivity • Firewalls and NATs pose problems: • Traffic may be blocked (based on protocol, port, host) • Address translation • Firewall traversal at the application layer • Requires new network APIs (difficult to support existing applications) • ViNe addresses connectivity problems in an application-transparent manner • Between ViNe routers

  45. Connectivity: the ViNe approach • Network virtualization processing is performed only by VRs, so firewall traversal is only needed for inter-VR communication • ViNe firewall traversal mechanism: • VRs with connectivity limitations (limited-VRs) initiate a connection (TCP or UDP) across the Internet with VRs without limitations (queue-VRs) • Messages destined to a limited-VR are sent to its corresponding queue-VR, which holds them until the limited-VR retrieves them over the connection it opened • A long-lived connection is possible between limited-VR and queue-VR • Generally applicable (no dependency on network equipment, firewall/NAT type, etc.)
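
The queue-VR scheme can be sketched with a mailbox model (illustrative only; class and method names are made up). The key property is that the limited-VR only ever opens outbound connections, so no inbound connection needs to cross its firewall:

```python
import queue

class QueueVR:
    """Toy queue-VR: holds messages for limited-VRs behind firewalls/NATs."""
    def __init__(self):
        self.mailboxes = {}

    def register(self, limited_vr_id):
        # Called when a limited-VR initiates its outbound connection.
        self.mailboxes[limited_vr_id] = queue.Queue()

    def deliver(self, limited_vr_id, msg):
        # Any VR can send; the message waits in the mailbox.
        self.mailboxes[limited_vr_id].put(msg)

    def retrieve(self, limited_vr_id):
        # The limited-VR pulls over the long-lived connection it opened
        # itself, so its firewall never sees an inbound connection.
        return self.mailboxes[limited_vr_id].get_nowait()

qvr = QueueVR()
qvr.register("vr-lim-1")           # limited-VR connects out and registers
qvr.deliver("vr-lim-1", b"routed packet")   # another VR sends to it
print(qvr.retrieve("vr-lim-1"))    # limited-VR retrieves the queued message
```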

  46. ViNe routing performance • L3 processing implemented in Java • Mechanisms to avoid IP fragmentation • Use of data structures with low access times in the routing module • VR routing capacity over 880 Mbps (using modern CPU cores) – Gigabit line rate (120 Mbps total encapsulation overhead) • Cannot compete with multi-port network hardware • Sufficient in many cases, where WAN performance is below a Gbps • Requires CPUs launched after 2006 (e.g., 2 GHz Intel Core 2 microarchitecture)

  47. ViNe Management Architecture • VR operating parameters are configurable at run-time: • Overlay routing tables, buffer size, encryption on/off • Autonomic approaches possible • ViNe Central Server: • Oversees global VN management • Maintains ViNe-related information • Authentication/authorization based on Public Key Infrastructure • Remotely issues commands to reconfigure VR operation • [Diagram] VRs send requests to the ViNe Central Server, which responds with configuration actions.

  48. Dealing with Network Restrictions in Clouds • To address the dangers posed by privileged users inside VMs – who can change IP and/or MAC addresses, configure the NIC in promiscuous mode, use raw sockets, or attack the network (spoofing, proxy ARP, flooding, …) – providers impose: • Internal routing and NAT • IP addresses (especially public ones) are not directly configured inside VMs, and NAT techniques are used (many intermediate nodes/hops even in LAN communication) • Sandboxing (disables L2 communication) • VMs are connected to host-only networks • VM-to-VM communication is enabled by a combination of NAT, routing and firewalling mechanisms • Packet filtering (beyond the usual – a VM cannot act as a VR) • Only VM packets containing valid addresses (IP and MAC assigned by the provider) are allowed

  49. ViNe Solution • Configure all nodes to work as VRs: • No need for host-to-VR L2 communication • TCP- or UDP-based VR-to-VR communication circumvents the source-address-check restriction • But… • Network virtualization software is required on all nodes • Network virtualization overhead in both inter- and intra-site communication • Complex configuration and operation • TinyViNe: • No need to implement complex network processing – leave it to specialized resources (i.e., full-VRs) • Keep it simple and lightweight • Use IP addresses as assigned by providers • Make it easy for end users to deploy

  50. TinyViNe • TinyViNe software: • Enables host-to-VR communication on clouds using UDP tunnels • TinyVRs – nodes running TinyViNe software • TinyVR processing: • Intercept packets destined to full-VRs • Transmit the intercepted packets through UDP tunnels • Decapsulate incoming messages arriving through the UDP tunnel • Deliver the packets
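
The TinyVR steps above – intercept a packet destined to a full-VR and push it through a UDP tunnel – can be sketched with real UDP sockets. This is a hypothetical loopback sketch (names made up): two sockets on 127.0.0.1 stand in for the TinyVR and the full-VR at either end of the WAN tunnel.

```python
import socket

def make_tunnel_endpoint():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))            # let the OS pick a free port
    return s

full_vr = make_tunnel_endpoint()        # stands in for a full-VR
tiny_vr = make_tunnel_endpoint()        # stands in for a TinyVR

def tinyvr_send(packet: bytes):
    # Intercept the outbound packet and transmit it through the UDP
    # tunnel to the full-VR, which does the heavy L3 processing.
    tiny_vr.sendto(packet, full_vr.getsockname())

tinyvr_send(b"IP|10.0.0.5|payload")
data, addr = full_vr.recvfrom(4096)     # full-VR decapsulates and routes
print(data)

full_vr.close()
tiny_vr.close()
```

Since only UDP datagrams to an address the provider assigned cross the cloud's network, the scheme sidesteps the L2 and source-address restrictions described on slide 48.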