M3: Security, Multi-tenancy & Flexibility - PowerPoint PPT Presentation
Presentation Transcript
M3: Security, Multi-tenancy & Flexibility

Symon Perriman, Technical Evangelist
Matt McSpirit, Technical Product Manager

Module Agenda

Multitenancy and Security

Hyper-V Extensible Switch

Networking Performance

Security

Flexible Infrastructure

Virtual Machine Mobility

Network Virtualization


ISOLATION AND MULTITENANCY

Hyper‑V Extensible Switch
  • New feature
  • Handles network traffic among virtual machines, external network, and host operating system

  • Benefits
  • Layer 2 virtual interface
  • Managed programmatically
  • Extensible by partners or customers

[Diagram: a Hyper‑V host running three virtual machines; each VM's network application connects through a virtual network adapter to the Hyper‑V Extensible Switch, which reaches the physical switch via the physical network adapter.]

Hyper-V Extensible Switch

The Hyper-V Extensible Switch allows deeper integration with customers’ existing network infrastructure, monitoring, and security tools.

  • DHCP Guard Protection
  • PVLANs
  • Windows PowerShell & WMI Management
  • ARP/ND Poisoning Protection
  • Virtual Port ACLs
  • Trunk Mode to Virtual Machines
  • Monitoring & Port Mirroring
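Virtual port ACLs from the list above are essentially per-port allow/deny rules keyed on remote address and direction. A minimal Python sketch of that evaluation logic (the table layout and all names are illustrative, not the Hyper-V API or its PowerShell cmdlets):

```python
from ipaddress import ip_address, ip_network

# Hypothetical per-port ACL table: port -> [(remote network, direction, action)].
# Modeled on the idea of Hyper-V virtual port ACLs, not the real implementation.
ACLS = {
    "vm-blue1": [
        (ip_network("10.0.0.0/24"), "inbound", "allow"),
        (ip_network("0.0.0.0/0"), "inbound", "deny"),
    ]
}

def check(port: str, remote_ip: str, direction: str) -> str:
    """Return the first matching rule's action; default deny."""
    for net, dirn, action in ACLS.get(port, []):
        if dirn == direction and ip_address(remote_ip) in net:
            return action
    return "deny"
```

With this table, `check("vm-blue1", "10.0.0.7", "inbound")` returns `"allow"`, while traffic from any address outside 10.0.0.0/24 falls through to the catch-all deny rule.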


Hyper-V Extensible Switch

The Hyper-V Extensible Switch is an open platform that lets multiple vendors provide extensions written to standard Windows API frameworks.

  • Extension categories: Packet Inspection, Packet Filtering, Network Forwarding, Intrusion Detection
  • Multiple Partner Extensions: Cisco Nexus 1000V / UCS VM-FEX, 5nine Security Manager, NEC OpenFlow, InMon sFlow

VMware Comparison

The Hyper-V Extensible Switch is open and extensible, unlike VMware’s vSwitch, which is closed and merely replaceable

1 The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.1 and is replaceable (by partners such as Cisco/IBM) rather than extensible.

2 ARP Spoofing, DHCP Snooping Protection & Virtual Port ACLs require the App component of the VMware vCloud Networking & Security (vCNS) product or a partner solution, all of which are additional purchases.

3 Trunking VLANs to individual vNICs, Port Monitoring and Mirroring at a granular level requires vSphere Distributed Switch, which is available in the Enterprise Plus edition of vSphere 5.1

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/cisco-nexus-1000V/overview.html, http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/, http://www.vmware.com/technical-resources/virtualization-topics/virtual-networking/distributed-virtual-switches.html, http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf, http://www.vmware.com/products/vshield-app/features.html and http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html

Networking Performance

The Hyper-V Extensible Switch takes advantage of hardware innovation to drive the highest levels of networking performance within virtual machines

  • Dynamic VMQ: dynamically span multiple CPUs when processing virtual machine network traffic
  • IPsec Task Offload: offload IPsec processing from within the virtual machine to the physical network adapter, enhancing performance
  • SR-IOV Support: map a virtual function of an SR-IOV-capable physical network adapter directly to a virtual machine

Single-Root I/O Virtualization (SR-IOV)
  • Reduces latency of network path
  • Reduces CPU utilization for processing network traffic
  • Increases throughput
  • Direct device assignment to virtual machines without compromising flexibility
  • Supports Live Migration

[Diagram: without SR-IOV, network I/O travels from the VM's virtual NIC over VMBus to the Hyper-V Switch in the root partition (routing, VLAN filtering, data copy) and out through the physical NIC; with SR-IOV, the VM is mapped directly to a virtual function of the SR-IOV physical NIC, bypassing the software path.]

SR-IOV Enabling & Live Migration

Turn On IOV
  • Enable IOV (VM NIC property)
  • Virtual Function is “assigned”
  • “Team” automatically created
  • Traffic flows through the VF; the software path is not used

Live Migration
  • Remove VF from VM
  • Migrate as normal

Post Migration
  • Break team
  • Reassign Virtual Function
    • Assuming resources are available

[Diagram: on each host, a software switch in IOV mode sits on an SR-IOV physical NIC; inside the VM, a software NIC and a Virtual Function are paired in a “team” under the network stack.]

VM has connectivity even if:
  • Switch not in IOV mode
  • IOV physical NIC not present
  • Different NIC vendor
  • Different NIC firmware
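The failover behavior above, where the guest keeps connectivity through a software NIC whenever the VF is absent, can be modeled in a few lines. This is a hypothetical sketch of the idea, not the Hyper-V implementation:

```python
# Illustrative model of the VF / software-NIC "team" used during SR-IOV
# live migration. All names are assumptions made for this sketch.
class Team:
    def __init__(self):
        self.vf = None                   # virtual function, when assigned
        self.software_nic = "software"   # software path, always present

    def assign_vf(self, vf_available: bool) -> None:
        # Post migration: reassign the VF only if resources are available.
        if vf_available:
            self.vf = "vf"

    def remove_vf(self) -> None:
        # Live migration step: VF removed, traffic fails over.
        self.vf = None

    def active_path(self) -> str:
        # Traffic flows through the VF whenever one is assigned;
        # otherwise the software path keeps the VM connected.
        return self.vf or self.software_nic
```

The point of the design is visible in `active_path()`: removing the VF never severs connectivity, which is what lets SR-IOV coexist with live migration even when the destination host has no IOV-capable NIC.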
Physical Security

BitLocker ensures your data stays secure even when your Hyper-V hosts, clusters, and storage reside in less physically secure locations

  • BitLocker can protect:
    • Local Disk
    • Traditional Cluster Disk
    • CSV 2.0 (Cluster Shared Volumes)

VMware Comparison

Unlike VMware, Hyper-V’s SR-IOV support ensures the highest performance without sacrificing key features such as Live Migration

  • 1 VMware vSphere and the vSphere Hypervisor support VMq only (NetQueue)
  • 2 VMware’s SR-IOV implementation does not support vMotion, HA, or Fault Tolerance. DirectPath I/O, whilst not identical to SR-IOV, aims to provide virtual machines with more direct access to hardware devices, network cards being a good example. On the surface this boosts VM networking performance and reduces the burden on host CPU cycles, but in reality there are a number of caveats to using DirectPath I/O:
    • Very small Hardware Compatibility List
    • No Memory Overcommit
    • No vMotion (unless running certain configurations of Cisco UCS)
    • No Fault Tolerance
    • No Network I/O Control
    • No VM Snapshots (unless running certain configurations of Cisco UCS)
    • No Suspend/Resume (unless running certain configurations of Cisco UCS)
    • No VMsafe/Endpoint Security support
  • SR-IOV also requires the vSphere Distributed Switch, meaning customers have to upgrade to the highest vSphere edition to take advantage of this capability. No such restrictions are imposed when using SR-IOV in Hyper-V, ensuring customers can combine the highest levels of performance with the flexibility they need for an agile infrastructure.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf

Virtual Machine Mobility

  • Live Migration: faster, unrestricted, simultaneous VM live migrations between cluster nodes with no downtime


VIRTUAL MACHINE MOBILITY

Migrate virtual machines without downtime

Live migration stages:
  1. Live migration setup
  2. Memory pages transferred
  3. Modified pages transferred
  4. Storage handle moved

  • Improvements
  • Faster and simultaneous migration
  • Live migration outside a clustered environment
  • Store virtual machines on a File Share
  • Live migration based on server message block (SMB) share
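The stages above follow the classic iterative pre-copy pattern: one full pass over guest memory, then progressively smaller passes over the pages the guest dirtied while the previous pass was in flight. A simulation sketch (illustrative only, not Hyper-V code):

```python
def live_migrate(memory: dict, dirty_rounds):
    """Simulate the migration stages: setup, full memory copy, then
    iterative re-copy of modified pages.

    memory: page -> final contents on the source
    dirty_rounds: per-round sets of pages the guest re-wrote
    """
    dest = {}
    transferred = []
    dest.update(memory)                # stage 2: memory pages transferred
    transferred.append(len(memory))
    for dirty in dirty_rounds:         # stage 3: modified pages transferred
        for page in dirty:
            dest[page] = memory[page]  # re-send only what changed
        transferred.append(len(dirty))
    # stage 4: storage handle moved, VM resumes on target (not modeled)
    return dest, transferred
```

For an 8-page guest that dirties two pages during the first pass and one during the second, `transferred` comes out as `[8, 2, 1]`: the shrinking rounds are what make the final blackout window short.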

[Diagram: configuration data, memory content, and modified memory pages flow over an IP connection from the source host to the target host; the VM's storage stays on SMB network storage shared by both.]

Virtual Machine Mobility

  • Live Migration: faster, unrestricted, simultaneous VM live migrations between cluster nodes with no downtime
  • Live Storage Migration: move the virtual hard disks of running virtual machines to a different storage location with no downtime


VIRTUAL MACHINE MOBILITY

Move virtual machine storage without downtime

  1. Reads and writes go to the source VHD
  2. Disk contents are copied to the new destination VHD
  3. Disk writes are mirrored; outstanding changes are replicated
  4. Reads and writes go to the new destination VHD

  • Benefits
  • Manage storage in a cloud environment with greater flexibility and control
  • Move storage with no downtime
  • Update physical storage available to a virtual machine (such as SMB-based storage)
  • Windows PowerShell cmdlets
  • Live migration of storage
  • Move virtual hard disks attached to a running virtual machine

[Diagram: on a computer running Hyper‑V, the running virtual machine's VHD is copied from the source device to the target device.]
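The four-step flow above can be sketched as a small state machine: writes hit the source, then are mirrored once the copy completes, then switch over to the destination. Purely illustrative, not Hyper-V code:

```python
# Toy model of live storage migration's mirrored-write scheme.
# Blocks are list slots; class and method names are assumptions.
class StorageMigration:
    def __init__(self, source: list):
        self.src, self.dst = source, None
        self.mirroring = False

    def write(self, block: int, data):
        self.src[block] = data        # step 1: writes go to the source VHD
        if self.mirroring:
            self.dst[block] = data    # step 3: writes are mirrored

    def copy_disk(self):
        self.dst = list(self.src)     # step 2: contents copied to new VHD
        self.mirroring = True         # later writes kept in sync

    def switch_over(self):
        # step 4: reads and writes go to the new destination VHD
        self.src, self.mirroring = self.dst, False
```

Because every write after `copy_disk()` lands on both disks, the switch-over is instantaneous; no write is ever lost, which is why the VM never pauses.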

Virtual Machine Mobility

  • Live Migration: faster, unrestricted, simultaneous VM live migrations between cluster nodes with no downtime
  • Shared-Nothing Live Migration: move virtual machines between Hyper-V hosts with nothing but a network cable
  • Live Storage Migration: move the virtual hard disks of running virtual machines to a different storage location with no downtime


VIRTUAL MACHINE MOBILITY

Migrate virtual machines without downtime

  1. Reads and writes go to the source VHD
  2. Disk contents are copied to the new destination VHD
  3. Disk writes are mirrored; outstanding changes are replicated
  4. Live migration begins; reads and writes still go to the source VHD
  5. Live migration continues
  6. Live migration completes

  • Benefits
  • Increase flexibility of virtual machine placement
  • Increase administrator efficiency
  • Reduce downtime for migrations across cluster boundaries
  • Shared-nothing live migration

[Diagram: configuration data, memory content, and modified memory pages move over an IP connection from the source Hyper‑V host to the destination Hyper‑V host, while the VHD is copied from the source device to the target device.]

Network Virtualization

  • Secure Isolation: isolate network traffic from different business units or customers on a shared infrastructure without VLANs
  • Seamless Integration: transparently integrate these private networks into a preexisting infrastructure on another site
  • Flexible Migrations: move VMs as needed within your virtual infrastructure while preserving their virtual network assignments

Dynamic VLAN Reconfiguration is Cumbersome

[Diagram: VLAN tags must be configured on the aggregation switches and on both top-of-rack (ToR) switches, all the way down to the VMs.]

Topology limits VM placement and requires reconfiguration of production switches

Hyper-V Network Virtualization

Server Virtualization
  • Run multiple virtual servers on a physical server
  • Each VM has the illusion it is running as a physical server

Hyper-V Network Virtualization
  • Run multiple virtual networks on a physical network
  • Each virtual network has the illusion it is running as a physical network

[Diagram: Blue and Red VMs virtualized onto one physical server; Blue and Red networks virtualized onto one physical network.]

Virtualize Customer Addresses

[Diagram: System Center holds the virtualization policy mapping the Customer Address (CA) space onto the Provider Address (PA) space of the datacenter network. BlueCorp and RedCorp both use CAs 10.0.0.5 and 10.0.0.7 for their VMs (Blue1 and Red1 on Host 1, Blue2 and Red2 on Host 2), while the hosts use PAs 192.168.4.11 and 192.168.4.22.]
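The virtualization policy in the diagram is conceptually a lookup from (customer network, Customer Address) to Provider Address. The addresses below come from the slide; the table layout itself is an illustrative assumption, not the actual policy format:

```python
# Hypothetical policy table: (customer, CA) -> PA of the hosting machine.
# Note the same CA appears under both customers without conflict.
POLICY = {
    ("Blue", "10.0.0.5"): "192.168.4.11",  # Blue1 on Host 1
    ("Blue", "10.0.0.7"): "192.168.4.22",  # Blue2 on Host 2
    ("Red", "10.0.0.5"): "192.168.4.11",   # Red1 on Host 1
    ("Red", "10.0.0.7"): "192.168.4.22",   # Red2 on Host 2
}

def provider_address(customer: str, ca: str) -> str:
    """Resolve a customer address to the provider address it lives behind."""
    return POLICY[(customer, ca)]
```

Keying the table on the customer network, not just the IP, is what lets BlueCorp and RedCorp bring identical address ranges onto the same physical hosts.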

Hyper-V NV Concepts
  • Customer Network
    • One or more virtual subnets forming an isolation boundary
    • A customer may have multiple Customer Networks
      • e.g. Blue R&D and Blue Sales
  • Virtual Subnet
    • Broadcast boundary

[Diagram: in the hoster datacenter, Blue Corp has two Customer Networks, Blue R&D Net and Blue Sales Net, spanning virtual subnets Blue Subnet1 through Blue Subnet5; Red Corp has one Customer Network, Red HR Net, with Red Subnet1 and Red Subnet2.]

Standards-Based Encapsulation: NVGRE
  • Better network scalability by sharing a PA among VMs
  • Explicit Virtual Subnet ID for better multi-tenancy support

[Diagram: two NVGRE packets travel between PAs 192.168.2.22 and 192.168.5.55, which sit on different subnets. Both carry inner frames between CAs 10.0.0.5 and 10.0.0.7, but one uses GRE Key 5001 and the other GRE Key 6001, so the overlapping customer address spaces remain isolated.]
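NVGRE carries the 24-bit Virtual Subnet ID in the upper bits of the GRE key field (the encapsulation was later standardized as RFC 7637). A sketch of building just the GRE portion of the header; the outer Ethernet/IP headers are omitted for brevity:

```python
import struct

def nvgre_encap(inner_frame: bytes, vsid: int) -> bytes:
    """Prefix an inner Ethernet frame with an NVGRE-style GRE header."""
    flags = 0x2000          # key-present (K) bit set
    proto = 0x6558          # Transparent Ethernet Bridging
    key = vsid << 8         # 24-bit VSID in the upper bits, 8-bit FlowID = 0
    return struct.pack("!HHI", flags, proto, key) + inner_frame
```

Because the VSID rides in every packet, the receiving host can demultiplex frames for Blue (key 5001) and Red (key 6001) even though their inner addresses overlap.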

Hyper-V NV Architecture
  • Network Virtualization is transparent to VMs
  • Management OS traffic is NOT virtualized; only VM traffic
  • Hyper-V Switch and extensions operate in CA space

Data Center Policy
  • Blue: VM1 (MAC1, CA1, PA1), VM2 (MAC2, CA2, PA3), VM3 (MAC3, CA3, PA5)
  • Red: VM1 (MACX, CA1, PA2), VM2 (MACY, CA2, PA4), VM3 (MACZ, CA3, PA6)

[Diagram: on each Windows Server 2012 host, the Hyper-V Switch (with switch extensions and VSID ACL isolation) sits above the Network Virtualization filter (IP virtualization, policy enforcement, routing) in the host network stack. A System Center host agent receives the datacenter policy; management, live migration, cluster, and storage traffic are not virtualized. Host 1 holds PA1/CA1 for Blue VM1 and PA2/CAX for Red VM1; Host 2 holds the corresponding pairs for VM2 and VMY.]

Packet Flow: Blue1 Sending to Blue2

Blue1 (10.0.0.5, VSID 5001) asks “where is 10.0.0.7?” and broadcasts an ARP for 10.0.0.7.

  • The Hyper-V Switch broadcasts the ARP (OOB: VSID 5001) to:
    • All local VMs on VSID 5001
    • The Network Virtualization filter
  • The Network Virtualization filter responds to the ARP for IP 10.0.0.7 on VSID 5001 with Blue2’s MAC
  • The ARP is NOT broadcast to the physical network

[Diagram: two hosts at PAs 192.168.4.11 and 192.168.4.22; each runs a Hyper-V Switch (VSID ACL enforcement) above the Network Virtualization filter (IP virtualization, policy enforcement, routing). Blue1 and Red1 run on the first host, Blue2 and Red2 on the second; Blue VMs use VSID 5001, Red VMs VSID 6001.]
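The ARP interception above amounts to a policy lookup scoped by VSID: the filter answers locally and never floods the request to the wire. A toy sketch; the Blue entry comes from the slide, while the Red entry's MAC and the table layout are hypothetical:

```python
# Hypothetical Network Virtualization filter ARP table, keyed by
# (VSID, customer IP) so overlapping IPs on different virtual
# subnets resolve independently.
ARP_TABLE = {
    (5001, "10.0.0.7"): "MACB2",  # Blue2 on VSID 5001 (from the slide)
    (6001, "10.0.0.7"): "MACR2",  # Red2 on VSID 6001 (MAC is made up)
}

def answer_arp(vsid: int, target_ip: str):
    """Return the MAC for target_ip on this virtual subnet, or None.

    Either way, nothing is broadcast to the physical network; a real
    filter that lacks policy simply has no one to answer for.
    """
    return ARP_TABLE.get((vsid, target_ip))
```

The same 10.0.0.7 query yields MACB2 on VSID 5001 but a different answer on VSID 6001, which is exactly the isolation the slide illustrates.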

Packet Flow: Blue1 Sending to Blue2

  • The filter’s answer, “Use MACB2 for 10.0.0.7” (OOB: VSID 5001), is delivered through the Hyper-V Switch
  • Blue1 (10.0.0.5) learns the MAC of Blue2 (10.0.0.7)
  • The ARP is NOT broadcast to the physical network

[Diagram: same two-host topology as the previous slide.]

Packet Flow: Blue1 Sending to Blue2

  • Sent from Blue1: the frame MACB1→MACB2, 10.0.0.5→10.0.0.7 carries OOB VSID 5001 through the Hyper-V Switch and into the Network Virtualization filter
  • NVGRE on the wire: outer header MACPA1→MACPA2, 192.168.4.11→192.168.4.22, GRE Key 5001, encapsulating the inner frame MACB1→MACB2, 10.0.0.5→10.0.0.7

[Diagram: same two-host topology as the previous slides.]

Packet flow: Blue2 receiving from Blue1

  • NVGRE arrives on the wire at Host 2 (192.168.4.22): outer header MACPA1→MACPA2, 192.168.4.11→192.168.4.22, GRE Key 5001
  • The Network Virtualization filter decapsulates the packet and restores OOB VSID 5001
  • The Hyper-V Switch enforces VSID ACLs and delivers the inner frame MACB1→MACB2, 10.0.0.5→10.0.0.7, which is received by Blue2

[Diagram: same two-host topology as the previous slides.]

VMware Comparison

Only Hyper-V provides key VM migration features in the box, with no additional licensing costs

1 Live Migration (vMotion) is unavailable in the vSphere Hypervisor – vSphere 5.1 required

2 Live Migration (vMotion) and Shared-Nothing Live Migration (Enhanced vMotion) are available in Essentials Plus and higher editions of vSphere 5.1

3 Within the technical capabilities of the networking hardware

4 Live Storage Migration (Storage vMotion) is unavailable in the vSphere Hypervisor

5 Live Storage Migration (Storage vMotion) is available in Standard, Enterprise & Enterprise Plus editions of vSphere 5.1

6 VXLAN is a feature of the vCloud Networking & Security Product, which is available at additional cost to vSphere 5.1. In addition, it requires the vSphere Distributed Switch, only available in vSphere 5.1 Enterprise Plus.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/vsphere/buy/editions_comparison.html, http://www.vmware.com/files/pdf/products/vcns/vCloud-Networking-and-Security-Overview-Whitepaper.pdf, http://www.vmware.com/products/datacenter-virtualization/vcloud-network-security/features.html#vxlan