
Windows Server 2012 Hyper-V Networking



Presentation Transcript


  1. Windows Server 2012 Hyper-V Networking. Carlos Mayol and Oscar Bonaque, Premier Field Engineers (PFEs), Microsoft TechNet

  2. Server Virtualization: Hyper-V 2012 (updated to 8000)

  3. Server Virtualization: Hyper-V 2012

  4. NIC Teaming and Hyper-V
  • Switch Independent: does not require switch configuration
  • Switch Dependent: static or dynamic teaming (LACP); requires switch configuration

  5. NIC Teaming and Hyper-V (Balancing Modes Summary)
  • Best for: Hyper-V
  • Sends on all active members, receives on all active members; traffic from the same port always stays on the same NIC
  • Each Hyper-V port is bandwidth-limited to no more than one team member's bandwidth
  • Each VM (Hyper-V port) is associated with a single NIC, which also allows maximum use of dVMQs for better overall performance
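The two teaming modes and the Hyper-V port balancing mode described above can be combined with the NetLbfo cmdlets; a minimal sketch, assuming two physical NICs named "NIC1" and "NIC2" (team and switch names are illustrative):

```powershell
# Create a switch-independent team using the Hyper-V Port
# load-balancing algorithm (each VM port pinned to one NIC)
New-NetLbfoTeam -Name "HVTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Bind a virtual switch to the resulting team interface
New-VMSwitch -Name "TeamSwitch" -NetAdapterName "HVTeam" -AllowManagementOS $true
```

For a switch-dependent LACP team, replace `-TeamingMode SwitchIndependent` with `-TeamingMode Lacp` (this requires matching configuration on the physical switch).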

  6. Host Network Configurations: Non-Converged, Converged Option 1, Converged Option 2
  [Diagram: each option shows VM1…VMN plus Management, Live Migration, Cluster, and Storage traffic, carried over combinations of 1GbE NICs, 10GbE NICs, HBAs, and RDMA links]
  • A non-converged configuration can be accomplished with multiple physical NICs, or with partitioning software at the hardware level, normally equipped on blade chassis systems such as: Dell NPAR, HP FlexFabric, Cisco FEX

  7. Converged Networks
  • QoS in Windows Server 2012: bandwidth management, classification and tagging, priority-based flow control
  • Bandwidth mechanisms: DCB (Data Center Bridging); software QoS in the Hyper-V switch
  • Bandwidth options: Absolute (bits per second) or Weight (an integer in the range 1 to 100, for minimum bandwidth)
  • Best practices for minimum-bandwidth configurations:
  • Keep the sum of the weights around or under 100
  • Assign a relatively large weight to critical workloads even if they don't require that percentage of bandwidth
  • Gap the weight assignments to differentiate the levels of service to be provided (e.g. 5, 3, 1)
  • Make sure that traffic that is not specifically filtered out is also accounted for with a weight assignment
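The weight-based minimum-bandwidth approach above can be sketched as follows, assuming a team interface named "HVTeam" and an illustrative host vNIC for live migration:

```powershell
# Create a converged vSwitch whose QoS mode is Weight
New-VMSwitch -Name "Converged" -NetAdapterName "HVTeam" -MinimumBandwidthMode Weight

# Account for traffic with no explicit weight assignment
Set-VMSwitch -Name "Converged" -DefaultFlowMinimumBandwidthWeight 10

# Add a host vNIC for live migration and give it a large weight
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "Converged"
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
```

Note that the weights only take effect during congestion; idle bandwidth remains available to any flow.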

  8. Server Virtualization Hyper-V 2012 Demo: Converged Networks

  9. Dynamic Switch Ports
  • By default, every vSwitch is placed in the default Primordial pool for the Ethernet resource type
  • Dynamic Switch Port functionality allows a VM to request a connection to one or more virtual switches in a pool of virtual switches, e.g. vEthernet (DMZ) in a DMZ pool and vEthernet (Public) in a Public pool
  • Resource pool configuration using PowerShell (New-VMResourcePool) is a two-part process: create the Ethernet resource pool, then add the vSwitch to the resource pool
  • Note: properly configured Ethernet resource pools on Hyper-V hosts allow for an automatic connection when a VM migrates, because the virtual machine network configuration is now part of the virtual machine configuration
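The two-part process above can be sketched as follows (pool and switch names are illustrative):

```powershell
# 1) Create the Ethernet resource pool
New-VMResourcePool -Name "DMZ" -ResourcePoolType Ethernet

# 2) Add an existing vSwitch to the pool
Add-VMSwitch -Name "DMZ vSwitch" -ResourcePoolName "DMZ"
```

A VM adapter connected to the pool rather than to a named switch can then be attached to whichever member switch exists on the host it lands on after migration.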

  10. Server Virtualization Hyper-V 2012 Demo: Dynamic Switch Ports

  11. Virtual Switch Expanded Functionality (Advanced Features, VM Settings, Network Adapter)
  • DHCP/Router Guard: prevents VMs from acting as DHCP servers or sending Router Advertisements
  • ARP/ND Poisoning (Spoofing) Protection: MAC spoofing protection; protection against IPv6 ND spoofing attacks
  • Network Traffic Monitoring: port mirroring (source or destination); Netmon inside the VM required
  • Per-VM Bandwidth Management (QoS): pseudo-QoS to limit a VM network adapter's bandwidth
  • Can be managed using the Hyper-V PowerShell module:
  Set-VMNetworkAdapter -ComputerName localhost -VMName VM1 -PortMirroring Source
  Set-VMNetworkAdapter -Name "Network Adapter" -VMName VM1 -MaximumBandwidth 20000000
  Set-VMNetworkAdapter -ComputerName localhost -VMName VM1 -DhcpGuard On
  Set-VMNetworkAdapter -ComputerName localhost -VMName VM1 -MacAddressSpoofing On

  12. Per-VM Bandwidth Management (QoS)
  • The switch bandwidth mode is defined during switch creation
  • Where are VM bandwidth modes set? UI = absolute values (Mbps); PowerShell = absolute or weight
  • This is an outbound traffic limit!
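The two PowerShell options above look like this (VM name and values are illustrative; the weight variant requires a vSwitch created in Weight mode):

```powershell
# Absolute cap, in bits per second: limit VM1 to ~100 Mbps outbound
Set-VMNetworkAdapter -VMName "VM1" -MaximumBandwidth 100000000

# Weight-based minimum share during congestion
Set-VMNetworkAdapter -VMName "VM1" -MinimumBandwidthWeight 5
```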

  13. Server Virtualization Hyper-V 2012 Demo: VM Bandwidth limit VM Network Monitor

  14. Dynamic Virtual Machine Queue
  • Requires support from NIC vendors
  • VMQ spreads interrupts for virtual environments the way RSS does for native workloads
  • Dynamic VMQ reassigns the available queues based on the changing networking demands of the VMs
  • All Hyper-V customers should be using VMQ on their 10Gb NICs; customers without VMQ and with I/O loads in VMs may see each VM's CPU0 run hot
  • Can be configured with PowerShell: Get-NetAdapterVmq and Set-NetAdapterVmq
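The two cmdlets above can be used as follows (the adapter name is illustrative):

```powershell
# Inspect VMQ capability and current state on all adapters
Get-NetAdapterVmq

# Enable VMQ on a specific 10GbE adapter
Set-NetAdapterVmq -Name "10GbE-1" -Enabled $true
```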

  15. Single Root I/O Virtualization (SR-IOV)
  • Remaps interrupts and provides Direct Memory Access to virtual machines; reduces network latency and CPU overhead
  • Requires support in the Hyper-V server chipset (BIOS/firmware) and in the network adapter (driver + firmware) on the host
  • Virtual Functions (VFs) in the SR-IOV-capable adapter are mapped directly to the virtual machine
  • VM network traffic bypasses the Hyper-V switch (routing, VLAN filtering, data copy); very similar to basic RDMA functionality
  • SR-IOV is supported in VM mobility scenarios; it is simply not enabled if the destination host does not support SR-IOV
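A minimal sketch of enabling SR-IOV, assuming an SR-IOV-capable adapter named "10GbE-1" (switch and VM names are illustrative):

```powershell
# SR-IOV must be requested when the vSwitch is created; it cannot be
# turned on for an existing switch
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "10GbE-1" -EnableIov $true

# Give the VM's adapter an IOV weight > 0 so it requests a Virtual Function
Set-VMNetworkAdapter -VMName "VM1" -IovWeight 100
```

If the VF cannot be assigned (for example after migration to a host without SR-IOV), traffic falls back to the regular synthetic path through the vSwitch.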

  16. Network Isolation
  • Physical separation: physical switches and adapters for each type of traffic
  • Network Virtualization: isolation through encapsulation; independence from the physical address space
  • Layer 2, VLAN: a VLAN tag is applied to packets and is used to control forwarding
  • Layer 2, Private VLAN (PVLAN): primary and secondary tags are used to isolate clients while still giving them access to shared services

  17. Network Isolation: VLAN Challenges
  • Cumbersome configuration: moving VMs within the datacenter can result in a network outage
  • Physical switch support limitations
  • Limited scalability: up to 4,094 VLANs
  • VLANs cannot span multiple subnets

  18. Private VLAN (PVLAN) Isolation
  • VLAN pairs are used to provide isolation with a small number of VLANs
  • One Primary VLAN; Secondary VLAN modes: Promiscuous, Isolated, Community
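The three PVLAN modes can be assigned per VM adapter with Set-VMNetworkAdapterVlan; a sketch with illustrative VM names and VLAN IDs:

```powershell
# Promiscuous port (e.g. a gateway) sees all secondary VLANs it lists
Set-VMNetworkAdapterVlan -VMName "Gateway" -Promiscuous -PrimaryVlanId 10 -SecondaryVlanIdList 200-201

# Isolated port: can talk only to promiscuous ports
Set-VMNetworkAdapterVlan -VMName "TenantA" -Isolated -PrimaryVlanId 10 -SecondaryVlanId 200

# Community port: can talk to its community and to promiscuous ports
Set-VMNetworkAdapterVlan -VMName "TenantB" -Community -PrimaryVlanId 10 -SecondaryVlanId 201
```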

  19. Network Virtualization
  • Customer Address (CA) space is based on the customer's own network infrastructure
  • Provider Address (PA) space is assigned by the hoster based on the physical address space in the datacenter (not visible to the VM)
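In Windows Server 2012 the CA-to-PA mapping is maintained as lookup records on each host; a hedged sketch using the NetVirtualization cmdlets (all addresses, the MAC, and the subnet ID are illustrative):

```powershell
# Map a Customer Address in virtual subnet 5001 to the Provider Address
# of the host that currently runs the VM (NVGRE encapsulation)
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
    -ProviderAddress "192.168.1.10" -VirtualSubnetID 5001 `
    -MACAddress "00155D000001" -Rule TranslationMethodEncap
```

Equivalent records must exist on every host in the virtual network, which is why this is normally driven by a management layer rather than by hand.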

  20. Questions?
