
WSV 315: Best Practices & Implementing Hyper-V on Clusters






Presentation Transcript


  1. WSV 315: Best Practices & Implementing Hyper-V on Clusters Jeff Woolsey, Principal Group Program Manager, Windows Server, Hyper-V

  2. Agenda • Hyper-V Architecture • Hyper-V Security • Server Core: Introducing SCONFIG • Enabling Hyper-V with Server Core • New Processor Capabilities and Live Migration • Hyper-V R2 & SCVMM 2008 R2: Live Migration, HA, and Maintenance Mode • Designing a Windows Server 2008 Hyper-V & System Center Infrastructure • SCVMM 2008 R2 • Microsoft Hyper-V Server 2008 R2 • Best Practices, Tips, and Tricks

  3. Architecture

  4. Hyper-V Architecture [Architecture diagram: the Windows hypervisor runs at ring -1 on "Designed for Windows" server hardware. The parent partition runs Windows Server 2008 with the VM service, WMI provider, and per-VM worker processes in user mode, plus VSPs and IHV drivers in kernel mode. Child partitions run Windows Server 2003/2008 or Linux with VSCs, or a non-hypervisor-aware OS using device emulation; enlightened partitions communicate with the parent over VMBus. Components are labeled by provider: OS, ISV/IHV/OEM, Microsoft Hyper-V, and Microsoft/XenSource (Linux VSCs).]

  5. Hyper-V Security

  6. Security Assumptions • Guests are untrusted • Trust relationships • Parent must be trusted by hypervisor • Parent must be trusted by children • Code in guests can run in all available processor modes, rings, and segments • Hypercall interface will be well documented and widely available to attackers • All hypercalls can be attempted by guests • Can detect you are running on a hypervisor • We’ll even give you the version • The internal design of the hypervisor will be well understood

  7. Security Goals • Strong isolation between partitions • Protect confidentiality and integrity of guest data • Separation • Unique hypervisor resource pools per guest • Separate worker processes per guest • Guest-to-parent communications over unique channels • Non-interference • Guests cannot affect the contents of other guests, parent, hypervisor • Guest computations protected from other guests • Guest-to-guest communications not allowed through VM interfaces

  8. Isolation • We’re Serious • No sharing of virtualized devices • Separate VMBus per VM to the parent • No sharing of memory • Each VM has its own address space • VMs cannot communicate with each other, except through traditional networking • Guests can’t perform DMA attacks because they’re never mapped to physical devices • Guests cannot write to the hypervisor • Parent partition cannot write to the hypervisor

  9. Server Core: Introducing SCONFIG

  10. Windows Server Core • Windows Server frequently deployed for a single role • Must deploy and service the entire OS in earlier Windows Server releases • Server Core: minimal installation option • Provides essential server functionality • Command Line Interface only, no GUI shell • Benefits • Less code results in fewer patches and reduced servicing burden • Low surface area server for targeted roles • Windows Server 2008 feedback: love it, but… steep learning curve • Windows Server 2008 R2: Introducing “SCONFIG”

  11. Windows Server Core • Server Core: CLI

  12. Installing Hyper-V Role on Core • Install Windows Server and select Server Core installation

  13. Enable SCONFIG • Log on and type sconfig

  14. Easy Server Configuration

  15. Rename Computer • Type 2 & enter the computer name and password when prompted

  16. Join Domain • Type 1, then D (domain) or W (workgroup), and provide the domain name & credentials when prompted

  17. Add domain account • Type 3, then enter <username> and <password> when prompted
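
For reference, the same three SCONFIG steps can be performed directly from the Core command prompt; a minimal sketch (the computer, domain, and account names are hypothetical):

    netdom renamecomputer %COMPUTERNAME% /newname:HV-CORE-01
    netdom join %COMPUTERNAME% /domain:contoso.com /userd:contoso\admin /passwordd:*
    net localgroup administrators contoso\hvadmin /add

The rename and domain join take effect after a restart; SCONFIG simply prompts for the same information interactively.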

  18. Add Hyper-V Role • ocsetup Microsoft-Hyper-V • Restart when prompted
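
On Server Core the role is added from the command line; a minimal sketch of the full sequence from an elevated prompt (the /w switch just makes the command wait for the package install to finish, and the component name is case sensitive):

    start /w ocsetup Microsoft-Hyper-V
    shutdown /r /t 0

The shutdown command performs the restart the slide refers to.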

  19. Manage Remotely…
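
A sketch of what is typically enabled on the Core host as a starting point for remote management from Hyper-V Manager or MMC consoles (SCONFIG's Configure Remote Management option covers similar ground; the firewall rule group name below is the built-in English one, and full remote Hyper-V management, especially in workgroups, may need additional configuration):

    netsh advfirewall firewall set rule group="Remote Administration" new enable=yes
    winrm quickconfig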

  20. New Processor Capabilities and Live Migration

  21. 64 Logical Processor Support • Overview • 4x improvement over Hyper-V R1 • Hyper-V can take advantage of larger scale-up systems with more compute resources • Support for up to 384 concurrently running virtual machines & up to 512 virtual processors PER SERVER • 384 single virtual processor VMs OR • 256 dual virtual processor VMs (512 virtual processors) OR • 128 quad virtual processor VMs (512 virtual processors) OR • any combination, so long as you’re running up to 384 VMs and up to 512 virtual processors

  22. Processor Compatibility Mode • Overview • Allows live migration across different CPU versions within the same processor family (i.e. Intel-to-Intel and AMD-to-AMD). • Does NOT enable cross-platform migration from Intel to AMD or vice versa. • Configure compatibility on a per-VM basis. • Abstracts the VM down to the lowest common denominator in terms of instruction sets available to the VM. • Benefits • Greater flexibility within clusters • Enables migration across a broader range of Hyper-V host hardware

  23. Forward & Backward Compatibility • How Does it Work? • When a VM is started, the hypervisor exposes the guest-visible processor features • With Processor Compatibility enabled, the guest processor feature set is normalized and newer instruction-set features are “hidden” from the VM.
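
In the R2 Hyper-V Manager this is the "Migrate to a physical computer with a different processor version" checkbox under the VM's Processor settings. A minimal scripted sketch, assuming the v1 WMI namespace root\virtualization and its LimitProcessorFeatures property on Msvm_ProcessorSettingData (the VM name "VM01" is hypothetical, and the VM should be off when the setting is changed):

    # Find the VM and its active (non-snapshot) settings object
    $vmName = "VM01"
    $ns     = "root\virtualization"
    $vm   = Get-WmiObject -Namespace $ns -Class Msvm_ComputerSystem -Filter "ElementName='$vmName'"
    $vssd = Get-WmiObject -Namespace $ns -Query "ASSOCIATORS OF {$($vm.__PATH)} WHERE AssocClass=Msvm_SettingsDefineState"

    # Get the VM's processor settings and turn on compatibility mode
    $proc = Get-WmiObject -Namespace $ns -Query "ASSOCIATORS OF {$($vssd.__PATH)} WHERE ResultClass=Msvm_ProcessorSettingData"
    $proc.LimitProcessorFeatures = $true

    # Apply the modified setting through the management service
    $svc = Get-WmiObject -Namespace $ns -Class Msvm_VirtualSystemManagementService
    $svc.ModifyVirtualSystemResources($vm.__PATH, @($proc.GetText(1))) | Out-Null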

  24. Frankencluster • Hardware: • 4 generations of Intel VT processors • 4-node cluster using 1 Gb/E iSCSI • Test: • Created a script to continuously live migrate VMs every 15 seconds • Result: 110,000+ migrations in a week!
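
A minimal sketch of what such a loop might look like with the Windows Server 2008 R2 FailoverClusters PowerShell module (the cluster group and node names are hypothetical; the original test script is not published here):

    Import-Module FailoverClusters

    $group = "VM01"                 # clustered virtual machine group to move
    $nodes = "NODE1", "NODE2"       # nodes to bounce it between

    while ($true) {
        foreach ($target in $nodes) {
            # Live migrate the VM to the next node, then wait 15 seconds
            Move-ClusterVirtualMachineRole -Name $group -Node $target -MigrationType Live
            Start-Sleep -Seconds 15
        }
    }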

  25. More on Processor Compatibility • What about application compatibility? • How do applications work with these processor features hidden? • Do any apps not work? • What about performance? • What’s the default setting?

  26. Cluster Shared Volumes (CSV)

  27. SAN Management Complexity

  28. Delivering Innovation

  29. Cluster Shared Volumes • All servers “see” the same storage

  30. CSV Compatibility • No special hardware requirements • No file type restrictions • No directory structure or depth limitations • No special agents or additional installations • No proprietary file system • Uses well established traditional NTFS • Doesn’t suffer from VMFS limitations like: • VMFS limited to 2 TB LUNs It just works…
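
A minimal sketch of turning the feature on and adding a disk with the 2008 R2 FailoverClusters PowerShell module (the disk resource name "Cluster Disk 1" is hypothetical; CSV is a one-time opt-in per cluster in R2):

    Import-Module FailoverClusters

    # Opt the cluster in to Cluster Shared Volumes
    (Get-Cluster).EnableSharedVolumes = "Enabled"

    # Promote an existing clustered disk to a CSV; it then appears on
    # every node under C:\ClusterStorage\
    Add-ClusterSharedVolume -Name "Cluster Disk 1"
    Get-ClusterSharedVolume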

  31. CSV & Live Migration • Create VM on target server • Copy memory pages from the source to the target via Ethernet • Final state transfer • Pause virtual machine • Move storage connectivity from source host to target host via Ethernet • Run VM on target; delete VM on source [Diagram: Host 1 and Host 2 attached to shared storage; blue = storage, yellow = networking]

  32. Failover Cluster Configuration Program (FCCP) • New for Windows Server 2008 Failover Clustering • Customers have the flexibility to design failover cluster configurations • If the server hardware and components are logo’d and… • it passes the cluster validation tool, it’s supported! • Or customers can identify cluster-ready servers via the FCCP • OEMs have pre-tested these configurations and list them on the web • Microsoft recommends customers purchase FCCP-validated servers • Look for solutions with this tagline: • “Validated by Microsoft Failover Cluster Configuration Program”
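
The validation step referred to above can be run from Failover Cluster Manager or scripted; a minimal sketch with the FailoverClusters module (node names are hypothetical):

    Import-Module FailoverClusters

    # Produces the validation report that makes a logo'd, non-FCCP configuration supportable
    Test-Cluster -Node "NODE1", "NODE2", "NODE3", "NODE4"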

  33. Hyper-V Networking

  34. Hyper-V Networking • Two 1 Gb/E physical network adapters at a minimum • One for management • One (or more) for VM networking • Dedicated NIC(s) for iSCSI • Connect parent to back-end management network • Only expose guests to internet traffic

  35. Hyper-V Network Configurations • Example 1: • Physical Server has 4 network adapters • NIC 1: Assigned to parent partition for management • NICs 2/3/4: Assigned to virtual switches for virtual machine networking • Storage is non-iSCSI such as: • Direct attach • SAS or Fibre Channel

  36. Hyper-V Setup & Networking 1

  37. Hyper-V Setup & Networking 2

  38. Hyper-V Setup & Networking 3

  39. Each VM on its own Switch… [Diagram: the same Hyper-V architecture, now with four physical NICs. NIC 1 is reserved for parent-partition management; NICs 2, 3, and 4 are each bound to their own virtual switch (VSwitch 1, 2, and 3), giving VM 1, VM 2, and VM 3 dedicated paths through the VSP/VMBus/VSC stack.]

  40. Hyper-V Network Configurations • Example 2: • Server has 4 physical network adapters • NIC 1: Assigned to parent partition for management • NIC 2: Assigned to parent partition for iSCSI • NICs 3/4: Assigned to virtual switches for virtual machine networking

  41. Hyper-V Setup, Networking & iSCSI

  42. Now with iSCSI… [Diagram: the same configuration, except NIC 1 is used for parent-partition management, NIC 2 is dedicated to iSCSI in the parent partition, and NICs 3 and 4 are bound to virtual switches (VSwitch 2 and 3) for virtual machine networking.]

  43. Networking: Parent Partition

  44. Networking: Virtual Switches

  45. New in R2: Core Deployment • There’s no GUI in a Core deployment, so how do I configure which NICs are bound to virtual switches and which are kept separate for the parent partition?

  46. No Problem… • Hyper-V Manager in R2 includes an option to set bindings per virtual switch…
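
For scripting or spot checks, the same information is visible through the v1 WMI namespace, run against the host remotely or locally (PowerShell can be added to 2008 R2 Core as an optional feature); a minimal sketch assuming root\virtualization and the Msvm_ExternalEthernetPort / Msvm_VirtualSwitch classes:

    $ns = "root\virtualization"

    # Physical NICs that can be bound to an external virtual switch
    Get-WmiObject -Namespace $ns -Class Msvm_ExternalEthernetPort |
        Select-Object Name, DeviceID, EnabledState

    # Virtual switches already configured on the host
    Get-WmiObject -Namespace $ns -Class Msvm_VirtualSwitch |
        Select-Object ElementName, Name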

  47. Hyper-V R2: 10Gb/E Ready

  48. Networking: Chimney Support • TCP/IP Offload Engine (TOE) support • Overview • TCP/IP traffic in a VM can be offloaded to a physical NIC on the host computer. • Benefits • Reduce CPU burden • Networking offload to improve performance • Live Migration is supported with Full TCP Offload
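
A minimal sketch of checking and, if desired, enabling the global Chimney setting on the host from an elevated prompt (a TOE-capable NIC and driver are still required):

    netsh int tcp show global
    netsh int tcp set global chimney=enabled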

  49. Networking • Virtual Machine Queue (VMQ) Support • Overview • NIC can DMA packets directly into VM memory • VM device buffer gets assigned to one of the queues • Avoids packet copies in the VSP • Avoids route lookup in the virtual switch (VMQ queue ID) • Allows the NIC to essentially appear as multiple NICs on the physical host (queues) • Benefits • The host no longer has the device DMA data into its own buffers, resulting in a shorter I/O path (performance gain)
