
Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure



Presentation Transcript


  1. Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

  2. Agenda • Introductions • Support and Advantages Summary • Reference Architectures & Performance Test Results • Best Practices • Sample Case Study: Deploying Exchange 2007 on VI • Availability and Recovery Strategies • Customer Success Stories

  3. Support and Advantages Summary

  4. Changes in Support Options • Scenario 1: • Support through Microsoft Server Virtualization Validation Program • ESX 3.5 U2, Windows Server 2008, Exchange 2007 • Scenario 2: • Support through server OEM • http://www.vmware.com/support/policies/ms_support_statement.html • Scenario 3: • Support through MS Premier contract • http://support.microsoft.com/kb/897615/en-us • Scenario 4: • Support through VMware GSS • Best effort support with MS escalation path (TSA Net)

  5. Summarizing Key Benefits • 5 key benefits of a VI3 platform: • Trim the fat from Exchange • Improve sizing and provisioning • Flexibility with Exchange building blocks • Improve availability • Simplify disaster recovery • Additional information: • http://www.vmware.com/files/pdf/Advantages_Virtualizing_Exchange_2007_final_April_2008.pdf

  6. Reference Architectures & Performance Test Results

  7. Exchange 2007 Performance Analysis • Jetstress – storage performance assessment for Exchange, provided by Microsoft. Uses Exchange libraries to simulate a multi-threaded, Exchange-like workload across the storage configuration. • LoadGen – Exchange deployment performance assessment, provided by Microsoft. Runs end-to-end tests from the client to measure typical Exchange activities (SendMail, Logon, CreateTask, RequestMeeting, etc.).

  8. VMware/EMC/Dell Reference Architecture

  9. VMware/EMC/Dell Performance Results • 1,000 “heavy” users • CLARiiON CX3 • Dell PowerEdge 2950 • VMware ESX 3.0.2 • Mailbox virtual machine • 2 vCPU • 7GB RAM • Comparable performance between native and virtual

  10. VMware/NetApp Reference Architecture

  11. VMware/NetApp Results • 6,000 users • 3 x 2,000 user VMs • IBM LS41 blade • 8 cores, 32GB RAM • NetApp FAS iSCSI storage • ESX 3.5.0 • Exchange 2007 SP1 • Jetstress and LoadGen comparable across native and virtual

  12. VMware/EMC 16,000 Users on Single Server

  13. VMware/EMC 16,000 User Results • 16,000 users • 4 x 4,000 user VMs • Dell R900 • 16 cores, 128GB RAM • EMC CLARiiON CX3 • ESX 3.5.0 • Exchange 2007 SP1 • 1.3 million messages/day • 40% CPU average

  14. VMware/HP Lab Configuration • Mailbox Server – DL580 G4: • Four 3.2GHz dual-core processors (eight cores) • 32GB memory (PC5300) installed in four memory controllers • Dual-port Emulex A8803A PCI-E Host Bus Adapter (HBA) • Two 72GB 10k small form factor Serial Attached SCSI (SAS) drives for the host operating system (OS) • Two 72GB SAS drives for the guest VM OS • RAID 1 disk arrays for the host OS disk and guest VM OS disk • Two integrated NC371i 1Gb network interfaces • VT enabled in BIOS • Hyperthreading enabled

  15. VMware/HP JetStress Results

  16. VMware/HP LoadGen: Mailbox Counter Results

  17. VMware/HP Building Block CPU Performance

  18. Summarizing Performance • Performance has been validated by VMware and Partners • Minimal CPU overhead observed (5-10%) • No impact on disk I/O latency • RPC latency comparable • No virtualization performance degradation observed • New Exchange 2007 workload performs extremely well on VI3 • Can exceed native scalability using building blocks

  19. Best Practices

  20. Virtual CPUs • Considerations • Unavailable pCPUs can result in VM “ready time.” • Idle vCPUs will compete for system resources. • Best Practices for vCPUs • Do not over-commit pCPUs when running Exchange VMs. • Do not over-allocate vCPUs; try to match the exact workload. • If the exact workload is unknown, start with fewer vCPUs initially and increase later if necessary. • The total number of vCPUs assigned to all VMs should be less than or equal to the total number of cores on the ESX Server (in production).
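
A quick allocation check can be scripted; a minimal VI Toolkit (PowerCLI) sketch, assuming placeholder VirtualCenter and host names:

```powershell
# Minimal VI Toolkit (PowerCLI) sketch: compare vCPUs allocated on a host
# to its physical core count. Server and host names are placeholders.
Connect-VIServer -Server "vc01.example.com"
$esxHost = Get-VMHost -Name "esx01.example.com"
$vcpuSum = (Get-VM -Location $esxHost | Measure-Object -Property NumCpu -Sum).Sum
if ($vcpuSum -gt $esxHost.NumCpu) {
    Write-Host "Warning: $vcpuSum vCPUs allocated on $($esxHost.NumCpu) physical cores"
}
```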

  21. Virtual Memory • ESX Memory Management Features • Memory pages can be shared across VMs that have similar data (e.g. the same guest OS). • Memory can be over-committed, i.e. more memory can be allocated to VMs than is physically available on the ESX Server. • A memory balloon technique reclaims memory from virtual machines that do not need all they have been allocated and makes it available to virtual machines that are using all of their allocated memory.

  22. Virtual Memory • Memory Overhead • A fixed system-wide overhead for the service console (about 272 MB for ESX 3.x; 0 MB for ESXi). • A fixed system-wide overhead for the VMkernel, depending on the number and size of device drivers. • Additional overhead for each VM: the virtual machine monitor for each VM requires some memory for its code and data. • A memory overhead table can be found in the VMware Resource Management Guide for ESX 3.5.

  23. Virtual Memory • VM Memory Settings • Configured = memory size of VM assigned at creation. • Reservation = guaranteed lower bound of memory that the host reserves for the VM and cannot be reclaimed for other VMs. • Touched memory = memory actually used by the VM. Guest memory is only allocated on demand by ESX Server. • Swappable = VM memory that can be reclaimed by the balloon driver or worst case by ESX Server swapping.

  24. Virtual Memory • Best Practices • Available physical memory for Exchange VMs = total physical memory minus system-wide overhead, VM overhead, and a user-defined “memory buffer”. • Do not over-commit memory until VC reports that steady state usage is below the amount of physical memory on the server. • Set the memory reservation to the configured size of the VM, resulting in a per-VM vmkernel swap file of zero bytes. The guest OS within the VM will still have its own separate page file. • Do not disable the balloon driver (installed with VMware Tools). • To minimize guest OS swapping, the configured size of the VM should be greater than the average memory usage of Exchange running in the guest. Follow Microsoft guidelines for memory and swap/page file configuration of Exchange VMs.
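
The available-memory formula above works out with simple arithmetic; a sketch with illustrative numbers (the per-VM overhead is a placeholder — look up the actual figure for your VM configuration in the Resource Management Guide):

```powershell
# Back-of-envelope check of the formula above; all figures are examples only.
$totalPhysicalMB  = 32 * 1024   # 32GB ESX host
$serviceConsoleMB = 272         # fixed service console overhead (ESX 3.x)
$perVmOverheadMB  = 400         # placeholder; see the Resource Management Guide
$vmCount          = 3           # Exchange VMs on this host
$bufferMB         = 1024        # user-defined memory buffer
$availableMB = $totalPhysicalMB - $serviceConsoleMB - ($perVmOverheadMB * $vmCount) - $bufferMB
Write-Host "Memory available for Exchange VMs: $availableMB MB"
```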

  25. Storage • Storage Virtualization Concepts • Storage array – consists of physical disks that are presented as logical disks (storage array volumes or LUNs) to the ESX Server. • Storage array LUNs – formatted as VMFS volumes. • Virtual disks – presented to the guest OS; can be partitioned and used in guest file systems.

  26. Storage • Best Practices • Deploy Exchange VMs on shared storage – allows VMotion, HA, and DRS. This aligns well with mission-critical Exchange deployments, which are often installed on shared storage management solutions. • Ensure heavily used VMs are not all accessing the same LUN concurrently. • Storage multipathing – set up a minimum of four paths from an ESX Server to a storage array (requires at least two HBA ports). • Create VMFS file systems from VirtualCenter to get the best partition alignment.

  27. VMFS and RDM Trade-offs • VMFS • Volume can host many virtual machines (or can be dedicated to one virtual machine). • Increases storage utilization; provides better flexibility and easier administration and management. • Does not support the quorum disks required by third-party clustering software. • Full support for Site Recovery Manager. • RDM • Maps a single LUN to one virtual machine, so only one virtual machine is possible per LUN. • More LUNs are required, so it is easier to hit the limit of 256 LUNs that can be presented to an ESX Server. • Although not required, RDM volumes can help facilitate swinging Exchange to standby physical boxes in certain support scenarios. • Leverages array-level backup and replication tools that integrate with Exchange databases. • Required for third-party clustering software (e.g. MSCS); cluster data and quorum disks should be configured with RDM. • Experimental support for Site Recovery Manager. • Large third-party ecosystem with V2P products to aid in certain support situations.

  28. Storage • Multiple VMs per LUN • The number of VMs allocated to a VMFS LUN influences the final architecture.

  29. Networking • Virtual Networking Concepts • Virtual Switches – work like Ethernet switches; support VLAN segmentation at the port level. VLANs in ESX Server allow logical groupings of switch ports to communicate as if all ports were on the same physical LAN segment. • Virtual Switch Tagging (VST mode): virtual switch port group adds and removes tags. • Virtual Machine Guest Tagging (VGT mode): an 802.1Q VLAN trunking driver is installed in the virtual machine. • External Switch Tagging (EST mode): external switches perform VLAN tagging so Ethernet frames moving in and out of the ESX Server host are not tagged with VLAN IDs.

  30. Networking • Virtual Networking Concepts (cont.) • Port groups – templates for creating virtual ports with a particular set of specifications. In ESX Server, there are three types of port group / virtual switch connections: • Service console port group: ESX Server management interface • VMkernel port group: VMotion, iSCSI and/or NFS/NAS networks • Virtual machine port group: virtual machine networks • NIC Teaming – A single virtual switch can be connected to multiple physical Ethernet adapters using the VMware Infrastructure feature called NIC teaming. This provides redundancy and/or aggregation.

  31. Networking • Best Practices • Ensure host NICs run with the intended speed and duplex settings. • Use the same virtual switch to connect VMs on the same host, which helps eliminate physical network chatter (e.g. between mailbox and GC). • Keep Production network traffic separate from VMotion and Admin traffic (e.g. use VLAN technology to logically separate the traffic). • Team all the NICs on the ESX Server. Because VMotion and Admin networks are typically used lightly while Production traffic is nearly constant with Exchange, one practice is to: • Connect to trunk ports on the switch. • Use VLAN tagging to direct the traffic at the switch level to allow better utilization of bandwidth. • This frees up the majority of capacity for Production traffic when the VMotion and Admin VLANs are not being heavily used.
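
A minimal VI Toolkit (PowerCLI) sketch of the VST-mode layout described above; the host name, switch name, and VLAN IDs are placeholders:

```powershell
# Minimal PowerCLI sketch: VLAN-tagged port groups (VST mode) on one teamed
# vSwitch. Host/switch names and VLAN IDs are placeholders.
$esxHost = Get-VMHost -Name "esx01.example.com"
$vSwitch = Get-VirtualSwitch -VMHost $esxHost -Name "vSwitch1"
New-VirtualPortGroup -VirtualSwitch $vSwitch -Name "Production" -VLanId 100
New-VirtualPortGroup -VirtualSwitch $vSwitch -Name "VMotion"    -VLanId 200
New-VirtualPortGroup -VirtualSwitch $vSwitch -Name "Admin"      -VLanId 300
```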

  32. Networking

  33. Resource Management & DRS • Best Practices • VMotion and automated DRS are not currently supported for MSCS cluster nodes. Cold migration is the best option for these roles. • Affinity rules • “Keep Virtual Machines Together”: if the VMs are known to communicate a lot with each other (e.g. mailbox server and GC). • “Separate Virtual Machines”: if the VMs stress/saturate the same system resource (CPU, memory, network, or storage). • “Separate Virtual Machines”: if the VMs rely on each other for availability and recovery (e.g. mailbox server separate from transport dumpster, CCR nodes separate from File Share Witness). • When configuring an ESX cluster • Consider VMotion compatibility between systems • Consider the mix of VM configurations and workloads
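
These rules can also be scripted; a sketch using the VI Toolkit's New-DrsRule cmdlet, with placeholder cluster and VM names:

```powershell
# Sketch of the affinity/anti-affinity rules above. Cluster and VM names
# are placeholders.
$cluster = Get-Cluster -Name "ExchangeCluster"
# Keep the mailbox server and its GC together (they communicate heavily)
New-DrsRule -Cluster $cluster -Name "MBX-with-GC" -KeepTogether $true -VM (Get-VM "MBX-VM1","GC-VM1")
# Keep CCR nodes on separate hosts (they provide availability for each other)
New-DrsRule -Cluster $cluster -Name "CCR-separate" -KeepTogether $false -VM (Get-VM "CCR-Node1","CCR-Node2")
```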

  34. Sample Case Study: Deploying Exchange 2007 on VI

  35. Step 1 – Collect Current Messaging Stats Use the Microsoft Exchange Server Profile Analyzer to collect information from your current environment. • Example: • 1 physical location • 16,000 users • Mailbox profiles • Average - 10 messages sent/40 received per day • Average message size of 50KB • 500MB mailbox quota
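
Those profile numbers translate into rough capacity figures with simple arithmetic; an illustrative sketch:

```powershell
# Rough capacity arithmetic from the profile above (illustrative only).
$users      = 16000
$msgsPerDay = 10 + 40        # sent + received per mailbox
$avgMsgKB   = 50
$quotaMB    = 500
$dailyMailGB = $users * $msgsPerDay * $avgMsgKB / 1MB   # KB of mail per day -> GB
$quotaCeilTB = $users * $quotaMB / 1MB                  # MB of quota -> TB
Write-Host ("~{0:N0} GB of mail/day; ~{1:N1} TB mailbox capacity ceiling" -f $dailyMailGB, $quotaCeilTB)
```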

  36. Step 2 – Define User Profile • Understanding Exchange 2007 Workload Requirements • Knowledge worker profiles for Outlook users • (http://technet.microsoft.com/en-us/library/aa998874(EXCHG.80).aspx)

  37. Step 3 – Design the Mailbox Server VM • http://technet.microsoft.com/en-us/library/bb738142(EXCHG.80).aspx • CPU Requirements • 1,000 average-profile users per processor core • 500 heavy-profile users per processor core • Up to 8 processor cores maximum • Memory Requirements • Storage Requirements • Planning Storage Configurations (Microsoft TechNet) • Exchange 2007 Mailbox Server Role Storage Requirements Calculator

  38. Mailbox Server “Building Blocks” • The Building Block Approach • VMware-recommended best practice • Pre-sized VMs with predictable performance patterns • Improved performance when scaling up (memory page sharing) • Flexibility and simplicity when scaling out (deployment advantages) • Building block CPU and RAM sizing for mailboxes with an “average” profile: http://www.microsoft.com/technet/prodtechnol/exchange/2007/plan/hardware.mspx
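
Applying the guidance linked above to a 4,000-mailbox average-profile block yields the sizing shown on the next slide; a sketch (the 1,000 users/core and 2GB + ~3.5MB-per-mailbox figures follow the Microsoft guidance — verify against the linked page):

```powershell
# Sizing a 4,000-user "average profile" building block. The 1,000 users/core
# and 2GB + ~3.5MB/mailbox figures follow the Microsoft guidance linked above.
$mailboxes    = 4000
$usersPerCore = 1000                          # average profile
$vcpus = $mailboxes / $usersPerCore           # -> 4 vCPU
$memGB = 2 + ($mailboxes * 3.5 / 1024)        # -> ~16 GB
Write-Host "Building block: $vcpus vCPU, $([Math]::Ceiling($memGB)) GB RAM"
```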

  39. Sample 4,000-User Building Block Configuration • CPU: 4 vCPU • Memory: 16 GB • Storage: SCSI Controller 0 • Network: NIC 1

  40. Step 4 – Design Peripheral Server Roles • Server Role Ratios (Processor Cores) • Memory Requirements

  41. Sample Resource Summary for 16,000 average users • Resource Requirements by Server Role – resources required to support 16,000 average-profile mailboxes

  42. Sample Hardware Layout for 16,000 average users • Exchange VM Distribution • ESX Host Specifications

  43. ESX Host Architecture • Characteristics (each host) • Sized for app requirements plus overhead • Supports 8K mailboxes • Can be used as a “building block” to scale out even further

  44. Step 5 – Prepare the VMware Infrastructure • http://www.vmware.com/support/pubs/vi_pages/vi_pubs_35.html • VMware Infrastructure Administration • Advanced VMware Infrastructure Features (VMotion, HA, DRS, etc.) • ESX Host Installation and Configuration • Virtual Networking • Storage

  45. Step 6 – Create Templates and Deploy • http://www.vmware.com/pdf/vc_2_templates_usage_best_practices_wp.pdf • Steps • Create Templates • Install Guest Operating System • Patch and Install Extras (e.g. PowerShell) • Customize and Deploy
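
The customize-and-deploy step can be scripted with the VI Toolkit (PowerCLI); a minimal sketch in which the template, customization spec, host, and datastore names are all placeholders:

```powershell
# Deploy a mailbox VM from a template with a customization spec.
# All names below are placeholders.
$template = Get-Template -Name "W2K3-SP2-Exchange-Base"
$spec     = Get-OSCustomizationSpec -Name "ExchangeGuestSpec"
New-VM -Name "MBX-VM1" -Template $template `
       -VMHost (Get-VMHost -Name "esx01.example.com") `
       -Datastore (Get-Datastore -Name "VMFS-OS-01") `
       -OSCustomizationSpec $spec
```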

  46. Step 7 – Install and Configure Exchange • Deployment Steps (see the Microsoft Exchange Deployment Guide) • Prepare the Schema • Prepare the Topology • Install Client Access Server(s) • Install Hub Transport(s) • Install Mailbox Server(s)
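
In unattended form these steps map to Exchange 2007's setup.com switches; a sketch (the preparation switches run once per organization, the role installs run on each target server from the installation media directory):

```powershell
# Unattended Exchange 2007 setup commands corresponding to the steps above.
.\setup.com /PrepareSchema
.\setup.com /PrepareAD
.\setup.com /mode:Install /roles:ClientAccess
.\setup.com /mode:Install /roles:HubTransport
.\setup.com /mode:Install /roles:Mailbox
```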

  47. Step 8 – Performance Monitoring • Ongoing Performance Monitoring and Tuning • Performance counters of particular interest to Exchange administrators.
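
As one example, a few counters commonly watched on Exchange mailbox servers can be sampled as below; note that Get-Counter requires PowerShell 2.0 or later, and the counter names should be verified against your Exchange version and locale:

```powershell
# Sample some Exchange- and disk-related counters every 5 seconds for a minute.
# Requires PowerShell 2.0+; counter paths are the standard English names.
$counters = @(
    '\MSExchangeIS\RPC Averaged Latency',
    '\MSExchangeIS\RPC Requests',
    '\PhysicalDisk(_Total)\Avg. Disk sec/Read',
    '\PhysicalDisk(_Total)\Avg. Disk sec/Write',
    '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes'
)
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12
```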

  48. Step 9 – Move Mailboxes
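
Exchange 2007 moves mailboxes with the Move-Mailbox cmdlet from the Exchange Management Shell; a minimal sketch, assuming placeholder source and target database names:

```powershell
# Move all mailboxes from a legacy database to a database on the new
# virtualized mailbox server. Database names are placeholders.
Get-Mailbox -Database "LegacyMBX\First Storage Group\Mailbox Database" |
    Move-Mailbox -TargetDatabase "MBX-VM1\SG1\DB1" -BadItemLimit 0 -Confirm:$false
```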

  49. Sample Availability & Recovery Options

  50. Simple Standalone Server Model with HA/DRS • Characteristics • MSCS required? – No • MS License Requirement – Windows/Exchange Standard Edition • Recovery time – Reboot • Transport Dumpster enabled? – No • Protects from – hardware failure only
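
The HA/DRS cluster behind this model can be created with the VI Toolkit (PowerCLI); a minimal sketch with placeholder datacenter and cluster names:

```powershell
# Create an HA/DRS-enabled cluster for the standalone-server model.
# Datacenter and cluster names are placeholders.
New-Cluster -Name "Exchange-HA" -Location (Get-Datacenter -Name "DC1") `
            -HAEnabled -DrsEnabled -DrsAutomationLevel FullyAutomated
```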
