
Microsoft Exchange Best Practices


Presentation Transcript


  1. Microsoft Exchange Best Practices Amy Styers – styers_amy@emc.com Commercial Microsoft Solutions Consultant

  2. What are “Best Practices”? • Best practices are accepted truths and wisdom based on: • Manufacturer’s recommendations • Historical evidence • Analytical data • Lessons learned • Proof points • Best practices are general recommendations that: • Provide guidance and considerations in the design stages

  3. Best Practices Are Based on the Audience • They depend on the audience, requirements, complexity, and sophistication • Some we understand intuitively; some we don't • Be flexible • What may be good for one implementation may not be good for another

  4. Rule 1 - Understand Exchange I/O Profiles • Understand the various user profiles and how this information can be gathered • Also be aware of what else generates I/O that needs to be considered in the design. This is the foundation for properly sizing Exchange.

  5. Rule 1 - Understand Exchange I/O Profiles • Use the System Monitor tool to monitor the Physical Disk\Disk Transfers/sec counter over the peak hours of server activity. • This allows you to account for random and "bursty" moments • Monday morning between 8:30 and 11 a.m. is a typical peak window • Be aware of online maintenance activity. In some cases it can be very I/O intensive and can generate as much activity as normal Exchange operations.
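A minimal sketch of how that peak measurement turns into a per-user profile, assuming the Disk Transfers/sec samples for the database LUNs have been exported to a CSV; the file name and column name below are illustrative, not prescribed by the deck:

```python
# Derive a per-user IOPS profile from exported System Monitor samples.
# Assumption: "Physical Disk\Disk Transfers/sec" for the Exchange database LUNs
# was captured over the peak window and saved to a CSV with a
# "disk_transfers_per_sec" column (both names are hypothetical).
import csv

def peak_user_iops(csv_path, mailbox_count, top_fraction=0.1):
    """Average the busiest samples (top 10% by default) and divide by mailbox count."""
    with open(csv_path, newline="") as f:
        samples = [float(row["disk_transfers_per_sec"]) for row in csv.DictReader(f)]
    samples.sort(reverse=True)
    busiest = samples[: max(1, int(len(samples) * top_fraction))]
    return (sum(busiest) / len(busiest)) / mailbox_count

# Example: 2,000 mailboxes sampled across the Monday 8:30-11:00 peak window.
# print(round(peak_user_iops("monday_peak.csv", 2000), 2))
```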

  6. Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity • Old way of sizing (capacity only) • Number of Users x Mailbox Size x Growth = ~1234 GB • Buy enough disk to satisfy the capacity requirement (1234 GB / disk size) • With larger disk sizes, there may not be enough spinning disks for good performance • New way of sizing (capacity and performance) • Use enough physical disk spindles to satisfy both total user IOPS and total user mailbox capacity • Total IOPS required / disk spindle IOPS • Also keep in mind: • I/O read/write ratios in Exchange 2007 have changed vs. Exchange 2003 • Understand the tradeoff of large mailboxes • Apply as much detailed information as possible to your design (see the sketch below)
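The arithmetic behind the new way can be sketched as follows; every number in the example (user count, mailbox size, growth factor, per-user IOPS, per-spindle rating) is a placeholder to be replaced with measured or vendor-supplied values:

```python
import math

def spindles_needed(users, mailbox_gb, growth, user_iops, disk_gb, disk_iops):
    """Spindles required by capacity alone vs. by capacity AND performance."""
    by_capacity = math.ceil(users * mailbox_gb * growth / disk_gb)   # old way
    by_performance = math.ceil(users * user_iops / disk_iops)        # IOPS-driven
    return by_capacity, max(by_capacity, by_performance)             # new way keeps the larger

# Placeholder figures: 5,000 users, 250 MB mailboxes, 20% growth, 0.5 IOPS per user,
# 300 GB 10K drives rated at roughly 140 IOPS each.
old_way, new_way = spindles_needed(5000, 0.25, 1.2, 0.5, 300, 140)
print(old_way, new_way)   # capacity alone asks for 5 spindles; performance pushes it to 18
```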

  7. Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity • Server memory configurations are key in Exchange 2007 design. • Read/write ratios are typically 1:1 • Be aware of your total server memory configuration requirements

  8. Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity • Are you paying for more I/O than needed? [Diagram comparing Exchange 2003 and Exchange 2007: disks required for capacity vs. disks required for IOPS, using 146 GB 15K RAID 1/0, 146 GB 10K RAID 5, and 300 GB 10K RAID 1/0 configurations, and the excess capacity that results from sizing for IOPS.]

  9. Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity • IOPS: the number of input/output operations (I/Os) per second • %R: percentage of I/Os that are reads • %W: percentage of I/Os that are writes • WP: RAID write penalty (RAID 1 = 2, RAID 5 = 4) • Factor in other variables such as archiving, journaling, virus protection, remote devices, and risk. Don't rely solely on automated tools when sizing your Exchange environment. Put detailed effort into your calculations and provide supporting factual evidence for your designs rather than fictional calculations (see the sketch below).
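Those variables combine into the usual front-end-to-back-end conversion: every read costs one disk I/O, every write costs WP disk I/Os. A worked sketch, using the penalties listed on the slide:

```python
def backend_iops(frontend_iops, read_pct, write_penalty):
    """Translate host (front-end) IOPS into disk (back-end) IOPS.

    Reads cost one disk I/O each; writes cost 'write_penalty' disk I/Os
    (RAID 1/0 = 2, RAID 5 = 4, per the slide).
    """
    write_pct = 1.0 - read_pct
    return frontend_iops * (read_pct + write_pct * write_penalty)

# Example: 2,500 host IOPS at the 1:1 read/write ratio typical of Exchange 2007.
print(backend_iops(2500, 0.5, 2))   # RAID 1/0 -> 3750.0 disk IOPS
print(backend_iops(2500, 0.5, 4))   # RAID 5   -> 6250.0 disk IOPS
```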

  10. Rule 3 - RAID Protection Type • With Exchange 2007, technically any RAID type can satisfy the I/O, as long as there are enough spindles to deal with the increased write ratio. However, consider the following: • Understand what each type provides: performance, reliability, cost, practicality, risk, exposure • RAID 1: best practice – always works, predictable, and best performance • RAID 5 & RAID 6: writes do impact reads; rebuilds impact both reads and writes

  11. Rule 4 - Spindle and Storage Best Practices • Avoid sharing physical spindles across multiple servers • Best: an Exchange server has its own physical spindles, i.e. the building block approach • Possible: Exchange servers share physical spindles • Possible, with care: Exchange servers share physical spindles with BCV volumes used for their own backup • Possible, but not good: Exchange servers share physical spindles with other applications with predictable, known, and similar workloads • Very bad: Exchange servers share physical spindles with applications with unpredictable and incompatible workloads, like data warehouses • If unable to dedicate spindles to Exchange, be aware of what else is running on the physical spindles and account for that activity

  12. Rule 4 - Spindle and Storage Best Practices • Performance problems may arise from sharing physical disk resources with other I/O-intensive applications and databases • Exchange database write characteristics are very random and small in size • Avoid sharing Exchange physical disks with applications that have different I/O characteristics, such as Oracle • These recommendations are true for both DAS and SAN topologies • Understand the storage technology options available for your Exchange environment • Understand the business requirements first • Not dedicating physical disks to Exchange can bring unpredictable performance; dedicating physical disks to Exchange allows you to maintain predictable levels of performance

  13. Rule 4 - Spindle and Storage Best Practices • Exchange on direct-attached storage (DAS/JBOD): Exchange has always run on direct-attached storage, but… • Reliability issues • Lack of write cache • No rebuild priorities • No remote replication, while Exchange was becoming business critical • DAS and virtualization don't mix well – virtualization is a key market trend • Enterprise storage options: messaging environments are mission-critical applications – treat them as such • Understand your technology options • Understand your technology cost: acquisition cost and long-term cost of ownership (all related costs)

  14. Rule 5 – Design Methodology for Exchange • Iterate over all possible solutions until the right solution has been found • Platform independent • Considers the total solution, including software • Parameters of an Exchange configuration determine the possibilities • List the accuracy of each possibility • Be aware of Exchange object requirements such as ESG configuration requirements • Exchange should be properly planned before disks are laid out. EMC recommends the use of the Building Blocks outlined in our solutions and ESRP; this makes design, installation, backup, and troubleshooting much more predictable.

  15. Rule 5 – Design Methodology for Exchange • Myths around sharing logs and data on the same physical spindles: • "Microsoft does not allow it!" – FALSE; it is OK when you can recover from a double RAID 1 device failure (http://www.microsoft.com/technet/prodtechnol/exchange/guides/StoragePerformance/fa839f7d-f876-42c4-a335-338a1eb04d89.mspx) • "Separate logs and data because of performance" – TRUE for solutions without write cache • Sometimes write cache is lost after a board failure, which also calls for separating log and data volumes [Diagram: SG1 and SG2 data and log volumes laid out across shared spindles]

  16. Rule 5 – Design Methodology for Exchange • Share logs and data on physical spindles (Symmetrix) • Better usage of available spindles • Do NOT share logs and data on the same logical volume (META) • Logs and DB for the same SG should not reside on the same spindles • Subsystem cache will not guarantee performance • The quantity or intelligence of cache does not override normal disk sizing and layout advice • Provide enough spindles • Optimizer is not a magic spell either • It will not automatically "create" a good configuration over time • Exchange, being such a random application, should be planned out properly before disks are laid out

  17. Rule 5 – Design Methodology for Exchange • Use striped Metavolumes / MetaLUNs • Much better than concatenated in performance tests • Symmetrix (Metavolumes) • Multiples of four are handled well by Symm Disk Adapters • Example: 4-member meta, 8-member meta • Striped metavolumes need a BCV to expand • CLARiiON (MetaLUNs) • Great performance, simple to expand • Host volume sets require Windows dynamic disks • These present limitations in clusters and replication • Stay with basic disk [Diagram: two RAID 1/0 (8+8) groups combined to make a MetaLUN]

  18. Rule 6 – Consider All Elements in the Design • Server • Align using a 64 KB offset with Diskpart (Windows 2003) • Set the allocation unit size on DB and log LUNs to 64 KB • Set the HBA queue depth to 128 • EMC PowerPath for load balancing • Array • Keep the default 8 KB page size on CLARiiON • An 8 KB sector within the new DMX3 cache slot provides a better fit • Don't change LUN offsets • HBA and FA ports • Fan out two HBAs to at least four FA ports • The FA gives highest priority to queuing I/O requests from the host • During a write burst an FA can be overloaded; more FAs avoid overload • It is NOT necessary to dedicate four FA ports • Share FA ports with other Exchange servers • Do NOT share Exchange with high-bandwidth servers on the same FA slices/CPUs • Increase the queue depth to 128 (or higher when possible) • The queue depth must be set to 128 on both the server (HBA driver) and the HBA itself • Use PowerPath to: • Give priority to log writes • Throttle writes to avoid starvation of reads

  19. Rule 6 – Consider All Elements in the Design • Understand bottlenecks / the weakest link: • Application (users, layout, tuning, AD, isolation) • Volume Manager (format, alignment, stripe size) • Host Bus (HBA queue depth, PowerPath) • Disk (spindles, metavolumes, replication, archiving)

  20. Rule 7: Test and Measure Performance • Besides EMC Workload Analyzer/Performance Manager and NaviAnalyzer, use the following tools: • ExBPA • Understand where you are today; use it to establish baseline information • JetStress • Suggested by Microsoft for design testing – simulates the I/O workload • LoadGen • Simulates email load based on profiles • Perfmon • Helps you understand current I/O characteristics • TEST!!! Never go live without proper testing

  21. Rule 7: Test and Measure Performance • JetStress generates the same throughput as Exchange in a production environment • Each thread generates a mix of transactions (insert, delete, replace and seek) to mimic an Exchange workload • Increasing the thread count increases the throughput to match what is expected in production • Mailbox count and I/O profile are used to calculate the expected IO/s • The expected IO/s is only used to determine when a test has exceeded it and is rated "successful" • To JetStress there is NO difference between 50,000 users at 0.1 IOPS (expected IOPS: 5,000) and 5,000 users at 1.0 IOPS (expected IOPS: 5,000) – but there is a difference in real life! (see the sketch below)
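That target calculation is simple enough to sketch; the 20% headroom factor below is a common design cushion, not something JetStress itself requires:

```python
def jetstress_target_iops(mailboxes, iops_per_mailbox, headroom=1.2):
    """Expected IOPS the disk subsystem must exceed for a 'successful' JetStress run.

    JetStress only compares achieved IOPS against mailboxes * profile, so very
    different real-world workloads can share the same pass/fail target.
    """
    return mailboxes * iops_per_mailbox * headroom

print(jetstress_target_iops(50_000, 0.1))   # 6000.0
print(jetstress_target_iops(5_000, 1.0))    # 6000.0 -- same target, very different reality
```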

  22. Rule 7: Test and Measure Performance • Use JetStress to verify performance and stability • JetStress is used to validate the performance of a disk subsystem prior to putting an Exchange server into production. JetStress helps verify disk performance by simulating Exchange • Watch out for very high user counts based upon JetStress • Mailbox size is used to create the initial databases • The stroking distance is always 100% • Always run JetStress before deployment • Any mistake or imbalance will be made visible, with the opportunity to correct it before implementing in production • Compare against EMC ESRP results • Watch out for caching effects • Test with LoadGen after JetStress testing • Microsoft Exchange Load Generator (LoadGen) is a simulation tool to measure the impact of MAPI, OWA, IMAP, POP and SMTP clients on Exchange servers • LoadGen can require a lot of hardware, but is able to reproduce results

  23. Rule 7: Test and Measure Performance • Understand what the acceptable latency limits are [table of latency thresholds not reproduced in the transcript] • Do validate the latest acceptable latency limits

  24. Rule 7: Test and Measure Performance (continued) • Understand what the acceptable latency limits are [table of latency thresholds not reproduced in the transcript] • Do validate the latest acceptable latency limits (see the sketch below)
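As a rough illustration of validating those limits, the sketch below checks measured Perfmon latencies against thresholds. The 20 ms average database read and 10 ms average log write figures are commonly cited Exchange 2007 guidance rather than values from this deck, so confirm them against the latest Microsoft documentation before relying on them:

```python
# Compare measured latencies (Perfmon "Avg. Disk sec/Read" on database LUNs and
# "Avg. Disk sec/Write" on log LUNs, converted to milliseconds) against limits.
# The thresholds are assumptions based on commonly cited Exchange 2007 guidance.
THRESHOLDS_MS = {"db_read_avg": 20.0, "log_write_avg": 10.0}

def latency_violations(measured_ms):
    """Return the counters whose measured latency exceeds the acceptable limit."""
    return [name for name, limit in THRESHOLDS_MS.items()
            if measured_ms.get(name, 0.0) > limit]

print(latency_violations({"db_read_avg": 24.5, "log_write_avg": 6.2}))   # ['db_read_avg']
```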

  25. Rule 8: Help Plan the Entire Architecture • Storage is usually an afterthought • Getting into Exchange and Active Directory planning sessions will help you understand the environment better • A poorly planned Active Directory can manifest as Exchange performance problems • EMC offers services that can evaluate the quality of a customer's Active Directory deployment • Exchange Insight Workshop • Migration Assessment • Migration Design • Migration Implementation • Tie it all together • Business requirements • Storage sizing • Performance • Backup • Restore • Recovery • Distance replication • Security • Management • Archiving

  26. Rule 9 - Backup Best Practices • Avoid unprotected clones and mirrored clones with VSS • Recovery after a drive failure is VERY difficult • The same applies when a drive fails while mirroring from M1BCV to M2BCV • RAID 5 clones are the best solution in combination with VSS • Recovery after a drive failure is simple • The backup/restore operation can continue with a bad drive, but slowly • Protected clones are possible too • Uses two mirror positions • SNAPs are possible, but consider the following: • Change rate • Activity during ESEUTIL, online maintenance and backup • Sequential read is not optimal – long-term consequences • Understand the difference between recovery and restore • Backup and restore have the same granularity with hardware VSS

  27. Rule 9 - Backup Best Practices • Complicated management: 50 SGs mean 100 LUNs to manage • The trend is to have fewer SGs (8 to 16) • Map the backup order over time • Avoid contention by carefully mapping out backup and eseutil processes. This is very hard to do, and more SGs make it more complicated • One sequential stream per clone • 1:1 map from source to clone • Concurrent backups • Have as many backup threads as possible to reach the required throughput • Sequential with random seeks to the clone drive • Still fast enough to meet the windows • Uses a big pool of clone devices • Make sure you understand the importance of backups • Don't just recommend "no backups" due to technology limitations • Backups are subject to regulatory requirements

  28. Rule 9 - Backup Best Practices • Include data de-duplication as part of the backup strategy • Take advantage of VSS technology and off-load the backup mechanism from the production servers • Break data into atoms (sub-file, variable-length segments of data) • Send and store each atom only once • Avamar backup repository • At the source – de-duplication before data is transported across the network • At the target – assures coordinated de-duplication across sites, servers, and over time • Granular – small, variable-length segments guarantee the most effective de-duplication • …up to 500 times daily data reduction

  29. Rule 10 – Leverage Server Virtualization Technology • Server virtualization technology is a great fit for Exchange • Position VMware whenever possible • Provides great TCO for Exchange deployments – much better than DAS • Understand the latest testing on virtualization technology • Server hardware trends are driving Exchange virtualization • Exchange 2007 requires 64-bit servers • Servers now ship with multi-core 2/4/(soon 6) CPUs and 256 GB RAM • Intel/AMD hardware-assisted virtualization • Huge potential for under-utilization • Opportunity to consolidate and reduce costs • Changes in ESX 3.5 and beyond • Increased guest OS memory (64 GB) • Increased physical RAM on ESX (256 GB) • Network improvements lower CPU utilization • NUMA optimizations improve multiple-VM performance • Improved storage efficiency • Result: ESX 3.5 is ready for Exchange 2007

  30. Rule 10 – Leverage Server Virtualization Technology • Virtualizing Microsoft applications: historically, some customers feared virtualization of enterprise applications • High utilization of resources limited by 32-bit systems • Large amounts of I/O traffic • Server memory limitations • But hardware has changed… • The move from dual-core to multi-core technology increases available resources • More memory per server • Applications like Exchange have changed… • Better use of multiple cores and 64-bit • Reduced I/O • More server roles to consider • Microsoft applications are excellent candidates for virtualization! • Benefits of virtualization, such as encapsulation/portability, hardware availability, and change control, are now accessible in the world of Microsoft applications

  31. Rule 10 – Leverage Server Virtualization Technology • Dispelling myths – performance • Many are still convinced Exchange should never be virtualized • Exchange 2007 performance testing proves otherwise • What performance is important to Exchange administrators? • Low latency within defined thresholds • Exceptional user experience • Available headroom for future growth • Constant latency while scaling to multiple, concurrent mailbox servers • Minimal storage latencies for large mailbox counts • Flexibility to meet these requirements under changing workloads • Let's take a look!

  32. Rule 10 – Leverage Server Virtualization Technology • 16,000 "heavy" users on a single server • CLARiiON CX3-80, Replication Manager • Dell R900, 16 cores, 128 GB RAM • Virtual machines: 4,000 users each, 4 vCPUs, 12 GB RAM • Whitepaper: (link not reproduced in the transcript)

  33. Rule 10 – Leverage Server Virtualization Technology • Leverage dynamic adaptability with Virtual LUN technology • Unanticipated virtual machine growth • New performance needs • Alter the performance of the virtual machine file system without disruption • Move between disk or RAID types • Non-disruptive to virtual machines and applications • Adjust for unplanned changes • Tier virtual machine groups • Efficient usage of storage • Worry-free adaptability [Diagram: virtual machines on VMware ESX servers moved between SATA II and FC LUNs with Virtual LUN technology]

  34. Rule 10 – Leverage Server Virtualization Technology [Chart: virtual provisioning candidacy plotted by I/Os per user (low to high) against mailbox size (small to large)] • In general Exchange is not a good candidate for thin LUNs • Tier 1 application • Heavy, bursty I/O activity • Latency intolerant • Preference for RAID 1/0 • Often reaches max storage quickly • But some Exchange installations may qualify • Smaller user counts • Low I/Os per user • Mailboxes that won't reach their max for more than a year • VP configuration • Dedicated storage pool • Periodic provisioning • NQM cruise control with mixed use • Saves management and capital costs • Provided by CX partner engineering

  35. Rule 10 – Leverage Server Virtualization Technology • Maximize resources – QoS Manager and DRS • Concerned about introducing contention with consolidation? • DRS balances host resources • Monitors CPU and memory utilization • Leverages policy-based VMotion • NQM enforces application storage performance policies • Throughput, bandwidth, response • Together they offer end-to-end policy-based performance management • This is advanced functionality you will not find in JBOD or DAS [Diagram: high-, medium-, and low-priority applications sharing available performance on a VMware ESX server]

  36. Core Storage Best Practices

  37. Core Storage • 1. Align all Microsoft Exchange-related disks using a value of 64 (KB). • This aligns all of the Exchange-related NTFS partitions on a 64-KB boundary. With the release of Windows 2008 this issue has been addressed and corrected, so it is no longer necessary to perform this task on disks configured in Windows 2008.
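A quick way to confirm the alignment on pre-Windows 2008 systems is to inspect each partition's starting offset; a sketch using the built-in wmic query (illustrative only, not part of the deck's tooling):

```python
# Check that every partition starts on a 64 KB boundary (pre-Windows 2008 systems).
# Runs the built-in "wmic" query on the Exchange server and flags misaligned offsets.
import subprocess

def misaligned_offsets(boundary=64 * 1024):
    out = subprocess.run(["wmic", "partition", "get", "StartingOffset"],
                         capture_output=True, text=True, check=True).stdout
    offsets = [int(tok) for tok in out.split() if tok.isdigit()]
    return [off for off in offsets if off % boundary != 0]

print(misaligned_offsets() or "All partitions aligned on a 64 KB boundary")
```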

  38. Core Storage • 2. Format all Microsoft Exchange-related NTFS partitions using 64-KB Allocation Unit (AU) cluster size. • While this cluster size has been shown to have no effect on normal Microsoft Exchange database operations (transaction activity), studies have shown that a 64-KB cluster size increases performance with certain Microsoft Exchange and NTFS-related operations, such as Exchange backups and Exchange check-summing activities associated with VSS-related operations.

  39. Core Storage • 3. Isolate the Microsoft Exchange database workload from other I/O-intensive applications or workloads. • This ensures the highest levels of performance for Microsoft Exchange and makes troubleshooting easier in the event of a disk-related Microsoft Exchange performance problem.

  40. Core Storage • 4. Separate logs and databases onto different disks and RAID groups. • Combining them may pose performance issues, as database and log files have very different I/O characteristics. It can also be an issue to place log files and databases from the same ESG in a given volume group, as certain recovery options can be impaired. If desired, it is possible to combine logs and databases of separate Exchange storage groups on the same physical spindles (typically on a Symmetrix). Do NOT share logs and data on the same logical volume (META). Microsoft has acknowledged this recommendation and has updated this KB article.

  41. Core Storage • 5. Size and configure the environment with spindle performance as the primary consideration and spindle or storage capacity as a secondary concern. In other words, size for performance first, then validate capacity requirements and performance results. • Microsoft has released a new Exchange 2010 sizing tool

  42. Core Storage • 6. Tuning the array storage system parameters is important in obtaining best performance. The following list details the optimal parameters for Exchange: • Cache page size of 8 KB • Maximized write cache size • Read and write cache enabled for all LUNs

  43. Core Storage • 7. Use a Building Block approach for planning storage for Exchange
