
Improving File System Performance in a Virtual Environment



Presentation Transcript


  1. Improving File System Performance in a Virtual Environment Virtualization Deep Dive Day 2/20/2009 Bob Nolan Raxco Software bnolan@perfectdisk.com

  2. Topic Background • Hardware is getting bigger and faster • Multi-core CPUs with clock speeds of 3+ GHz • 4+ GB RAM • 500 GB-2 TB+ capacity hard drives • The limiting factor is still disk I/O • Anything that speeds up access to the disk improves performance

  3. Virtualization • Uses host resources to run virtual guests • Multiple guests can strain host resources and impact performance • Failure to optimize resources can be crippling • Proactive problem management is best approach

  4. NTFS in a Virtual Machine • Uses slightly more resources in a VM • Maintains a bitmap of VM disk • Allocates free space • Fragments files and free space • Degrades performance with use

  5. Logical vs Physical Clusters • Logical Clusters • File system level • Every partition starts at logical cluster 0 • Unaware of the hard drive technology in use • IDE • SCSI • RAIDx • Number of platters or read/write heads

  6. Logical vs Physical Clusters • Physical Clusters • Hard drive level • Hard drive controller translates logical-to-physical and positions heads
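The translation step above can be pictured with the classic cylinder/head/sector (CHS) mapping. This is only a toy model: the geometry values below are illustrative, and modern drive controllers perform far more complex internal mappings.

```python
# Toy logical-to-physical translation using an old-style CHS geometry.
# HEADS and SECTORS_PER_TRACK are illustrative, not from a real drive.
HEADS, SECTORS_PER_TRACK = 16, 63

def lba_to_chs(lba: int):
    """Map a logical block address to (cylinder, head, sector)."""
    cylinder, rem = divmod(lba, HEADS * SECTORS_PER_TRACK)
    head, sector0 = divmod(rem, SECTORS_PER_TRACK)
    return cylinder, head, sector0 + 1     # sectors are 1-based

print(lba_to_chs(0))      # (0, 0, 1)
print(lba_to_chs(2048))   # (2, 0, 33)
```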

  7. Cluster Size and Performance • Smaller clusters • Less wasted space • Worse performance, especially with large files • Larger clusters • More wasted space • Better performance, especially with large files
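The wasted-space side of this trade-off is easy to quantify: only the unused tail of a file's final cluster is wasted. A minimal sketch, using made-up sample file sizes:

```python
# Slack (wasted) space for sample files under two common NTFS
# cluster sizes. The file sizes are hypothetical examples.

def slack_bytes(file_size: int, cluster_size: int) -> int:
    """Bytes wasted in the final, partially filled cluster."""
    remainder = file_size % cluster_size
    return (cluster_size - remainder) if remainder else 0

files = [1_500, 10_000, 1_048_576]       # assumed sample file sizes
for cluster in (4_096, 65_536):          # 4k default vs 64k clusters
    total = sum(slack_bytes(f, cluster) for f in files)
    print(f"{cluster // 1024}k clusters -> {total} wasted bytes")
```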

  8. Fragmentation Causes • What causes fragmentation? • Occurs when files are created, extended or deleted • Happens regardless of how much free space is available (after an XP/SP2 installation: 944 files in 2,943 fragments) • More than one logical I/O request must be made to the hard drive controller to access a fragmented file
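The last bullet can be illustrated with the slide's own XP/SP2 figures: each contiguous fragment needs its own logical I/O request, so the fragment count minus the file count gives the extra requests fragmentation imposes.

```python
# A file in N fragments needs at least N logical I/O requests.
# Figures below are the XP/SP2 numbers quoted on the slide.
files = 944
fragments = 2943
extra_ios = fragments - files      # requests beyond one-per-file
print(f"{extra_ios} extra logical I/O requests")
```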

  9. Fragmentation Impacts • What does fragmentation do to my system? • Slows down access to files • Extra CPU/Memory/Disk resource usage • Some applications may not run • Slow system boot/shutdown • Audio/Video record/playback drops frames or “skips”

  10. Measuring Impact of Fragmentation • Measuring the performance loss in reading a fragmented file

  11. Defragmenting - Results • What does defragmenting do? • Locates logical pieces of a file and brings them together • Faster to access file and takes less resources • Improves read performance • Consolidates free space into larger pieces • New files get created in 1 piece • Improves write performance

  12. Measuring Impact of Fragmentation • Measuring the performance difference in reading a contiguous file

  13. Defragmenting - Issues to Consider • Free Space • How much is enough? • Where is free space located? • Inside MFT Reserved Zone • Outside of MFT Reserved Zone • Consolidation of free space

  14. Advanced Defrag Technology • Complete Defrag of All Files • Free Space Consolidation • Single Pass Defragmentation • File Placement Strategy • Free Space Requirement • Minimal Resource Usage • Large Drive Support • Easy to Schedule and Manage • OS Certification • Robust/Easy Reporting

  15. Defrag Completeness • Data Files • Directories • System Files • Pagefile • Hibernate File • NTFS metadata

  16. Free Space Consolidation • Allows new files to be created contiguously • Maintains file system performance longer • Requires less frequent defrag passes • Reduces split I/Os

  17. Free Space Consolidation • Defragmenting files improves read performance • Free space consolidation improves write performance • Reduces wasted seeks by over 50%
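Free space consolidation can be pictured as coalescing adjacent free extents into larger runs, so that new files can be written in one piece. A minimal sketch, using hypothetical (start_cluster, length) extents rather than real NTFS bitmap data:

```python
# Merge touching free extents (start_cluster, length) into larger runs.
# The extent list is illustrative, not real NTFS bitmap output.

def consolidate(extents):
    """Return a sorted list of free extents with adjacent runs merged."""
    merged = []
    for start, length in sorted(extents):
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1][1] += length          # extend the previous run
        else:
            merged.append([start, length])
    return [tuple(e) for e in merged]

free = [(100, 8), (108, 4), (300, 16), (316, 4), (900, 2)]
print(consolidate(free))   # [(100, 12), (300, 20), (900, 2)]
```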

  18. Case Study - Auto Company Problems • Overall poor workstation performance • Slow boot times • Increased help desk calls • Increased backup time on servers

  19. Case Study ROI • 4,000 Windows XP workstations • 400 servers • $30/hr end user cost • $40/hr system admin/help desk cost • Saved 20 seconds per day per workstation • Reduced help desk calls by 20% (800 hrs annually) • Cut backup time by 65%

  20. Case Study ROI • Saved 22 hrs per day - 4,840 hrs annually • $145,200 annual productivity savings • $32,000 help desk savings • ~$20,000 backup savings • 66 days to recover the investment • Proactively maintains optimal disk performance
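The annual figures on this slide follow from the per-day savings on slide 19, assuming roughly 220 working days per year (an assumption chosen to match the stated totals; the slide also rounds the ~22.2 saved hours/day down to 22 before annualizing):

```python
# Re-deriving the case-study ROI figures from slide 19's inputs.
# The 220 working days/year is an assumption, not stated on the slide.
workstations = 4000
seconds_saved_per_day = 20
working_days = 220

hours_per_day = workstations * seconds_saved_per_day / 3600   # ~22.2
annual_hours = int(hours_per_day) * working_days              # 22 * 220
productivity_savings = annual_hours * 30                      # $30/hr end user
helpdesk_savings = 800 * 40                                   # 800 hrs at $40/hr
print(annual_hours, productivity_savings, helpdesk_savings)
```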

  21. Conclusion • To improve file system/drive performance • Use appropriate disk technology • Use the most appropriate file system • Use the most appropriate cluster size • Align on cluster boundaries • Make sure free space is consolidated • When you defragment, make sure that it is being done effectively.

  22. Resource Usage • Run in the background • Low Memory Usage • Low CPU Usage

  23. Volume Shadow Copy (VSS) • VSS and defragmentation • Use a cluster size that is a multiple of 16k • The default cluster size is 4k because NTFS compression has not been modified to support cluster sizes greater than 4k • BitLocker (Vista) is also restricted to a 4k cluster size

  24. DiskPar/DiskPart • Want to avoid crossing track boundaries • Align on 64k for best MS SQL performance • Win2k8 - default is 64k when creating volumes • Contact your storage vendor - e.g., EMC recommends 64k
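The alignment check itself is simple arithmetic: a partition's starting byte offset should be an exact multiple of the recommended boundary. A sketch with illustrative offsets (the 64 KiB boundary follows the slide's MS SQL recommendation):

```python
# Check whether a partition's starting byte offset is aligned to a
# 64 KiB boundary. Offsets below are illustrative examples.
KB64 = 64 * 1024

def is_aligned(offset_bytes: int, boundary: int = KB64) -> bool:
    return offset_bytes % boundary == 0

# Legacy default: partitions started at sector 63 (63 * 512 bytes)
print(is_aligned(63 * 512))        # False - misaligned
# Later Windows default: 1 MiB starting offset
print(is_aligned(1024 * 1024))     # True - aligned
```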

  25. Performance Measuring Tools • Windows Performance Monitor • Split I/O count (fragmentation) • Disk queue length (<= 2 per spindle) • hIOMon - www.hiomon.com • Device AND file based metrics • SQLIO - Microsoft • Stress test the I/O subsystem

  26. Cluster Size Recommendations * You can't use NTFS compression if the cluster size is greater than 4k
