
Module 9: scaling the environment

Presentation Transcript


  1. Module 9: scaling the environment

  2. Agenda • CP storage in a production environment • Understanding IO by Tier • Designing for multiple CPs • Storage sizing (lab) • Cluster design considerations

  3. Understanding disk IO and Unidesk storage tiers

  4. Unidesk Storage Arch.

  5. Boot Volume Tier • The VM and its associated files (.vmx, vswap, etc.) are created and located here • The boot VMDK is comprised of three things: • The Windows page file for the desktop • A composited (layered) registry • The basic Windows boot files needed before the file system loads • Very little IO is associated with the boot volume • First few seconds of boot • Windows paging

  6. CP and Layer Tier • Layers are stored in a directory structure underneath the CachePoint • Layers are stored as VMDKs • OS and App layers are stored as independent non-persistent disks • Personalization layers are stored as independent persistent disks • Highest IO volume of the three tiers
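
The split between non-persistent and persistent disk modes can be spot-checked from vSphere. The sketch below is a minimal, illustrative example using the pyVmomi SDK; the connection details, the desktop VM name, and the helper name list_layer_disk_modes are assumptions for illustration, not part of the Unidesk tooling.

```python
# Minimal sketch (assumes pyVmomi is installed and vCenter is reachable).
# It only inspects disk modes; it does not modify anything.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def list_layer_disk_modes(vm):
    """Print each virtual disk on the VM with its label and disk mode."""
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            # diskMode will be e.g. 'independent_nonpersistent' for OS/App
            # layers or 'independent_persistent' for the Personalization layer.
            print(f"{device.deviceInfo.label}: {device.backing.diskMode}")


if __name__ == "__main__":
    # Placeholder connection details; replace with real values.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    try:
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.name == "desktop-01":  # hypothetical desktop VM name
                list_layer_disk_modes(vm)
    finally:
        Disconnect(si)
```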

  7. Archive Tier • Used to store personalization layer backups • Backups are configured per persistent desktop • The backup schedule and frequency can be unique to each desktop • Not every desktop on a CachePoint needs to be backed up • Very little IO, as it is simply a storage area • SATA or slower-speed SAS drives are typically used

  8. Tested Desktop Configuration • Windows 7 Professional 32-bit • Office 2010 Pro Plus • VMware vCenter Client • Google Chrome and IE8 for browsing • Tweetdeck • Skype • Adobe Reader and Flash • Single vCPU • 2 GB of RAM (later reduced to 1 GB to show the increase in IO due to paging)

  9. IOPS Analysis by Tier

  10. IOPS Analysis by Tier

  11. IO Analysis by Tier

  12. IOPS by Tier Summary • Testing shows that the majority of IO happens at the CachePoint/Layer datastores • Even during a BIC, most IO happens at the layering volumes • Use high-performance disk for the CachePoint/Layer datastores and less expensive disk for the Boot Volume datastores, if these tiers are not combined into a single datastore • It is supported to have both CachePoint/Layers and Boot Volume on the same datastore, although this will use more of the expensive disk over the long run
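
To make that distribution concrete, here is a small back-of-the-envelope IOPS budget in Python. The per-desktop IOPS figure and the tier split percentages are illustrative assumptions, not the measured numbers from the charts above; substitute your own values.

```python
# Hypothetical per-tier IOPS budget for one CachePoint's datastores.
# iops_per_desktop and the boot/layer/archive split are assumptions
# used only to illustrate why the layer tier needs the fastest disk.

def iops_budget(desktops, iops_per_desktop=10.0, split=(0.05, 0.90, 0.05)):
    """Return estimated steady-state IOPS for the boot, layer, and archive tiers."""
    boot_pct, layer_pct, archive_pct = split
    total = desktops * iops_per_desktop
    return {
        "boot": total * boot_pct,
        "layer": total * layer_pct,
        "archive": total * archive_pct,
    }


if __name__ == "__main__":
    for tier, iops in iops_budget(desktops=75).items():
        print(f"{tier:8s} ~{iops:6.0f} IOPS")
```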

  13. Appliance placement & datastore considerations

  14. Appliance Placement? [Diagram: a four-host DRS cluster (Host 1-Host 4) with the MA, MCP, and CachePoints CP1-CP4 placed across the hosts]

  15. Storage location is what matters [Diagram: the same four-host DRS cluster with five datastores (VMFS1-VMFS5); the MA, MCP, and CP1-CP4 appliances are shown mapped to their datastores]

  16. Individual CP Storage [Diagram: storage for a single CachePoint (CP1) split across three datastores (VMFS1, VMFS-X, VMFS-Y): CP1 boot images (BOOT and vswap disks for Desktop1 through Desktop6), Layer/CP storage (OS, APP, and PERS layer VMDKs), and the Archive; a note indicates these can be merged into one datastore]

  17. Putting it all together (3 Tiers) [Diagram: four hosts (Host 1-Host 4) in a DRS cluster; the MA and MCP sit on a MGMT1 datastore, and each CachePoint (CP1-CP4) has its own Layers datastore (LAYERS1-LAYERS4), Boot datastore (Boot 1-Boot 4), and Archive datastore (Arch1-Arch4)]

  18. Putting it all together (2 Tiers) [Diagram: the same four-host DRS cluster; the MA and MCP sit on the MGMT1 datastore, and each CachePoint (CP1-CP4) has a combined Layers/Boot datastore (LAYERS1-LAYERS4) plus an Archive datastore (Arch1-Arch4)]

  19. Sizing the datastores • Sizing uses the following variables and basic assumptions to define writable storage needs: • Desktop memory • User space/personalization layer size • Shared layer size estimate • Backup settings • Number of desktops and desktops per CP • The sizing tool from Unidesk is a simple spreadsheet and is completely adjustable
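
As a rough illustration of how those variables combine, the sketch below approximates a per-CachePoint capacity estimate in Python. The formula, the 20% free-space buffer, and the example values are assumptions; they stand in for (and do not reproduce) the Unidesk sizing spreadsheet.

```python
# Simplified per-CachePoint storage sizing sketch (GB).
# The formula and buffer are illustrative assumptions, not Unidesk's spreadsheet.

def cp_storage_gb(desktops_per_cp, desktop_ram_gb, pers_layer_gb,
                  shared_layers_gb, backups_retained, free_space_pct=0.20):
    """Return estimated GB for the boot, layer, and archive datastores of one CP."""
    # Boot tier: dominated by the vswap file, roughly the configured desktop RAM.
    boot = desktops_per_cp * desktop_ram_gb
    # Layer tier: one copy of the shared OS/App layers on the CP plus a
    # personalization layer per desktop.
    layer = shared_layers_gb + desktops_per_cp * pers_layer_gb
    # Archive tier: retained personalization-layer backups.
    archive = desktops_per_cp * pers_layer_gb * backups_retained
    buffer = 1.0 + free_space_pct
    return {"boot": boot * buffer, "layer": layer * buffer, "archive": archive * buffer}


if __name__ == "__main__":
    sizes = cp_storage_gb(desktops_per_cp=75, desktop_ram_gb=2, pers_layer_gb=5,
                          shared_layers_gb=60, backups_retained=3)
    for tier, gb in sizes.items():
        print(f"{tier:8s} ~{gb:7.1f} GB")
```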

  20. Lab: Sizing Unidesk storage

  21. Common Limitations? • The VMware cluster size limit for sharing disks (up to ESXi 5.0) was 8 nodes with active VMs sharing a VMDK on a specific VMFS volume • A typical VMFS volume on rotating disk is good for about 65-75 desktops (if you can handle the IO load) • With SSD, a VMFS volume is fine up to 110-120 desktops • CPs have been tested to 250-300 desktops per CP, well above typical VMFS usage • NFS will allow you to have fewer datastores (it does not have the VMFS locking issues), though you must still handle the IO load and have enough CPs
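
To see how those per-volume figures drive datastore count, here is a quick calculation. The 75 and 110 desktops-per-volume values simply restate the slide's rough guidance and are not hard limits.

```python
import math

# Rough datastore-count estimate from the slide's rule-of-thumb figures.
DESKTOPS_PER_VMFS = {"rotating_disk": 75, "ssd": 110}


def vmfs_datastores_needed(total_desktops, disk_type):
    """How many VMFS layer-tier volumes the guidance above implies."""
    return math.ceil(total_desktops / DESKTOPS_PER_VMFS[disk_type])


if __name__ == "__main__":
    for disk_type in DESKTOPS_PER_VMFS:
        n = vmfs_datastores_needed(600, disk_type)
        print(f"{disk_type:13s}: {n} VMFS volumes for 600 desktops")
    # With NFS the per-VMFS-volume limit goes away, so fewer, larger datastores
    # can serve the same desktops, provided the array handles the IO and there
    # are enough CPs.
```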

  22. How could NFS change this design? [Diagram: the 2-tier VMFS design again; four hosts, MA and MCP on MGMT1, CP1-CP4 each with a LAYERS datastore (LAYERS1-LAYERS4) and an Archive datastore (Arch1-Arch4), annotated "75 Desktops Per VMFS volume"]

  23. How could NFS change this design? [Diagram: the same cluster with the layer datastores collapsed onto a single NFS datastore (NFS1) and the archives onto an NFS Archive, annotated "150 Desktops Per CP?" and "Reduced number of datastores?"]

  24. Open Q&A
