Module 9: Scaling the Environment
Agenda • CP storage in a production environment • Understanding IO by Tier • Designing for multiple CPs • Storage sizing (lab) • Cluster design considerations
Understanding Disk IO and Unidesk Storage Tiers
Boot Volume Tier • The VM and its associated files (.vmx, vswap, etc.) are created here • The boot VMDK comprises three things: the Windows page file for the desktop, a composited (layered) registry, and the basic Windows boot files needed before the file system loads • Very little IO is associated with the boot volume: just the first few seconds of boot and Windows paging
CP and Layer Tier • Layers are stored in a directory structure underneath the CachePoint • Layers are stored as VMDKs • OS and App layers are stored as independent non-persistent disks • Personalization layers are stored as independent persistent disks • This tier sees the highest IO volume
Archive Tier • Used to store personalization layer backups • Backups are configured per persistent desktop • The backup schedule and frequency can be unique to each desktop • Not every desktop on a CachePoint needs to be backed up • Very little IO, as it is simply a storage area • SATA or slower-speed SAS drives are typically used
Tested Desktop Configuration • Windows 7 Professional 32-bit • Office 2010 Pro Plus • VMware vCenter Client • Google Chrome and IE8 for browsing • TweetDeck • Skype • Adobe Reader and Flash • Single vCPU • 2 GB of RAM (later reduced to 1 GB to show the increase in IO due to paging)
IOPS by Tier Summary • Testing shows that the majority of IO happens at the CachePoint/Layer datastores • Even during a BIC, most IO happens at the layering volumes • Use high-performance disk for the CachePoint/Layer datastores and less expensive disk for the Boot Volume datastores, if these tiers are not combined into a single datastore • It is supported to have both CachePoint/Layers and Boot Volumes on the same datastore, although this will use more of the expensive disk over the long run • The sketch below illustrates why the layer tier dominates the IO budget
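As a back-of-the-envelope illustration of that split, the following sketch assumes a hypothetical 90/10 layer-to-boot IO ratio and a steady-state 10 IOPS per desktop; both figures are assumptions for illustration, not measured Unidesk numbers.

```python
# Hypothetical IOPS split illustrating "most IO lands on the layer tier".
# The 90/10 split and 10 IOPS per desktop are illustrative assumptions;
# measure your own workload before sizing real storage.

def tier_iops(desktops, iops_per_desktop=10, layer_share=0.9):
    total = desktops * iops_per_desktop
    return {"layers": total * layer_share, "boot": total * (1 - layer_share)}

# 75 desktops on one CachePoint: the layer datastore must absorb ~675 IOPS
# while the boot datastore sees only ~75 -- hence fast disk for layers and
# cheaper disk for boot volumes.
print(tier_iops(75))
```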
Appliance Placement? (diagram: four DRS-enabled hosts, Host 1–Host 4, with the MA, MCP, and CachePoints CP 1–CP 4 distributed across them)
Storage location is what matters (diagram: the same four DRS-enabled hosts running CP1–CP4 plus the MA and MCP; their storage is spread across datastores VMFS1–VMFS5, which hold the MA, MCP, and the four CachePoints)
Individual CP Storage (diagram: one CachePoint's three storage areas, boot images, layer/CP storage, and archive, spread across datastores VMFS1, VMFS-X, and VMFS-Y; each desktop has a BOOT volume and vswap on the boot datastore, shared OS/APP layers plus its own persistent PERS layer on the layer datastore, and PERS backups on the archive; these can be merged into one datastore)
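As a concrete, purely illustrative rendering of that layout, the sketch below lists the files one desktop contributes to each area; the datastore keys and file labels are hypothetical placeholders, not Unidesk terms.

```python
# Illustrative per-desktop file layout on an individual CachePoint's
# datastores. Any or all of these tiers can be merged onto one datastore.

DESKTOP_FILES = {
    "boot_datastore": [
        "BOOT volume (page file, composited registry, boot files)",
        "vswap",
    ],
    "layer_datastore": [
        "OS layer (shared, independent non-persistent)",
        "APP layers (shared, independent non-persistent)",
        "PERS layer (independent persistent)",
    ],
    "archive_datastore": [
        "PERS layer backups",
    ],
}
```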
Putting it all together (3 Tiers) (diagram: four DRS-enabled hosts running CP1–CP4 plus the MA and MCP; each CachePoint has its own layer datastore LAYERS1–LAYERS4, boot datastore Boot 1–Boot 4, and archive datastore Arch1–Arch4, while the MA and MCP live on a management datastore MGMT1)
Putting it all together (2 Tiers) (diagram: the same layout with the boot datastores folded into the layer datastores, leaving LAYERS1–LAYERS4, Arch1–Arch4, and MGMT1)
Sizing the datastores • The sizing tool uses the following variables and basic assumptions to define writable storage needs: • Desktop memory • User space/personalization layer size • Shared layer size estimate • Backup settings • Number of desktops and desktops per CP • The sizing tool from Unidesk is a simple spreadsheet and is completely adjustable; a minimal sketch of the same kind of calculation follows below
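The spreadsheet itself is not reproduced here, but a minimal sketch of the arithmetic it performs, under assumed formulas and an assumed boot-VMDK overhead, looks like this:

```python
# Minimal sizing sketch. The formulas are illustrative assumptions derived
# from the tier descriptions in this module, not the Unidesk spreadsheet.

def size_per_cachepoint_gb(desktops_per_cp, desktop_mem_gb, pers_layer_gb,
                           shared_layers_gb, backups_retained):
    # Boot tier: per desktop, a vswap file (~= configured memory) plus the
    # boot VMDK, which holds the Windows page file (assumed ~= memory),
    # the composited registry, and basic boot files (0.5 GB assumed).
    boot_vmdk_gb = desktop_mem_gb + 0.5
    boot_tier = desktops_per_cp * (desktop_mem_gb + boot_vmdk_gb)

    # Layer tier: one copy of the shared OS/app layers on the CachePoint,
    # plus a writable personalization layer per desktop.
    layer_tier = shared_layers_gb + desktops_per_cp * pers_layer_gb

    # Archive tier: retained personalization-layer backups per desktop.
    archive_tier = desktops_per_cp * pers_layer_gb * backups_retained

    return {"boot": boot_tier, "layers": layer_tier, "archive": archive_tier}

# Example: 75 desktops per CP, 2 GB RAM, 5 GB personalization layers,
# 40 GB of shared layers, 2 backups retained per desktop.
print(size_per_cachepoint_gb(75, 2, 5, 40, 2))
```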
Lab: Sizing Unidesk storage
Common Limitations? • The VMware cluster size limit for sharing disks (up to ESXi 5.0) was 8 nodes with active VMs sharing a VMDK on a specific VMFS volume • A typical VMFS volume on rotating disk is good for about 65–75 desktops (if you can handle the IO load) • With SSD, a VMFS volume is fine up to 110–120 desktops • CPs have been tested to 250–300 desktops per CP, well above typical VMFS usage • NFS allows you to use fewer datastores (it does not have the VMFS locking issues), though you must still handle the IO and have enough CPs • See the sketch below for how these limits drive datastore and CP counts
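To see how those rules of thumb interact, here is a small sketch; the per-volume limit is the figure quoted above, the per-CP figure of 150 comes from the NFS design question on the next slide, and the helper and 600-desktop example are otherwise illustrative.

```python
import math

# Datastore/CachePoint count check using the rule-of-thumb limits above.

def infrastructure_counts(total_desktops, desktops_per_vmfs=75,
                          desktops_per_cp=150):
    vmfs_volumes = math.ceil(total_desktops / desktops_per_vmfs)
    cachepoints = math.ceil(total_desktops / desktops_per_cp)
    return {"vmfs_volumes": vmfs_volumes, "cachepoints": cachepoints}

# 600 desktops on rotating disk: 8 VMFS layer volumes but only 4 CPs.
# On NFS the datastore count can shrink toward the CP count, because the
# per-volume desktop limit is a VMFS locking constraint, not a CP limit.
print(infrastructure_counts(600))
```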
How could NFS change this design? (diagram: the 2-tier VMFS layout above, with each LAYERS1–LAYERS4 volume limited to about 75 desktops)
How could NFS change this design? (diagram: a single NFS datastore, NFS1, hosting CP1 and CP2 at perhaps 150 desktops per CP, with an NFS archive replacing Arch1–Arch4; a reduced number of datastores overall)