
VDI Architecture



Presentation Transcript


  1. VDI Architecture Chris House, VCP Senior Network Analyst DataCenter Team

  2. Quick facts
  • 1,500 Windows XP SP2 “sessions”
  • 1 or more statically assigned to each user
  • 2 datacenters, 750 sessions per datacenter
  • 2 VirtualCenter servers (1 per datacenter)
  • 38 ESX servers (19 per datacenter/VC)
  • 4 ESX clusters (2 per datacenter/VC)
  • 15 datastores per cluster
  • 25 sessions per datastore
  • 2 HP StorageWorks XP1024 Fibre Channel SAN arrays (1 per datacenter)
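
The figures above are internally consistent; a quick arithmetic check, using only numbers quoted in the deck:

```python
# Sanity check of the capacity figures quoted above (slide 2).
datacenters = 2
sessions_per_datacenter = 750
clusters_per_datacenter = 2
datastores_per_cluster = 15
sessions_per_datastore = 25

total_sessions = datacenters * sessions_per_datacenter
sessions_per_cluster = datastores_per_cluster * sessions_per_datastore

assert total_sessions == 1500
assert sessions_per_cluster == 375  # matches the 375/cluster on slide 9
assert clusters_per_datacenter * sessions_per_cluster == sessions_per_datacenter
```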

  3. Concept
  • Concurrent VDI and blade PC deployment
  • Complete desktop replacement for office and clinical staff
  • Reduce PC maintenance cost and time
  • Provide the fastest possible access to desktops and apps for roaming users
  • All sessions run 24x7; users encouraged to leave apps running
  • Sessions statically assigned (no pooling)
  • Support multiple sessions per user (software testing)
  • Wyse dual-monitor & HP thin terminals, 19-inch LCD monitors, biometric keyboards, Windows XP Embedded
  • Manually fail over sessions to the remote datacenter in a disaster

  4. Sessions
  • Windows XP SP2
  • 10GB C: drive
  • 512MB RAM (768-1536MB pagefile), 1 vCPU, 1 NIC
  • Sessions never suspended or shut down
  • Rebooted every 2 weeks, once idle for 8 hours
  • Reimaged as needed using Altiris (PXE boot)
  • “My Documents” redirected to a network drive
  • Sessions not backed up

  5. Session Deployment
  • Sessions deployed once using an ESX service console shell script (bash)
  • VM name & hostname same as the custom MAC address
    • 00:50:56:0A for VDC sessions
    • 00:50:56:0D for BDC sessions
  • Deployment process
    • Create directory, customize a new VMX
    • vmkfstools clone of a sysprep-ready VMDK
    • Register with the server and power on for sysprep
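
A minimal sketch of the MAC-derived naming scheme described above. The 00:50:56:0A/0D prefixes are from the deck; the assumption that the last two octets are a simple session counter, and the colon-stripping for the hostname, are illustrative guesses:

```python
# Sketch of MAC-based VM/session naming (prefixes from the deck;
# the counter-in-last-two-octets scheme is an assumption).
PREFIXES = {"VDC": "00:50:56:0A", "BDC": "00:50:56:0D"}

def session_mac(datacenter: str, index: int) -> str:
    """Build a custom MAC for session number `index` (0-65535)."""
    if not 0 <= index <= 0xFFFF:
        raise ValueError("index must fit in two octets")
    return f"{PREFIXES[datacenter]}:{index >> 8:02X}:{index & 0xFF:02X}"

def vm_name(mac: str) -> str:
    """VM name & guest hostname match the MAC; colons are not legal
    in hostnames, so assume they are stripped."""
    return mac.replace(":", "")

mac = session_mac("VDC", 301)
print(mac)           # 00:50:56:0A:01:2D
print(vm_name(mac))  # 0050560A012D
```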

  6. Location-based computing
  • Sessions aware of the user's location within the enterprise
  • Printing & applications require user location to function properly
  • Specific printers dynamically installed at RDP login depending on the user's location
  • “Roaming Printer Agent”
    • Custom solution developed in-house
    • Microsoft Terminal Services API used to obtain the thin terminal hostname and correlate it to a location
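
The core of the "Roaming Printer Agent" logic can be sketched as a pair of lookups. The hostname, location, and printer names below are hypothetical; in the real agent the client hostname comes from the Terminal Services API (e.g. WTSQuerySessionInformation with WTSClientName):

```python
# Hypothetical hostname -> location -> printers tables; the real data
# would come from an enterprise directory or database.
TERMINAL_LOCATION = {
    "TT-ER-001": "Emergency",
    "TT-ICU-004": "ICU",
}
LOCATION_PRINTERS = {
    "Emergency": ["\\\\printsrv\\er-laser1"],
    "ICU": ["\\\\printsrv\\icu-laser1", "\\\\printsrv\\icu-label1"],
}

def printers_for_terminal(hostname: str) -> list[str]:
    """Map the thin terminal's hostname to the printers to install."""
    location = TERMINAL_LOCATION.get(hostname)
    return LOCATION_PRINTERS.get(location, [])

print(printers_for_terminal("TT-ICU-004"))
```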

  7. Application deployment
  • Initially considered SVS/AppStream
  • Currently use Novell ZENworks and Altiris
  • Application objects assigned to users in eDirectory
  • Icons for ZENworks applications presented on the desktop
  • Application installed at first launch
  • Moving to ZENworks 10 for Active Directory

  8. Broker
  • Custom broker developed in-house
  • Brokers RDP connections to VDI and other endpoints:
    • VDI
    • CCI (HP blade PCs)
    • Desktop PCs
  • Assignments made in Active Directory
    • Based on the “Remote Operator” property of the computer object in AD
  • Supports multiple endpoints per user
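
The broker's assignment lookup described above can be sketched as a filter over computer objects. The endpoint names and user IDs are hypothetical, and the in-memory list stands in for an LDAP query against AD:

```python
# Each endpoint (VDI session, CCI blade PC, desktop PC) is a computer
# object whose "Remote Operator" property names the assigned user.
ENDPOINTS = [
    {"name": "VDI-0A0001",  "type": "VDI",     "remote_operator": "chouse"},
    {"name": "CCI-BLADE07", "type": "CCI",     "remote_operator": "chouse"},
    {"name": "PC-4412",     "type": "Desktop", "remote_operator": "jsmith"},
]

def endpoints_for_user(user: str) -> list[str]:
    """Return every endpoint assigned to `user` (multiple allowed)."""
    return [e["name"] for e in ENDPOINTS if e["remote_operator"] == user]

print(endpoints_for_user("chouse"))  # ['VDI-0A0001', 'CCI-BLADE07']
```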

  9. Clusters & Servers
  • 38 x ESX 3.0.1 Enterprise
  • 2 x VirtualCenter 2.0.2
  • 40 sessions per server (DRS balances load)
  • 4 clusters, 9-10 servers per cluster
  • HA & DRS enabled
  • 375 sessions per cluster
  • HP BL480c G1 c-Class blades
    • Dual quad-core Intel Xeon processors @ 2.66GHz
    • 24 GB memory
    • 4 x 1000Mb NICs
    • Dual-port 4Gb FC HBA
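
The per-host density above fits the hardware without memory overcommit; a quick check:

```python
# Sanity check of the per-host density quoted above (slide 9).
sessions_per_host = 40
session_ram_mb = 512
host_ram_gb = 24

# 40 sessions x 512 MB = 20 GB of committed guest RAM per host,
# leaving ~4 GB for the service console and per-VM overhead.
committed_gb = sessions_per_host * session_ram_mb / 1024
assert committed_gb == 20.0
assert committed_gb < host_ram_gb

# With 4 clusters of 9-10 hosts totalling 38 servers, the split must
# be two 9-host and two 10-host clusters.
assert 2 * 9 + 2 * 10 == 38
```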

  10. Network
  • 4 x 1000Mb NICs
  • c-Class enclosures with two Cisco switches
  • ESX vSwitch vmnics uplink to ports on both enclosure switches (two NICs per vSwitch)
  • Cisco switches have 2 x 1000Mb EtherChannel uplinks, with link-state tracking enabled
  • vSwitch0
    • Service Console (VLAN for servers)
    • VMkernel (VLAN for VMotion)
  • vSwitch1
    • Multiple Virtual Machine portgroups (separate VLANs)

  11. Storage
  • Dual-port 4Gb QLogic FC HBA
  • c-Class enclosures with two Brocade 4/12 4Gb switches, single uplinks to core switches
  • One HBA port connects to each enclosure switch
  • Metro has a dual-fabric FC ‘star’ layout (core & edge switches, 32 total)
  • 15 datastores per cluster
  • 25 sessions per datastore
  • 263GB datastore size
  • 9-10 hosts per datastore
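
The 263GB datastore size lines up with the session layout: 25 sessions x 10GB C: drives, plus (likely, given ESX 3's behavior with unreserved RAM) a VMkernel swap file per VM equal to its 512MB of memory. That rationale is an inference, not stated in the deck:

```python
# Check that 25 sessions fit the quoted 263 GB datastore size.
sessions = 25
vmdk_gb = 10
swap_gb = 512 / 1024  # 0.5 GB VMkernel swap per VM, assuming no reservation

used_gb = sessions * (vmdk_gb + swap_gb)
print(used_gb)  # 262.5
assert used_gb <= 263
```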

  12. Storage
  • 2 x HP StorageWorks XP1024 arrays
  • 7.89TB dedicated to local datacenter VM disks
  • 7.89TB dedicated on the remote array for synchronous continuous replication of local VM disks
  • 15.78TB per array, 31.5TB total
  • Dedicated array ports for VDI ESX (4 per fabric, 8 total)
  • Datastores balanced across all 8 paths
  • 18 RAID5 (7D+1P) disk groups per array
    • Local VDI LDEVs
    • Replicated VDI LDEVs from the remote array
  • 1 LUN per datastore (no extents)
  • Datastore LUNs are XP LUSEs of 17 LDEVs (from different disk groups - 136 spindles per LUN)
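
The array figures above also check out arithmetically:

```python
# Check the array capacity figures quoted above (slide 12).
datastores = 15 * 2      # 15 per cluster, 2 clusters per datacenter
datastore_gb = 263
ldevs_per_luse = 17
spindles_per_group = 8   # a RAID5 7D+1P group has 8 disks

local_tb = datastores * datastore_gb / 1000
print(local_tb)          # 7.89
assert round(local_tb, 2) == 7.89
# local datastores + replica of the remote datacenter = 15.78 TB/array
assert round(local_tb * 2, 2) == 15.78
# each datastore LUN spans 17 LDEVs x 8 spindles = 136 spindles
assert ldevs_per_luse * spindles_per_group == 136
```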

  13. Achievements
  • Staff appreciate lightning-fast access to applications and improved mobility
  • Performance on par with, and sometimes faster than, PCs/blade PCs (server-class hardware)
  • Decreased time to access information, improving patient safety
  • Very convenient
  • Information kept secure in the datacenter
  • Reduced PC support calls
  • Reduced electricity usage through fewer PCs/CRTs
  • Streamlined infrastructure
  • HA protects sessions from ESX server failure
  • DRS load balancing keeps runaway CPU/memory-hogging apps in sessions from affecting servers
  • Service desk has more tools to manage sessions (VM reset and view console through VC WebAccess)
  • Server maintenance can be performed during the day by using VMotion to shuffle sessions around

  14. Issues
  • Multimedia limits of RDP
    • Streaming video/Flash
    • High-res images/animation
    • Currently using the Wyse multimedia redirector, which supports only a few codecs
  • USB support
  • VDI performance suffers when all sessions perform heavy disk I/O at the same time
    • Windows Updates: disabled, failing retries
    • Antivirus: scans randomized across the week
    • Large software deployments/updates (Epic)
    • Re-imaging

  15. VMware vClient / VDM3
  • Metro invited to a private beta at VMware's Palo Alto HQ
  • Several features of VDM could benefit our environment
  • Moving the existing environment to VDM would be a Herculean effort
    • Rip & replace would be quickest (versus migration), but what to do with 1,500 displaced users?
  • Conflicts between current session-management processes/configuration and VDM's methods = retraining

  16. Future
  • Software vendors working on multimedia improvements for RDP
  • Considering switching to RGS or another protocol
  • Introducing session pooling
  • Eliminating array replication or switching to asynchronous replication
  • Finding cheaper storage that still performs
  • Continuing to evaluate VDM

  17. Questions? housecs@metrogr.org http://communities.vmware.com/people/chouse
