    Additional Info (some are still draft)

    Tech notes that you may find useful as input to the design.

    A lot more material can be found at the Design Workshop

    Internal Cloud: Gartner model and VMware model
    • Gartner take:
      • Virtual infrastructure
      • On-demand, elastic, automated/dynamic
      • Improves agility and business continuity

    Self-service provisioning portal

    Service catalog

    Chargeback system

    Capacity management

    Ext. cloud connector

    Life cycle management

    Service governor/infrastructure authority

    Identity and access management

    Configuration and change management

    Enterprise service management

    Orchestrator

    Performance management

    Virtual infrastructure management

    Virtual infrastructure

    Physical infrastructure

    Cluster: Settings
    • For the 3 sample sizes, here is my personal recommendation
      • DRS fully automated. Sensitivity: Moderate
      • Use anti-affinity or affinity rules only when needed.
        • More things for you to remember.
        • Gives DRS less room to maneuver
      • DPM enabled. Choose hosts that support DPM
• Do not use WOL; use iLO or IPMI instead
      • VM Monitoring enabled.
        • VM monitoring sensitivity: Medium
        • HA will restart the VM if the heartbeat between the host and the VM has not been received within a 60 second interval
      • EVC enabled. Enables you to add newer hosts to the cluster in the future.
      • Prevent VMs from being powered on if they violate availability constraints → better availability
      • Host isolation response: Shut down VM
        • See http://www.yellow-bricks.com/vmware-high-availability-deepdiv/
        • Compared with “Leave VM Powered on”, this prevents data/transaction integrity risk. The risk is rather low, as the VM itself holds a lock on its files.
        • Compared with “Power off VM”, this allows a graceful shutdown. Some applications need to run a consistency check after a sudden power-off.
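
    As a rough illustration only (not part of the original deck), most of these settings can be applied with PowerCLI. A minimal sketch, assuming a cluster named "Prod-Cluster" and a reachable vCenter; parameter availability varies by PowerCLI version:

      # Minimal PowerCLI sketch; cluster and vCenter names are hypothetical
      Connect-VIServer -Server vcenter01.example.local
      Get-Cluster -Name "Prod-Cluster" |
        Set-Cluster -DrsEnabled:$true -DrsAutomationLevel FullyAutomated `
                    -HAEnabled:$true -HAAdmissionControlEnabled:$true -Confirm:$false
      # Host isolation response, VM monitoring sensitivity, DPM and EVC mode are not all exposed
      # as simple Set-Cluster parameters in every PowerCLI release; set those via the vSphere
      # Client or the cluster's reconfigure spec if your version does not support them.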
    DRS, DPM, EVC

    In our 3 sizes, here are the settings:

    • DRS: Fully Automated
    • DRS sensitivity: Leave it at the default (middle; 3-star migration threshold)
    • EVC: turn on.
      • It does not reduce performance.
      • It is a simple mask.
    • DPM: turn on. Unless HW vendor shows otherwise
    • VM affinity: use sparingly. It adds complexity as we are using group affinity.
    • Group affinity: use (as per diagram in design)

    Why turn on DPM?

    • Power cost is a real concern

    Singapore example: S$0.24 per kWh x (600 W + 600 W) x 24 hours x 365 days x 3 years / 1000 ≈ S$5,100

    This is quite close to the cost of buying one server.

    For every 1 W of power consumed by the servers, we need at least another 1 W for air-conditioning, UPS and lighting.

    VMware VMmark
    • Use VMmark as the basis for CPU selection only, not entire box selection.
      • It is the official benchmark for VMware, and it uses multiple workloads
      • Other benchmarks are not run on vSphere, and typically test a single workload
      • VMmark does not include TCO. Consider entire cost when choosing HW platform
    • Use it as a guide only
      • Your environment is not the same.
      • You need head room and HA.
    • How it’s done
      • VMmark 2.0 uses 1 - 4 vCPU
      • MS Exchange, MySQL, Apache, J2EE, File Server, Idle VM
    • Result page:
      • VMmark 2.0 is not compatible with 1.x results
      • www.vmware.com/products/vmmark/results.html

    This slide needs update

    VMmark: sample benchmark result (HP only)

    I’m only showing results from one vendor, as vendor comparison involves more than just VMmark results.

    IBM, Dell, HP, Fujitsu, Cisco, Oracle, NEC have VMmark results

    Look at this number. 20 tiles = 100 Active VM

    This number is when comparing with same #Tiles

    ± 10% is OK for real-life sizing; this is a benchmark.

    Opteron 8439, 24 cores

    Xeon 5570, 8 cores

    Opteron 2435, 12 cores

    Xeon 5470, 8 cores

    This tells us that Xeon 5500 can run 17 Tiles, at 100% utilisation.

    Each Tile has 6 VM, but 1 is idle. 17 x 5 VM = 85 active VM in 1 box.

    At 80% Peak utilisation, that’s ~65 VM.

    MS Clustering

    ESX Port Group properties

    • Notify Switches = NO
    • Forged Transmits = Accept.

    Win08 clustering is not supported on NFS

    Storage Design

    • Virtual SCSI adapter
      • LSI Logic Parallel for Windows Server 2003
      • LSI Logic SAS for Windows Server 2008

    ESXi changes

    • ESXi 5.0 uses a different technique to determine whether RDM LUNs are used for MSCS cluster devices: it introduces a configuration flag that marks each device participating in an MSCS cluster as "perennially reserved".

    Unicast mode reassigns the station (MAC) address of the network adapter for which it is enabled, and all cluster hosts are assigned the same MAC address. Because of this, you cannot have ESX send ARP or RARP to update the physical switch port with the actual MAC address of the NICs, as this would break unicast NLB communication.
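
    For illustration only: the two port group properties above, set with PowerCLI on a standard vSwitch. The host and port group names are assumptions; validate against the MSCS/NLB guide for your vSphere version.

      $pg = Get-VMHost "esx01.example.local" | Get-VirtualPortGroup -Name "NLB-PortGroup"
      # Forged Transmits = Accept
      $pg | Get-SecurityPolicy | Set-SecurityPolicy -ForgedTransmits:$true
      # Notify Switches = No
      $pg | Get-NicTeamingPolicy | Set-NicTeamingPolicy -NotifySwitches:$false
      # The "perennially reserved" flag is set per RDM device on each host with
      # esxcli storage core device setconfig (see the VMware KB on MSCS RDM handling).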

    Symantec ApplicationHA

    Can install the agent to multiple VMs simultaneously

    Additional Roles for security

    It does not cover Oracle yet

    Presales contact for ASEAN: Vic

    VMware HA and DRS

    Read Duncan’s Yellow Bricks blog first.

    • Done? Read it again. This time, try to internalise it. See speaker notes below for an example.

    vSphere 4.1

    • Primary Nodes
      • Primary nodes hold cluster settings and all “node states”, which are synchronized between primaries. Node states hold, for instance, resource usage information. In case vCenter is not available, the primary nodes will have a rough estimate of the resource occupation and can take this into account when a fail-over needs to occur.
      • Primary nodes send heartbeats to primary nodes and secondary nodes.
      • HA needs at least 1 primary because the “fail-over coordinator” role is assigned to a primary; this role is also described as the “active primary”.
      • If all primary hosts fail simultaneously no HA initiated restart of the VMs will take place. HA needs at least one primary host to restart VMs. This is why you can only take four host failures in account when configuring the “host failures” HA admission control policy. (Remember 5 primaries…)
      • The first 5 hosts that join the VMware HA cluster are automatically selected as primary nodes.  All the others are automatically selected as secondary nodes. A cluster of 5 will be all Primary.
      • When you do a reconfigure for HA the primary nodes and secondary nodes are selected again, this is at random. The vCenter client does not show which host is a primary and which is not.
    • Secondary Nodes
      • Secondary nodes send their state info & heartbeats to the primary nodes only.
    • HA does not know whether the host is isolated or completely unavailable (down).
      • The VM lock file is the safety net. In VMFS, the file is not visible. In NFS, it is the .lck file.

    Nodes send a heartbeat every second; this is the mechanism used to detect possible outages.

    vSphere 4.1: HA and DRS

    Best Practices

    • Avoid using advanced settings to decrease the slot size, as it might lead to longer downtime. Admission control does not take fragmentation of slots into account when slot sizes are manually defined with advanced settings.

    What can go wrong in HA

    • VM Network lost
    • HA network lost
    • Storage Network lost
    VMware HA and DRS

    Split Brain vs. Partitioned Cluster

    • A large cluster that spans racks might experience partitioning. Each partition will think it is the full cluster. As long as there is no loss of the storage network, each partition will happily run its own VMs.
    • Split brain is when 2 hosts want to run the same VM.
    • Partitioning can happen when the cluster is separated by multiple switches. The diagram below shows a cluster of 4 ESX hosts.
    HA: Admission Control Policy (% of Cluster)

    Specify a percentage of capacity that needs to be reserved for failover

    • You need to manually set it so it is at least equal to 1 host failure.
    • E.g. you have an 8-node cluster and want to handle 2 node failures: set the percentage to 25%

    Complexity arises when nodes are not equal

    • Different RAM or CPU
    • But this also impacts the other Admission Control options. So always keep node sizes equal, especially in Tier 1.

    Total amount of reserved resource < (Available Resources – Reserved Resources)

    If no reservation is set, a default of 256 MHz is used for CPU and 0 MB + memory overhead is used for RAM

    Monitor the thresholds with vCenter on the Cluster’s “summary” tab

    Snapshot

    Only keep for maximum 1-3 days.

    • Delete or commit as soon as you are done.
    • A large snapshot may cause issues when committing/deleting.

    For high-transaction VMs, delete/commit as soon as you are done verifying

    • E.g. databases, emails.

    3rd party tool

    • Snapshots taken by third party software (called via API) may not show up in the vCenter Snapshot Manager. Routinely check for snapshots via the command-line.

    Increasing the size of a disk with snapshots present can lead to corruption of the snapshots and potential data loss.

    • Check for snapshots via the CLI before you increase the disk size
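
    A hedged PowerCLI sketch of such a routine check (thresholds and output format are assumptions, not an official procedure):

      # Report all snapshots, oldest first, so large/old ones stand out
      Get-VM | Get-Snapshot |
        Select-Object VM, Name, Created, SizeMB |
        Sort-Object Created | Format-Table -AutoSize
      # Flag snapshots older than the 1-3 day guideline above
      Get-VM | Get-Snapshot | Where-Object { $_.Created -lt (Get-Date).AddDays(-3) }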
    vMotion

    vMotion traffic can be encrypted, at a cost certainly. If the vMotion network is isolated, there is no need.

    May lose 1 ping.

    Inter-cluster vMotion is not the same as intra-cluster vMotion

    • It involves additional calls into vCenter, so there is a hard limit
    • The VM loses its cluster properties (HA restart priority, DRS settings, etc.)
    ESXi: Network configuration with UCS
    • If you are using Cisco UCS blade
      • 2x 10G or 4x 10G depending on blade model and mezzanine card
    • All mezzanine card models support FCoE
      • Unified I/O
      • Low Latency
    • The Cisco Virtualized Adapter (VIC) supports
      • Multiple virtual adapters per physical adapter
      • Ethernet & FC on the same adapter
      • Up to 128 virtual adapters (vNICs)
      • High Performance 500K IOPS
      • Ideal for FC, iSCSI and NFS

    Once you decide it’s Cisco, discuss the details with Cisco.

    Storage DRS and DRS

    Interactions:

    • Storage DRS placement may impact VM-host compatibility for DRS
    • DRS placement may impact VM-datastore compatibility for Storage DRS

    Solution: datastore and host co-placement

    • Done at provisioning time by Storage DRS
    • Based on an integrated metric for space, I/O, CPU and memory resources
    • Overcommitted resources get more weight in the integrated metric
    • DRS placement proceeds as usual

    But it is easier to architect it properly: map the ESX cluster to the datastore cluster manually.

    [Diagram: Datastore 1, Datastore 2 and Datastore 3 grouped as the datastore cluster.]

    Unified Fabric with Fabric Extender

    [Diagram: traditional design with separate FC and Ethernet blade switches — multiple points of management, high cable count — versus unified fabric with fabric extender — single point of management, reduced cables, fiber between racks, copper in racks, end-of-row deployment.]
    Storage IO Control

    Suggested Congestion Threshold values

    One: Avoid different settings for datastores sharing underlying resources

    • Use same congestion threshold on A, B
    • Use comparable share values(e.g. use Low/Normal/High everywhere)

    [Diagram: SIOC enabled on Datastore A and Datastore B, which share the same physical drives.]

    NAS & NFS
    • Two key NAS protocols:
      • NFS (the “Network File System”). This is what we support.
      • SMB (Windows networking, also known as “CIFS”)
    • Things to know about NFS
      • “Simpler” for people who are not familiar with SAN complexity
      • To remove a VM lock is simpler as it’s visible.
        • When ESX Server accesses a VM disk file on an NFS-based datastore, a special .lck-XXX lock file is generated in the same directory where the disk file resides to prevent other ESX Server hosts from accessing this virtual disk file.
        • Don’t remove the .lck-XXX lock file, otherwise the running VM will not be able to access its virtual disk file.
      • No SCSI reservation. This is a minor issue
      • 1 Datastore will only use 1 path
        • Does Load Based Teaming work with it?
        • For 1 GE, throughput will peak at 100 MB/s. At 16 K block size, that’s 7500 IOPS.
      • The VMkernel in vSphere 5 only supports NFS v3, not v4, over TCP only; there is no support for UDP.
      • MSCS (Microsoft Clustering) is not supported with NAS.
      • NFS traffic by default is sent in clear text since ESX does not encrypt it.
        • Use only NAS storage over trusted networks. Layer 2 VLANs are another good choice here.
      • 10 Gb NFS is supported. So is Jumbo Frames, and configure it end to end.
      • Deduplication can save sizeable amount. See speaker notes
    iSCSI
    • Use Virtual port storage system instead of plain Active/Active
      • I’m not sure if they cost much more.
    • Has 1 additional Array Type over traditional FC: Virtual port storage system
      • Allows access to all available LUNs through a single virtual port.
      • These are active-active arrays, but they hide their multiple connections through a single port. ESXi multipathing cannot detect the multiple connections to the storage. ESXi does not see multiple ports on the storage and cannot choose which storage port it connects to. These arrays handle port failover and connection balancing transparently. This is often referred to as transparent failover.
      • The storage system uses this technique to spread the load across available ports.
    iSCSI
    • Limitations
      • ESX/ESXi does not support iSCSI-connected tape devices.
      • You cannot use virtual-machine multipathing software to perform I/O load balancing to a single physical LUN.
      • A host cannot access the same LUN when it uses dependent and independent hardware iSCSI adapters simultaneously.
      • Broadcom iSCSI adapters do not support IPv6 and Jumbo Frames. [e1: still true in vSphere 5??]
      • Some storage systems do not support multiple sessions from the same initiator name or endpoint. Multiple sessions to such targets can result in unpredictable behavior.
    • Dependent and Independent
      • A dependent hardware iSCSI adapter is a third-party adapter that depends on VMware networking, and iSCSI configuration and management interfaces provided by VMware. This type of adapter can be a card, such as a Broadcom 5709 NIC, that presents a standard network adapter and iSCSI offload functionality for the same port. The iSCSI offload functionality appears on the list of storage adapters as an iSCSI adapter
    • Error correction
      • To protect the integrity of iSCSI headers and data, the iSCSI protocol defines error correction methods known as header digests and data digests. These digests pertain to the header and SCSI data being transferred between iSCSI initiators and targets, in both directions.
      • Both parameters are disabled by default, but you can enable them. Impact CPU. Nehalem processors offload the iSCSI digest calculations, thus reducing the impact on performance
    • Hardware iSCSI
      • When you use a dependent hardware iSCSI adapter, performance reporting for a NIC associated with the adapter might show little or no activity, even when iSCSI traffic is heavy. This behavior occurs because the iSCSI traffic bypasses the regular networking stack
    • Best practice
      • Configure jumbo frames end to end.
      • Use NIC with TCP segmentation offload (TSO)
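
    A minimal PowerCLI sketch for the jumbo-frame item, assuming vSwitch1 carries iSCSI on vmk1 (names are assumptions); the physical switch ports and the array must also be set to MTU 9000 for it to be truly end to end:

      Get-VMHost "esx01.example.local" | Get-VirtualSwitch -Name "vSwitch1" |
        Set-VirtualSwitch -Mtu 9000 -Confirm:$false
      Get-VMHost "esx01.example.local" |
        Get-VMHostNetworkAdapter -VMKernel -Name "vmk1" |
        Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false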
    iSCSI & NFS: caveat when used together

    Avoid using them together

    iSCSI and NFS have different HA models.

    • iSCSI uses vmknics with no Ethernet failover – using MPIO instead
    • NFS client relies on vmknics using link aggregation/Ethernet failover
    • NFS relies on host routing table.
    • NFS traffic will use iSCSI vmknic and results in links without redundancy
    • Use of multiple session iSCSI with NFS is not supported by NetApp
    • EMC supports it, but the best practice is to have separate subnets and virtual interfaces
    NPIV

    What it is

    • Allow a single Fibre Channel HBA port to register with the Fibre Channel fabric using several worldwide port names (WWPNs). This ability makes the HBA port appear as multiple virtual ports, each having its own ID and virtual port name. Virtual machines can then claim each of these virtual ports and use them for all RDM traffic.
    • Note that it is the WWPN, not the WWNN
      • WWPN – World Wide Port Name
      • WWNN – World Wide Node Name
      • Single port HBA typically has a single WWNN and a single WWPN (which may be the same).
      • Dual port HBAs may have a single WWNN to identify the HBA, but each port will typically have its own WWPN.
      • However they could also have an independent WWNN per port too.

    Design consideration

    • Only applicable to RDM
    • The VM does not get its own HBA, and no FC driver is required in the guest. It just gets an N-port, so it is visible from the fabric.
    • HBA and SAN switch must support NPIV
    • Cannot perform Storage vMotion or VMotion between datastores when NPIV is enabled. All RDM files must be in the same datastore.
      • Still in place in v5

    [Screenshot callout: the first identifier is the WW Node Name; the second is the WW Port Name.]

    2 TB VMDK barrier

    You need to have > 2 TB disk within a VM.

    • There are some solutions, each with pro and cons.
    • Say you need a 5 TB disk in 1 Windows VM.
    • RDM (even with physical compatibility) and DirectPath I/O do not increase virtual disk limit.

    Solution 1: VMFS or NFS

    • Create a datastore of 5 TB.
    • Create 3 VMDKs and present them to Windows
    • Windows then combines the 3 disks into 1 volume.
    • Limitation
      • Certain low-level storage software may not work, as it needs one disk (not combined by the OS)

    Solution 3: iSCSI within the Guest

    • Configure the iSCSI initiator in Windows
    • Configure a 5 TB LUN. Present the LUN directly to Windows, bypassing the ESX layer. You can’t monitor it.
    • By default, it will only have 1 GE. NIC teaming requires a driver from Intel. Not sure if this is supported.
    Storage: Queue Depth

    When should you adjust the queue depth?

    • If a VM generates more commands to a LUN than the LUN queue depth, adjust the device/LUN queue.
      • Generally, with fewer, very high-IO VMs on a host, larger queues at the device driver will improve performance.
    • If the VM’s queue depth is lower than the HBA’s, adjust the VMkernel queue.

    Be cautious when setting queue depths

    • With too large of device queues, the storage array can easily be overwhelmed and its performance may suffer with high latencies.
    • The device driver queue depth is global and is set per LUN.
      • Change the device queue depth on all ESX hosts in the cluster

    Calculating the queue depth:

    • To verify that you do not exceed the queue depth of an HBA, use the following formula:
      • Max. queue depth of the HBA = device queue setting x number of LUNs on the HBA
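
    As a worked illustration with made-up numbers: if the per-LUN device queue depth is 32 and 16 LUNs are presented through one HBA, up to 32 x 16 = 512 commands could be outstanding on that HBA, so the HBA (and the array port behind it) must be able to queue at least that many.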

    Queue are at multiple levels

    • LUN queue for each LUN at ESXi host.
    • If the above queue is full, then kernel queue will be filled up
    • LUN queue at array level for each LUN
      • If this queue does not exist, then the array writes straight into disk.
    • Disk queue
      • The queue at the disk level, if there is no LUN queue
    Sizing the Storage Array
    • For RAID 1 (it has IO Penalty of 2)
      • 60 Drives= ((7000 x 2 x 30%) + (7000 x 70%)) / 150 IOPS
    • Why RAID 5 has 4 IO Penalty?
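
    The RAID 5 question can be answered with the standard write-penalty arithmetic (added here, not from the original slide): a small RAID 5 write requires four back-end I/Os — read old data, read old parity, write new data, write new parity — hence the penalty of 4. Applying the same sizing method as the RAID 1 line above, with the same 7,000 front-end IOPS, 30% write / 70% read mix and 150 IOPS per drive:

    • RAID 5: ((7000 x 30% x 4) + (7000 x 70%)) / 150 IOPS = (8400 + 4900) / 150 ≈ 89 drives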
    Storage: Performance Monitoring

    Get a baseline of your environment during a “normal” IO time frame.

    • Capture as many data points as possible for analysis.
    • Capture data from the SAN Fabric, the storage array, and the hosts.

    Which statistics should be captured

    • Max and average read/write IOps
    • Max and average read/write latency (ms)
    • Max and average Throughput (MB/sec)
    • Read and write percentages
    • Random vs. sequential
    • Capacity – total and used
    Fibre Channel Multi-Switch Fabric

    [Diagram: two fabric switches joined by E_Ports (inter-switch link), with nodes A–H attached via N_Ports to F_Ports; each port is shown with its transmitter (TR) and receiver (RC).]
    Backup: VADP vs Agent-based

    ESX has 23 VM. Each VM is around 40 GB.

    • All VMs are idle, so this CPU/disk load is purely from the backup.
    • CPU Peak is >10 GHz (just above 4 cores)
    • But Disk Peak is >1.4 Gbps of IO, almost 50% of a 4 Gb HBA.

    After VADP, both CPU and disk load drop to negligible levels

    VADP: Adoption Status

    This is as at June 2010.

    Always check with vendor for the most accurate data

    Partition alignment

    Affects every protocol, and every storage array

    • VMFS on iSCSI, FC, & FCoE LUNs
    • NFS
    • VMDKs & RDMs with NTFS, EXT3, etc

    VMware VMFS partitions that align to 64KB track boundaries give reduced latency and increased throughput

    • Check with storage vendor if there are any recommendations to follow.
    • If no recommendations are made, use a starting block that is a multiple of 8 KB.

    Responsibility of Storage Team.

    • Not vSphere Team

    On NetApp :

    • VMFS Partitions automatically aligned. Starting block in multiples of 4k
    • MBRscan and MBRalign tools available to detect and correct misalignment

    [Diagram: alignment of layers — guest file system clusters (4 KB–1 MB), VMFS blocks (1 MB–8 MB), and array chunks (4 KB–64 KB).]

    Tools: Array-specific integration
    • The example below is from NetApp. Other Storage partners have integration capability too.
    • Always check with respective product vendor for latest information.
    Tools: Array-specific integration
    • Management of the Array can be done from vSphere client. Below is from NetApp
    • Ensure storage access is not accidentally given to the vSphere admin; use RBAC
    Data Recovery

    No integration with tape

    • Manual tape backup is possible

    If a third-party solution is being used to backup the deduplication store, those backups must not run while the Data Recovery service is running. Do not back up the deduplication store without first powering off the Data Recovery Backup Appliance or stopping the datarecovery service using the command service datarecovery stop.

    Some limits

    • 8 concurrent jobs on the appliance at any time (backup & restore).
    • An appliance can have at the most 2 dedupe store destinations due to the overhead involved in deduping.
    • VMDK or RDM based deduplication stores of up to 1TB or CIFS based deduplication stores of up to 500GB.
    • No IPv6 addresses
    • No multiple backup appliances on a single host.

    VDR cannot back up VMs

    • that are protected by VMware Fault Tolerance.
    • with 3rd party multi-pathing enabled where shared SCSI buses are in use.
    • with raw device mapped (RDM) disks in physical compatibility mode.
    • Data Recovery can back up VMware View linked clones, but they are restored as unlinked clones.

    Using Data Recovery to backup Data Recovery backup appliances is not supported.

    • This should not be an issue. The backup appliance is a stateless device, so there is not the same need to back it up like other types of VMs.
    VMware Data Recovery

    We assume the following requirements

    • Back up to an external array, not the same array.
      • The external array can be used for other purposes too, so the two arrays back up each other.
      • How do we ensure write performance as the array is shared?
    • 1x a day backup. No need for multiple backups per day of the same VM.

    Consideration

    • Bandwidth: Need dedicated NIC to the Data Recovery VM
    • Performance: Need to reserve CPU/RAM for the VM?
    • Group like VM together. It maximises dedupe
    • Destination: RDM LUN presented via iSCSI to the Appliance. See picture below (hard disk 2)
      • Not using VMDK format to enable LUN level operation
      • Not using CIFS/SMB, as the deduplication store is limited to 0.5 TB vs 1 TB on RDM/VMDK
    • Space calculation: need to find a tool to help estimate the disk requirements.
    Mapping: Datastore – VM

    Criteria to use when placing a VM into a Tier:

    • How critical is the VM? Importance to business.
    • What are its performance and availability requirements?
    • What are its Point-in-Time restoration requirements?
    • What are its backup requirements?
    • What are its replication requirements?

    Have a document that lists which VM resides on which datastore group

    • The content can be generated using PowerCLI or Orchestrator, which shows datastores and their VMs (see the sketch after this list).
      • Example tool: Quest PowerGUI
    • While it rarely happens, you can’t rule out datastore metadata getting corrupted.
      • When that happens, you want to know which VMs are affected.

    A VM normally changes tiers throughout its life cycle

    • Criticality is relative and might change for a variety of reasons, including changes in the organization, operational processes, regulatory requirements, disaster planning, and so on.
    • Be prepared to do Storage vMotion.
      • Always test it first so you know how long it takes in your specific environment
      • VAAI is critical, else the traffic will impact your other VMs.
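
    A minimal PowerCLI sketch for generating that document (the output path is an assumption):

      # Export a datastore -> VM mapping for the design document
      Get-Datastore | ForEach-Object {
        $ds = $_
        Get-VM -Datastore $ds | Select-Object @{N="Datastore";E={$ds.Name}}, Name
      } | Export-Csv -Path "C:\Reports\datastore-vm-map.csv" -NoTypeInformation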
    RDM
    • Use sparingly.
      • VMDK is more portable, easier to manage, and easier to resize.
      • VMDK and RDM have similar performance.
    • Physical RDM
      • Can’t take snapshot.
      • No Storage vMotion. But can do vMotion.
      • Physical mode specifies minimal SCSI virtualization of the mapped device, allowing the greatest flexibility for SAN management software.
      • VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNs command is virtualized so that the VMkernel can isolate the LUN to the owning virtual machine.
    • Virtual RDM
      • Specifies full virtualization of the mapped device. Features like snapshots work.
      • VMkernel sends only READ and WRITE to the mapped device. The mapped device appears to the guest operating system exactly the same as a virtual disk file in a VMFS volume. The real hardware characteristics are hidden.
    Human Experts vs Storage DRS

    2 VMware performance engineers vs Storage DRS competing to balance the following:

    • 13 VMs: 3 DVD store, 2 Swingbench, 4 mail servers, 2 OLTP, 2 web servers
    • 2 ESX hosts and 3 storage devices (different FC LUNs in shades of blue)

    Storage DRS provides the lowest average latency while maintaining similar throughput. Why did the human experts lose?

    • Too many numbers to crunch, too many dimensions to the analysis. The humans took a couple of hours to think this through.

    Why bother anyway?

    [Chart: IOPS and latency (ms) per configuration; the Storage DRS runs show the lowest average latency. Green: average latency (ms).]

    Alternative Backup Method

    The VMware ecosystem may provide new ways of doing backup.

    • Example below is from NetApp

    NetApp SnapManager for Virtual Infrastructure (SMVI)

    • In a large cloud, the SMVI server should sit on a separate VM from vCenter.
      • While it has no particular performance requirement, this is best from a segregation-of-duties point of view.
      • Best practice is to keep vCenter clean & simple. vCenter plays a much more critical role in larger environments, where plug-ins rely on vCenter uptime.
    • Allows for consistent array snapshots & replication.
    • Combine with other SnapManager products (SM for Exchange, SM for Oracle, etc) for application consistency
      • Exchange and SQL work with VMDK
      • Oracle, SharePoint, SAP require RDM
    • Can be combined with SnapVault for vaulting to disk.
    • 3 levels of data protection :
      • On disk array snapshots for fast backup (seconds) & recovery (up to 255 snapshot copies of any datastore can be kept with no performance impact)
      • Vaulting to separate array for better protection, slightly slower recovery
      • SnapMirror to offsite for DR purposes
    • Serves to minimize backup window (and frozen vmdk when changes are applied)
      • Option to skip the VM snapshot and create crash-consistent array snapshots
    Support multi-switch link aggregation?

    [Decision diagram on NFS network design: one branch uses a single VMkernel port and IP subnet with IP-hash load balancing on the NFS client (ESX); the other uses multiple VMkernel ports and IP subnets, IP-hash load balancing on the NFS server (array), and the ESX routing table. In both branches the storage needs multiple sequential IP addresses.]

    vMotion Performance on 1 GbE vs 10 GbE
    • Idle/Moderately loaded VM scenarios
      • Reductions in duration when using 10 GbE vs 1 GbE on both vSphere 4.1 and vSphere 5

    Consider switch from 1 GbE to 10 GbE vMotion network

    • Heavily loaded VM scenario
      • Reductions in duration when using 10 GbE vs 1 GbE
    • 1 GbE on vSphere 4.1: Memory copy convergence issues lead to network connection drops
    • 1 GbE on vSphere 5 : SDPS kicked-in resulting in zero connection drops

    vMotion in vSphere 5 never fails due to memory copy convergence issues

    Duration of vMotion (lower is better)

    Impact on Database Server Performance During vMotion

    [Charts: vMotion duration and database throughput over time (in seconds), showing the impact during the guest trace period and the switch-over period; vMotion durations of 15 sec and 23 sec are shown.]

    Performance impact is minimal during the memory trace phase in vSphere 5.

    Throughput was never zero in vSphere 5 (switch-over time < half a second).

    Time to resume the normal level of performance is about 2 seconds better in vSphere 5.

    Network Settings

    Load-Based Teaming

    • We will not use it, as we are using 1 GE in this design.
    • If you use 10 GE, the default settings are a good starting point. They give VMs 2x the shares versus the hypervisor.

    NIC Teaming

    • If the physical switch can support, then use IP-Hash
      • You need a stacked switch: multiple switches that can be managed as if they were one bigger switch. Multi-chassis EtherChannel switch is another name.
      • IP-hash does not help if the source and destination addresses are constant. For example, vMotion always uses 1 path only, as the source-destination pair is constant. The connection from the VMkernel to the NFS server is also constant.
    • If the physical switch can’t support, then use Source Port
      • You need to manually balance this, so not all VM go via the same port.

    VLAN

    • We are using VST. Physical switch must support VLAN trunking.

    PVLAN

    • Not used in this design. Most physical switches are PVLAN-aware already.
    • Packets will be dropped or security can be compromised if physical switch is not PVLAN aware.

    Beacon Probing

    • Not enabled, as my design only has 2 NICs per vSwitch. ESXi will flood both NICs if it has only 2.

    Review default settings

    • Change Forged Transmit to Reject.
    • Change MAC address changes to Reject
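
    A hedged PowerCLI sketch of those two changes at the vSwitch level (host and vSwitch names are assumptions; keep Forged Transmits on Accept for port groups that genuinely need it, such as the NLB case earlier):

      Get-VMHost "esx01.example.local" | Get-VirtualSwitch -Name "vSwitch0" |
        Get-SecurityPolicy |
        Set-SecurityPolicy -ForgedTransmits:$false -MacChanges:$false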
    VLAN

    Native VLAN

    • Native VLAN means the switch can receive and transmit untagged packets.
    • VLAN hopping occurs when an attacker with authorized access to one VLAN creates packets that trick physical switches into transmitting the packets to another VLAN that the attacker is not authorized to access. The attacker forms an ISL or 802.1Q trunk port to the switch by spoofing DTP messages, gaining access to all VLANs; or the attacker can send double-tagged 802.1Q packets to hop from one VLAN to another, sending traffic to a station it would otherwise not be able to reach.
    • This vulnerability usually results from a switch being misconfigured for native VLAN, as it can receive untagged packets.

    Local vSwitches do not support native VLAN. Distributed vSwitch does.

    • All data passed on these switches is appropriately tagged. However, because physical switches in the network might be configured for native VLAN, VLANs configured with standard switches can still be vulnerable to VLAN hopping.
    • If you plan to use VLANs to enforce network security, disable the native VLAN feature for all switches unless you have a compelling reason to operate some of your VLANs in native mode. If you must use native VLAN, see your switch vendor’s configuration guidelines for this feature.

    VLAN 0: the port group can see only untagged (non-VLAN) traffic.

    VLAN 4095: the port group can see traffic on any VLAN while leaving the VLAN tags intact.

    Distributed Switch

    Design consideration

    • Version upgrade
    • ?? Upgrade procedure
    vNetwork Standard Switch: A Closer Look

    vSS defined on a per-host basis from Home > Inventory > Hosts and Clusters.

    Uplinks (physical NICs) attached to the vSwitch.

    Port Groups are policy definitions for a set or group of ports, e.g. VLAN membership, port security policy, teaming policy, etc.

    vNetwork Standard Switch (vSwitch)

    vNetwork Distributed Switch: A Closer Look

    vDS operates off the local cache – No operational dependency on vCenter server

    • Host local cache under /etc/vmware/dvsdata.db and /vmfs/volumes/<datastore>/.dvsdata
    • Local cache is a binary file. Do not hand edit

    DV Uplink Port Group defines uplink policies.

    DV Uplinks abstract the actual physical NICs (vmnics) on hosts.

    DV Port Groups span all hosts covered by the vDS and are groups of ports defined with the same policy, e.g. VLAN, etc.

    vmnics on each host are mapped to dvUplinks.

    Nexus 1000V: VSM

    VM properties

    • Each requires 1 vCPU and 2 GB RAM. This must be reserved, so it will impact the cluster slot size.
    • Use “Other Linux 64-bit" as the Guest OS.
    • Each needs 3 vNICs.
    • Requires the Intel e1000 network driver. Because no VMware Tools are installed?

    Availability

    • 2 VSMs are deployed in an active-standby configuration, with the first VSM functioning in the primary role and the other VSM functioning in a secondary role.
    • If the primary VSM fails, the secondary VSM will take over.
    • They do not use VMware HA mechanism.

    Unlike cross-bar based modular switching platforms, the VSM is not in the data path.

    • General data packets are not forwarded to the VSM to be processed, but rather switched by the VEM directly.
    Nexus 1000V: VSM has 3 Interfaces for “mgmt”

    Control Interface

    • VSM – VEMs communication, and VSM – VSM communication
    • Handles low-level control packets such as heartbeats as well as any configuration data that needs to be exchanged between the VSM and VEM. Because of the nature of the traffic carried over the control interface, it is the most important interface in Nexus 1000V
    • Requires very little bandwidth (<10 KBps) but demands absolute priority.
    • Always the first interface on the VSM. Usually labeled "Network Adapter 1" in the VM network properties.

    Management Interface

    • VSM – vCenter communication.
    • Appears as the mgmt0 port on a Cisco switch. As with the management interfaces of other Cisco switches, an IP address is assigned to mgmt0.
    • Does not necessarily require its own VLAN. In fact, you could use the same VLAN with vCenter

    Packet Interface

    • Carries network packets that need to be coordinated across the entire Nexus 1000V. There are only two types of such control traffic: Cisco Discovery Protocol and Internet Group Management Protocol (IGMP) control packets.
    • Always the third interface on the VSM and is usually labeled "Network Adapter 3" in the VM network properties.
    • Bandwidth required for packet interface is extremely low, and its use is very intermittent. If Cisco Discovery Protocol and IGMP features are turned off, there is no packet traffic at all. The importance of this interface is directly related to the use of IGMP. If IGMP is not deployed, then this interface is used only for Cisco Discovery Protocol, which is not considered a critical switch function
    vNetwork Distributed Portgroup Binding

    Port Binding: Association of a virtual adapter with a dvPort

    Static Binding: Default configuration

    • Port bound when vnic connects to portgroup

    Dynamic binding

    • Use when #VM adapters > #dvPorts in a portgroup and all VMs are not active

    Ephemeral binding

    • Use when #VMs > #dvPorts and port history is not relevant
    • Max Ports is not enforced

    [Diagram: a DVPort is created on the proxy switch on the host and bound to the vnic.]

    Use static binding for best performance and scale

    Network Stack Comparison

    Good attributes of FCoE

    • Has less overhead than FCIP or iSCSI. See diagram below.
    • FCoE is managed like FC at initiator, target, and switch level
    • Mapping FC frames over Ethernet Transport
    • Enables Fibre Channel to run over a lossless Ethernet medium
    • Single Adapter, less device proliferation, lower power consumption
    • No gateways required
    • NAS certification: FCoE CNAs can be used to certify NAS storage. Existing NAS devices listed on VMware SAN Compatibility Guide do not require recertification with FCoE CNAs.

    Mixing of technologies always increases complexity

    [Diagram: network stack comparison — SCSI carried over FCP/FC, over FCIP (FC over TCP/IP), over iSCSI (TCP/IP), and over FCoE, all running over Ethernet / the physical wire.]

    Physical Switch Setup

    Spanning Tree Protocol

    • vSwitches won’t create loops
    • vSwitches can’t be linked to each other.
    • A vSwitch does not take an incoming packet from one pNIC and forward it as an outgoing packet to another pNIC

    Recommendations

    • Leave STP on in physical network
    • Use “portfast” on ESX facing ports
    • Use “bpduguard” to enforce STP boundary

    [Diagram: VMs (VM0, VM1) with MAC addresses a, b and c on vSwitches uplinked to the physical switches.]

    1 GE switch

    Sample from Dell.com (US site, not Singapore)

    Around US$5 K. Need a pair.

    48 ports

    • Each ESXi needs around 7 – 13 ports (inclusive of iLO port)
    10 GE switch

    Sample from Dell.com (US site, not Singapore)

    Around US$10 – 11 K. Need a pair.

    24 ports

    • Each ESXi only needs 2 ports
    • iLO port can connect to existing GE/FE switch

    Compared with the 1 GE switch, the price is very close. It might even be cheaper in TCO.

    Multi security zones (w/ vShield Edge to protect vApp Network)

    vCD “logical” View

    vSphere “operational” View

    Reminder: this is self-service (UI / API)

    [Diagram: a vApp with Web and DB VMs on a vApp network, connected through an org network to an external network; in the vSphere view these map to port groups on the vSphere vNetwork, which vCD will deploy.]

    Two-tier application (w/ vShield App to protect backend)

    vCD “logical” View

    vSphere “operational” View

    Reminder: this is NOT self-service (today)

    [Diagram: a vApp with Web VMs in a front-end enclave and DB VMs in a back-end enclave on the org network; in the vSphere view these map to port groups on the vSphere vNetwork and an external network. The vShield App protection is vShield admin configuration (today).]

    vShield Edge in short

    vCD “logical” View

    vSphere “operational” View

    [Diagram: the vShield Edge virtual appliance sits between security zone 1 and security zone 2 (L2 networks A and B, each reached through a vNIC and port group on the vSphere vNetwork) and provides firewall, NAT, DHCP, load balancing, routing and VPN services.]

    vShield App in short

    vCD “logical” View

    vSphere “operational” View

    [Diagram: the vShield App kernel module enforces the firewall at the vNIC/port group level between security zone 1 and security zone 2 on the same L2 network of the vSphere vNetwork.]

    Security Compliance: PCI DSS

    PCI applies to all systems “in scope”

    • Segmentation defines scope
    • What is within scope? All systems that Store, Process, or Transmit cardholder data, and all system components that are in or connected to the cardholder data environment (CDE).

    The DSS is vendor agnostic

    • Does not seem to cover virtualisation.

    Relevant statements from PCI DSS

    • “If network segmentation is in place and will be used to reduce the scope of the PCI DSS assessment, the assessor must verify that the segmentation is adequate to reduce the scope of the assessment.” - (PCI DSS p.6)
    • “Network segmentation can be achieved through internal network firewalls, routers with strong access control lists or other technology that restricts access to a particular segment of a network.” – PCI DSS p. 6
    • “At a high level, adequate network segmentation isolates systems that store, process, or transmit cardholder data from those that do not. However, the adequacy of a specific implementation of network segmentation is highly variable and dependent upon such things as a given network's configuration, the technologies deployed, and other controls that may be implemented. “– PCI DSS p. 6
    • “Documenting cardholder data flows via a dataflow diagram helps fully understand all cardholder data flows and ensures that any network segmentation is effective at isolating the cardholder data environment.” – p.6
    Security Compliance: PCI DSS

    Added complexity from Virtualisation

    • System boundaries are not as clear as their non-virtual counterparts
    • Even the simplest network is rather complicated
    • More components, more complexity, more areas for risk
    • Digital forensic risks are more complicated
    • More systems are required for logging and monitoring
    • More access control systems
    • Memory can be written to disk
    • VM Escape?
    • Mixed Mode environments

    Sample Virtualized CDE

    vNetwork Appliances

    Advantages

    • Flexible deployment
    • Scales naturally as more ESX hosts are deployed

    Architecture

    • A fastpath agent filters packets in the datapath, transparently to the vSwitch
    • Optionally forwards packets to a VM (slowpath agent)

    Solutions

    • VMware vShield, Reflex, Altor, Checkpoint, etc.

    Heavyweight filtering in “Slow Path” agent

    Lightweight filtering in “Fast Path” agent

    vShield
    • Setup Perimeter services
      • Install vShield Edge
      • External – Internal
      • Provision Services
      • Firewall
      • NAT, DHCP
      • VPN
      • Load Balancer
    • Setup Internal Trust Zones
      • Install vShield App
      • vDS / dvfilter setup
      • Secure access to shared services
      • Create interior zones
      • Segment internal net
      • Wire up VMs

    [Diagram: an Org vDC with DMZ, APP, DB and Shared Services zones on a Virtual Distributed Switch across several vSphere hosts; vShield Edge sits between the Internet and the zones, with vShield App instances protecting the internal zones.]

    vShield and Fail-Safe

    http://www.virtualizationpractice.com/blog/?p=9436

    Security

    Steps to delete “Administrator” from vCenter

    • Move it to the “No Access” role. Protect it with an alarm in case this is modified.
    • All other plug-ins or management products that use Administrator will break
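
    Illustrative only, and worth testing in a lab first because it will break anything that still logs in as Administrator: assigning the built-in NoAccess role at the vCenter root with PowerCLI (the principal name is an assumption):

      $root = Get-Folder -NoRecursion      # vCenter root folder
      New-VIPermission -Entity $root -Principal "VCENTER01\Administrator" `
                       -Role (Get-VIRole -Name "NoAccess") -Propagate:$true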

    Steps to delete “root” from ESX

    • Replaced with another ID. Can’t be tied to AD?
    • The manual warns against removing this user.

    Create another ID with root group membership

    • vSphere 4.1 now supports MS AD integration
    VCM’s Free vSphere Compliance Checker (Download)

    [Screenshot: compliance-check results for 5 ESX hosts against ESX-related and VM shell-related hardening rules.]
    http://www.vmware.com/products/datacenter-virtualization/vsphere-compliance-checker/overview.html

    Windows VM monitoring
    • Use the new Perfmon counters provided by VMware Tools.
    • The built-in Windows counters are misleading in a virtual environment
    Time Keeping and Time Drift
    • Critical to have the same time for all ESX and VM.
    • All VMs & ESX hosts should get time from the same single internal NTP server
      • Synchronize the NTP Server with an external stratum 1 time source
    • The Internal NTP server to get time from a reliable external server or real atomic clock
      • Should be 2 sources
    • Do not virtualise the NTP server
      • As a VM, it may experience time drift if ESXi host is under resource constraint
    • Physical candidates for NTP Server:
      • Back up server (with vStorage API for Data Protection)
      • Cisco switch
    • See the MS AD slide for AD-specific impact.
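
    A minimal PowerCLI sketch for pointing every host at the same internal NTP source and keeping the ntpd service running (the server name is an assumption):

      Get-VMHost | ForEach-Object {
        Add-VMHostNtpServer -VMHost $_ -NtpServer "ntp01.internal.example.local"
        $ntpd = Get-VMHostService -VMHost $_ | Where-Object { $_.Key -eq "ntpd" }
        Set-VMHostService -HostService $ntpd -Policy "On"
        Start-VMHostService -HostService $ntpd
      }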
    Linux
    • New features in ext4 filesystem:
      • Extents reduce fragmentation
      • Persistent preallocation
      • Delayed allocation
      • Journal checksumming
      • fsck is much faster
    • RHEL 6 & ext4 properly align filesystems
    • Tips: use the latest OS
      • Constant Improvements
        • Built-in paravirtual drivers
        • Better timekeeping
        • Tickless kernel. On-demand timer interrupts. Systems stay totally idle
      • Hot-add capabilities
        • Reduces need to oversize “just in case”
        • Might need to tweak udev. See VMware KB 1015501
      • Watch for jobs that happen at the same time (across VM)
        • Monitoring (every 5 minutes)
        • Log rotation (4 AM)
      • Don’t need sysstat & sar running. Use vCenter metrics instead
    Guest Optimization Swap File Location

    Swap file for Windows guests should be on separate dedicated drives

    • Cons:
      • This requires another vmdk file. Management overhead as it has to be resized when RAM changes too.
    • Pro:
      • No need to back up
      • Keep the application traffic and OS disk traffic separate from the page file traffic thereby increasing performance.
    • Swap partition equal to 1.5x RAM
      • 1.5x is the default recommendation for best performance (knowing nothing about the application).
      • Monitor the page file usage to see how much of it is actually being used. In the old days, whatever memory was installed was what you were committed to and making a change was an act of congress; leverage the flexibility of virtualisation and adjust for best usage.
      • Microsoft limits on page file size: http://support.microsoft.com/kb/889654

    Microsoft’s memory recommendations and definition of physical address extension explained http://support.microsoft.com/?kbid=555223

    Capacity Planner
    • Version 2.8 does not yet have the full feature for Desktop Cap Plan. Wait for next upgrade.
      • But you can use it on case by case basis, to collect those demanding desktop.
    • The default setting of the paging threshold does not take into account server RAM.
      • Best practice for the Paging threshold is 200 Pg/sec/GB.  So, you have 48GB RAM x 200= 9600 Pgs/sec.
      • Reason is that this paging value provides for the lowest latency access to memory pages.
      • You might see high paging if backup jobs run.
    • Create project if you need to separate result (e.g. per data center)
    • Win08 has firewall on. Need to turn off using command line.
    • To be verified in 2.8: You can't change prime time.  It's based on the local time zone.
    P2V

    Avoid if possible. Best practice is to install from template (which was optimised for virtual machine)

    • Remove unneeded devices after P2V

    MS does not support P2V of AD Domain Controller.

    Static servers are good candidates for P2V:

    • Web servers, print servers

    Servers with retail licence/key will require Windows reactivation. Too many hardware changes.

    Resize

    • Relative CPU comparison
    • MS Domain Controller: 1 vCPU, 2 GB is enough.
    Many Solutions Depend on vCenter Server

    [Diagram: Site Recovery Manager, vCloud Director, Operations, Configuration Manager, View Server and Composer, Chargeback and CapacityIQ all depend on vCenter Server.]

    Orchestrator Integrated Workflow Environment

    Automation: A way to perform a frequently repeated process without manual intervention.

    • Basic building block: a shell script, a Perl script, a PowerShell script
    • Example: given a list of hostnames, add ESX to VC.
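
    For the “add ESX hosts from a list” building block, a minimal PowerCLI sketch (file path, cluster name and credential handling are assumptions):

      # Add every host listed in hosts.txt to an existing cluster
      $cluster = Get-Cluster -Name "Prod-Cluster"
      $cred    = Get-Credential                     # root credentials for the hosts
      Get-Content "C:\scripts\hosts.txt" | ForEach-Object {
        Add-VMHost -Name $_ -Location $cluster -Credential $cred -Force
      }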

    Orchestration: A way to manage multiple automated processes across and among heterogeneous systems.

    • Example - Add ESX hosts from a list to VC, update CMDB with successfully added ESX hosts, then send email notification.

    Example

    • If a datastore on a host is more than 95% utilized, open a change control ticket, then perform Storage vMotion and send an email notification (see the sketch below)
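
    The trigger condition of that orchestration example can be expressed in PowerCLI like this (ticketing, Storage vMotion and email steps are left out of the sketch):

      # Find datastores that are more than 95% utilised
      Get-Datastore | Where-Object {
        ($_.CapacityMB - $_.FreeSpaceMB) / $_.CapacityMB -gt 0.95
      } | Select-Object Name, CapacityMB, FreeSpaceMB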
    vCenter Chargeback Manager deployment options – cont.

    For vCD and VSM data collector

    • Deploy at least 2 data collectors for vCD and VSM each for high availability

    CBM instance can be installed/upgraded at the time of vCD install/upgrade or later

    [Diagram: vCenter Chargeback servers behind a Chargeback load balancer, with the Chargeback web interface, the vCenter Chargeback database, and Chargeback data collectors connected to several vCenter Servers and their vCenter databases.]

    VR Framework

    [Diagram: primary and secondary sites paired through the SRM UI; each site has vCenter (VC), SRM and a VRMS. The VR filter and VR service on the primary site's ESX hosts replicate VMs to the VR server and NFC service on the secondary site's ESX hosts.]
    SRM Architecture with vSphere Replication

    [Diagram: protected site and recovery site, each with vSphere Client + SRM plug-in, SRM server, vCenter Server and vRMS; vRAs on the protected site's ESX hosts replicate to the vRS at the recovery site, which writes to VMFS storage there, alongside the existing storage at both sites.]

    Service Provider

    [Diagram: a DRaaS provider hosting vCenter, SRM server, vRMS and vRS instances; Customer A and Customer B each run vCenter, SRM server, vRMS and ESX hosts with vRAs, replicating from their own storage (VMFS/NFS) to the provider's storage.]

    Branch Office

    [Diagram: remote sites A and B replicate from their ESX hosts (vRA) to the central office, which runs SRM server, vCenter, vRMS and vRS with shared VMFS storage; the original slide carries the open question "Why is talking to this VRMS?" against Remote Site B.]

    Decision Trees

    Develop decision trees that are tailored to the organisation. Below are 2 examples.

    vSphere Replication Performance

    1 vSphere Replication “replication server” appliance can process up to 1 Gbps of sustained throughput using approximately 95% of 1 vCPU.

    • 1 Gbps is much larger than most WAN bandwidth

    For a VM protected by VR, the impact on application performance is a 2 - 6% throughput loss

    MS SQL Server 2008: Licensing
    • Always refer to official statement from vendor web site.
      • Emails, spoken words or SMS from a staff member (e.g. Sales Manager, SE) are not legally binding

    Licensing a Portion of the Physical Processors

    If you choose not to license all of the physical processors, you will need to know the number of virtual processors supporting each virtual OSE (data point A) and the number of cores per physical processor/socket (data point B). Typically, each virtual processor is the equivalent of one core

    vSphere 4.1 introduces multi-core vCPUs. Will you save more money? Check with your MS reseller and official MS documents

    SQL Server 2008 R2
    • Get the Express from http://www.microsoft.com/express/Database/
    • In most cases, the Standard edition will be sufficient.
    • vCenter 4.1 and Update Manager 4.1 do not support the Express edition.
      • Hopefully Update 1 will?
    Windows Support

    http://www.windowsservercatalog.com/default.aspx

    Interesting. It is the other way around.

    vSphere 4.1 passed the certification for Win08 R2. So Microsoft supports Win03 too.

    It is version specific. Check for vSphere 5

    SQL Server: General Best Practices
    • Follow Microsoft Best Practices for SQL Server deployments
    • Defrag SQL Database(s) – http://support.microsoft.com/kb/943345
    • Preferably 4-vCPU, 8+GB RAM for medium/larger deployments
    • Design back-end to support required workload (IOPS)
    • Monitor database & log disks: disk reads/writes, disk queues
    • Separate the I/O for data, log, TempDB, etc.
    • Use Dual Fibre Channel Paths to storage
      • Not possible in vmdk
    • Use RAID 5 for database & RAID 1 for logs in read-intensive deployments
    • Use RAID 10 for database & RAID 1 for logs for larger deployments
    • SQL 2005 TempDB (need to update to 2008)
      • Move TempDB files to dedicated LUN
      • Use RAID 10
      • # of TempDB files = # of CPU cores (consolidation)
      • All TempDB files should be equal in size
      • Pre-allocate TempDB space to accommodate expected workload
        • Set file growth increment large enough to minimize TempDB expansions.
        • Microsoft recommends setting the TempDB files FILEGROWTH increment to 10%
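    The sketch below applies the TempDB rules above (one data file per CPU core, equal sizes, pre-allocated). The file names, total size and 10% growth figure are assumptions for the example.

    # Lay out TempDB files per the "one file per core, equal size" guidance.
    def tempdb_layout(cpu_cores: int, total_tempdb_gb: float, growth_pct: int = 10):
        per_file_gb = total_tempdb_gb / cpu_cores
        return [
            {
                "file": "tempdev.mdf" if i == 0 else f"tempdev{i + 1}.ndf",
                "initial_size_gb": round(per_file_gb, 2),
                "filegrowth": f"{growth_pct}%",
            }
            for i in range(cpu_cores)
        ]

    # Example: a 4-vCPU SQL VM with 40 GB pre-allocated for TempDB
    for f in tempdb_layout(cpu_cores=4, total_tempdb_gb=40):
        print(f)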
    what is sql database mirroring
    What is SQL Database Mirroring?

    Database-level replication over IP; no shared storage requirement

    Same advantages as failover clustering (service availability, patching, etc.)

    At least two copies of the data; protection from data corruption (unlike failover clustering)

    Automatic failover for supported applications (DNS alias required for legacy)

    Works with SRM too. VMs recover according to SRM recovery plan

    vmware ha with database mirroring for faster recovery
    VMware HA with Database Mirroring for Faster Recovery

    Highlights:

    • Can use Standard Windows and SQL Server editions
    • Does not require Microsoft clustering
    • Protection against HW/SW failures and DB corruption
    • Storage flexibility (FC, iSCSI, NFS)
    • RTO of a few seconds (High Safety mode)
    • vMotion, DRS, and HA are fully supported!

    Note:

    • Must use High Safety Mode for Automatic Failover
    • Client applications must be mirror-aware or use a DNS alias
    ms sharepoint 2010
    MS SharePoint 2010

    Go for 1 VM = 1 Role

    java application
    Java Application

    RAM best practice

    • Size the virtual machine’s memory to leave adequate space
      • For the Java heap
      • For the other memory demands of the Java Virtual Machine code
      • For any other concurrently executing process that needs memory from the same guest operating system
      • To prevent swapping in the guest OS
    • Do not reserve 100% of the RAM unless the HA cluster's admission control is not based on host failures.
      • A full memory reservation will increase the HA slot size
    • Consider VMware vFabric, as it takes advantage of vSphere. (A memory-sizing sketch follows this list.)
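    The sketch below turns the sizing guideline above into a rough formula: VM memory = Java heap + JVM overhead + other processes + guest OS, so the guest never swaps. The overhead ratios are illustrative assumptions, not measured figures.

    # Rough VM memory sizing for a Java workload, per the guideline above.
    def java_vm_memory_gb(heap_gb: float,
                          jvm_overhead_ratio: float = 0.25,   # assumed JVM code/metadata/threads
                          other_processes_gb: float = 0.5,    # assumed agents, monitoring, etc.
                          guest_os_gb: float = 1.0) -> float:
        return heap_gb * (1 + jvm_overhead_ratio) + other_processes_gb + guest_os_gb

    # Example: a 4 GB heap -> configure roughly 6.5 GB for the VM
    print(f"{java_vm_memory_gb(4):.1f} GB")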

    Others

    • Use the Java features for lower resolution timing as supplied by your JVM (Windows/Sun JVM example: -XX:+ForceTimeHighResolution)
    • Use as few virtual CPUs as are practical for your application
    • Avoid using /pmtimer in boot.ini for Windows with SMP HAL
    slide102
    SAP

    No new benchmark data on Xeon 5600.

    • Need to check latest Intel data.

    Regarding the vSphere benchmark.

    • It's a standard SAP SD 2-tier benchmark. In real life we would split the DB and CI instances, hence catering for more users
    • vSphere 4.0, not 4.1
    • SLES 10 with MaxDB
    • Xeon 5570, not 5680 or Xeon 7500 series.
    • SAP ERP 6.0 (Unicode) with Enhancement Package 4

    Around 1500 SAPS per core

    • Virtual runs at 93% to 95% of native performance. For sizing, we can take 90% of the physical result (see the sizing sketch below).
    • Older UNIX servers (2006–2007) are good candidates for migration to x64 due to low SAPS per core.
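    A quick sizing sketch using the rules of thumb above (~1,500 SAPS per core, derated to 90% for virtual). The target SAPS figure is an assumed example.

    # Cores needed for a target SAPS figure, per the rule of thumb above.
    import math

    SAPS_PER_CORE_PHYSICAL = 1500
    VIRTUAL_FACTOR = 0.90

    def cores_needed(target_saps: float) -> int:
        return math.ceil(target_saps / (SAPS_PER_CORE_PHYSICAL * VIRTUAL_FACTOR))

    # Example: a landscape sized at 20,000 SAPS -> 15 cores
    print(cores_needed(20000))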

    Central Instance can be considered for FT.

    • 1 vCPU is enough for most cases
    ms ad
    MS AD

    Good candidate.

    • 1 vCPU and 2 GB RAM are sufficient. Use the UP HAL.
      • 100,000 users require up to 2.75GB of memory to cache directory (x86)
      • 3 Million users require up to 32GB of memory to cache entire directory (x64)
    • Disk is rather small
      • Disk 2 (D:) for the database: ~16 GB, or greater for larger directories
      • Disk 3 (L:) for log files: around 25% of the database LUN size

    Changes in MS AD design once all AD servers are virtualised

    • A VM is not a reliable source of time; time drift may happen inside a VM.
    • Instead of synchronising with the Forest PDC emulator or the “parent” AD, synchronise with an internal NTP server.

    Best practices

    • Set the VM to auto boot.
    • Boot Order
      • vShield VM
      • AD
      • vCenter DB
      • vCenter App
    • Regularly monitor Active Directory replication
    • Perform regular system state backups as these are still very important to your recovery plan
    ms exchange
    MS Exchange

    Exchange has become leaner and more scalable

    Building block CPU and RAM sizing for a 150 messages sent/received per mailbox per day profile

    • http://technet.microsoft.com/en-us/library/ee712771.aspx

    Database Availability Group (DAG)

    • The DAG feature in Exchange 2010 necessitates a different approach to sizing the Mailbox Server role, forcing the administrator to account for both active and passive mailboxes.
    • Mailbox Servers that are members of a DAG can host one or more passive databases in addition to any active databases for which they may be responsible.
    • Combining DAGs with VMware HA is not supported by Microsoft (see the later slide).

    Exchange 2003

    • 32-bit Windows
    • 900 MB database cache
    • 4 KB block size
    • High read/write ratio

    Exchange 2007

    • 64-bit Windows
    • 32+ GB database cache
    • 8 KB block size
    • 1:1 read/write ratio
    • 70% reduction in disk I/O

    Exchange 2010

    • 64-bit Windows
    • 32 KB block size
    • I/O pattern optimization
    • Further 50% I/O reduction (a worked example follows this list)
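    A worked example of the cumulative I/O reduction claims above, using an assumed Exchange 2003 baseline of 1.0 IOPS per mailbox (illustrative only; real profiles depend on the message load).

    # Cumulative effect of the stated I/O reductions across Exchange versions.
    baseline_2003 = 1.0                      # assumed IOPS per mailbox on Exchange 2003
    iops_2007 = baseline_2003 * (1 - 0.70)   # 70% reduction vs 2003
    iops_2010 = iops_2007 * (1 - 0.50)       # further 50% reduction vs 2007

    print(f"Exchange 2007: {iops_2007:.2f} IOPS per mailbox")
    print(f"Exchange 2010: {iops_2010:.2f} IOPS per mailbox")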
    vmware ha dags no ms support
    VMware HA + DAGs (no MS support)

    Protects from hardware and application failure

    • Immediate failover (~ 3 to 5 secs)
    • HA decreases the time the database is in an ‘unprotected state’

    No passive servers.

    Windows Enterprise edition.

    Exchange Standard or Enterprise editions

    Complex configuration and capacity planning

    2x or more storage needed

    Not officially supported by Microsoft

    realtime applications
    Realtime Applications

    Overall: Extremely Latency Sensitive

    • All apps are somewhat latency sensitive
    • RT apps break with extra latency

    “Hard Realtime Systems”

    • Financial trading systems
    • Pacemakers

    “Soft Realtime Systems”

    • Telecom: Voice over IP
      • Technically challenging, but possible. Mitel and Cisco both provide official support. Needs 100% reservation.
      • Not life-or-death risky

    Financial Desktop Apps (need hardware PCoIP)

    • Market News
    • Live Video
    • Stock Quotes
    • Portfolio Updates
    file server
    File Server

    Why virtualise?

    • Cheaper
    • Simpler.

    Why not virtualise?

    • You already have an NFS server
    • You don't want an additional layer.
    upgrade best practices
    Upgrade Best Practices

    Turn Upgrade into Migrate

    • Much lower risk. Ability to roll back and much simpler project.
    • Fewer stages: 3 stages → 1
      • Upgrade + New Features + Rearchitecture in 1 clean stage.
    • Faster overall project
    • Need to do server tech refresh for older ESXi

    Think of both data centers

    • vCenter 5 cannot be joined in Linked Mode with vCenter 4.

    Involve App Team

    • Successful upgrade should result in faster performance

    Involve Network and Storage team

    • Their cooperation is required to take advantage of vSphere 5

    Compare Before and After

    • … and document your success! A minimal before/after comparison sketch follows.
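    This sketch assumes per-VM metrics (e.g. average response time) have been exported to CSV before and after the migration; the file and column names are assumptions for the example.

    # Compare before/after per-VM metrics exported to CSV.
    import csv

    def load(path: str) -> dict:
        with open(path, newline="") as f:
            return {row["vm"]: float(row["avg_response_ms"]) for row in csv.DictReader(f)}

    before = load("before_upgrade.csv")   # assumed export
    after = load("after_upgrade.csv")     # assumed export

    for vm in sorted(before.keys() & after.keys()):
        delta_pct = (after[vm] - before[vm]) / before[vm] * 100
        print(f"{vm}: {before[vm]:.1f} ms -> {after[vm]:.1f} ms ({delta_pct:+.1f}%)")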
    migrate overall approach
    Migrate: Overall Approach

    Document the Business Drivers and Technical Goals

    • An upgrade is not simple, and you're not doing it for fun
    • If you are going to support larger VMs, you might need to change servers

    Check compatibility

    • Storage array to ESXi 5
      • Is it supported?
      • You need a firmware upgrade to take advantage of the new vStorage APIs
    • Backup software to vCenter 5
    • Products that integrate with vCenter 5
      • VMware “integration” products: SRM, View, vCloud Director, vShield, vCenter Heartbeat
      • Partner integration products: TrendMicro DS, Cisco Nexus
      • VMware management products, partner management products.
      • All these products should be upgraded first

    Assuming all of the above is compatible, proceed to the next step.

    Read the Upgrade Guide

    Plan and Design the new architecture

    • Based on vSphere 5 + SRM 5 + vShield 5 + others
    • Decide which architectural changes you are going to implement. Examples:
      • vSwitch to vDS?
      • Datastore Cluster?
      • Auto-deploy?
      • vCenter appliance? Take note of its limitations (View, VCM, Linked Mode, etc.)
    • What improvements are you implementing? Examples:
      • Datastore clean up or consolidation.
      • SAN: fabric zoning, multi-pathing, 8 Gb, FCoE
      • Chargeback? This will impact your design
    migrate overall approach 1
    Migrate: Overall Approach

    Upgrade vCenter

    Create the first ESXi cluster

    • Start with IT cluster

    Migrate first 4.x cluster into vCenter 5

    • 1 cluster at a time.
    • Follow the scheduled downtime window for each VM
    • Capture the “before” performance, for comparison or proof.
    • Back up the VMs, then migrate.
    • Once the last VM is migrated, the hosts are free for reuse or decommissioning.

    Repeat until last cluster is migrated

    Upgrade VMware Tools, then upgrade the VMs to the latest virtual hardware version.

    new features that impact design
    New features that impact design

    New features with major design impact

    • Storage Cluster
    • Auto Deploy
      • You need infrastructure to support it
    • vCenter appliance
    • VMFS-5
      • Larger datastores, so your datastore strategy might change to a “fewer but larger” one.

    Other new features can wait until after the upgrade.

    • For example, Network I/O Control can be turned on after the upgrade.
    over time the dmz evolved
    Over Time The DMZ Evolved

    Diagram: the number of systems in the DMZ and their complexity increase over time — from simple security zones (circa 1995) toward UTM and many more components (circa 2005), bringing increased technological and operational complexity. (SEC 1880)

    design consideration for dmz zone
    Design Consideration for DMZ Zone

    5-dimensional decision model

    SEC 1880

    vdmz operations is different
    vDMZ Operations is different
    • DMZ operations consist of:
      • Maintenance: upgrading, updating and troubleshooting
      • Service changes: changing existing services
      • Innovation: introduction of new services
      • Monitoring: keeping things “in the green” & “secure”
    • Physical DMZ operations:
      • Network, network-security & Unix only
      • Disparate silos
      • Manual operations
      • No integration into “internal ops”
    • Virtual DMZ operations (what changes):
      • Highly dynamic & agile
      • Additional systems (vSphere, Windows)
      • Additional hardware (blades, converged networking)
      • Server sprawl inside the DMZ
    • Virtual DMZ operations (what it needs):
      • VMware know-how
      • Windows know-how
      • Hardware know-how
      • Application know-how
      • Security automation
      • Organizational integration
    chargeback
    Chargeback

    Diagram: Chargeback architecture — the Chargeback Server with its Chargeback database; the Chargeback, vCloud and VSM data collectors connect to the vCenter database, the vCloud Director database and vShield Manager respectively, over a mix of JDBC and REST.

    automation impacts 8 areas of it excellence
    Automation impacts 8 areas of IT Excellence
    • Service Design
    • Continual Process Improvement
    • Supplier Management
    • Service Level Management
    • Service Catalog Management
    • Availability Management

    • Transformation Planning
    • Organization and Skill Development
    • Life Cycle Management
    • Systems Management
    • Capacity Management
    • Financial Management
    • Configuration Management
    • Security Management

    internal cloud maturity model
    Internal Cloud Maturity Model

    Maturity stages: Technologically proficient → Operationally ready → Application-centric → Service-oriented → Cloud-enabled

    Each dimension below lists its milestones across the five stages, in order (first item = Technologically proficient, last item = Cloud-enabled).

    Governance

    • Include virtualization in software procurement
    • Update procurement and change management
    • Update audit/accounting practices
    • Define HIaaS standard models
    • Plan IA policy requirements

    Service automation

    • Assess and deploy lab automation
    • Automate VM provisioning
    • Automate application provisioning
    • Automate service provisioning
    • Automate cloud bursting

    Service management

    • Define service tiers
    • Implement service pools
    • Implement show-back, update data protection
    • Implement or update service catalog
    • Define IA service management requirements

    Cloud infrastructure management

    • Define standard templates
    • Deploy essential management services
    • Enforce QoS
    • Deploy virtual infrastructure appliances
    • Deploy virtual datacenters

    HIaaS infrastructure

    • Consolidate physical to virtual
    • Deploy HA services, tier-3 apps
    • Deploy load balancing, tier-2 apps
    • Optimize for tier-1 apps, multi-tenants
    • Optimize for cloud portability