
VMUG storage gå-hjem møde (after-work seminar): VMware vSphere 5 Update - Storage Integration

Morten Petersen

Sr. Technology Consultant

Tel: 2920 2328

[email protected]



Core Storage & Infrastructure Related Topics

This Section Will Cover:

  • vStorage APIs for Array Integration (VAAI) - expansion

  • vStorage APIs for Storage Awareness (VASA)

  • Storage vMotion Enhancements

  • Storage DRS


Understanding VAAI a little “lower”

VAAI = vStorage APIs for Array Integration

  • A set of APIs that allow ESX to offload functions to storage arrays

  • In vSphere 4.1, supported on VMware File System (VMFS) and Raw Device Mapping (RDM) volumes; vSphere 5 adds NFS VAAI APIs

  • Supported by EMC VNX, CX/NS, and VMAX arrays (coming soon to Isilon)

  • Goals
    • Remove bottlenecks
    • Offload expensive data operations to storage arrays

  • Motivation
    • Efficiency
    • Scaling

VMFS data-mover evolution:

  • VI 3.5: fsdm (legacy data mover)

  • vSphere 4: fs3dm - software data mover

  • vSphere 4.1/5: fs3dm hardware offload = VAAI

Diagram from VMworld 2009 TA3220 - Satyam Vaghani
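To see whether a given device can actually use these offloads, the vSphere API exposes a per-LUN hardware-acceleration flag. Below is a minimal pyVmomi sketch (our illustration, not from the deck; host name and credentials are placeholders, and certificate handling is omitted) that lists the VAAI support status of every SCSI device on each host:

    # Sketch only: list per-device VAAI (hardware acceleration) status.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            for lun in host.config.storageDevice.scsiLun:
                # vStorageSupport reads "vStorageSupported",
                # "vStorageUnsupported", or "vStorageUnknown" (disks only).
                print("%s %s %s" % (host.name, lun.canonicalName,
                                    getattr(lun, "vStorageSupport", "n/a")))
    finally:
        Disconnect(si)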



Growing list of VAAI hardware offloads

  • vSphere 4.1

    • For Block Storage:

      HW Accelerated Locking

      HW Accelerated Zero

      HW Accelerated Copy

    • For NAS storage:

      None

  • vSphere 5

    • For Block Storage:

      Thin Provision Stun

      Space Reclaim

    • For NAS storage:

      Full Clone

      Extended Stats

      Space Reservation



VAAI in vSphere 4.1 = Big impact

http://www.emc.com/collateral/hardware/white-papers/h8115-vmware-vstorage-vmax-wp.pdf



vSphere 5 – Thin Provision Stun


  • Without API

    • When VMFS cannot allocate new blocks because the array's thin LUN pool has run out of free space, VMs crash, snapshots fail, and other badness ensues.

    • Not a problem with “thick” devices, as allocation is fixed.

    • Thin LUNs can fail to deliver a write BEFORE the VMFS is full.

    • Careful management at the VMware and array level is needed.

  • With API

    • Rather than failing the write, the array reports a new error message.

    • On receiving this error, the affected VMs are “stunned”, giving the administrator the opportunity to expand the thin pool at the array level.

[Diagram: three VMDKs on a VMFS-5 extent backed by thin LUNs; SCSI writes succeed while the array storage pool has free blocks, then a write errors as utilization exhausts the pool]
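Since the stun only buys time, monitoring oversubscription still matters. A hedged pyVmomi sketch (our illustration, not from the deck; connection details are placeholders) that flags datastores whose provisioned space exceeds capacity:

    # Sketch only: flag oversubscribed datastores (thin-provisioning risk).
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            s = ds.summary
            # Committed space = capacity - freeSpace; "uncommitted" is the
            # extra that thin disks could still grow into.
            provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
            if provisioned > s.capacity:
                print("%s oversubscribed: %.0f GB provisioned / %.0f GB capacity"
                      % (s.name, provisioned / 2.0**30, s.capacity / 2.0**30))
    finally:
        Disconnect(si)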



vSphere 5 – Space Reclamation

  • Without API

    • When VMFS deletes a file, the file's allocations are returned to VMFS for reuse, and in some cases SCSI WRITE ZERO would zero out the blocks.

    • If the blocks were zeroed, manual space reclamation at the device layer could help.

  • With API

    • Instead of SCSI WRITE ZERO, SCSI UNMAP is used.

    • The array releases the blocks back to the free pool.

    • Used any time VMFS deletes a file (Storage vMotion, Delete VM, Delete Snapshot, Delete from Disk).

    • Note that in vSphere 5, SCSI UNMAP is used in many other places where previously SCSI WRITE ZERO would be used; this depends on VMFS-5.

[Diagram: file creates issue SCSI WRITEs with data to the VMFS-5 extent; file deletes now issue SCSI UNMAP instead of SCSI WRITE ZERO, returning blocks to the array's storage pool]
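One operational note worth hedging: shortly after the 5.0 release, VMware disabled the automatic UNMAP, and reclamation became a manual vmkfstools step run from the ESXi shell inside the datastore's directory. A small wrapper sketch (our assumption about usage, not part of the deck):

    # Sketch only: manual VMFS-5 space reclaim via "vmkfstools -y <percent>"
    # (the manual method introduced in later vSphere 5.0 updates).
    import subprocess

    def reclaim(datastore_path, percent=60):
        """Reclaim dead space; vmkfstools -y runs inside the datastore dir."""
        subprocess.check_call(["vmkfstools", "-y", str(percent)],
                              cwd=datastore_path)

    reclaim("/vmfs/volumes/datastore1")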



vSphere 5 – NFS Full Copy

  • Without API

    • Some NFS servers have the ability to create file-level replicas.

    • vSphere did not use this ability: clone and deploy-from-template operations were traditional host-based file copies.

    • Vendors could expose the feature via vCenter plug-ins; EMC, for example, exposed it via the Virtual Storage Integrator plug-in's Unified Storage module.

  • With API

    • Implemented via a NAS vendor plug-in; used by vSphere for clone and deploy-from-template operations.

    • Uses the EMC VNX OE for File file-version capability.

    • Somewhat analogous to the block XCOPY hardware offload.

    • NOTE - not used during Storage vMotion.

[Diagram: without the API, the ESX host clones a VM over the NFS mount with MANY file reads and writes; with the API, it tells the NFS server to “create a copy (snap, clone, version)” of FOO.VMDK as FOO-COPY.VMDK inside the server's filesystem]
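The offload is transparent to callers: any ordinary clone or deploy-from-template triggers it when the plug-in is present. A hedged pyVmomi sketch (our illustration; the managed-object lookups for vm, folder, and datastore are assumed to have happened already):

    # Sketch only: a plain clone. With the NAS plug-in installed, vSphere
    # can offload the copy to the NFS server instead of reading and
    # writing every block through the host.
    from pyVmomi import vim

    def clone_vm(vm, name, folder, datastore):
        relocate = vim.vm.RelocateSpec(datastore=datastore)
        spec = vim.vm.CloneSpec(location=relocate, powerOn=False,
                                template=False)
        return vm.CloneVM_Task(folder=folder, name=name, spec=spec)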



vSphere 5 – NFS Extended Stats

“just HOW much space does this file take?”

  • Without API

    • Unlike with VMFS, with NFS datastores vSphere does not control the filesystem itself.

    • The vSphere 4.x client used only basic file and filesystem attributes.

    • This led to challenges with managing space when thin VMDKs were used; administrators had no visibility into the thin state and oversubscription of datastores and VMDKs.

      • (Think: with thin LUNs under VMFS, you could at least see details on thin VMDKs.)

  • With API

    • Implemented via a NAS vendor plug-in.

    • The NFS client reads extended file/filesystem details.

[Diagram: the ESX host sees only “Filesize = 100GB”; with the API, the NFS server reports “Filesize = 100GB, but it's a sparse file and has 24GB of allocations in the filesystem. It is deduped - so it's only REALLY using 5GB”]



vSphere 5 – NFS Reserve Space

  • Without API

    • There was no way on NFS datastores to create the equivalent of an “eagerzeroed thick” VMDK (needed for WSFC, Windows Server Failover Clustering) or a “zeroed thick” VMDK.

  • With API

    • Implemented via a NAS vendor plug-in.

    • Reserves the complete space for a VMDK on an NFS datastore.
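For illustration, a hedged pyVmomi sketch of requesting an eager-zeroed thick disk; with the vendor plug-in in place, the same request can be honored on an NFS datastore (the datastore path and size are placeholders):

    # Sketch only: create an eager-zeroed thick VMDK via the
    # VirtualDiskManager; "datacenter" is a vim.Datacenter object.
    from pyVmomi import vim

    def create_ezt_disk(si, datacenter, path, size_gb):
        spec = vim.VirtualDiskManager.FileBackedVirtualDiskSpec(
            diskType="eagerZeroedThick",
            adapterType="lsiLogic",
            capacityKb=size_gb * 1024 * 1024)
        return si.content.virtualDiskManager.CreateVirtualDisk_Task(
            name=path, datacenter=datacenter, spec=spec)

    # e.g. create_ezt_disk(si, dc, "[nfs_datastore] vm1/quorum.vmdk", 10)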


[Video]



What Is VASA?

  • VASA is an extension of the vSphere Storage APIs: a set of vCenter-based extensions that allow storage arrays to integrate with vCenter for management functionality via server-side plug-ins, or vendor providers.

  • Allows a vCenter administrator to be aware of the topology, capabilities, and state of the physical storage devices available to the cluster. Think of it as saying:

    “this datastore is protected with RAID 5, replicated with a 10 minute RPO, snapshotted every 15 minutes, and is compressed and deduplicated”.

  • VASA enables several features:

    • It delivers system-defined (array-defined) capabilities that enable Profile-Driven Storage.

    • It provides array internal information that helps Storage DRS work optimally with various arrays.



How VASA Works

VASA allows a storage vendor to develop a software component called a VASA provider for its storage arrays.

  • A VASA provider gets information from the storage array about available storage topology, capabilities, and state

[Diagram: EMC storage → VASA Provider → vCenter Server 5.0 → vSphere Client]

The vCenter Server connects to a VASA provider.

  • Information from the VASA provider is displayed in the vSphere Client.



Storage Policy

  • Once the VASA provider has been successfully added to vCenter, the VM Storage Profiles view displays the storage capabilities reported by the vendor provider.

  • For EMC in Q3, this is provided for VNX and VMAX via Solutions Enabler for block storage; NFS will require user-defined capabilities.

  • In the future, VNX will have a native provider, and will gain NFS system-defined profiles

  • Isilon VASA support is targeted for Q4



Profile Driven Storage

Profile-driven storage enables the use of datastores that provide varying levels of service.

Profile-driven storage can be used to (see the sketch after this slide):

  • Categorize datastores based on system- or user-defined levels of service

    • For example, user-defined levels might be gold, silver, and bronze.

  • Provision virtual machines' disks on the “correct” storage

  • Check that virtual machines comply with user-defined storage requirements

[Diagram: datastores labeled gold, silver, bronze, and unknown; VMs flagged compliant or not compliant]
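Conceptually, placement and compliance reduce to set matching between what a VM's profile requires and what each datastore advertises (via VASA or user-defined capabilities). A purely illustrative Python sketch with hypothetical data structures, not the vSphere API:

    # Illustration only -- hypothetical structures, not the vSphere API.
    required = {"gold"}                          # the VM storage profile

    datastores = {
        "Celerra_NFS": {"gold"},
        "VNX_Pool_1":  {"silver"},
        "Local_SATA":  set(),                    # unknown capabilities
    }

    # Placement: which datastores are compatible with the profile?
    compatible = [n for n, caps in datastores.items() if required <= caps]
    print("Compatible: %s" % compatible)         # ['Celerra_NFS']

    # Compliance: does the datastore a VM lives on still satisfy it?
    def compliant(vm_datastore):
        return required <= datastores.get(vm_datastore, set())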



Create VM Storage Profile and Capabilities

Home - VM Storage Profile



Using the Virtual Machine Storage Profile

Use the virtual machine storage profile when you create, clone, or migrate a virtual machine.



Storage Profile During Provisioning

  • By selecting a VM Storage Profile, datastores are now split into Compatible and Incompatible.

  • The Celerra_NFS datastore is the only datastore that meets the GOLD profile (user-defined) requirements.



VM Storage Profile Compliance



Storage vMotion – Enhancements

  • New Functionality

    • In vSphere 5.0, Storage vMotion uses a new mirroring architecture (vs. the old snapshot method) that provides the following advantages over previous versions:

      • Guarantees migration success, even against a slower destination.

      • More predictable (and shorter) migration time.

  • New Features

    • Storage vMotion in vSphere 5 works with virtual machines that have snapshots.

      • This means coexistence with other VMware products and features such as VDR and vSphere Replication.

    • Storage vMotion supports the relocation of linked clones.

  • New Use Case

    • Storage DRS (a relocation sketch follows this list)
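From the API side, a Storage vMotion is simply a relocate that names a new datastore while the VM stays powered on. A hedged pyVmomi sketch (our illustration, not from the deck; object lookups omitted):

    # Sketch only: Storage vMotion = RelocateVM_Task with a new datastore.
    from pyVmomi import vim

    def storage_vmotion(vm, target_datastore):
        spec = vim.vm.RelocateSpec(datastore=target_datastore)
        return vm.RelocateVM_Task(
            spec=spec,
            priority=vim.VirtualMachine.MovePriority.defaultPriority)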



What Does Storage DRS Solve?

  • Without Storage DRS:

    • Identify the datastore with the most disk space and lowest latency.

    • Validate which virtual machines are placed on the datastore and ensure there are no conflicts.

    • Create Virtual Machine and hope for the best.

  • With Storage DRS:

    • Automatic selection of the best placement for your VM

    • Advanced balancing mechanism to avoid storage performance bottlenecks or “out of space” problems.

    • VM or VMDK Affinity Rules.



Datastore Cluster

  • A datastore cluster is a group of datastores.

  • Think:

    • Datastore Cluster - Storage DRS = simply a group of datastores (like a datastore folder)

    • Datastore Cluster + Storage DRS = a resource pool analogous to a DRS cluster

    • Datastore Cluster + Storage DRS + Profile-Driven Storage = nirvana

[Diagram: four 500GB datastores aggregated into a 2TB datastore cluster]
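In the vSphere 5 API, a datastore cluster is a StoragePod created in a datacenter's datastore folder. A minimal pyVmomi sketch (our illustration, not from the deck):

    # Sketch only: create an (empty) datastore cluster, i.e. a StoragePod.
    # "datacenter" is a vim.Datacenter object looked up beforehand.
    def create_datastore_cluster(datacenter, name):
        return datacenter.datastoreFolder.CreateStoragePod(name=name)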



Storage DRS – Initial Placement

  • Initial Placement - VM/VMDK create, clone, and relocate (see the placement sketch after the diagram).

    • When creating a VM, you select a datastore cluster rather than an individual datastore.

    • SDRS recommends a datastore based on space utilization and I/O load.

    • By default, all the VMDKs of a VM are placed on the same datastore within a datastore cluster (VMDK Affinity Rule), but you can choose to have VMDKs placed on different datastores.

[Diagram: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB, and 275GB available]
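Under the hood, initial placement is a call to the StorageResourceManager for recommendations. A hedged pyVmomi sketch (our illustration, shown for the relocate case; object lookups omitted):

    # Sketch only: ask Storage DRS which datastore in the pod it would pick.
    from pyVmomi import vim

    def recommend_placement(si, vm, pod):
        spec = vim.storageDrs.StoragePlacementSpec(
            type="relocate",
            vm=vm,
            podSelectionSpec=vim.storageDrs.PodSelectionSpec(storagePod=pod),
            relocateSpec=vim.vm.RelocateSpec())
        result = si.content.storageResourceManager.RecommendDatastores(
            storageSpec=spec)
        # Each recommendation carries a suggested datastore and a rating.
        return result.recommendations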



Storage DRS QoS Operations

When using EMC FAST VP, use SDRS, but disable the I/O metric (a config sketch follows this list).

This combination gives you the simplicity benefits of SDRS for automated placement and capacity balancing, and adds:

  • The economic and performance benefits of automated tiering across SSD, FC, SAS, and SATA

  • 10x (VNX) to 100x (VMAX) higher granularity (sub-VMDK)

How SDRS decides:

  • SDRS triggers action on capacity and/or latency.

    • Capacity stats are constantly gathered by vCenter; the default threshold is 80%.

    • The I/O load trend is evaluated (by default) every 8 hours based on the past day's history; the default threshold is 15 ms.

  • Storage DRS will do a cost/benefit analysis!

  • For latency, Storage DRS leverages Storage I/O Control functionality.
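A hedged pyVmomi sketch (our illustration, not from the deck) of flipping that switch on a datastore cluster:

    # Sketch only: turn off SDRS I/O load balancing on a pod, leaving
    # space balancing and initial placement active.
    from pyVmomi import vim

    def disable_sdrs_io_metric(si, pod):
        spec = vim.storageDrs.ConfigSpec(
            podConfigSpec=vim.storageDrs.PodConfigSpec(
                ioLoadBalanceEnabled=False))
        return si.content.storageResourceManager.ConfigureStorageDrsForPod_Task(
            pod=pod, spec=spec, modify=True)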



EMC VFCache

Performance. Intelligence. Protection.



What If You Could Achieve an Order of Magnitude Better Performance?

[Chart: PCIe Flash technology, IOPS/GB]



Performance Results: Traditional Architecture

[Diagram: numbered I/O steps 1-8 through an array using FAST Policy 1 across EFD, FC HDD, and SATA HDD tiers (3% / 97% / 0% mix)]

Read Latency: ~640 μs – 7.5 ms, Write Latency: ~550 μs – 11 ms

  • Reads and writes are serviced by the storage array

  • Performance varies depending on back-end array’s media, workload, and network

* VNX7500 with 20 SSDs and 20 HDDs; typical loads with 32 outstanding I/Os



Performance Results: VFCache Advanced Architecture

[Diagram: the same FAST Policy 1 array with VFCache added in the host; numbered I/O steps 1-9 show reads intercepted by the PCIe flash card while writes continue to the array]

Read Latency: <100 μs; Write Latency: ~550 μs - 11 ms

  • Reads are serviced by VFCache, for performance

  • Writes are passed through to the storage array, for protection

* VNX7500 with 20 SSDs and 20 HDDs; typical loads with 32 outstanding I/Os



100 Percent Transparent Caching

The VFCache driver extends your SAN:

[Diagram: Application → VFCache Driver → SAN HBA and PCIe Flash → SAN storage]



vStorage APIs

  • vStorage APIs are a “family”

    • vStorage API for Array Integration (VAAI)

    • vStorage API for Site Recovery Manager

    • vStorage API for Data Protection

    • vStorage API for Multipathing

    • vStorage API for Storage Awareness (VASA)



vCenter Plug-ins

  • VMware administrators already use vCenter to manage their organization’s

    • ESX/ESXi Clusters

    • Virtual Machines

  • Purpose-built plug-ins to the vCenter management interface allow VMware administrators to

    • Provision

    • Manage

    • Monitor

      their storage from vCenter as well.



EMC now has ONE vCenter plug-in to do it all: Virtual Storage Integrator (VSI)

  • VSI feature menu structure


  • Login