Day 1, Session 2: Building the Cloud Fabric


Session 2 Overview

  • Configuring the Storage Layer

  • Physical Network

  • Configuring Virtual Networking

  • Bringing the Hypervisor Under Management


Configuring the Storage Layer


New Technologies in the Storage Layer

Windows Server 2012 introduces technologies at the storage layer that can replace a traditional SAN:

  • Storage Spaces

  • SMB 3.0

  • Scale-Out File Server

When used together, these enable high performance, easier administration, and lower cost.


Storage Spaces

  • Storage Spaces uses a pooling model: affordable commodity disks are placed into a pool, and LUNs are created from those pools

  • Supports mirroring and parity for resiliency

  • Works with Windows clustering technologies for high availability

  • Existing backup and snapshot-based infrastructures can be used

[Diagram: commodity disks aggregated into a Storage Pool]
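The pooling workflow can be scripted with the Windows Server 2012 Storage cmdlets. A minimal sketch, assuming the host has unpartitioned disks available; the pool and space names are hypothetical:

    # Gather disks that are eligible for pooling (no partitions, not yet pooled)
    $disks = Get-PhysicalDisk -CanPool $true

    # Create a pool on the local Storage Spaces subsystem from those disks
    New-StoragePool -FriendlyName "Pool01" `
        -StorageSubSystemFriendlyName "*Storage Spaces*" -PhysicalDisks $disks

    # Carve a thinly provisioned, mirrored space (virtual disk/LUN) from the pool
    New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Space01" `
        -ResiliencySettingName Mirror -Size 500GB -ProvisioningType Thin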


Storage Spaces

  • Virtualization of storage with Storage Pools and Storage Spaces

  • Storage resilience and availability with commodity hardware

  • Resiliency and data redundancy through n-way mirroring (clustered or unclustered) or parity mode (unclustered)

  • Utilization optimized through thin provisioning and trim, plus enclosure awareness

  • Integration with other Windows Server 2012 capabilities

  • Serial Attached SCSI (SAS) and Serial AT Attachment (SATA) interconnects

[Diagram: Storage Spaces architecture. Physical (shared) SAS or SATA disks are grouped into Storage Pools; Storage Spaces are carved from the pools and surfaced through NTFS, Cluster Shared Volumes, and NFS on a physical or virtualized Windows application or file server, integrating with Hyper-V, Failover Clustering, SMB Multichannel, SMB Direct, Windows Storage Management, and the File Server Administration Console.]


Application storage support – SMB 3.0

  • Highly available, shared data store for SQL Server databases and Hyper-V workloads

  • Increased flexibility and easier provisioning and management

  • Ability to take advantage of existing network infrastructure

  • No application downtime for planned maintenance or unplanned failures with failover clustering

  • Highly available scale-out file server

  • Built-in encryption support

[Diagram: a Hyper-V cluster and Microsoft SQL Server access a single logical file server (\\Foo\Share) over SMB. The file server cluster exposes a single file system namespace on Cluster Shared Volumes, backed either by Storage Spaces on Storage Pools or by SAN RAID arrays.]
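On the file server side, the continuously available, encrypted shares described above can be created with the SmbShare cmdlets. A sketch; the share path and account names are hypothetical:

    # Continuously available share (transparent failover) with SMB encryption
    New-SmbShare -Name "SQLData" -Path "C:\ClusterStorage\Volume1\SQLData" `
        -FullAccess "CONTOSO\sqlsvc" -ContinuouslyAvailable $true -EncryptData $true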


Efficient storage through Data Deduplication

  • Maximize capacity by removing duplicate data

    • 2:1 with file shares, 20:1 with virtual storage

    • Less data to back up, archive, and migrate

  • Increased scale and performance

    • Low CPU and memory impact

    • Configurable compression schedule

    • Transparent to primary server workload

  • Improved reliability and integrity

    • Redundant metadata and critical data

    • Checksums and integrity checks

    • Increase availability through redundancy

  • Faster file download times with BranchCache

[Chart: average savings with Data Deduplication by workload type (user home folder (My Docs), general file share, software deployment share, VHD library), plotted on a 0-100% scale. Source: Microsoft internal testing.]
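Deduplication is enabled per volume once the role service is installed. A minimal sketch; the drive letter is hypothetical:

    # Install the deduplication feature, then opt a data volume in
    Install-WindowsFeature -Name FS-Data-Deduplication
    Enable-DedupVolume -Volume "E:"

    # Optimize only files older than 3 days, and start a manual optimization job
    Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3
    Start-DedupJob -Volume "E:" -Type Optimization

    # Report space savings
    Get-DedupStatus -Volume "E:"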


Improved network performance through SMB Direct (RDMA)

  • Higher performance through offloading of network I/O processing onto network adapter

  • High throughput with low latency and ability to take advantage of high-speed networks (such as InfiniBand and iWARP)

  • Remote storage at the speed of direct storage

  • Transfer rate of around 50 Gbps on a single NIC port

  • Compatible with SMB Multichannel for load balancing and failover

[Diagram: file client and file server network stacks with and without RDMA. Without RDMA, application data is copied through SMB client/server, transport protocol driver, and NIC driver buffers before reaching the NIC. With RDMA, the SMB client and server hand buffers directly to an RDMA-capable NIC (rNIC) over iWARP or InfiniBand, bypassing the transport and NIC driver copies.]
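SMB Direct needs no share-level configuration; it engages automatically when both ends have RDMA-capable NICs. A quick way to verify from the client, sketched with the standard SMB cmdlets:

    Get-NetAdapterRdma                # which local NICs expose RDMA
    Get-SmbClientNetworkInterface     # interfaces as SMB sees them, with RDMA capability
    Get-SmbMultichannelConnection     # active connections; shows whether RDMA is in use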


Offloaded Data Transfer (ODX)

Token-based data transfer between intelligent storage arrays

  • Benefits:

  • Rapid virtual machine provisioning and migration

  • Faster transfers on large files

  • Minimized latency

  • Maximized array throughput

  • Less CPU and network use

  • Performance not limited by network throughput or server use

  • Improved datacenter capacity and scale

[Diagram: ODX copy flow. The host issues an offload copy request, the external intelligent storage array returns a token representing the data, the host presents the token with its write request, and the array moves the actual data between virtual disks itself before reporting a successful write result.]
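ODX is negotiated automatically when the array supports it. As a host-side check, the FilterSupportedFeaturesMode registry value governs offloaded transfers (to our understanding, absent or 0 means enabled, 1 means disabled):

    # Hedged sketch: read the ODX-related setting; no value present means the default (enabled)
    $fs = Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem"
    $fs.FilterSupportedFeaturesMode   # $null or 0 = ODX allowed, 1 = disabled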


    Unmediated SAN access with Virtual Fibre Channel

    Access Fibre Channel SAN data from a virtual machine

    • Virtualize workloads that require direct access to FC storage

    • Live migration support

    • N_Port ID Virtualization (NPIV) support

    • Single Hyper-V host connected to different SANs

    • Up to four Virtual Fibre Channel adapters on a virtual machine

    • Multipath I/O (MPIO) functionality

    [Diagram: live migration between Hyper-V host 1 and Hyper-V host 2. Each virtual machine carries two Worldwide Name sets (A and B) and alternates between them during live migration, maintaining Fibre Channel connectivity throughout the move.]
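    With NPIV-capable HBAs, the host-side setup is a virtual SAN plus per-VM virtual HBAs. A sketch using the Hyper-V module; the SAN name and WWN values are placeholders for your host HBA's addresses:

        # Define a virtual SAN bound to a physical HBA port on the host
        New-VMSan -Name "ProdSAN" -WorldWideNodeName "C003FF0000FFFF00" `
            -WorldWidePortName "C003FF5778E50002"

        # Attach a virtual Fibre Channel adapter (up to four per VM) to that SAN
        Add-VMFibreChannelHba -VMName "VM01" -SanName "ProdSAN"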


    Demo

    Shared Nothing Live Migration


    Storage Automation – Storage Classification
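    In VMM, classifications are labels attached to discovered storage so capacity can be offered by class of service. A sketch with the VMM cmdlets; the tier names and pool name are hypothetical:

        # Define classifications once per VMM instance
        New-SCStorageClassification -Name "Gold" -Description "Mirrored Space on Scale-Out File Server"
        New-SCStorageClassification -Name "Bronze" -Description "Parity Space for archival workloads"

        # Tag a discovered storage pool with a classification
        $pool = Get-SCStoragePool -Name "Pool01"
        $gold = Get-SCStorageClassification -Name "Gold"
        Set-SCStoragePool -StoragePool $pool -StorageClassification $gold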


    Controlling what people should consume

    Associate a storage pool and/or logical unit with a host group, for consumption by the hosts and clusters that the host group contains

    [Diagram: the Allocate Storage dialog moves available storage pools and storage logical units from unassigned storage to assigned storage for a selected host group.]


    Demo

    Storage Classification Options in VMM 2012 SP1


    Hyper-V over SMB

    What is it?

    Store Hyper-V files in shares over the SMB 3.0 protocol (including VM configuration, VHD files, and snapshots)

    Works with both standalone and clustered servers (file storage used as cluster shared storage)

    Highlights

    Increases flexibility

    Eases provisioning, management and migration

    Leverages converged network

    Reduces CapEx and OpEx

    Supporting Features

    SMB Transparent Failover - Continuous availability

    SMB Scale-Out – Active/Active file server clusters

    SMB Direct (SMB over RDMA) - Low latency, low CPU use

    SMB Multichannel – Network throughput and failover

    SMB Encryption - Security

    VSS for SMB File Shares - Backup and restore

    SMB PowerShell - Manageability

    [Diagram: Hyper-V, SQL Server, IIS, and VDI desktop workloads on Hyper-V hosts store their data over SMB on a file server cluster backed by shared storage.]
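    Once the share and NTFS ACLs grant the Hyper-V host's computer account Full Control, pointing a VM at the share is just a UNC path. A sketch; the server, share, and VM names are hypothetical:

        # Create a VM whose configuration and VHDX live on an SMB 3.0 share
        New-VM -Name "VM01" -MemoryStartupBytes 2GB -Path "\\fs01\VMs" `
            -NewVHDPath "\\fs01\VMs\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB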


    SMB Multichannel

    • Full Throughput

      • Bandwidth aggregation with multiple NICs

      • Multiple CPU cores engaged when the NIC offers Receive Side Scaling (RSS)

    • Automatic Failover

      • SMB Multichannel implements end-to-end failure detection

      • Leverages NIC teaming (LBFO) if present, but does not require it

    • Automatic Configuration

      • SMB detects and uses multiple paths

    • Sample Configurations

      • Single 10GbE RSS-capable NIC

      • Multiple 1GbE NICs

      • Multiple 10GbE NICs in an LBFO team

      • Multiple RDMA NICs

    [Diagram: the four sample configurations, each pairing an SMB client and SMB server through switches: multiple RDMA NICs (10GbE/IB), a single RSS-capable 10GbE NIC, multiple 10GbE NICs in an LBFO team, and multiple 1GbE NICs. Vertical lines are logical channels, not cables.]
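    Multichannel is on by default and self-configuring; it can be observed, and optionally constrained, from the client. A sketch; the server and interface names are hypothetical:

        Get-SmbClientConfiguration | Select-Object EnableMultiChannel   # True by default
        Get-SmbMultichannelConnection                                   # channels currently in use

        # Optionally pin traffic for a given server to specific interfaces
        New-SmbMultichannelConstraint -ServerName "fs01" -InterfaceAlias "10GbE-1","10GbE-2"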


    Demo

    Scale-Out File Share

    Hyper-V over SMB


    Physical Network


    Physical components

    [Diagram: physical components of the fabric: edge devices (firewall, security, load balancer), router, switch, compute with physical NICs, storage, and the rack itself.]


    [Diagram: datacenter network topology. Edge devices connect to a core router; aggregate switches fan out to a top-of-rack switch in each rack (Rack 1, Rack 2), and each rack contains compute and storage.]


    Host Configuration: Three Options

    [Diagram: four host networking designs (Non-converged, Converged Option 1, Converged Option 2, and Converged Option 1+), showing how VM, management, live migration, cluster, and storage traffic is distributed across 1GbE NICs, 10GbE NICs, HBAs, and, in Option 1+, a dedicated link for CSV/RDMA traffic.]


    Configuring Virtual Networking


    Merging Physical and Logical in VMM

    • Logical Network

    • Models the physical network

    • Groups like subnets and VLANs into named objects that can be scoped to a site

    • Container for fabric static IP address pools

    • VM networks are created on logical networks

    • Logical Switch

    • Central container for virtual switch settings

    • Consistent port profiles across data center

    • Add port classifications

    • Consistent extensions

    • Compliance enforcement


    Configuring Logical Networks

    [Diagram: the layering from the physical network up through the logical network, logical switch, per-host virtual switches, tenant VM networks (Tenant 1, Tenant 2), and a gateway out to the Internet, built in five steps:]

    1 - Define Logical Networks

    2 - Define VM Networks

    3 - Create Logical Switches

    4 - Assign Logical Switch

    5 - Create and Assign Gateways
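    Steps 1 and 2 can be scripted with the VMM cmdlets. A sketch, assuming a host group named Rack1 exists; the network names, subnet, and VLAN are hypothetical:

        # 1 - Logical network plus a network site scoping a subnet/VLAN to a host group
        $ln = New-SCLogicalNetwork -Name "Datacenter"
        $subnet = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 10
        New-SCLogicalNetworkDefinition -Name "Datacenter - Rack1" -LogicalNetwork $ln `
            -SubnetVLan $subnet -VMHostGroup (Get-SCVMHostGroup -Name "Rack1")

        # 2 - VM network for a tenant on top of the logical network
        New-SCVMNetwork -Name "Tenant1" -LogicalNetwork $ln -IsolationType "WindowsNetworkVirtualization"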


    Address Pools

    IP POOLS

    • Assigned to VMs, vNICs, hosts, and virtual IPs (VIPs)

    • Specified for use in VM template creation

    • Checked out at VM creation; assigns a static IP in the VM

    • Returned on VM deletion

    MAC POOLS

    • Assigned to VMs

    • Specified for use in VM template creation

    • Checked out at VM creation; assigned before the VM boots

    • Returned on VM deletion

    VIRTUAL IP POOLS

    • Assigned to service tiers that use a load balancer

    • Reserved within IP pools

    • Assigned to clouds

    • Checked out at service deployment

    • Returned on service deletion
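    A static IP pool is defined on a network site (logical network definition). A sketch continuing the hypothetical site above; all addresses are placeholders:

        $lnd = Get-SCLogicalNetworkDefinition -Name "Datacenter - Rack1"
        New-SCStaticIPAddressPool -Name "Rack1-Pool" -LogicalNetworkDefinition $lnd `
            -Subnet "192.168.10.0/24" `
            -IPAddressRangeStart "192.168.10.50" -IPAddressRangeEnd "192.168.10.99" `
            -DefaultGateway (New-SCDefaultGateway -IPAddress "192.168.10.1" -Automatic) `
            -DNSServer "192.168.10.2"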


    Demo

    Configuring Virtual Networking in VMM


    DEMO: Configuring Network Fabric in VMM

    • Define Logical Networks

      • Datacenter networks (isolated VLANs)

      • Provider networks (virtualized networks)

    • Define VM Networks

      • One per VLAN or virtualized network

    • Create Logical Switch

      • Port classifications and port profiles

      • Switch extensions

    • Assign Logical Switch

      • Host – add the logical switch

    • Create and Assign Gateways (virtualized networks)

      • The gateway is how Internet access is provided to isolated tenant VM networks


    A Note on Tenant Configuration

    • Using network virtualization for isolation

    • NVGRE gateway gives tenants access to the outside world

    • With Gateway

    • Private cloud: route to local networks

    • Hybrid cloud: create a site-to-site tunnel

    • Without Gateway

    • Use a VM with two NICs

    • One on isolated network, one on “Internet”


    Bringing the Hypervisor Under Management


    Bringing Hyper-V Hosts Under Management

    VMM provides a lot of flexibility in managing Hyper-V hosts and clusters

    Supports domain and workgroup hosts

    Windows Server 2008 and 2012 hosts

    Add hosts through the UI or PowerShell

    Enables drag-and-drop clustering in the VMM console

    Provides RBAC for provisioning access to map to our “classes of service”
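    The PowerShell path for adding a domain-joined host is short once a Run As account exists. A sketch; the account, host, and host group names are hypothetical:

        # Run As account with admin rights on the host being added
        $runAs = Get-SCRunAsAccount -Name "HostAdmin"
        Add-SCVMHost -ComputerName "hv01.contoso.com" `
            -VMHostGroup (Get-SCVMHostGroup -Name "Rack1") -Credential $runAs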


    Bringing VMware Hosts Under Management

    A few important points to understand

    Connecting VMM to vCenter does not result in a fundamental change to the datacenter tree

    Re-arranging and securing vSphere hosts and host clusters in VMM does NOT affect security within vCenter

    Even if you don’t deploy to vSphere in phase 1, this connectivity brings visibility from an asset management perspective
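    Connecting VMM to vCenter is likewise a single cmdlet, after which the vSphere hosts surface in the same console. A sketch with hypothetical names:

        $runAs = Get-SCRunAsAccount -Name "vCenterAdmin"
        Add-SCVirtualizationManager -ComputerName "vcenter01.contoso.com" -Credential $runAs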


    Demo

    Managing Hyper-V and VMware Hosts in VMM


    Module Summary

    In this module, you learned about:

    • Configuring the Storage Layer

    • Physical Network

    • Configuring Virtual Networking

    • Bringing the Hypervisor Under Management

