
Unified Fabric aka FCoE

Dave Gibson

Senior Systems Engineer

Cisco Systems

Legal Disclaimer

Many of the products and features described herein remain in varying stages of development and will be offered on a when-and-if-available basis. This roadmap is subject to change at the sole discretion of Cisco, and Cisco will have no liability for delay in the delivery or failure to deliver any of the products or features set forth in this document.

Agenda
  • The Evolution of the Data Center
  • Introduction to FCoE
  • Standards Defined
  • Nexus and the Unified Fabric
  • Nexus 5000
Data Center Access Layer Trends

  • Multi-core CPU architectures allowing bigger and multiple workloads on the same machine
  • Server virtualization driving the need for more I/O bandwidth per server
  • Growing need for network storage driving the demand for higher network bandwidth to the server
  • Increasing adoption of blades in data centers
  • 10G LOM (LAN on Motherboard) on servers

Next-Gen Switch Design Goals
  • Consolidate LAN & SAN infrastructure
  • Standards based solution
  • Reduce total cost of ownership
  • End-to-end data center architecture
  • Operational consistency across platforms
  • Build with superior performance in mind
  • Support low-latency applications (e.g. HPC, clustered apps)
  • Enable Virtualization
  • Address increase in server processing power
  • Scale to 40G and 100G in future
  • Increase feature velocity
Cisco Nexus Family

  • Complete data center class switching portfolio
  • Consistent data center operating system across all platforms
  • Infrastructure scalability, transport flexibility and operational manageability

  • Nexus 7000 (Modular Switch Platform)
  • Nexus 5000 (Fixed Config Switch)
  • Nexus 4000 (Blade Switch)
  • Nexus 2000 (Fabric Extender)
  • Nexus 1000V (Virtual Switch for x86 virtualized servers)
  • NX-OS Data Center Operating System
  • Data Center Network Manager

Before I/O Consolidation: Parallel LAN/SAN Infrastructure

  • Inefficient use of network infrastructure
  • 5+ connections per server – higher adapter and cabling costs
  • Adds downstream port costs; cap-ex and op-ex
  • Each connection adds additional points of failure in the fabric
  • Multiple switching modules in blade chassis
  • Longer lead time for server provisioning
  • Multiple fault domains – complex diagnostics
  • Management complexity

(Diagram: servers with separate NICs and HBAs, and blade chassis with I/O modules, connect over Ethernet to the LAN and over FC to SAN A and SAN B.)

I/O Consolidation

  • Reduction of server adapters
  • Simplification of access layer and cabling
  • Gateway-free implementation – fits into the installed base of existing LANs and SANs
  • Lower total cost of ownership
  • Fewer cables
  • Investment protection (LANs and SANs)
  • Consistent operational model

(Diagram: servers with CNAs and blade chassis with Nexus 4000 switches connect over Data Center Bridging and FCoE to a pair of Nexus 5000 switches, which break out Ethernet to the LAN and native Fibre Channel to SAN A and SAN B.)

Evolution of 10G Ethernet Physical Media: Role of Transport in Enabling these Technologies

  • Mid 1980s – 10Mb over UTP Cat 3
  • Mid 1990s – 100Mb over UTP Cat 5
  • Early 2000s – 1Gb over UTP Cat 5 and SFP fiber
  • Late 2000s – 10Gb over X2, SFP+ Cu (BER better than 10^-18), SFP+ fiber, and Cat 6/7

What is Fibre Channel over Ethernet?

FCoE is an extension of Fibre Channel onto a lossless Ethernet fabric.

  • From a Fibre Channel standpoint, it is FC connectivity over a new type of cable called… an Ethernet cloud
  • From an Ethernet standpoint, it is yet another ULP (Upper Layer Protocol) to be transported
Unified Fabric Overview: Fibre Channel over Ethernet (FCoE)

Benefits:
  • Fewer cables
    • Both block I/O and Ethernet traffic co-exist on the same cable
  • Fewer adapters needed
  • Less power overall
  • Interoperates with existing SANs
    • Management of SANs remains constant
  • No gateway

How it works:
  • Mapping of FC frames over Ethernet
  • Enables FC to run on a lossless Ethernet network

FCoE Enablers

  • 10Gbps Ethernet
  • Lossless Ethernet
    • Matches the lossless behavior guaranteed in FC by B2B credits
  • Ethernet jumbo frames

FCoE frame format:
  [ Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS ]
  • A normal Ethernet frame with Ethertype = FCoE
  • The encapsulated frame is the same as a physical FC frame
  • The FCoE header carries control information: version, ordered sets (SOF, EOF)
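A rough sizing sketch of why jumbo frames are listed as an enabler (the byte counts follow the usual FC and Ethernet maximums and are an illustration, not taken from the original slides):

  FC payload (max data field)            2112 bytes
  FC header                                24 bytes
  FC CRC                                    4 bytes
  FCoE header (version, reserved, SOF)     14 bytes
  FCoE trailer (EOF, reserved)              4 bytes
  Ethernet header + 802.1Q tag             18 bytes
  Ethernet FCS                              4 bytes
  -------------------------------------------------
  Maximum FCoE frame on the wire         2180 bytes

A full-size FCoE frame therefore exceeds the standard 1500-byte Ethernet MTU, which is why the default class-fcoe shown later in this deck uses an MTU of 2240.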

Unified I/O: Fibre Channel over Ethernet (FCoE)

FCoE is Fibre Channel – easy to understand, with the same operational model, the same techniques of traffic management, and the same management and security models.

  • FCoE is managed like FC at the initiator, target, and switch level
  • Completely based on the FC model
  • Same host-to-switch and switch-to-switch behavior as FC
    • e.g. in-order delivery, FSPF load balancing
    • WWNs, FC-IDs, hard/soft zoning, DNS, RSCN

Network Stack Comparison

  • SCSI:  SCSI → physical wire
  • FC:    SCSI → FCP → FC → physical wire
  • FCoE:  SCSI → FCP → FC → FCoE → Ethernet → physical wire
  • FCIP:  SCSI → FCP → FC → FCIP → TCP → IP → Ethernet → physical wire
  • iSCSI: SCSI → iSCSI → TCP → IP → Ethernet → physical wire

FCoE has less overhead than FCIP and iSCSI.

A larger picture
  • IEEE 802
    • Evolution of Ethernet (10 GE, 40 GE, 100 GE, copper and fiber)
    • Evolution of switching (Priority Flow Control, Enhanced Transmission Selection, Congestion Management, Data Center Bridging eXchange)
  • INCITS/T11
    • Evolution of Fibre Channel (FC-BB-5)
    • FCoE (Fibre Channel over Ethernet)
  • IETF
    • Layer 2 Multi-Path
      • TRILL (Transparent Interconnection of Lots of Links)
DCE versus DCB
  • DCE is an old Cisco marketing term
  • Cisco is now using the term DCB
    • The term IEEE uses
  • Cisco supports the DCB standard activity
    • By implementing products that are DCB compliant
  • CIN-DCBX – Cisco, Intel, Nuova Data Center Bridging Exchange protocol, pre-standard
  • CEE-DCBX – Converged Enhanced Ethernet Data Center Bridging Exchange protocol, which is standards-based
What’s FC-BB-5
  • FC-BB-5 covers the majority of the FC features, using Ethernet
  • From an Ethernet perspective, FC-BB-5 defines
    • An Ethernet control plane, referred to as FIP (FCoE Initialization Protocol)
      • used to discover and build virtual links between end points
    • An Ethernet data plane providing FCoE forwarding
      • including both the FC control plane and FC data plane (FCF)
FC-BB-6
  • An active T11 working group that will discuss the future of FCoE (FCoE v2.0)
  • It has only just started; roughly 18 months are expected to produce a standard
    • Approximate target: spring 2011
  • You can track it on
    • http://www.fcoe.com
Protocol Organization

FCoE is really two different protocols:

  • FCoE itself
    • The data plane protocol
    • Used to carry most of the FC frames and all the SCSI traffic
  • FIP (FCoE Initialization Protocol)
    • The control plane protocol
    • Used to discover the FC entities connected to an Ethernet cloud
    • Used to log in to and log out from the FC fabric
  • The two protocols have:
    • Two different Ethertypes
    • Two different frame formats
What’s NOT FC-BB-5
  • FC-BB-5 doesn’t deal with how lossless is realized in Ethernet
    • no Priority Flow Control, Bandwidth Management, etc.
  • FC-BB-5 doesn’t deal with management functions
IEEE DCB standards status

DCB technologies allow Ethernet to be lossless and to manage bandwidth allocation of SAN and LAN flows

Data Center Ethernet: PFC & Bandwidth Management

  • Priority Flow Control enables lossless behavior for each class of service
    • PAUSE is sent per virtual lane (eight virtual lanes) when buffer limits are exceeded
  • 802.1Qaz Enhanced Transmission Selection enables intelligent sharing of bandwidth between traffic classes and control of bandwidth
    • CoS-based bandwidth management gives each class a guaranteed share of the 10GE link

(Diagram: offered traffic vs. realized utilization on a 10GE link, with LAN, storage, and HPC traffic mapped to separate virtual lanes and per-lane PAUSE applied between transmit queues and receive buffers.)
DCBX Overview

  • Auto-negotiation of capability and configuration
    • Priority Flow Control capability and associated CoS values
  • Allows one link peer to push its configuration to the other link peer
    • Link partners can choose supported features and willingness to accept
  • Discovers FCoE capabilities
  • Responsible for logical link up/down signaling of Ethernet and FC
  • DCBX negotiation failures will result in:
    • vfc not coming up
    • Per-priority pause not enabled on CoS values with PFC configuration

References:
  http://download.intel.com/technology/eedc/dcb_cep_spec.pdf
  http://www.ieee802.org/1/files/public/docs2008/
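A brief verification sketch: on a Nexus 5000, the negotiated PFC state and the DCBX exchange with a peer can typically be inspected with show commands along these lines (the interface number is an example, and exact command syntax varies by NX-OS release, so treat this as an assumption to check against your release documentation):

  switch# show interface ethernet 1/1 priority-flow-control
  switch# show system internal dcbx info interface ethernet 1/1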

FIP: FCoE Initialization Protocol
  • FCoE VLAN discovery
    • Automatic discovery of FCoE VLANs
  • Device discovery
    • ENodes discover VF_Port capable FCF-MACs for VN_Port to VF_Port Virtual Links
    • VE_Port capable FCF-MACs discover other VE_Port capable FCF-MACs for VE_Port to VE_Port Virtual Links
    • The protocol verifies the Lossless Ethernet network supports the required Max FCoE Size
  • Virtual Link instantiation
    • Builds on the existing Fibre Channel Login process, adding the Negotiation of the MAC address to use
      • Fabric Provided MAC Address (FPMA), or
      • Server Provided MAC Address (SPMA)
  • Virtual Links maintenance
    • Timer based
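FIP VLAN discovery and virtual link instantiation presuppose that the FCF already has an FCoE VLAN mapped to a VSAN. A minimal Nexus 5000 sketch of that mapping (VLAN 100 and VSAN 100 are example values, not from the original slides):

  feature fcoe
  vsan database
    vsan 100
  vlan 100
    fcoe vsan 100

The ENode learns this FCoE VLAN through FIP VLAN discovery and then solicits FCFs on it.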
Fabric Provided vs. Server Provided MAC Addresses

Fabric Provided MAC Addresses (FPMA)
  • The 48-bit MAC address is built from the 24-bit FC-MAP (e.g. 0E-FC-00) concatenated with the 24-bit FC-ID (e.g. 7.8.9)
  • A MAC address is assigned for each FC_ID: consistent with the Fibre Channel model
  • Multiple FC-MAPs may be supported, one per SAN
  • No table needed for encapsulation

Server Provided MAC Addresses (SPMA)
  • The adapter uses its burned-in or configured 48-bit MAC address: consistent with the Ethernet model
  • Multiple MACs may be needed for NPIV
  • The FCF needs a table to map between MAC addresses and FC_IDs

Cisco Nexus 5000 uses FPMA.
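As a worked example using the illustrative values above, the FPMA is simply the concatenation of the two 24-bit fields:

  FC-MAP (upper 24 bits)          0E:FC:00
  FC-ID 7.8.9 (lower 24 bits)     07:08:09
  Resulting VN_Port MAC address   0E:FC:00:07:08:09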

Initial Login Flow Ladder

FIP (FCoE Initialization Protocol) phase, between the ENode and the FCoE switch:
  • VLAN Discovery request from the ENode, VLAN Discovery response from the switch
  • FCF Discovery: Solicitation from the ENode, Advertisement from the FCF
  • FLOGI/FDISC from the ENode, FLOGI/FDISC Accept from the FCF

FCoE protocol phase:
  • FC commands and FC command responses flow between the ENode and the fabric
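A brief verification sketch: once the ladder above completes, the ENode's login should appear on the switch. In fabric (switch) mode this is visible in the FLOGI database; when the switch runs in NPV mode it appears in the NPV FLOGI table instead:

  switch# show flogi database
  switch# show npv flogi-table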

ENode: Simplified Model

  • ENode (FCoE Node): a Fibre Channel HBA implemented within an Ethernet NIC, aka CNA (Converged Network Adapter)
  • FCoE_LEP: the data forwarding component that handles FC frame encapsulation/decapsulation
  • FCoE Controller: the functional entity that performs FIP and instantiates VN_Port/FCoE_LEP pairs

(Diagram: an FC Node sitting above paired FCoE_Controller / FCoE_LEP instances, each on its own Ethernet port.)

FCoE Switch: Simplified Model

  • FCF (Fibre Channel Forwarder): the forwarding entity inside an FCoE switch

(Diagram: an FCoE switch containing an Ethernet bridge with multiple Ethernet ports, an FCoE_Controller and FCoE_LEP, and an FCF with native FC ports.)

FCoE: Initial Deployment

(Diagram: servers with CNAs attach their VN_Ports over 10GE to VF_Ports on a pair of Nexus 5000 switches acting as FCFs; the Nexus 5000s uplink over 10GE to the backbone for LAN traffic and over 4/8 Gbps native FC to SAN A and SAN B.)
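A minimal sketch of the VF_Port side of this deployment on a Nexus 5000, assuming the VLAN 100 / VSAN 100 mapping from the earlier FIP example and an example server-facing port Ethernet 1/10:

  interface ethernet 1/10
    switchport mode trunk
    switchport trunk allowed vlan 1,100
    spanning-tree port type edge trunk

  interface vfc 10
    bind interface ethernet 1/10
    no shutdown

  vsan database
    vsan 100 interface vfc 10

The vfc interface is the VF_Port presented to the CNA's VN_Port; the native FC uplinks to SAN A/B are configured as regular fc interfaces.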

FCoE: Adding Blade Servers

(Diagram: blade servers attach over 10GE as VN_Ports to VF_Ports on the Nexus 5000s, alongside the rack servers; the uplinks remain 10GE to the backbone and 4/8 Gbps FC to SAN A and SAN B.)

FCoE: Adding Native FCoE Storage

(Diagram: native FCoE storage arrays attach their VN_Ports over 10GE directly to VF_Ports on the Nexus 5000s, alongside the server VN_Ports; the 4/8 Gbps FC uplinks to SAN A and SAN B remain.)

FCoE: Adding VE_Ports

(Diagram: VE_Ports connect FCFs to each other over 10GE, extending the FCoE fabric between switches; servers still attach as VN_Ports to VF_Ports, and the 4/8 Gbps FC links to SAN A and SAN B remain.)

The Unified Data Center Architecture

  • Core (Nexus 7000; L3): L3 boundary to the DC network. Functional point for route summarization, the injection of default routes, and termination of segmented virtual transport networks.
  • Aggregation (Nexus 7000 with vPC, Catalyst 6500 service modules, service appliances; L3/L2 boundary): Typical L3/L2 boundary. DC aggregation point for uplinks and DC services, offering key features: vPC, VDC, 10GE density and the first point of migration to 40GE and 100GE.
  • Access (Nexus 5000, Nexus 2000, Unified Compute System; L2): Classic network layer providing non-blocking paths to servers and IP storage devices through vPC. It leverages the Distributed Access Fabric (DAF) model to centralize configuration and management and ease horizontal cabling demands in 1GE and 10GE server environments.
  • Virtual Access (Nexus 1000V; vL2): A virtual layer of network intelligence offering access-layer-like controls to extend traditional visibility, flexibility and management into virtual server environments. Virtual network switches bring access-layer switching capabilities to virtual servers without the burden of topology control plane protocols. Virtual adapters provide granular control over virtual and physical server I/O resources.

(Diagram: pods of racks hosting virtual machines, with each pod served by Nexus 1000V at the virtual access layer, Nexus 5000/2000 at the access layer, Nexus 7000 vPC pairs at aggregation, and Nexus 7000 at the core.)

Fitting the Pieces Together…

  • DC Core: Nexus 7000 10GbE core, with an IP+MPLS WAN aggregation router toward the WAN
  • DC Aggregation: Nexus 7000 10GbE aggregation and Catalyst 6500 10GbE VSS aggregation with DC services (service modules and appliances); MDS 9500 with storage services for SAN A/B
  • DC Access:
    • Catalyst 6500 and Nexus 7000 End-of-Row
    • Catalyst 49xx rack access (1GbE server access)
    • Nexus 5K|2K Top of Rack (1GbE and 10GbE server access)
    • CBS 3100 | MDS 9100 blade switches, UCS blade or Nexus 4K
    • Nexus 1000V VN-Link at the virtual access layer
    • MDS 9500 for storage
  • Link types in the diagram: Gigabit Ethernet, 10 Gigabit Ethernet, 4/8Gb Fibre Channel, 10 Gigabit FCoE/DCE

Policy Enforcement

  • Frames are evaluated by a multistage engine: lookups occur in parallel and the results feed the forwarding pipeline, the diagnostics pipeline (Switch Port Analyzer (SPAN) and diagnostic sampling), and the control plane redirect/snooping pipeline.
  • Checks applied in sequence (pass/fail):
    • VLAN membership check
    • Interface, VLAN, and MAC binding
    • MAC and L3 binding (IP & Fibre Channel)
    • Fibre Channel zone membership check
    • Port ACLs (permit/deny)
    • VLAN ACLs, ingress (permit/deny)
    • Role-based ACLs, egress (permit/deny)
    • QoS ACLs, ingress (permit/policer drop)
    • Multipath expansion
  • Copies of frames go to the SPAN session (diagnostics) and to the supervisor (control plane redirect/snooping).
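Two of the enforcement points above, port ACLs and SPAN, are directly configurable. A brief sketch (the ACL name and interface numbers are examples):

  ! Port ACL applied at ingress on a server-facing port
  ip access-list example-pacl
    permit tcp any any eq 22
    deny ip any any
  interface ethernet 1/10
    ip port access-group example-pacl in

  ! SPAN session copying that port's traffic to a monitor port
  interface ethernet 1/39
    switchport monitor
  monitor session 1
    source interface ethernet 1/10 both
    destination interface ethernet 1/39
    no shut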

Default QoS Configuration

  • QoS is always on
  • Four default classes of service are defined when the system boots: two for control traffic, one for FCoE traffic and one for default Ethernet traffic
  • class-fcoe matches CoS 3 and is no-drop with MTU 2240
  • class-default matches any other traffic
  • class-fcoe and class-default each get 50% of the guaranteed bandwidth by default

Default policy-maps and class-maps:

switch1# sh policy-map

  Type qos policy-maps
  ====================
  policy-map type qos default-in-policy
    class type qos class-fcoe
      set qos-group 1
    class type qos class-default
      set qos-group 0

  Type queuing policy-maps
  ========================
  policy-map type queuing default-in-policy
    class type queuing class-fcoe
      bandwidth percent 50
    class type queuing class-default
      bandwidth percent 50
  policy-map type queuing default-out-policy
    class type queuing class-fcoe
      bandwidth percent 50
    class type queuing class-default
      bandwidth percent 50

  Type network-qos policy-maps
  ============================
  policy-map type network-qos default-uf-policy
    class type network-qos class-fcoe
      pause no-drop
      mtu 2240
    class type network-qos class-default
      mtu 1538

switch2# show class-map

  Type qos class-maps
  ===================
  class-map type qos class-fcoe
    match cos 3
  class-map type qos class-default
    match any

  Type queuing class-maps
  =======================
  class-map type queuing class-fcoe
    match qos-group 1
  class-map type queuing class-default
    match qos-group 0

  Type network-qos class-maps
  ===========================
  class-map type network-qos class-fcoe
    match qos-group 1
  class-map type network-qos class-default
    match qos-group 0
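The defaults can be overridden. A hedged sketch of shifting the ETS bandwidth split toward FCoE on a Nexus 5000 (the policy name and the 60/40 split are example values):

  policy-map type queuing custom-out-policy
    class type queuing class-fcoe
      bandwidth percent 60
    class type queuing class-default
      bandwidth percent 40

  system qos
    service-policy type queuing output custom-out-policy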

Nexus 5000 Software Feature Set

  • Layer 2: 802.1w (Rapid Spanning Tree), 802.1s (Multiple Spanning Tree), RPVST+, Root Guard, Uplink Guard, Bridge Assurance, PortFast, CDP, PVLANs, UDLD, LACP, IGMP Snooping, 802.1Q trunks, Port-Channel, SVI, SPAN, Jumbo Frames, NTP, Link State Tracking (LST)
  • Management/Security: RADIUS, TACACS+, AAA, CallHome, SSHv1/v2, telnet, IPv4 & IPv6 mgmt, SNMP MIBs, traps, EthAnalyzer (Wireshark), RBAC, DCNM, RME support via CiscoWorks, syslog, coredump, RMON, first-setup script, accounting log, checkpoint and configuration rollback
  • ACL/QoS: PACLs, VACLs, session-based ACLs, ACL-based QoS, egress bandwidth limiting, 802.1p priority, strict priority scheduling, WRED, tail drop, storm control (broadcast, multicast), egress shaper
  • FCoE: FIP Snooping Bridge, DCBXP, PFC (Priority Flow Control), 8 virtual lanes, ETS (Enhanced Transmission Selection)

Switch Mode
  • Nexus 5000 FC module can be ISL’ed to another FC switch (E_port)
  • Zoning, DPVM, etc. are enforced on the Nexus 5000
  • Domain manager, FSPF, zone server, fabric login server, name server run on Nexus 5000
  • Requires a domain ID for every VSAN
  • Interop mode considerations when connecting to non-Cisco FC switches
  • Note: Nexus 5000 supports direct connectivity to FC initiator (server HBAs) and targets (storage arrays)
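A minimal sketch of the fabric (switch) mode uplink: the FC port toward the other switch is simply brought up as an E_port/ISL, after which the Nexus 5000 participates in domain assignment, FSPF and zoning (the interface number is an example):

  interface fc 2/1
    switchport mode E
    no shutdown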
N-Port Virtualization (NPV) mode
  • Nexus 5000 FC module can work in NPV mode
    • Server-facing ports are regular F ports
    • Uplinks toward SAN core fabric are NP ports
  • Nexus 5000 switches assign FCIDs to attached devices
    • First byte in FCID received from core SAN switch
  • One VSAN per uplink on Nexus 5000 (will change in future)
    • No trunking or channelling of NP ports
  • Zoning, DPVM, etc. are not enforced on the Nexus 5000
  • Domain manager, FSPF, zone server, fabric login server, name server
    • They do not run on Nexus 5000
  • No local switching
    • All traffic routed via the core SAN switches
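A minimal NPV sketch with example interface numbers. Note that enabling NPV on the Nexus 5000 erases the running configuration and reloads the switch, so plan accordingly:

  ! On the Nexus 5000 edge switch
  feature npv
  interface fc 2/1
    switchport mode NP
    no shutdown

  ! On the NPV core switch (MDS or another NPIV-capable switch)
  feature npiv
  interface fc 1/1
    switchport mode F
    no shutdown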
N-Port Virtualization (NPV): An Overview

  • The NPV-core switch (an MDS or a 3rd-party switch with NPIV support) presents F-ports toward the Nexus 5000 in their respective VSANs (e.g. VSAN 5, VSAN 10)
  • The Nexus 5000 connects toward SAN fabrics A & B through NP-ports
    • Multiple uplinks are possible – one VSAN per uplink
    • Two uplinks can be in the same VSAN
    • No port channel or trunking on the uplinks
  • FCIDs are assigned to servers with no domain ID to configure on the Nexus 5000
  • Hosts attach with their N-ports to F-ports on the Nexus 5000 and log in (FLOGI) locally

Working with Nexus 2148 (Optional)

Fabric Extender Uplink Modes

  • The Fabric Extender associates (pins) a server-side (1GE) port with an uplink (10GE) port
  • Server ports are either individually pinned to specific uplinks (static pinning) or all interfaces are pinned to a single logical port channel
  • Behaviour on FEX uplink failure depends on the configuration (see the sketch below):
    • Static pinning – server ports pinned to the failed uplink are brought down; the server interface goes down
    • Port channel – server traffic is shifted to the remaining uplinks based on the port-channel hash; the server interface stays active
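A hedged sketch of the two uplink modes on the Nexus 5000 side (the FEX IDs and port numbers are examples):

  ! Static pinning: each fabric uplink carries a fixed set of FEX host ports
  fex 100
    pinning max-links 4
  interface ethernet 1/1-4
    switchport mode fex-fabric
    fex associate 100

  ! Port-channel mode: all fabric uplinks bundled into one logical link
  fex 101
    pinning max-links 1
  interface port-channel 101
    switchport mode fex-fabric
    fex associate 101
  interface ethernet 1/5-8
    switchport mode fex-fabric
    fex associate 101
    channel-group 101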