
Introducing QFabric: Reinventing the Data Center Network




Presentation Transcript


  1. Introducing QFabric: Reinventing the Data Center Network

    Simon Gordon – Juniper Networks, Senior Product Line Manager - FSG/DCBU, sgordon@juniper.net, +1-408-242-2524
  2. QFabric is Real: QF/Node (QFX3500), QF/Interconnect, QF/Director.
  3. Trends in the Data Center.
    Server trends: consolidation; multi-core (8 -> 16 -> 32 ... 128 cores); virtualization and VMs; mega DCs of 400K sq ft, 4K racks, 200K servers.
    Interconnect trends: convergence to 10GE; enhancements to Ethernet; 10/40/100 GE.
    Application trends: SOA, Web 2.0; MapReduce, Hadoop, grids; east-west traffic.
    QFabric: any service, any port; low O/S; DC scale.
  4. Today’s Architecture is Non-Transparent. Location-dependent adjacencies between nodes. [Figure: servers and NAS connected through a multi-tier Ethernet network, with charts of scale vs. latency and scale vs. bandwidth showing latency growing and bandwidth degrading as tiers are added.]
  5. QFabric: 1 Tier. One large, seamless resource pool. [Diagram: a single, scalable fabric connecting servers, FC storage, and NAS; SRX5800 and vGW provide security and virtual control; MX Series with MPLS and VPLS provides inter-DC connectivity to a remote data center.]
  6. QFabric Benefit. [Charts: scale vs. latency and scale vs. bandwidth; in a traditional design latency rises and bandwidth degrades as the network scales, while QFabric stays flat on both.]
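The latency comparison on this slide can be made concrete with a toy model: worst-case switch traversals in a multi-tier tree grow with the number of tiers, while a flat fabric pays one fixed cost. In the sketch below the per-hop figure is a hypothetical placeholder; only the 5 µs fabric bound comes from this deck (slide 13).

```python
# Toy model of scale vs. latency. TIER_HOP_LATENCY_US is a hypothetical
# placeholder; the 5 us fabric bound is the figure quoted later in the deck.
TIER_HOP_LATENCY_US = 2.0
FABRIC_LATENCY_US = 5.0

def tree_latency_us(tiers: int) -> float:
    """Worst-case server-to-server path in a tree: up through each tier
    and back down, i.e. (2 * tiers - 1) switch traversals."""
    return (2 * tiers - 1) * TIER_HOP_LATENCY_US

for tiers in (1, 2, 3):
    print(f"{tiers}-tier tree: {tree_latency_us(tiers):.1f} us worst case")
print(f"flat fabric: {FABRIC_LATENCY_US:.1f} us regardless of scale")
```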
  7. A Revolutionary New Architecture: 3 design principles. Data plane: rich edge, simple core; everything is one hop away. Director plane: federated intelligence, the only way to scale with resilience. Management plane: N=1, the operational model of a single switch.
  8. Data Plane in a Single Switch. 1. All ports are directly connected to every other port. 2. A single “full lookup” processes packets.
  9. Director Plane in a Single Switch. Director plane: a single consciousness; centralized shared tables hold information about all ports. Management plane: all ports are managed from a single point.
  10. A Single Switch Does Not Scale. Ports can be added to a single switch fabric, but eventually it runs out of real estate, and after that the network cannot be flat. The choice: sacrifice simplicity, or change the scaling model.
  11. Disaggregate: Scaling the Switch. Separate the line cards and supervisor cards from the fabric, yielding the QF/Node, QF/Interconnect, and QF/Director. Replace the copper traces with fiber links. For redundancy, add multiple devices.
  12. Disaggregate: Scaling the Switch (continued). With the line cards separated from the fabric, copper traces replaced by fiber links, and multiple devices for redundancy, the design enables large scale.
  13. Scaling the Data Plane (QF/Node and QF/Interconnect). All ports are directly connected to every other port; a single “full lookup” is performed at the ingress QF/Node device. Blazingly fast: always under 5 µs, and 3.71 µs with short cables. QFabric is faster than any Ethernet chassis switch ever built.
  14. Scaling the Director Plane. Old model: active/backup. The single active instance limits scalability.
  15. Scaling the Director Plane. New model: services-oriented (QF/Director). The intelligence and state are federated, distributed across the fabric; director and management services use a scale-out model. [Diagram: a new host address being learned and distributed across the fabric.]
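To illustrate what "federated, distributed across the fabric" can look like in practice, here is a minimal consistent-hashing sketch that partitions host-address state across director service instances. The class and instance names are illustrative, not Juniper internals.

```python
# Minimal sketch of federated state: host-address records are partitioned
# across director service instances instead of living in one active supervisor.
import hashlib
from bisect import bisect

class DirectorServiceRing:
    """Consistent-hash ring mapping host MAC addresses to the director
    service instance that owns their state. Illustrative only."""
    def __init__(self, instances):
        self.ring = sorted((self._hash(name), name) for name in instances)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, host_mac: str) -> str:
        keys = [h for h, _ in self.ring]
        idx = bisect(keys, self._hash(host_mac)) % len(self.ring)
        return self.ring[idx][1]

ring = DirectorServiceRing(["director-0", "director-1", "director-2"])
print(ring.owner("00:1b:54:aa:01:02"))  # this host's state lives on one instance
```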
  16. Scaling the Management Plane (QF/Director). Single point of management; extensive use of automation; familiar operational model; managed as a single switch (N=1).
  17. QFabric Convergence: The End View. A fully blended fabric with flexible ports (FC/FCoE/Ethernet) and Fibre Channel services: a fully converged, unified network for FCoE servers, FC servers, and FC storage.
  18. QFabric Convergence in 2011: SAN convergence. FCoE transit switch: standards-based Converged Enhanced Ethernet (CEE or DCB), providing perimeter protection with FIP snooping. FCoE-FC gateway: an Ethernet-to-Fibre Channel gateway with FC ports at the QF/Node that interoperates with existing SANs. [Diagram: FCoE and FC servers attached to the fabric alongside FC storage.]
  19. Hardware
  20. QFabric Hardware. QF/Interconnect: connects all the QF/Node devices. QF/Node: media-independent I/O ToR device; can run in independent or fabric mode. QF/Director: 2 RU fixed-configuration, x86-based system architecture.
  21. QFabric Hardware: QF/Interconnect. 21 RU, 8-slot chassis; 128 wire-speed QSFP 40G ports; 8 fabric cards (10.24 Tbps per chassis); dual redundant Director boards; redundant AC power supplies; front-to-back airflow. [Front and rear views.]
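The quoted 10.24 Tbps follows directly from the port count when capacity is counted full duplex, as this quick check shows:

```python
# 128 QSFP ports x 40 Gbps, counted full duplex (in + out), per chassis.
ports, speed_gbps = 128, 40
print(ports * speed_gbps * 2 / 1000, "Tbps")  # 10.24 Tbps
```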
  22. QFabric Hardware: QF/Node. 1 RU fixed configuration; 48 SFP+/36 SFP ports, of which 12 are FC-capable (2/4/8G); 4 x 40G fabric uplink ports (can also operate in 10G mode); redundant AC power supplies; front-to-back airflow. Will also operate as a standalone switch, the QFX3500: 4 QSFP+ ports, 48 SFP+/36 SFP ports, 12 FC-capable ports. [Front and rear views.]
  23. QFabric Hardware: QF/Director. A 2 RU device based on the x86 architecture, with GE ports to connect to the QF/Node and QF/Interconnect devices.
  24. System Design
  25. QFabric Configuration for a Small Deployment. 16 QF/Nodes, each with redundant 40G uplinks to the QF/Interconnects and 1G connections to the QF/Director: a solution for 768 10GE/1GE ports, with 2 fabric cards per Interconnect chassis (25% fill rate).
  26. QFabric Configuration for a Large Deployment. 128 QF/Nodes, each with 40G uplinks to the QF/Interconnects and 1GE connections to the QF/Director cluster: a solution for 6,000 10GE/1GE ports.
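The port counts in both reference designs are simple multiples of the 48 access ports per QF/Node, and the small design's 25% fill rate is 2 of 8 fabric cards; a quick sketch:

```python
ACCESS_PORTS_PER_NODE = 48  # 10GE/1GE access ports on each QF/Node

def fabric_ports(nodes: int) -> int:
    return nodes * ACCESS_PORTS_PER_NODE

print(fabric_ports(16))     # 768: the small deployment
print(fabric_ports(128))    # 6144: the "6,000 port" large deployment
print(f"{2 / 8:.0%} fill")  # 2 of 8 fabric cards populated = 25%
```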
  27. QFabric Software
  28. System Architecture Evolution. [Diagram of four stages: a pizza box (one management plane, one route engine, one forwarding engine); a chassis (one management plane, master and slave route engines, multiple forwarding engines); a matrix or virtual chassis (peered management planes and route engines across boxes); and QFabric (a single management plane over federated route engines and many forwarding engines).]
  29. QFabric Software Stack. A centralized fabric administrator (management, views, APIs; fabric control covering inventory, fault, connectivity, topology, and troubleshooting) sits on top of a distributed L2/L3 switch stack, each element with its own control plane, data plane, and platform.
  30. Management
  31. Control and Management Interfaces for QFabric. The QFabric Director is the single point for signaling and configuration: CLI, SNMP, NETCONF/DMI (XML), SMI-S. Simplicity: a single-box management paradigm. Virtualized: hides control-plane and data-plane components, views, and scale. Standards: SNMP, syslog, NETCONF, CIM, SMI-S. Automation: Junos built-in automation capabilities. [Diagram: redundant Directors, each a hypervisor hosting partition, fabric, and RE instances.]
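Because the Director speaks standard NETCONF, any NETCONF client can manage the whole fabric as one box. A minimal sketch using the open-source ncclient Python library; the hostname and credentials are placeholders:

```python
from ncclient import manager  # open-source NETCONF client (pip install ncclient)

# Placeholder address and credentials; one session addresses the whole fabric.
with manager.connect(
    host="qfabric-director.example.net",
    port=830,
    username="admin",
    password="secret",
    hostkey_verify=False,
    device_params={"name": "junos"},
) as conn:
    reply = conn.get_config(source="running")
    print(reply.data_xml)  # the fabric-wide configuration from a single endpoint
```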
  32. QFabric Management Stack. [Diagram, top to bottom: business services (BSS: Juniper + partners) and data center orchestration over WSDL/SOAP and REST APIs; Junos Space EMS/NMS/apps with the Space SDK; Director-level management, views, and APIs (inventory, fault, topology, troubleshooting, connectivity); signaling and configuration via CLI, SNMP, NETCONF/DMI (XML), and SMI-S; and the Junos SDK on each element’s control plane, data plane, and platform.]
  33. Fabric System Control: Scale Out. The SFC runs as an N-way compute cluster across the Directors: automatic balancing of compute load and connection traffic, both south- and northbound; no redundant nodes or hot spares, so all resources are available for computation; graceful degradation upon failure; scale-out by adding nodes in service, with nodes discovering each other automatically; SFC application logic is dynamically upgradable while running.
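A toy comparison of the retired active/backup model with this N-way scale-out model; the node counts and loads are illustrative, not measurements:

```python
def usable_fraction(nodes: int, hot_spares: int) -> float:
    """Fraction of deployed compute actually serving work."""
    return (nodes - hot_spares) / nodes

print(usable_fraction(2, 1))  # active/backup pair: half the hardware idles
print(usable_fraction(4, 0))  # 4-way scale-out: everything is in service

def load_after_failure(nodes: int, failed: int) -> float:
    """Per-survivor load multiplier: degradation is gradual, not a failover."""
    return nodes / (nodes - failed)

print(load_after_failure(4, 1))  # each survivor absorbs ~1.33x and runs on
```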
  34. A Revolutionary New Architecture. The performance and simplicity of a single switch; the scalability and resiliency of a network.
  35. Migration to QFabric: Introducing the QFX3500.
  36. Migrating to QFabric. [Diagram: pods of EX4200 and QFX3500 ToR switches beneath EX8216 aggregation, with MX Series and SRX5800 at the core, migrating pod by pod to QFabric.]
  37. Introducing the QFX3500. Wire-speed switching: 1.28 Tbps, 960 Mpps. FCoE and Fibre Channel support: FCoE transit switch and FCoE-FC gateway. Ultra-low latency: sub-microsecond. Low power: 5 watts per port. Layer 2, Layer 3, FCoE-FC. [Front and rear views.]
  38. Ports. 4 QSFP+ ports; 48 SFP+/SFP ports; 12-port FC; 36-port GbE. Items flagged on the slide are roadmap (not available at FRS). [Rear view.]
  39. Transceiver Support. FC SFP: 2/4G or 8G FC-SW optical transceivers. GbE SFP: optical SR and LR, plus 1000BASE-T copper. 10GbE SFP+: optical USR, SR, and LR, plus direct-attach/twinax SFP+ copper in 1, 3, 5, and 7 meter lengths. [Rear view.]
  40. 63 Ports of 10GbE in 1 RU (2Q2011; roadmap, not available at FRS). 48 x 10GbE SFP+ ports (12 of them FC-capable) plus 4 QSFP+ ports breaking out to servers as 4 x 10GbE each (one yields 3 x 10GbE), using SFP+ optics (USR, SR, LR) or direct-attach/twinax copper (1, 3, 5, 7 meter). [Rear view.]
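The arithmetic behind the QFX3500 figures on slides 37-40, as a quick check. The packet rate uses the standard 14.88 Mpps line rate for 10GbE with 64-byte frames, which suggests the quoted 960 Mpps is a rounded figure:

```python
sfp_plus = 48
qsfp_lanes = 4 * 4                   # 4 QSFP+ ports, 4 x 10GbE lanes each
total_lanes = sfp_plus + qsfp_lanes  # 64 lanes of 10GbE

usable_ports = sfp_plus + 4 + 4 + 4 + 3  # slide 40: one QSFP+ exposes only 3
print(usable_ports)                      # 63 ports of 10GbE in 1 RU

print(total_lanes * 10 * 2 / 1000, "Tbps")  # 1.28 Tbps full duplex, as quoted
print(round(total_lanes * 14.88), "Mpps")   # ~952 Mpps vs. the quoted 960
```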
  41. Performance & scale
  42. Securities Technology Analysis Center (STAC) Test Results: STAC-M2 Benchmarks™ v1.0 highlights.
  43. QFX3500: The Universal ToR. Ultra-low latency: <1 µs, cut-through, 40G. Feature-rich: full L3, virtual control, FC gateway, HA, VPN. Converged I/O: DCB, FCoE-FC gateway, FCoE transit switch. Fabric attach: unique value-add to scale. Certify once, deploy everywhere: Ethernet/IP, FC SAN, FC/FCoE, Ethernet ToR.
  44. Converged I/O with CNAs and FCoE. QFX3500 solution: 10GbE/FCoE standard on all ports; FCoE transit switch; FCoE-FC gateway; feature-rich DCB (PFC, ETS, DCBX); FIP snooping. (Pillars: ultra-low latency, feature-rich, converged I/O, fabric attach.)
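To see what ETS contributes on a converged port, here is a minimal sketch of 802.1Qaz-style bandwidth allocation between LAN and FCoE traffic classes. The link speed, class names, and weights are illustrative:

```python
LINK_GBPS = 10.0
ETS_WEIGHTS = {"lan": 0.6, "fcoe": 0.4}  # illustrative guaranteed minimum shares

def ets_allocation(demand_gbps: dict) -> dict:
    """Each class gets up to its guaranteed share; bandwidth a class leaves
    unused is redistributed to classes that still have demand."""
    alloc = {c: min(demand_gbps[c], w * LINK_GBPS) for c, w in ETS_WEIGHTS.items()}
    spare = LINK_GBPS - sum(alloc.values())
    for c in alloc:
        extra = min(spare, demand_gbps[c] - alloc[c])
        alloc[c] += extra
        spare -= extra
    return alloc

# LAN wants 9G, FCoE wants 2G: FCoE keeps its floor, LAN borrows the slack.
print(ets_allocation({"lan": 9.0, "fcoe": 2.0}))  # {'lan': 8.0, 'fcoe': 2.0}
```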
  45. FCoE Transit Switch Use Case (QFX3500).
    Requirements: 10GbE server access, including blade servers with pass-through or an embedded DCB switch; copper and/or fiber cabling; high availability, dual-homed to the aggregation layer; >40 ports per ToR switch; DCB support with FIP snooping.
    QFX3500 solution: 48 (63) wire-speed 10GbE ports; copper DAC and SFP+ fiber support; hardware and software HA; DCB and FCoE transit switch support, with FCoE standard on all ports; PFC, ETS, and DCBX support; FIP snooping; interoperability with QLogic and Emulex CNAs.
    [Diagram: rack or blade servers with CNAs, LAGs up to the FCoE transit switch, dual-homed via MC-LAG or Virtual Chassis to MX Series or EX8200, reaching an FCoE-enabled SAN.]
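FIP snooping, called out in the requirements and the solution above, is essentially stateful filtering: the transit switch watches FIP login exchanges and only then admits FCoE frames from that server MAC. A simplified conceptual model follows; the EtherType constants are the real ones, everything else is illustrative:

```python
FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol
FCOE_ETHERTYPE = 0x8906  # FCoE data frames

class FipSnooper:
    """Toy model of a FIP-snooping transit switch's perimeter filter."""
    def __init__(self):
        self.logged_in = set()  # ENode MACs with an accepted fabric login

    def on_fip_login_accept(self, enode_mac: str):
        """The FCF accepted this ENode's login: open a pinhole for it."""
        self.logged_in.add(enode_mac)

    def permit(self, src_mac: str, ethertype: int) -> bool:
        if ethertype == FIP_ETHERTYPE:
            return True                       # login traffic is always allowed
        if ethertype == FCOE_ETHERTYPE:
            return src_mac in self.logged_in  # FCoE only after a seen login
        return True                           # ordinary Ethernet is unaffected

snoop = FipSnooper()
print(snoop.permit("00:10:9b:aa:00:01", FCOE_ETHERTYPE))  # False: no login yet
snoop.on_fip_login_accept("00:10:9b:aa:00:01")
print(snoop.permit("00:10:9b:aa:00:01", FCOE_ETHERTYPE))  # True
```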
  46. FCoE-FC Gateway Use Case (QFX3500).
    Requirements: 10GbE server access, including blade servers with pass-through or an embedded DCB switch; copper and/or fiber cabling; high availability, dual-homed to the aggregation layer; >40 ports per ToR switch; DCB and FCoE-FC gateway support.
    QFX3500 solution: 48 (63) wire-speed 10GbE ports; copper DAC and SFP+ fiber support; hardware and software HA; DCB and FCoE-FC gateway support, with FCoE standard on all ports; PFC, ETS, and DCBX support; 12 FC ports (2/4/8G) enabled with an FC license; interoperability with QLogic and Emulex CNAs and with Cisco and Brocade FC switches.
    [Diagram: rack or blade servers with CNAs, LAGs up to the FCoE-FC gateway, dual-homed via MC-LAG or Virtual Chassis to MX Series or EX8200, connecting to the FC SAN.]
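The gateway role can be sketched at the same conceptual level: de-encapsulate FCoE arriving from CNAs and forward native FC frames onto an uplink into the existing SAN, tracking which server session rides which FC port. This is a rough illustration under those assumptions, not Juniper's implementation:

```python
class FcoeFcGateway:
    """Toy model of the FCoE-to-FC forwarding role; names are illustrative."""
    def __init__(self, fc_uplinks):
        self.fc_uplinks = fc_uplinks
        self.sessions = {}  # ENode MAC -> FC uplink carrying that session

    def server_login(self, enode_mac: str) -> str:
        """Pin each new server session to an FC uplink (round-robin here)."""
        uplink = self.fc_uplinks[len(self.sessions) % len(self.fc_uplinks)]
        self.sessions[enode_mac] = uplink
        return uplink

    def forward(self, enode_mac: str, fc_frame: bytes):
        """Strip the FCoE/Ethernet encapsulation; hand native FC to the SAN."""
        return self.sessions[enode_mac], fc_frame

gw = FcoeFcGateway(["fc-0/0/0", "fc-0/0/1"])  # hypothetical FC port names
gw.server_login("00:10:9b:bb:00:07")
print(gw.forward("00:10:9b:bb:00:07", b"...native FC frame..."))
```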
  47. FCoE Transit & Gateway Switch Use Case.
    Requirements: 10GbE server access, including blade servers with pass-through or an embedded DCB switch; separation of management between the LAN and SAN teams, with the gateway administered by the SAN team and the ToR administered by the LAN team.
    QFX3500 solution: FCoE transit switch at the ToR, FCoE-FC gateway at the EoR; EX4500 as a transit switch; third-party transit switches, in particular blade-shelf embedded switches.
    [Diagram: rack or blade servers with CNAs, LAGs to FCoE transit switches at the ToR, FCoE-FC gateways toward the FC SAN, with MX Series MC-LAG or VC or EX8200 VC aggregation.]
  48. FCoE Transit & Gateway Switch Use Case with QFabric.
    Requirements: as on the previous slide, including the split of management between the LAN and SAN teams and support for blade servers with pass-through or an embedded DCB switch.
    QFabric and QFX3500 solution: FCoE transit switch at the ToR, FCoE-FC gateway at the EoR; EX4500 as a transit switch; third-party transit switches, in particular blade-shelf embedded switches.
    [Diagram: rack or blade servers with CNAs connecting through the fabric to FCoE-FC gateways and the FC SAN, with MX Series MC-LAG or VC.]