
Juniper Metafabric


Presentation Transcript


  1. Juniper Metafabric. Westcon 5-day course. Washid Lootfun, Sr. Pre-Sales Engineer, wmlootfun@juniper.net. February 2014

  2. MetaFabric ARCHITECTURE PILLARS • Simple: Easy to deploy and use • Mix-and-match deployment • One OS • Universal building block for any network architecture • Seamless 1GE → 10GE → 40GE → 100GE upgrades • Maximize flexibility • Open: Open standards-based interfaces (L2, L3, MPLS) • Open SDN protocol support: VXLAN, OVSDB, OpenFlow • IT automation via open interfaces: VMware, Puppet, Chef, Python • Junos scripting & SDK • Standard optics • Smart: Save time, improve performance • Elastic (scale-out) fabrics: QFabric, Virtual Chassis, Virtual Chassis Fabric

  3. MetaFabric ARCHITECTURE portfolio • Switching: flexible building blocks, simple switching fabrics • Routing: universal data center gateways • Management: smart automation and orchestration tools • SDN: simple and flexible SDN capabilities • Data Center Security: adaptive security to counter data center threats • Solutions & Services: reference architectures and professional services

  4. EX switches

  5. EX SERIES PRODUCT FAMILY (One Junos, managed by Network Director) • Modular, aggregation/core: EX9204/EX9208/EX9214 programmable core/distribution switches; EX8208/EX8216 core/aggregation switches; EX6210 dense access/aggregation switch • Fixed, access: EX2200 and EX2200-C entry-level access switches; EX3300, EX4200, and EX4300 access switches; EX4550 aggregation switch

  6. EX4300 Series switches. Product description: • 24/48x 10/100/1000BASE-T access ports • 4x 1/10GbE (SFP/SFP+) uplink ports • 4x 40GbE (QSFP+) VC/uplink ports • PoE/PoE+ options • Redundant, field-replaceable components (power supplies, fans, uplinks) • DC power options. Notable features: • L2 and basic L3 (static, RIP) included • OSPF, PIM available with enhanced license • BGP, IS-IS available with advanced license • Virtual Chassis: up to 10 members, 160-320 Gbps VC backplane • 12 hardware queues per port • Front-to-back and back-to-front airflow options (AFO/AFI). Target applications: • Campus data closets • Top-of-rack data center / high-performance 1GbE server attach • Small network cores
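A minimal sketch of bringing up the EX4300 Virtual Chassis mentioned above, assuming a preprovisioned two-member build whose serial numbers are known in advance (the serials and member count here are placeholders, not from the deck):

    set virtual-chassis preprovisioned
    set virtual-chassis member 0 role routing-engine serial-number PE0123456789
    set virtual-chassis member 1 role routing-engine serial-number PE0987654321

Once the members are cabled over their QSFP+ VC ports, "show virtual-chassis status" should list both members with their assigned roles.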

  7. Introducing the EX9200 Ethernet Switch (available March 2013) • Native programmability (Junos image) • Automation toolkit • Programmable control/management planes and SDK (SDN, OpenFlow, etc.) • 1M MAC addresses • 256K IPv4 and 256K IPv6 routes • 32K VLANs (bridge domains) • L2, L3 switching • MPLS & VPLS/EVPN* • ISSU • Junos Node Unifier • 4, 8 & 14 slots (EX9204, EX9208, EX9214); 240G/slot • 40x1GbE, 32x10GbE, 4x40GbE & 2x100GbE line cards • Powered by Juniper One custom silicon (*roadmap)

  8. EX9200 Line Cards • 1GbE line cards (EX9200-40F/40T): 40 x 10/100/1000BASE-T or 40 x 100FX/1000BASE-X SFP • 10GbE line card (EX9200-32XS): 32 x 10GbE SFP+, up to 240G throughput • 40GbE line card (EX9200-4QS): 4 x 40GbE QSFP+, up to 120G throughput • 100GbE line card (EX9200-2C-8XS): 2 x 100GbE CFP + 8 x 10GbE SFP+, up to 240G throughput

  9. EX9200 Flexibility: Virtual Chassis (13.2R2) • High availability: redundant REs, switch fabric, power and cooling • Performance and scale: modular configuration, high-capacity backplane • Easy to manage: single image, single config, one management IP address • Single control plane: single protocol peering, single RT/FT • Virtual Chassis, a notch up: scale ports and services beyond one chassis, physical placement flexibility, redundancy beyond one chassis, one management and control plane (requires dual REs per chassis)

  10. ENTERPRISE SWITCHING ARCHITECTURES (Network Director) • Problem: existing multi-tier (core/distribution/access) architectures lack scale and flexibility and are operationally complex • Collapsed distribution and core. Solution: collapse core and distribution, Virtual Chassis at the access layer. Benefit: simplification through consolidation, scale, aggregation, performance • Distributed access. Solution: Virtual Chassis at the access layer across wiring closets. Benefit: management simplification, reduced opex • Solution: Virtual Chassis at both access and distribution layers. Benefit: flexibility to expand and grow, scale, simplification

  11. VIRTUAL CHASSIS DEPLOYMENT ON ENTERPRISE: span horizontal or vertical • Connect wiring closets or collapse a vertical building with EX Series Virtual Chassis, 10GbE/40GbE uplinks and 40G VC ports • Diagram: Building A closets with EX3300 and EX4300 Virtual Chassis serving WLAs; Building B with EX6200 and EX4300; EX9200 and EX4550 Virtual Chassis at aggregation/core; LAGs to a WLC cluster, app servers, centralized DHCP and other services; SRX Series cluster toward the Internet

  12. DEPLOYING MPLS AND VPN ON ENTERPRISE, METRO/DISTRIBUTED CAMPUS: stretch the connectivity for a seamless network • Private MPLS campus core with VPLS or L3VPN • Core switches act as PEs; access switches act as CEs, with wireless access points at the edge • Per-site VLANs (VLAN1, VLAN2, VLAN3) map to VPNs across Sites 1-3: Finance/Business Ops VPN, R&D VPN, Marketing/Sales VPN

  13. JUNIPER ETHERNET SWITCHING • #3 market share in 2 years • 20,000+ switching customers • Enterprise & Service Providers • 23+ million ports deployed • Secure, simple, reliable

  14. QFX5100 Platform

  15. QFX5100 Series • Next-generation top-of-rack switches • Multiple 10GbE/40GbE port-count options • Supports multiple data center switching architectures • New innovations: topology-independent in-service software upgrades, analytics, MPLS, GRE tunneling • Rich L2/L3 feature set including MPLS • Low latency • SDN-ready

  16. QFX5100 next-generation ToR • QFX5100-24Q: 24 x 40GbE QSFP+, 8 x 40GbE expansion ports, 2.56 Tbps throughput, 1U fixed form factor • QFX5100-48S: 48 x 1/10GbE SFP+, 6 x 40GbE QSFP+ uplinks, 1.44 Tbps throughput, 1U fixed form factor • QFX5100-96S: 96 x 1/10GbE SFP+, 8 x 40GbE QSFP+ uplinks, 2.56 Tbps throughput, 2U fixed form factor • Low latency │ Rich L2/L3 feature set │ Optimized FCoE

  17. QFX5100-48S (Q4CY2013), front (port-side) view • 48 x 1/10GbE SFP+ interfaces and 6 x 40GbE QSFP+ interfaces • Each 40GbE QSFP+ interface can be converted to 4 x 10GbE interfaces without a reboot • Maximum 72 x 10GbE interfaces, 720 Gbps • CLI to change port speed: set chassis fpc <fpc-slot> pic <pic-slot> port <port-number> channel-speed 10g; set chassis fpc <fpc-slot> pic <pic-slot> port-range <low> <high> channel-speed 10g • 4+1 redundant hot-swappable fan trays, color coded (orange: AFO, blue: AFI) • 1+1 redundant 650W color-coded hot-swappable power supplies • Console, USB, Mgmt0 (RJ-45), Mgmt1 (SFP)
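As a concrete sketch of the channelization CLI above, assuming the six QSFP+ uplinks appear as ports 48-53 on FPC 0, PIC 0 (slot and port numbers are illustrative, not from the deck):

    set chassis fpc 0 pic 0 port 48 channel-speed 10g
    set chassis fpc 0 pic 0 port-range 48 53 channel-speed 10g

The first command channelizes a single QSFP+ port into four 10GbE interfaces (named xe-0/0/48:0 through xe-0/0/48:3); the second applies the same change to the whole uplink range.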

  18. QFX5100-96S (Q1CY2014), front (port-side) view • 96 x 1/10GbE SFP+ interfaces and 8 x 40GbE QSFP+ interfaces • Supports two port-configuration modes: 96 x 10GbE plus 8 x 40GbE interfaces, or 104 x 10GbE interfaces • 1.28 Tbps (2.56 Tbps full duplex) switching performance • New 850W 1+1 redundant color-coded hot-swappable power supplies • 2+1 redundant color-coded hot-swappable fan trays

  19. QFX5100-24Q (Q1CY2014; same FRU-side configuration as the QFX5100-24S), front (port-side) view • 24 x 40GbE QSFP+ interfaces plus two hot-swappable 4x40GbE QSFP+ (QIC) modules • Port configuration has four modes; a mode change requires a reboot: • Default (fully subscribed) mode: QICs not supported; maximum 24 x 40GbE or 96 x 10GbE interfaces, line-rate performance for all packet sizes • 104-port mode: only the first 4x40GbE QIC is supported, with its last two 40GbE interfaces disabled and its first two QSFPs working as 8 x 10GbE; the second QIC slot cannot be used and there is no native 40GbE support; all base ports can be channelized to 4 x 10GbE (24 x 4 = 96), for 104 x 10GbE interfaces in total • 4x40GbE PIC mode: all base ports can be channelized; only the 4x40GbE QIC is supported, in either slot, but it cannot be channelized; yields 32 x 40GbE, or 96 x 10GbE + 8 x 40GbE • Flexi PIC mode: supports all QICs, but the QIC cannot be channelized; only base ports 4-24 can be channelized; also supports a 32 x 40GbE configuration

  20. Advanced Junos SOFTWARE ARCHITECTURE • Junos VM (active) and Junos VM (standby) run alongside Juniper apps and third-party applications over KVM and a host network bridge, on a Linux kernel (CentOS) • Provides the foundation for advanced functions • ISSU (in-service software upgrade) enables hitless upgrades • Other Juniper applications can add services in a single switch • Third-party applications • Brings up the system much faster

  21. QFX5100 Hitless Operations: dramatically reduces maintenance windows • Flexible, hitless, topology-independent ISSU, versus competitive ISSU approaches that trade off network performance, network resiliency, and data center efficiency during switch software upgrades • High-level QFX5100 architecture: Junos VM (master) and Junos VM (backup) run as kernel-based virtual machines over a Linux kernel on simple x86 hardware, driving a Broadcom Trident II PFE • Benefits: seamless upgrade, no traffic loss, no performance impact, no resiliency risk, no port flap
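Operationally, a topology-independent ISSU on a standalone QFX5100 is a single command; the package path below is a placeholder:

    request system software in-service-upgrade /var/tmp/<junos-package>.tgz

The backup Junos VM is upgraded and PFE state is synchronized before mastership switches over, which is what keeps traffic flowing during the upgrade.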

  22. Introducing the VCF architecture • Spines: integrated L2/L3 switches; connect leaves, core, WAN and services • Leaves: integrated L2/L3 gateways; connect to virtual and bare-metal servers; local switching • Any-to-any connections • Single switch to manage • Diagram: spine switches over leaf switches, with a services gateway, virtual servers (vSwitches and VMs) and bare-metal servers attached

  23. Plug-n-Play Fabric • New leaves are auto-provisioned • Automatic configuration and image sync • Any node that is not at factory defaults is treated as a network device • Diagram: services gateway and WAN/core above the fabric, with virtual servers (vSwitches and VMs) and bare-metal servers attached
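A hedged bring-up sketch for a preprovisioned VCF with QFX5100 spines (serial numbers below are placeholders); leaves that join later in factory-default state are then absorbed automatically, as the slide describes:

    request virtual-chassis mode fabric reboot
    set virtual-chassis preprovisioned
    set virtual-chassis auto-sw-update
    set virtual-chassis member 0 role routing-engine serial-number TA0123456789
    set virtual-chassis member 1 role routing-engine serial-number TA9876543210

The operational "request" command runs on each spine to enable fabric mode; auto-sw-update keeps joining members' Junos images in sync with the fabric.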

  24. Virtual Chassis Fabric deployment option • Virtual Chassis Fabric (VCF), 10G/40G • Spine: QFX5100-24Q • Leaves: QFX5100-48S for 10G access, QFX3500 for existing 10G access, EX4300 for existing 1G access • EX9200 at the core

  25. QFX5100 Software Features • Planned post-FRS features: • Virtual Chassis mixed mode: 10-member Virtual Chassis mixing QFX5100, QFX3500/QFX3600, and EX4300 • Virtual Chassis Fabric: 20 nodes at FRS with a mix of QFX5100, QFX3500/QFX3600, and EX4300 • Virtual Chassis features: parity with standalone • HA: NSR, NSB, GR for routing protocols, GRES • ISSU on standalone QFX5100 and all QFX5100 Virtual Chassis and Virtual Chassis Fabric configurations • NSSU in mixed-mode Virtual Chassis or Virtual Chassis Fabric • 64-way ECMP (see the sketch after this list) • VXLAN gateway* • OpenStack, CloudStack integration* • Planned FRS features: • L2: xSTP, VLAN, LAG, LLDP/LLDP-MED • L3: static routing, RIP, OSPF, IS-IS, BGP, vrf-lite, GRE • Multipath: MC-LAG, L3 ECMP • IPv6: neighbor discovery, router advertisement, static routing, OSPFv3, BGPv6, IS-ISv6, VRRPv3, ACLs • MPLS, L3VPN, 6PE • Multicast: IGMPv2/v3, IGMP snooping/querier, PIM-Bidir, ASM, SSM, Anycast, MSDP • QoS: classification, CoS/DSCP rewrite, WRED, SP/WRR, ingress/egress policing, dynamic buffer allocation, FCoE/lossless flow, DCBX, ETS, PFC, ECN • Security: DAI, PACL, VACL, RACL, storm control, control-plane protection • 10G/40G FCoE, FIP snooping • Micro-burst monitoring, analytics • sFlow, SNMP • Python • *After the Q1 time frame; please refer to the release notes and manuals for the latest information
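The L3 ECMP behavior listed above depends on a forwarding-table export policy in Junos; a minimal sketch (the policy name is illustrative):

    set policy-options policy-statement ECMP-LB then load-balance per-packet
    set routing-options forwarding-table export ECMP-LB

Despite its name, "load-balance per-packet" on these platforms installs all equal-cost next hops and hashes per flow rather than per packet.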

  26. QFX5100 fabric options, managed as a single switch • Virtual Chassis (improved): up to 10 members • Virtual Chassis Fabric (new): up to 20 members • QFabric (improved): up to 128 members • Spine-leaf Layer 3 fabric built from QFX5100

  27. VCF Overview • Flexible: up to 768 ports; 1/10/40G; 2-4 spines; 10G and 40G spines; L2, L3 and MPLS • Simple: single device to manage; predictable performance; integrated RE; integrated control plane • Automated: plug-n-play; analytics for traffic monitoring; Network Director • Available: 4 x integrated RE; GRES/NSR/NSB; ISSU/NSSU; any-to-any connectivity; 4-way multipath

  28. CDBU switching roadmap summary (2T2013, 3T2013, 1T2014, 2T2014, future) • Hardware: EX4300; EX9200 2x100G LC; QFX5100 (24 QSFP+); QFX5100 10GBASE-T; Opus PTP; EX4550 10GBASE-T; EX9200 6x40GbE LC; QFX5100 (48 SFP+); EX9200 MACsec; QFX5100 (24 SFP+); EX9200 400GbE per slot; EX4550 40GbE module; QFX5100 (96 SFP+); EX4300 fiber • Software: analytics; VXLAN gateway; Opus; Virtual Chassis with QFX Series; QFX3000-M/G 10GBASE-T node; V20; ND 1.5; VXLAN routing; EX9200 ISSU; ISSU on Opus; QFX3000-M/G QinQ, MVRP; QFX3000-M/G L3 multicast 40GbE; ND 2.0; OpenFlow 1.3; QFX3000-M/G QFX5100 (48 SFP+) node • Solutions: DC 1.0 virtualized IT DC; Campus 1.0; DC 1.1 ITaaS & VDI; DC 2.0 IaaS w/ overlay

  29. MX Series

  30. SDN and the MX Series: delivering innovation inside and outside of the data center. Flexible SDN-enabled silicon provides seamless workload mobility and connections between private and public cloud infrastructures • USG (Universal SDN Gateway): the most advanced and flexible SDN bridging and routing gateway • EVPN (Ethernet VPN): next-generation technology for connecting multiple data centers and providing seamless workload mobility • VMTO (VM Mobility Traffic Optimizer): creating the most efficient network paths for mobile workloads • ORE (Overlay Replication Engine): a hardware-based, high-performance services engine for broadcast and multicast replication within SDN overlays

  31. VXLAN, PART OF THE UNIVERSAL GATEWAY FUNCTION ON MX (1H 2014) • High-scale multi-tenancy: VTEP tunnels per tenant; P2P and P2MP tunnels • Ties into full L2 and L3 functions on MX: unicast and multicast forwarding; IPv4 and IPv6; L2 bridge domains and virtual switches • Gateway between LAN, WAN and overlay: ties all media together, giving migration options to the DC operator • Diagram: per-tenant virtual DCs (IRB.0 through IRB.N) behind a DC gateway running VPLS, EVPN and L3VPN
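A hedged sketch of the MX VXLAN L2 gateway function described above, mapping a VLAN into a VNI inside a virtual switch (the instance name, VLAN ID, VNI and multicast group are illustrative):

    set routing-instances TENANT-0 instance-type virtual-switch
    set routing-instances TENANT-0 vtep-source-interface lo0.0
    set routing-instances TENANT-0 bridge-domains BD-10 vlan-id 10
    set routing-instances TENANT-0 bridge-domains BD-10 vxlan vni 5010
    set routing-instances TENANT-0 bridge-domains BD-10 vxlan multicast-group 233.252.0.1

An IRB interface placed in the bridge domain would then provide the L3 gateway function toward the WAN side, per the IRB.0-IRB.N diagram.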

  32. USG (Universal SDN Gateway): network devices in the data center • Bare-metal servers: databases, HPC, legacy apps, non-x86, IP storage • Virtualized servers: ESX, ESXi, Hyper-V, KVM, Xen • SDN servers: NSX ESXi, NSX KVM, SC Hyper-V, Contrail KVM, Contrail Xen • L4-7 appliances: firewalls, load balancers, NAT, intrusion detection, VPN concentrators

  33. USG (UNIVERSAL SDN GATEWAY): introducing four new options for SDN enablement • Layer 2 USG, SDN to IP (Layer 2): SDN-to-non-SDN translation, same IP subnet • Layer 3 USG, SDN to IP (Layer 3): SDN-to-non-SDN translation, different IP subnet • SDN USG, SDN to SDN: SDN-to-SDN translation, same or different IP subnet, same or different overlay • WAN USG, SDN to WAN: SDN-to-WAN translation, same or different IP subnet, same or different encapsulation, toward remote data centers, branch offices and the Internet

  34. USG: USGs inside the data center (Layer 2) • Using Layer 2 USGs to bridge between devices that reside within the same IP subnet: • Bare-metal servers such as high-performance databases, non-x86 compute, IP storage, and non-SDN VMs • Layer 4-7 services such as load balancers, firewalls, application delivery controllers, and intrusion detection/prevention gateways • Diagram: a Layer 2 USG in Data Center 1 bridging VXLAN SDN pods to native IP L2 legacy pods and L4-7 services

  35. USG: USGs inside the data center (Layer 3) • Using Layer 3 USGs to route between devices that reside within different IP subnets: • Bare-metal servers such as high-performance databases, non-x86 compute, IP storage, and non-SDN VMs • Layer 4-7 services such as load balancers, firewalls, application delivery controllers, and intrusion detection/prevention gateways • Diagram: a Layer 3 USG in Data Center 1 routing between VXLAN SDN pods and native IP L3 legacy pods and L4-7 services

  36. USG: USGs inside the data center (SDN) • Using SDN USGs to communicate between islands of SDN: • NSX to NSX: risk, scale, change control, administration • NSX to Contrail: multi-vendor, migrations • Diagram: an SDN USG in Data Center 1 translating between a VXLAN-based NSX SDN pod and an MPLS-over-GRE-based Contrail SDN pod

  37. USG: USGs for remote connectivity • Using USGs to communicate with resources outside the local data center: • Data center interconnect: SDN to VPLS, EVPN or L3VPN • Branch offices: SDN to GRE or IPsec • Internet: SDN to IP (Layer 3) • Diagram: a WAN USG in Data Center 1 connecting VXLAN SDN pods over EVPN to Data Center 2, over GRE to branch offices, and natively to the Internet

  38. USG: universal gateway solutions • Diagram: all four USG roles deployed together in Data Center 1. A Layer 2 USG bridges VXLAN SDN pods to native IP L2 legacy pods; a Layer 3 USG routes to native IP L3 subnets and L4-7 services; an SDN USG joins NSX and Contrail SDN pods (VXLAN, MPLS over GRE); a WAN USG connects over EVPN to Data Center 2, over GRE to branch offices, and to the Internet

  39. USG comparisons • Layer 2 USG: SDN-to-non-SDN translation, same IP subnet. Platforms: QFX5100, MX Series/EX9200, x86 appliance, competing ToRs. Use case: NSX or Contrail talks Layer 2 to non-SDN VMs, bare metal and L4-7 services • Layer 3 USG: SDN-to-non-SDN translation, different IP subnet. Platforms: MX Series/EX9200, x86 appliance, competing chassis. Use case: NSX or Contrail talks Layer 3 to non-SDN VMs, bare metal, L4-7 services and the Internet • SDN USG: SDN-to-SDN translation, same or different IP subnet, same or different overlay. Platform: MX Series/EX9200. Use case: NSX or Contrail talks to other pods of NSX or Contrail • WAN USG: SDN-to-WAN translation, same or different IP subnet. Platform: MX Series/EX9200. Use case: NSX or Contrail talks to other remote locations (branch, DCI)

  40. EVPN (Ethernet VPN): next-generation technology for connecting multiple data centers and providing seamless workload mobility

  41. EVPN: pre-EVPN Layer 2 stretch between data centers • Diagram: Server 1 (MAC: AA) in Data Center 1 and Server 2 (MAC: BB) in Data Center 2, both on VLAN 10 (ge-1/0/0.10 and xe-1/0/0.10 ports), cannot exchange Layer 2 traffic across the private MPLS WAN without EVPN (✕)

  42. EVPN: post-EVPN Layer 2 stretch between data centers • Diagram: with EVPN over the private MPLS WAN, Server 1 (MAC: AA) and Server 2 (MAC: BB) on VLAN 10 (ge-1/0/0.10 and xe-1/0/0.10 ports) now share a stretched Layer 2 segment between Data Center 1 and Data Center 2
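A hedged sketch of an MPLS-based EVPN instance on an MX DC edge router for the VLAN 10 stretch above (instance name, route distinguisher, target and BGP group are illustrative):

    set routing-instances EVPN-10 instance-type evpn
    set routing-instances EVPN-10 vlan-id 10
    set routing-instances EVPN-10 interface xe-1/0/0.10
    set routing-instances EVPN-10 route-distinguisher 192.0.2.1:10
    set routing-instances EVPN-10 vrf-target target:65000:10
    set routing-instances EVPN-10 protocols evpn
    set protocols bgp group WAN family evpn signaling

MAC addresses learned on xe-1/0/0.10 are advertised in BGP EVPN routes, so the remote PE learns MAC AA and MAC BB through the control plane instead of data-plane flooding.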

  43. VMTO (VM Mobility Traffic Optimizer): creating the most efficient network paths for mobile workloads

  44. VMTO: the need for L2 location awareness • Diagram: VLAN 10 stretched between DC1 and DC2 across a private MPLS WAN, shown as two scenarios: without VMTO and with VMTO enabled

  45. VMTO. Without VMTO: the egress trombone effect • Task: Server 3 (10.10.10.200/24) in Data Center 3 needs to send packets to Server 1 (20.20.20.100/24, VLAN 20) in Data Center 1 • Problem: Server 3's active default gateway for VLAN 10 (VRRP, 10.10.10.1) is in Data Center 2; the other routers on the stretched VLAN are VRRP standbys • Effect: traffic must travel via Layer 2 from Data Center 3 to Data Center 2 to reach VLAN 10's active default gateway, and only then can it be routed toward Data Center 1. This results in duplicate traffic on WAN links and suboptimal routing, hence the "egress trombone effect"

  46. VMTO. With VMTO: no egress trombone effect • Task: Server 3 in Data Center 3 needs to send packets to Server 1 in Data Center 1 • Solution: virtualize and distribute the default gateway (IRB, 10.10.10.1) so it is active on every router that participates in the VLAN • Effect: egress packets can be sent to any router on VLAN 10, so routing is done in the local data center. This eliminates the "egress trombone effect" and creates the optimal forwarding path for inter-DC traffic
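A minimal sketch of the distributed-gateway idea on an MX edge router: every DC router carries the same IRB address for VLAN 10, so hosts always find an active local gateway (bridge-domain name and addresses are illustrative):

    set bridge-domains VLAN10 vlan-id 10
    set bridge-domains VLAN10 routing-interface irb.10
    set interfaces irb unit 10 family inet address 10.10.10.1/24

Applied identically on the routers in DC1, DC2 and DC3, with EVPN or a similar mechanism keeping MAC and gateway state consistent between them.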

  47. VMTO. Without VMTO: the ingress trombone effect • Task: Server 1 in Data Center 1 needs to send packets to Server 3 in Data Center 3 • Problem: Data Center 1's edge router prefers the path to Data Center 2 for the 10.10.10.0/24 subnet (cost 5 versus cost 10) and has no knowledge of individual host IPs • Effect: traffic from Server 1 is first routed across the WAN to Data Center 2 due to the lower-cost route for 10.10.10.0/24; the edge router in Data Center 2 then sends the packet via Layer 2 to Data Center 3

  48. VMTO. With VMTO: no ingress trombone effect • Task: Server 1 in Data Center 1 needs to send packets to Server 3 in Data Center 3 • Solution: in addition to the 10.10.10.0/24 summary route, the data center edge routers also send host routes (10.10.10.100/32, 10.10.10.200/32, cost 5) that represent the location of local servers • Effect: ingress traffic destined for Server 3 is sent directly across the WAN from Data Center 1 to Data Center 3. This eliminates the "ingress trombone effect" and creates the optimal forwarding path for inter-DC traffic (see the sketch below)
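A hedged sketch of how an edge router might originate those host routes, exporting /32s from the local subnet into BGP toward the WAN (policy and group names are illustrative):

    set policy-options policy-statement HOST-ROUTES term t1 from route-filter 10.10.10.0/24 prefix-length-range /32-/32
    set policy-options policy-statement HOST-ROUTES term t1 then accept
    set protocols bgp group WAN export HOST-ROUTES

With EVPN in place, MAC/IP advertisement (type 2) routes can supply the same per-host reachability without manual policy.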

  49. Network Director: smart network management from a single pane of glass • Visualize: physical and virtual visualization • Analyze: smart and proactive networks • Control: lifecycle and workflow automation • Spans physical and virtual networks, with an API on top

  50. Control: Contrail SDN controller overlay architecture • Orchestrator on top; the SDN controller (JunosV Contrail) exposes configuration, analytics and control functions via REST • Horizontally scalable, highly available, federated: BGP federation and BGP clustering between controllers • Control plane: BGP + NETCONF toward the gateway routers, XMPP to the vRouter agents • Virtualized servers: KVM hypervisor + JunosV Contrail vRouter/agent (L2 & L3) carrying tenant VMs • Overlay: MPLS over GRE or VXLAN across an IP fabric (underlay network) of Juniper QFabric/QFX/EX or third-party switches, with Juniper MX or third-party gateway routers
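On the gateway side, a hedged sketch of the MPLS-over-GRE dynamic tunnels an MX might use toward the vRouters (tunnel name, source address and destination prefix are illustrative):

    set routing-options dynamic-tunnels CONTRAIL source-address 192.0.2.1
    set routing-options dynamic-tunnels CONTRAIL gre
    set routing-options dynamic-tunnels CONTRAIL destination-networks 10.0.0.0/16

This lets the router build GRE tunnels on demand to any vRouter in the destination network, over which the MPLS labels from controller-signaled routes are carried.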
