
Evolving trends in high performance infrastructure



Presentation Transcript


  1. Evolving trends in high performance infrastructure Andrew F. Bach Chief Architect FSI – Juniper Networks

  2. Agenda • The Need For High Performance • The challenge • The limitations today • Resulting trends • Impact on data center infrastructure

  3. Agenda • The Need For High Performance • The challenge • The limitations • A better solution • Juniper's products and next steps

  4. Transactions become bandwidth: data distributed to the financial community (terabits/sec) from the OPRA system

  5. 400+ years of rapid technology adoption • And a rich history of technology innovation in markets… • First stock ticker to disseminate data (1867) • First telephones on the trading floors (1878) • First electronic ticker display board (1966) • Wireless handheld devices on trading floors 15 years before the iPad was invented (1995) • Industry’s first private network offering global connectivity • Industry’s private network exceeds 1 Tb/s

  6. Agenda • The Need For Speed • The challenge • The limitations • A better solution • Juniper's products and next steps

  7. FSS challenges • Regulatory model is driving change • Requirement for long-term retention of data • Requirement to archive metadata • Real-time risk management is now required • FSS is evolving into a commodity industry • Time to market must be faster • Product lifetimes are shortening • Margins are shrinking, driving OPEX reduction • Technology adoption continues to accelerate to meet accelerating business needs • Fuels the race to the triple crown of technology (zero cost, zero latency, zero time to market); technology is a strategic weapon • Bandwidth demand continues to grow at 30%–50% per year • Comprehensive management and orchestration of the data center • Fundamentally new architectures are required • Flat, Clos-like architectures • Data center network tightly coupled to the wide-area optical network • Tightly couple the networks and the servers – SDN
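The 30%–50% annual growth figure above compounds quickly. A minimal sketch of that arithmetic, assuming an illustrative 10G starting link speed (the starting and target speeds are examples, not figures from the deck):

```python
def years_to_reach(start_gbps, target_gbps, annual_growth):
    """Years of compound growth until start_gbps reaches target_gbps."""
    years = 0
    bw = start_gbps
    while bw < target_gbps:
        bw *= 1 + annual_growth  # compound once per year
        years += 1
    return years

years_to_reach(10, 40, 0.30)  # at 30%/yr, a 10G link needs 40G in 6 years
years_to_reach(10, 40, 0.50)  # at 50%/yr, in just 4 years
```

Even at the low end of the cited range, link capacity must quadruple roughly every six years, which is the pressure behind the 40G/100G demands later in the deck.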

  8. A typical challenge • [Diagram: customer, gateway with symbol routing and risk management, TOR and core switches, matching engine] • Trade plant size: ≈100 servers, ≈1,500 ports • Grand total latency ≈150.0 µs

  9. Agenda • The Need For Speed • The challenge • The limitations today • A better solution • Juniper's products and next steps

  10. Latency reduction trends (TOR), 2008–2013 • Ethernet is narrowing the gap to InfiniBand; industry tracking to <450 ns in 2013

  11. Slowing of processor speed

  12. The trade-off axes: reliability, feature velocity, latency, scale

  13. Agenda • The Need For Speed • The challenge • The limitations • Resulting trends • Juniper's products and next steps

  14. A different approach – distributed computing • [Diagram: symbol routing and risk management embedded in the TOR; data pre-/post-processing in the NIC; customer, TOR, core, matching engine] • Trade plant size: ≈60 servers, ≈1,000 ports • Grand total ≈100.0 µs, reduced by 50 µs
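The comparison between slides 8 and 14 is a latency-budget sum. A hedged sketch of that arithmetic: the per-component figures below are illustrative assumptions, not measured values; only the ≈150 µs and ≈100 µs totals come from the slides.

```python
def plant_latency_us(hops):
    """Sum per-component latencies (microseconds) across the trade plant."""
    return sum(hops.values())

classic = {            # slide 8: dedicated servers for each function
    "gateway": 20.0, "symbol_routing": 30.0, "risk_mgmt": 30.0,
    "tor_core_network": 40.0, "matching_engine": 30.0,
}
distributed = {        # slide 14: routing/risk moved into the TOR and NIC
    "gateway": 20.0, "tor_embedded_routing_risk": 20.0,
    "nic_pre_post": 10.0, "core_network": 20.0, "matching_engine": 30.0,
}
plant_latency_us(classic)      # -> 150.0
plant_latency_us(distributed)  # -> 100.0
```

The point of the slide survives the made-up per-hop numbers: collapsing server hops into switch-embedded functions removes entire terms from the sum, not just nanoseconds from one term.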

  15. Application-embedded networking – the new way to reduce latency and cost • The race to zero is ending • At about 200–500 ns for a reasonable switch • Need to focus on a different approach • Embed application snippets into the switching fabric • Lower latency • Eliminates servers • Reduces network ports • Embed snippets at the control or data plane of the network • Applications can be embedded in a VM in the switch or into an FPGA in the data path of the switch

  16. Return of the Clos data center fabrics • 100% of all national market data runs on JNPR • All US markets distribute market data over JNPR • 90% of all equity order flow • [Diagram: one-layer data center, carrier hotel, customer connections]

  17. Data center simplification – lower cost and ease of use • Four architectures for a data center, all from the same building blocks • Standalone TORs • MC-LAG • Virtual Chassis • Fabric • Solutions range from classic to fabric • All share common management support • All are SDN-enabled and support advanced management tools and scripting

  18. WAN – FSS firms building their own private carrier networks • Lease a service/cloud • Shared service • Reduced agility • Resiliency tested at failure time • Easy solution – no need for a technical staff • Good solution for a small to medium firm • Build and operate a private cloud • Dedicated service • Rapidly adaptable to meet changing requirements • Customer defines and validates resiliency, easing regulatory compliance • Needs a small skilled staff • Lower cost, as the third party's profit is removed

  19. Agenda • The Need For Speed • The challenge • The limitations • A better solution • Impact on the data center infrastructure

  20. Impact on compute and network • Centralize processing where you can; distribute where you must • Processors and network switches are hitting natural limits • To achieve a high-performance infrastructure, compute resources must be distributed • Optimize computing in the server, NIC (FPGA), and network switches (VMs and FPGAs) – not just in one place • Drive to Clos fabrics • Heavy east–west traffic • Compound bandwidth growth >20% per year • All networks become virtual

  21. Bandwidth demands • Servers will require 40G • As CPU core counts increase, by 2016 server bandwidth will need 40G • High-end servers will need 100G • Servers at 100G will drive the need for network links of 400G and 1T

  22. Impact of SDN/NFV/orchestration • Manages the virtual fiber plant • Controls adds/moves/changes from the servers via overlays • Increased need to design the cable plant correctly and for a longer life • Merging of network, server, and storage teams into one

  23. Thank you!
