  1. CMPE 252A : Computer Networks Chen Qian Computer Science and Engineering UCSC Baskin Engineering

  2. Jupiter Rising: A Decade of Clos Topologies and Centralized Control in Google’s Datacenter Network • Arjun Singh, Joon Ong, Amit Agarwal, Glen Anderson, Ashby Armistead, Roy Bannon, Seb Boving, Gaurav Desai, Bob Felderman, Paulie Germano, Anand Kanagala, Hong Liu, Jeff Provost, Jason Simmons, Eiichi Tanda, Jim Wanderer, Urs Hölzle, Stephen Stuart, and Amin Vahdat (Google)

  3. Ten years ago, the cost and operational complexity associated with datacenter network architectures were prohibitive. • Maximum network scale was limited by the cost and capacity of the highest-end switches

  4. Traffic has increased 50x in this time period, roughly doubling every year.

  5. Google’s DCN in 2004 • 40 servers connected at 1 Gb/s to a ToR switch, with approximately 10:1 oversubscription, in a cluster delivering 100 Mb/s per host among 20k servers. • High-bandwidth applications had to fit under a single ToR to avoid the heavily oversubscribed ToR uplinks.
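  The 100 Mb/s per-host figure follows directly from the other numbers on this slide. A quick back-of-the-envelope check in Python (the figures are the slide's; the variable names are mine):

  servers_per_tor = 40
  server_link_gbps = 1.0                                   # each server attached at 1 Gb/s
  oversubscription = 10.0                                  # ~10:1 at the ToR uplinks
  tor_downlink_gbps = servers_per_tor * server_link_gbps   # 40 Gb/s of host-facing capacity
  tor_uplink_gbps = tor_downlink_gbps / oversubscription   # ~4 Gb/s toward the cluster fabric
  per_host_mbps = tor_uplink_gbps / servers_per_tor * 1000
  print(per_host_mbps)                                     # ~100 Mb/s per host across the cluster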

  6. Google realized that existing commercial solutions could not meet their scale, management, and cost requirements. • Hence, they decided to build their own custom data center network.

  7. Clos topology
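  A Clos network builds a large fabric out of many small, identical switches arranged in stages, so aggregate capacity comes from path diversity rather than from one big chassis. Below is a minimal sketch in Python of a generic 3-stage folded Clos (a k-ary fat-tree); the parameters and naming are illustrative assumptions, not the actual Firehose, Watchtower, or Jupiter wiring:

  # Illustrative 3-stage folded Clos (fat-tree) built from k-port switches.
  def build_fat_tree(k=4):
      assert k % 2 == 0, "a k-ary fat-tree needs an even port count"
      links = set()
      core = [f"core-{i}" for i in range((k // 2) ** 2)]
      for pod in range(k):
          aggs  = [f"pod{pod}-agg{a}"  for a in range(k // 2)]
          edges = [f"pod{pod}-edge{e}" for e in range(k // 2)]
          for e_idx, e in enumerate(edges):
              for a in aggs:                       # each edge (ToR) uplinks to every agg in its pod
                  links.add((e, a))
              for h in range(k // 2):              # remaining edge ports face hosts
                  links.add((f"pod{pod}-host{e_idx}-{h}", e))
          for a_idx, a in enumerate(aggs):         # each agg uplinks to k/2 distinct core switches
              for j in range(k // 2):
                  links.add((a, core[a_idx * (k // 2) + j]))
      return links

  links = build_fat_tree(k=4)                      # k = 4: 16 hosts, every switch uses all 4 ports
  print(len(links), "links")

  With 48-port switches the same pattern scales to k^3/4 = 27,648 hosts with no oversubscription inside the fabric, which is the basic reason Clos fabrics built from commodity merchant silicon can outgrow even the largest single chassis.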

  8. Firehose 1.0

  9. Firehose 1.1 • Firehose 1.0 housed its switch chips in regular servers, but this approach worked poorly. • Hence, they built a custom switch fabric.

  10. Watchtower and Saturn

  11. Jupiter: A 40G datacenter-scale fabric

  12. The network interconnects multiple clusters within the same building and multiple buildings on the same campus.
