
NSF CC-NIE Award Panel


Presentation Transcript


  1. NSF CC-NIE Award Panel. Westnet CIO Meeting, Monday, January 6, 2014, University of Arizona, Tucson

  2. Aside – Pat’s “Hardware Upgrade”: I regret very much not being able to join you in person, but I am still recovering from my “hardware upgrade.” (Figure: front elevation view.)

  3. Background • NSF “Campus Cyberinfrastructure - Network Infrastructure and Engineering (CC-NIE)” program introduced in response to the 2009 Advisory Committee for Cyberinfrastructure (ACCI) Campus Bridging task force report (www.nsf.gov/cise/aci/taskforces/TaskForceReport_CampusBridging.pdf) to: • Invest in the weakest links for enabling data-intensive research – primarily in campus LANs and secondarily in WAN connectivity • Deploy perfSONAR • Expand InCommon

  4. Participants • Scott Baily, Colorado State University • Thomas Hauser, Univ. of Colorado Boulder • Daniel Ewart, University of Idaho • Steve Corbató, University of Utah

  5. Overview

  6. “Signature” Applications

  7. Other Resources Enabled

  8. Networks “Before & After”

  9. CSU – Before (network diagram): commodity Internet and research networks reach campus via FRGP and BiSON over a single 10 Gbps wave; dual border routers (Border 1/2) feed dual core routers (Core 1/2); a 1 Gig core routing cluster serves the production LAN for commodity users and researchers.

  10. CSU – After (network diagram): commodity Internet and research networks reach campus via FRGP and BiSON over three 10 Gbps waves; a DYNES IDC provisions dynamic VLANs across the dual border and core routers; the core routing cluster moves to 10 Gig (40 or 100 Gig under consideration); a separate “Research LAN” with 10 Gbps research connections and DYNES server/storage running FDT sits alongside the production LAN for commodity users and researchers.

  11. CSU – “Before & After”

  12. Status of Deployments: CSU • Campus LAN enhancements • WAN enhancements • DYNES implemented at 10 Gbps • Little “real” usage yet • Waiting until year two to see whether 100 Gbps for the Research DMZ will be affordable • Still deciding how to handle firewalling for the Research DMZ (one common approach is sketched below)
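The firewalling question above is the one every Science DMZ faces; the approach most campuses adopt (following the ESnet Science DMZ model) is to skip stateful firewalls on the high-speed path and use stateless router ACLs limited to known data-transfer flows. The Python sketch below only illustrates that idea and is not CSU's policy; the addresses, hosts, and ports are hypothetical placeholders.

```python
# Minimal sketch (not CSU's actual policy): generate stateless router ACL
# lines that permit known collaborator prefixes to reach the data transfer
# nodes (DTNs) in the Research DMZ and deny everything else. All prefixes,
# addresses, and port numbers below are hypothetical placeholders.

DTN_HOSTS = ["192.0.2.10", "192.0.2.11"]         # example DTN addresses (TEST-NET-1)
COLLABORATOR_PREFIXES = ["198.51.100.0/24"]      # example remote science networks
ALLOWED_TCP_PORTS = [2811, 50000]                # e.g., GridFTP control + data range start

def build_acl(name: str = "RESEARCH-DMZ-IN") -> list[str]:
    """Return IOS-style ACL lines: permit collaborator-to-DTN flows, deny the rest."""
    lines = [f"ip access-list extended {name}"]
    for prefix in COLLABORATOR_PREFIXES:
        net, masklen = prefix.split("/")
        # Convert the prefix length to a wildcard mask (IPv4 only, for brevity).
        bits = (0xFFFFFFFF >> int(masklen)) if int(masklen) < 32 else 0
        wildcard = ".".join(str((bits >> s) & 0xFF) for s in (24, 16, 8, 0))
        for dtn in DTN_HOSTS:
            for port in ALLOWED_TCP_PORTS:
                lines.append(f" permit tcp {net} {wildcard} host {dtn} eq {port}")
    lines.append(" deny ip any any log")         # default deny with logging
    return lines

if __name__ == "__main__":
    print("\n".join(build_acl()))
```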

  13. CU-Boulder Before (simple)

  14. CU-Boulder Before (detailed)

  15. CU-Boulder After (simple)

  16. CU-Boulder After (detailed)

  17. CU-Boulder – science DMZ bandwidth

  18. Status of Deployments: CU-Boulder • Upgraded CU-Boulder science DMZ core routers to redundant Arista 7504 chassis: • 80Gb core bandwidth • distributed spine/leaf (MLAG) design • Upgraded CU-Boulder border routers to redundant Juniper MX960s: • 40Gb to campus cores • 40Gb to science DMZ cores • Implemented multiple perfSONAR systems: • 10Gb & 40Gb (39.6 Gbps demonstrated across campus) • dedicated to science DMZ • Implemented secondary Bro system: • dedicated to science DMZ traffic • based on SDN using Arista DANZ/tap aggregation • goal to implement dynamic blocking via upstream ACL injection (a sketch follows below) • Enabling IPv6 in the science DMZ • True OOB (serial + Ethernet) connectivity for science DMZ network devices • Implementation of sFlow collection/storage system for science DMZ telemetry • DYNES implemented at 10Gb but no production usage yet • InstaGENI rack integration with science DMZ underway • Early-stage build-out of OpenStack environment • Looking at MPLS/VXLAN for campus-to-science DMZ overlay interconnects
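As a companion to the dynamic-blocking goal above, here is a hedged Python sketch of the general pattern: watch a Bro/Zeek notice.log, collect offending source addresses for selected notice types, and hand them to a placeholder function that would inject a deny entry on the upstream router. This is not CU-Boulder's implementation; the notice types, log path, and push function are assumptions for illustration.

```python
# Hedged sketch of "dynamic blocking via upstream ACL injection" -- not
# CU-Boulder's implementation. Reads a Bro/Zeek notice.log (tab-separated,
# with a "#fields" header line), collects source addresses for selected
# notice types, and calls a placeholder function that would push a deny
# entry to the upstream router via its API or CLI.

import csv

BLOCKWORTHY_NOTES = {"Scan::Port_Scan", "SSH::Password_Guessing"}  # example notice types

def push_block_acl(ip: str) -> None:
    """Placeholder: real code would inject a deny ACE on the upstream router."""
    print(f"would inject: deny ip host {ip} any")

def process_notice_log(path: str) -> None:
    fields: list[str] = []
    with open(path, newline="") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if not row:
                continue
            if row[0].startswith("#fields"):
                fields = row[1:]              # column names follow the "#fields" tag
                continue
            if row[0].startswith("#"):
                continue                      # other metadata lines
            rec = dict(zip(fields, row))
            if rec.get("note") in BLOCKWORTHY_NOTES and rec.get("src", "-") != "-":
                push_block_acl(rec["src"])

if __name__ == "__main__":
    process_notice_log("notice.log")          # log path is an assumption
```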

  19. University of Idaho (UI) • State before grant: • Internal network: 2 x 1Gbps internal bandwidth through 4 core Cisco 6500s • Could not move big data sets across internal network • No way to utilize IRON 10 Gbps bandwidth to HPC resources at Idaho National Laboratory • No way to utilize 10 Gbps connectivity to Washington State University

  20. UI Desired State • 10 Gbps internal networking between data centers containing HPC and storage • 10 Gbps bandwidth to storage and HPC shared with Idaho National Laboratory • 10 Gbps connectivity possible to WSU for DR/BC • Upgraded core switches and firewalls • Firewalls would be separate to reflect commitment to Science DMZ concept

  21. UI Grant • $447k • Collaborative partnership between: • Central IT • Research Office • Northwest Knowledge Network (NKN) • Institute for Bioinformatics and Evolutionary Studies (IBEST) • Idaho National Laboratory (INL) • Idaho Regional Optical Network (IRON)

  22. UI Grant • Phase I – to be complete March 1, 2014 • Ensure successful completion of grant requirements to prepare for submission of a CC*IIE proposal by March 17, 2014 • 10 Gbps backbone • Moscow Campus • INL NKN servers via IRON • Minimize disruption to network operations • Don’t change core network topology • Implementation of perfSONAR (a throughput-check sketch follows below)
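perfSONAR ships its own scheduled measurement tools, so the following is only a stand-in showing the kind of throughput verification Phase I enables: a minimal Python wrapper around iperf3 (the far-end hostname is hypothetical) that compares a single TCP test against the 10 Gbps backbone target.

```python
# Not part of the UI deployment -- just a minimal stand-in for the kind of
# scheduled throughput check perfSONAR provides out of the box. It runs one
# iperf3 TCP test against a remote server (hostname is hypothetical) and
# flags results that fall well short of the 10 Gbps backbone target.

import json
import subprocess

TARGET_GBPS = 10.0
ACCEPTABLE_FRACTION = 0.8      # arbitrary threshold for a single TCP stream

def measure_gbps(server: str, seconds: int = 10) -> float:
    """Run one iperf3 TCP test and return the received throughput in Gbps."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9

if __name__ == "__main__":
    gbps = measure_gbps("dtn.example.edu")   # hypothetical far-end iperf3 server
    status = "OK" if gbps >= TARGET_GBPS * ACCEPTABLE_FRACTION else "INVESTIGATE"
    print(f"{gbps:.2f} Gbps measured ({status})")
```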

  23. UI Phase I Solution

  24. UI Grant • Phase II – to be complete August 1, 2014 • Implement virtual router topology in core/border • Implement virtual chassis (VSS) in data center • Upgrade data centers to 6807 chassis • Migrate firewall out of core chassis (ITS expense, outside the grant)

  25. UI Phase II Solution

  26. Utah CC-NIE award • Funded under Network Integration and Applied Innovation track ($1M/2 years) • Leadership: • Steve Corbató, Deputy CIO/PI • Adam Bolton, Physics & Astronomy, co-PI • Joe Breen, CHPC, senior personnel • Tom Cheatham, Medicinal Chemistry, co-PI (Research Portfolio chair) • Rob Ricci, School of Computing, co-PI • Kobus Van der Merwe, School of Computing, co-PI • Partnerships with Center for High Performance Computing (CHPC) and Honors College

  27. Utah CC-NIE context - I • Close collaboration among campus and external entities • University of Utah • Flux network/systems research group (School of Computing) • Central IT (UIT) • Center for High Performance Computing (CHPC) • Common Infrastructure Services/Network • Information Security Office • Honors College • Utah Education Network (UEN) • GENI Project Office (GPO) • U.S. Ignite • Internet2 • Key premise: the Science DMZ should serve as a testbed of future perimeter defense and detection strategies for the production campus network

  28. Utah CC-NIE context - II • Closely coordinated with other research awards • Emulab/protoGENI research – Ricci, Eide, Van der Merwe, Lepreau (deceased) • EPSCoR RII Cyber Connectivity – Research@UEN optical network in northern Utah (with UEN NTIA BTOP award) • EPSCoR RII Track-2 – CI-WATER (UT/WY) – petascale data store for atmospheric and hydrological research • GENI Spiral 3 – UEN GENI – deploying GENI racks in Utah • NSF MRI – Apt – combined network research and computational science cluster (Flux/CHPC)

  29. Utah CC-NIE objective • Leverage upgrade of UEN connection to Internet2 to 100G • Upgrade prototype Science DMZ to 100G • Incorporate SDN technology for dynamic science DMZ • Dynamic slices (Emulab/protoGENI) • Support Big Cycle/Big Data Science on campus • Incorporate novel groups • Honors College (GENI ‘opt-in’ model) • Lassonde Student Garage for venture development

  30. Utah Before (detailed)

  31. Utah After (detailed)

  32. Question #1: Security • What types of devices do you have connected to the research network? • Scientific instruments • HPC systems • Other server systems • Personal computers? • What types and nature of security have you implemented for/on the research network?

  33. Question #2: Level of Effort • Enhancing/implementing the changes/upgrades • Supporting users • Investigating end-to-end performance • Tuning applications • Tuning TCP windows • Supporting SDN/DYNES
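On the TCP-window item above, the usual Linux knobs are the kernel socket-buffer sysctls. The sketch below just prints an example set of settings sized for 10 Gbps over wide-area latencies; the specific values are illustrative, not a recommendation from any of the campuses on this panel.

```python
# Illustrative only: prints the Linux sysctl settings typically adjusted when
# tuning TCP windows for long, fast (high bandwidth-delay-product) paths.
# The values are examples sized for ~10G over wide-area latencies, not a
# recommendation from any of the campuses represented here.

TUNING = {
    "net.core.rmem_max": 134217728,             # max receive socket buffer (bytes)
    "net.core.wmem_max": 134217728,             # max send socket buffer (bytes)
    "net.ipv4.tcp_rmem": "4096 87380 67108864", # min / default / max receive window
    "net.ipv4.tcp_wmem": "4096 65536 67108864", # min / default / max send window
    "net.ipv4.tcp_congestion_control": "htcp",  # one common choice for high-BDP paths
    "net.ipv4.tcp_mtu_probing": 1,              # helps when path MTUs are inconsistent
}

if __name__ == "__main__":
    # Emit lines suitable for a file under /etc/sysctl.d/ (applied with `sysctl -p`).
    for key, value in TUNING.items():
        print(f"{key} = {value}")
```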

  34. Question #3 • Please describe the usage of SDN/DYNES • Now • Anticipated in the future?

  35. Question #4: Struggles/Issues • CSU • Learning about MX960s in a clustered environment • Still drumming up interest in the Science DMZ • Getting Globus running should help there • Recruiting “early adopters” • Researchers want to put EVERYTHING in the DMZ • UCB • DYNES • Understanding OSCARS/DRAGON • Finding other sites to test/collaborate with

  36. Other Unmet Infrastructure Needs • HPC • Storage • Institutional Repositories and Preservation • Supporting Data Management • SHARE for scholarly journals and data sets? • Other?

  37. Next NSF CC solicitation is out! • Campus Cyberinfrastructure - Infrastructure, Innovation and Engineering Program (CC*IIE) • NSF 14-521 • New tracks added • Network Design and Implementation for Small Institutions • Identity and Access Management (IdAM) Integration • Campus CI Engineer resources • Regional Coordination and Partnership in Advanced Networking • Program managers • Kevin Thompson (CISE/ACI) • Bryan Lyles (CISE/CNS) • Campus CI plan required as supplement • Due March 17, 2014

  38. Questions/discussion
