
Midwest Tier2 Networking Status and Plans

Rob Gardner

Computation and Enrico Fermi Institutes

University of Chicago

USATLAS Tier1 and Tier2 Networking Planning Meeting

Brookhaven Lab

December 14, 2005

MWT2 Personnel here today
  • Fred Luehring: MWT2 co-PI @ IU
  • Matt Davy: GlobalNOC @ IU
  • Ron Rusnak: Networking Services and Information Technologies (NSIT) @ UC


Outline
  • Description of Midwest Tier2 (MWT2)
  • Network Status and Plans


MWT2 Description
  • The Midwest Tier2 center is a joint project between UC and IU
    • to provide a computing resource for Monte Carlo production
    • to facilitate physics analysis of AOD samples
  • The project builds on the iVDGL prototype Tier2 centers, which are now in place and have been active in DC2 and Rome production
  • This winter we will begin procurement of resources for the MWT2, adding ~32 nodes at each site and a combined 50 TB of storage, plus network upgrades
  • Two dedicated project-funded FTEs in place now
    • Marty Dippel @ UC; Kristy Kallback-Rose @ IU
    • Dan Shroeder to arrive 1/06 @ IU.
    • Continued contributions from Andrew Zhan (GriPhyN/iVDGL)


The ATLAS Computing Model (J. Shank, NSF Review 7/05)
  • Worldwide, there are approximately 30 Tier2 centers of various sizes
    • Approximate overall capacity in 2008:
      • 20 MSi2k CPU
      • 9 PB disk
    • US commitments to ATLAS:
      • 3.3 MSi2k CPU
      • 1.5 PB disk
    • In addition, the needs of U.S. ATLAS physicists at our Tier2s will require more resources. Current estimate for an average US Tier2 in 2008:
      • 1 MSi2k CPU
      • 500 TB disk
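
A quick division makes the last point concrete: the worldwide totals above imply an average Tier2 well below the US estimate. A minimal check, using only the numbers on this slide:

```python
# Per-site averages implied by the worldwide 2008 totals (30 Tier2s,
# 20 MSi2k, 9 PB), versus the stated estimate for an average US Tier2.
worldwide_t2 = 30
print(f"world average T2: {20 / worldwide_t2:.2f} MSi2k, {9000 / worldwide_t2:.0f} TB")
print("US average T2 estimate: 1.00 MSi2k, 500 TB")
```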


Projected T2 Hardware Growth (J. Shank, NSF Review 7/05)
  • Assumes Moore’s law doubling of CPU and disk capacity every 3 years at constant cost
  • Assumes replacement of hardware every 3 years
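
These two assumptions pin down the growth model. Below is a minimal sketch of it; the annual budget and the 2005 price point are illustrative assumptions (the review slides give only the doubling and replacement rules):

```python
# Growth model: price/performance doubles every 3 years at constant cost,
# and hardware is retired after 3 years of service.
BASE_YEAR = 2005
KSI2K_PER_DOLLAR_2005 = 1.0 / 2000.0  # assumed: 1 kSI2k costs ~$2,000 in 2005
ANNUAL_BUDGET = 250_000               # assumed annual procurement budget ($)
LIFETIME = 3                          # years before replacement

def capacity_bought(year):
    """kSI2k purchasable with one year's budget under Moore's-law pricing."""
    return ANNUAL_BUDGET * KSI2K_PER_DOLLAR_2005 * 2 ** ((year - BASE_YEAR) / 3)

def installed_capacity(year):
    """Total kSI2k on the floor: the surviving (last LIFETIME years of) purchases."""
    start = max(BASE_YEAR, year - LIFETIME + 1)
    return sum(capacity_bought(y) for y in range(start, year + 1))

for year in range(2005, 2009):
    print(year, round(installed_capacity(year)), "kSI2k")
```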


Current Scale - Chicago
  • 64 dual 3.0 GHz Xeon nodes, 2 GB memory each
  • Interconnected by a Cisco Catalyst 3750G
  • The storage system consists of four servers with dual 3Ware disk controllers, providing 16 TB of attached RAID storage
  • Eight front-end nodes provide grid GRAM and GridFTP servers from VDT (OSG and OSG Integration Testbed gateways)
  • The facility also has interactive login nodes to support local batch analysis, which we are providing on a best-effort basis until official US ATLAS Tier2 policies are developed
  • A four-machine development cluster (Rocks and Condor configuration trials, OSG Integration Testbed)


Current Scale - IU
  • The prototype Tier2 consists of dedicated use of 64 2.4 GHz Xeon processors, made possible through an in-kind contribution by IU (AVIDD, NSF-MRI) to ATLAS
  • The site has 1.5 TB of dedicated Fibre Channel disk
  • Recently IU provided an additional 8.0 TB of Network Attached Storage (NAS) for ATLAS production use
  • Archival tape system
  • New MWT2 equipment will be located in the ICTC machine room on the Indianapolis campus of IU (IUPUI)


Network Status and Plans
  • Overall MWT2 network architecture
  • UC Connectivity status and plans
  • IU Connectivity status and plans
  • Starlight Configuration Issues
  • IU and UC network support organizations, represented here today



(Diagram: overall MWT2 network architecture, Dec 2005)

UC Connectivity to Starlight - Status
  • The prototype Tier2's Cisco 3750 is connected via a 1 Gbps path to the campus border router (Cisco 6509)
    • Once at the border, we share bandwidth with the rest of UC
  • Campus to 710 N. Lakeshore Drive via 2 x 1 Gbps fiber provided by I-WIRE
    • State of Illinois project: http://www.iwire.org/
  • At 710 NLSD
    • The DWDM output is connected to the MREN Force10, which provides L2 Ethernet connectivity and L3 routing
    • 10 Gbps link between the MREN and Starlight Force10 switches, shared


UC-Starlight Connectivity Upgrade
  • 10 Gbps upgrade funded by the UC Teraport project (NSF-MRI)
  • Agreements with Starlight signed 9/8/05
    • One-time cost for the 10 Gbps interface at Starlight: $10K
    • First year of operation: $10K
  • Components ordered from Qwest 9/15/05
    • DWDM 10 Gbps lambda transceivers for UC edge routers at Starlight and at the campus border: $56.1K
  • Delivery of these components has been delayed by Cisco and Qwest
    • Current ETA: ~1 month
  • Service will be brought to the Research Institutes building, room RI-050, where the new Tier2 equipment will be housed (a back-of-envelope look at what the upgrade buys follows below)
  • Tier2 to fund RI-050 connectivity with a Cisco 6509
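
As the rough illustration referenced above: the time to move the current 16 TB UC storage pool across the old and new links. This is a back-of-envelope sketch; the 70% effective-throughput factor is an assumption, not a measurement:

```python
# Time to move a dataset over a shared link at an assumed 70% efficiency.
def transfer_hours(terabytes, link_gbps, efficiency=0.7):
    bits = terabytes * 8e12                      # TB -> bits (decimal units)
    return bits / (link_gbps * 1e9 * efficiency) / 3600

for gbps in (1, 10):
    print(f"{gbps:>2} Gbps: {transfer_hours(16, gbps):5.1f} hours for 16 TB")
```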


IU Connectivity to Starlight
  • Currently there are 2 x 10 Gbps lambdas between the IU GigaPoP and 710 NLSD
    • On campus, 10 GigE connections run from the GigaPoP to ICTC, home of IU.MWT2
    • Connected to the Starlight Force10
  • Current UC-IU connectivity thus goes via Abilene
  • A dedicated IU lambda (shared with other IU projects) is being put in place between IUPUI and 710 NLSD
    • Expected by Feb 06


Viewed from the machine room
  • Both UC and IU are still finalizing cluster & storage design
  • At UC, we will likely build the cluster around a Cisco 6509
    • Cisco chassis plus supervisor engine
    • 4-port 10 Gbps interface card
    • 1 x 48-port 1 Gbps interface card
    • Total switching costs: ~$60K; negotiating with other projects in the UC Computation Institute to share the cost of the 10 Gbps card
    • Possibly a separate network between storage and compute nodes (backplane interference?)
  • At IU
    • MWT2 infrastructure costs cover basic network connectivity
    • Currently, this goes via a Force10 E600
    • Considering an upgrade to an E1200
    • Depends on the final cluster and storage architecture


Starlight Peering
  • The goal is to set up VLAN peering to enable 10 Gbps virtual circuits between each of the major nodes on the network:
    • Between UC and IU hosts
    • From either UC or IU to BNL hosts
    • From either UC or IU to CERN hosts
  • For UC, set up rules on the MREN router
  • For IU, set up with the dedicated IU router at Starlight
  • All this should be straightforward to establish for the UC-IU link (see the sketch below)
  • Not as clear for the CERN or BNL links (why we're here!)
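
The sketch below just makes the pairings and the open ends explicit. The UC and IU device names come from the bullets above; the BNL and CERN terminations (marked None) are exactly what remains to be settled:

```python
# Which Starlight-side device terminates each site's circuits; None marks
# terminations still to be negotiated.
STARLIGHT_DEVICE = {
    "UC":   "MREN Force10 (peering rules set there)",
    "IU":   "dedicated IU router at Starlight",
    "BNL":  None,
    "CERN": None,
}

# The 10 Gbps virtual circuits called for above, one VLAN per site pair.
CIRCUITS = [("UC", "IU"), ("UC", "BNL"), ("IU", "BNL"), ("UC", "CERN"), ("IU", "CERN")]

for a, b in CIRCUITS:
    ready = STARLIGHT_DEVICE[a] and STARLIGHT_DEVICE[b]
    print(f"{a:>4} <-> {b:<4}: {'configurable now' if ready else 'termination TBD'}")
```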



NSIT – Data Networking

  • Networking Services and Information Technologies
    • Academic Technologies, Administrative Systems, Data Center, General Services and Networking
    • ~340 individuals
  • Data Networking
    • Director
      • Installation and Repair – Assistant Director plus 5 technicians
      • Engineering – Manager plus 6 engineers
    • Equipment located in 406 closets in 128 buildings (1/4 with UPS)
    • ~25 Miles of fiber between buildings
    • ~26,000 hosts
    • Approximately 1700 Cisco switches deployed (35xx, 2950, 2970)
    • ~700 wireless APs in 97 buildings (90% of best places, 60% of full coverage)



NSIT – Data Networking

  • 18 remote sites served with two 45 Mbps and twenty 1.5 Mbps circuits, plus one 20 Mbps OPT-E-MAN circuit
  • 2 ISPs on OC3 circuits
  • Internet2: 3 x 1 Gbps on I-Wire Ciena DWDM to MREN/Starlight
    • Soon 10 Gbps
  • Campus core: 14 Cisco 6500s with Sup720, interconnected at 1 Gbps
    • Soon 10 Gbps
  • Multicast support
    • 5 large Access Grid facilities


GlobalNOC functions

24x7x365 Network Operations and Engineering Services

  • Real-Time Network Monitoring
  • Documentation, including an extensive Network Information Database
  • Problem Tracking & Resolution
  • Historical Reporting (Traffic, Availability, etc.)
  • Change Management
  • Security Incident Response
  • Network Design and Planning
  • Network Installation & Implementation


Open Questions
  • Configuration of MWT2 connectivity with BNL via Starlight
  • The same for CERN: what are the rules/policies for access to AOD datasets located elsewhere in ATLAS?


I-Wire DWDM Network

(Diagram: the I-Wire DWDM network, two DWDM systems of 660 Gb/s capacity each, linking Argonne National Laboratory, Northwestern University, University of Illinois-Chicago, Illinois Institute of Technology, University of Chicago, the Gleacher Center (University of Chicago), and the Illinois Century Network (K-20). Production connections: Abilene, ESnet, MREN, TeraGrid, CERN, NASA; experimental connections: CA*Net4, OmniNet, Surfnet, Europe, Japan.)


Tier2 Site Architecture
  • Provision for hosting persistent “Guest” VO services & agents

(Diagram: worker nodes sit on a private network behind edge nodes; the edge hosts grid gateways and edge services, including VO services, agents, and proxy caches, with dedicated boxes for guest VO services.)

Software Environment
  • ATLAS releases provided at both MWT2 sites
    • Distribution kit releases for production
    • Additional releases
  • Local tools (UC site)
    • Local batch submit scripts & monitors
  • Backups of system configurations, important service installations, and user /home areas are implemented
    • Have ordered a 4 TB server to provide a dedicated backup service


Policy for CPU Usage
  • Resource allocation at both UC and IU will be determined by US ATLAS policy
  • UC (Condor) and IU (PBS) will have different implementations, but will effectively set usage:
    • for production managers
    • for software maintainers
    • for US ATLAS users
    • for general ATLAS
    • for general OSG access
  • Set by a role-based authorization system (VOMS, GUMS, and local Unix accounts); a sketch of the mapping follows below
    • Have configured UC GUMS to support US ATLAS roles
    • Will use UC GUMS for both MWT2 sites

Many fine-grained details on queue and user behavior will need to be worked out.


Policy for Storage Usage
  • User, community, and DDM (ATLAS) datasets need to be provisioned on the data servers
  • Storage and quota allocation of the existing 16 TB @ UC (see the illustrative layout below):
    • se1: grid and system use
    • se2: community (local data manager)
    • se3: community (local data manager)
    • se4: user space, with quotas
  • DDM and Panda storage at present: 1 TB
  • Expansion to 50 TB in FY06
  • IU storage (archival)
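
The layout referenced above, written out; the even 4 TB-per-server split is an assumption (the slides give only the 16 TB total):

```python
# Illustrative allocation of the existing 16 TB across the four UC servers.
SERVERS = {
    "se1": {"tb": 4.0, "use": "grid and system use"},
    "se2": {"tb": 4.0, "use": "community (local data manager)"},
    "se3": {"tb": 4.0, "use": "community (local data manager)"},
    "se4": {"tb": 4.0, "use": "user space, with quotas"},
}

total = sum(s["tb"] for s in SERVERS.values())
print(f"{total:.0f} TB provisioned now; expansion to 50 TB planned for FY06")
```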


Grid Environment
  • VDT grid clients (a usage sketch follows below)
    • Provided by tier2-u2.uchicago.edu
  • DDM clients
    • As part of DDM service install at UC
  • Local specific guides
    • http://pgl.uchicago.edu/twiki/bin/view/Tier2/WebHome
  • Services deployed
    • DDM-managed agents and catalogs (still in a long period of debugging)
    • dCache (first deployments successful; compatibility problems between the dCache version and FTS prevented larger-scale testing)
    • OSG production and ITB instances
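
As the usage sketch referenced above for the VDT grid clients: pulling a file from the UC gateway with globus-url-copy, wrapped in Python. The source path is a placeholder, and a valid grid proxy and the VDT environment are assumed:

```python
# Fetch a file over GridFTP using the VDT-provided globus-url-copy client.
import subprocess

src = "gsiftp://tier2-u2.uchicago.edu/path/to/dataset/file.root"  # placeholder path
dst = "file:///tmp/file.root"
subprocess.run(["globus-url-copy", src, dst], check=True)  # raises on failure
```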


Service and Operations
  • Points of contact
    • tier2-users@listhost.uchicago.edu (General announcements)
    • tier2-support@listhost.uchicago.edu (System problems)
  • Systems administrator response: 9-5 M-F, and best effort otherwise
  • Non-local grid problems:
    • Trouble ticket with the US ATLAS support center @ BNL
  • IU and UC will develop an MWT2-specific trouble ticketing system
