
MWT2 Status



  1. MWT2 Status
  Greg Cross
  University of Chicago
  USATLAS Tier2 Workshop
  Harvard University, 17–18 August 2006

  2. Outline
  • Project Update
  • Cluster Topology
  • Architecture and Software
  • Next 3 Months

  3. Project Update
  • Existing UC and IU prototype Tier2 resources are strong contributors to Panda production, OSG development (ITB), and ATLAS user-grid and interactive analysis
  • Production MWT2 facility (http://plone.mwt2.org/):
    • Phase 1: RFP complete, vendor selected (ACT); first installations at both IU and UC on Sep 1 (106K SI2K, 52 TB combined, dCache plus edge/grid servers, 10 Gbps Ethernet)
    • Phase 2: follows quickly with the same hardware (roughly doubling CPU and disk)
  • 10 Gbps connectivity established at the campus level from UC and IU to Starlight; new MWT2 clusters will have 10 GigE data movers as edge servers
  • Staff: 3 system administrators hired, one vacancy (now advertising); a shared administration model between the UC and IU sites is being developed

  4. Project Update
  • Major contributions to US ATLAS in LCG accounting, DQ2 installation procedures, and Panda troubleshooting and support
  • 5 service deployments of the latest DQ2 release in the ATLAS DDM infrastructure
  • GUMS (a service shared between sites), VOMS, and OSG implement ATLAS roles, with queue priorities set according to the US ATLAS RAC
  • 4 TB-scale production SRM/dCache service deployed, tested at 40 MB/s with SC4 transfers (see the throughput sketch below)
  • Leveraged resources from NSF/MRI for ATLAS managed production and ATLAS physicist-users (UC Teraport project)
  • Development cluster in the OSG Integration Testbed
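As a rough illustration of how a sustained rate like the 40 MB/s SC4 figure can be verified, the sketch below averages throughput over a set of completed transfers. The log format, the treatment of transfers as sequential, and the 40 MB/s threshold are assumptions made for illustration; they are not part of the MWT2 or SC4 tooling.

```python
# Minimal sketch: estimate sustained SRM/dCache transfer throughput from a
# hypothetical transfer log. Each input line is assumed to be
# "<bytes_transferred> <seconds_elapsed>"; real SC4/FTS logs differ, and
# concurrent transfers are not accounted for here.

import sys

TARGET_MB_PER_S = 40.0  # the rate quoted on the slide


def sustained_rate(lines):
    """Return aggregate throughput in MB/s over all listed transfers."""
    total_bytes = 0
    total_seconds = 0.0
    for line in lines:
        fields = line.split()
        if len(fields) != 2:
            continue  # skip malformed lines
        nbytes, seconds = int(fields[0]), float(fields[1])
        total_bytes += nbytes
        total_seconds += seconds
    if total_seconds == 0:
        return 0.0
    return total_bytes / (1024 * 1024) / total_seconds


if __name__ == "__main__":
    rate = sustained_rate(sys.stdin)
    verdict = "OK" if rate >= TARGET_MB_PER_S else "BELOW TARGET"
    print(f"sustained rate: {rate:.1f} MB/s (target {TARGET_MB_PER_S} MB/s) -> {verdict}")
```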

  5. Cluster Topology
  (cluster topology diagram)

  6. Architecture and Software
  • UC and IU have a “mirrored,” remotely managed cluster design for co-institutional administration
  • SLC4 running on AMD64, initially in 32-bit mode
  • Managed with ACT tools plus other configuration tools (likely bcfg2)
  • Minimizing network filesystem dependencies on compute nodes
  • dCache aggregates storage on compute nodes, with dedicated (edge) write pools (see the capacity sketch below)
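To make the "storage aggregated from compute nodes" point concrete, here is a minimal sketch that sums the size of a pool partition across compute nodes (read pools) and edge servers (write pools). The host names, the /pool mount point, and the ssh/df approach are illustrative assumptions; in production the pools are managed by dCache itself, not by a script like this.

```python
# Minimal sketch, assuming hypothetical host names and a /pool partition on
# each node: sum pool capacity over ssh with GNU df.

import subprocess

COMPUTE_NODES = ["c001.mwt2.org", "c002.mwt2.org"]   # hypothetical read-pool hosts
EDGE_SERVERS = ["edge1.mwt2.org"]                     # hypothetical write-pool hosts
POOL_MOUNT = "/pool"


def pool_size_gb(host):
    """Return the size of the pool partition on `host` in GB, via ssh + df."""
    out = subprocess.run(
        ["ssh", host, "df", "--output=size", "-BG", POOL_MOUNT],
        capture_output=True, text=True, check=True,
    ).stdout
    # df prints a header line followed by the size, e.g. "  931G"
    return int(out.splitlines()[-1].strip().rstrip("G"))


if __name__ == "__main__":
    read_total = sum(pool_size_gb(h) for h in COMPUTE_NODES)
    write_total = sum(pool_size_gb(h) for h in EDGE_SERVERS)
    print(f"aggregate read-pool capacity:  {read_total} GB")
    print(f"dedicated write-pool capacity: {write_total} GB")
```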

  7. Next 3 Months
  • Deploy functional cluster as a unified facility with local customizations
  • Validate and exercise services (Grid, DQ2, dCache, job queue)
  • Begin integration of Tier2 prototype and Tier3 resources at each site
  • Instrument with monitoring (Ganglia, Nagios, OSG and ATLAS monitors); a simple probe sketch follows this list
  • Publish usage policies
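The monitoring item above could start with simple service probes. Below is a minimal sketch of a Nagios-style check that verifies an SRM door is accepting TCP connections; the host name and port are placeholders, and the actual MWT2 checks may differ.

```python
#!/usr/bin/env python3
# Minimal Nagios-style probe: report OK or CRITICAL depending on whether the
# SRM door accepts TCP connections. Host and port are illustrative placeholders.
# Nagios plugin convention: exit 0 = OK, 2 = CRITICAL.

import socket
import sys

SRM_HOST = "dcache.mwt2.org"   # hypothetical SRM door host
SRM_PORT = 8443                # commonly used SRM port; confirm for the real service
TIMEOUT_S = 10


def main():
    try:
        with socket.create_connection((SRM_HOST, SRM_PORT), timeout=TIMEOUT_S):
            print(f"OK - {SRM_HOST}:{SRM_PORT} is accepting connections")
            return 0
    except OSError as err:
        print(f"CRITICAL - cannot connect to {SRM_HOST}:{SRM_PORT}: {err}")
        return 2


if __name__ == "__main__":
    sys.exit(main())
```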
