
CERN’s openlab Project


Presentation Transcript


  1. CERN's openlab Project
  Sverre Jarp, Wolfgang von Rüden, IT Division, CERN, 29 November 2002

  2. Our ties to IA-64 (IPF)
  A long history already:
  • Nov. 1992: Visit to HP Labs (Bill Worley): "We shall soon launch PA-Wide Word!"
  • 1994-96: CERN becomes one of the few external definition partners for IA-64, by then a joint effort between Intel and HP
  • 1997-99: Creation of a vector math library for IA-64, with HP Labs: a full prototype demonstrating the precision, versatility, and speed of execution (see the sketch below)
  • 2000-01: Port of Linux onto IA-64 ("Trillian" project: glibc) and of real applications, demonstrated at Intel's "Exchange" exhibition in Oct. 2000
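
The vector math library mentioned above computed transcendental functions over whole arrays at once. The sketch below is a minimal C++ illustration of that interface style; the function name vexp and its signature are hypothetical, not the actual CERN/HP library API.

    // Illustrative only: the shape of a vector math routine in the style of
    // the CERN/HP IA-64 library. The name and signature are hypothetical.
    #include <cmath>
    #include <cstddef>

    // Compute y[i] = exp(x[i]) for n elements. On IA-64 a compiler can
    // software-pipeline this loop across the wide issue slots, which is
    // where a vector interface gains over n separate scalar libm calls.
    void vexp(const double* x, double* y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            y[i] = std::exp(x[i]);
        }
    }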

  3. openlab Status
  Industrial collaboration:
  • Enterasys, HP, and Intel are our partners
  • Technology aimed at the LHC era
  • Network switch at 10 Gigabit; nodes connect via both 1 Gbit and 10 Gbit
  • Rack-mounted HP servers with Itanium processors
  • A storage subsystem may be coming from a 4th partner
  Cluster evolution:
  • 2002: cluster of 32 systems (64 processors)
  • 2003: 64 systems ("Madison" processors)
  • 2004: 64 systems ("Montecito" processors)

  4. The compute nodes
  HP rx2600:
  • Rack-mounted (2U) systems
  • Two Itanium-2 processors at 900 or 1000 MHz, field-upgradable to the next generation
  • 4 GB memory (max 12 GB)
  • 3 hot-pluggable SCSI discs (36 or 73 GB)
  • On-board 100 and 1000 Mbit Ethernet
  • 4 full-size 133 MHz/64-bit PCI-X slots
  • Built-in management processor, accessible via serial port or Ethernet interface

  5. openlab SW strategy
  • Exploit the existing CERN infrastructure, which is based on RedHat Linux, GNU compilers, OpenAFS, and the SUE (Standard Unix Environment) systems-maintenance tools
  • Native 64-bit port of key LHC applications (CLHEP, GEANT4, ROOT, etc.) and important subsystems (Castor, Oracle, MySQL, LSF, etc.); a typical porting pitfall is sketched below
  • Intel compiler where it is sensible, for performance
  • 32-bit emulation mode wherever it makes sense: low usage, no special performance need, non-strategic areas
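
The sketch below illustrates the kind of issue a native 64-bit port has to flush out. It is a generic ILP32-vs-LP64 example, not code from any of the named packages: on 32-bit Linux, int, long, and pointers are all 4 bytes, while on IA-64 Linux long and pointers grow to 8 bytes, so code that funnels long values or pointers through int breaks.

    // Generic ILP32-vs-LP64 pitfall; not taken from CLHEP/GEANT4/ROOT.
    #include <cstdio>

    int main() {
        // On IA-64 Linux (LP64) this prints 4, 8, 8; on ILP32 it is 4, 4, 4.
        std::printf("sizeof(int)=%zu sizeof(long)=%zu sizeof(void*)=%zu\n",
                    sizeof(int), sizeof(long), sizeof(void*));

        long big = 1L << 40;       // 2^40 fits in a 64-bit long,
                                   // but not in a 32-bit one
        int truncated = (int)big;  // silently drops the high bits: exactly
                                   // the pattern a native port must find
        std::printf("big=%ld truncated=%d\n", big, truncated);
        return 0;
    }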

  6. openlab - phase 1
  Estimated time scale: 6 months. Awaiting recruitment of 1 system programmer.
  • Integrate the openCluster: 32 nodes + development nodes, rack-mounted DP Itanium-2 systems
  • RedHat 7.3 (AW2.1 beta), kernel at 2.4.19
  • OpenAFS 1.2.7, LSF 4
  • GNU and Intel compilers (+ ORC?)
  • Database software (MySQL, Oracle?)
  • CERN middleware: Castor data management
  • GRID middleware: Globus, Condor, etc.
  • CERN applications: porting, benchmarking, and performance improvements for CLHEP, GEANT4, ROOT, CERNLIB (a minimal timing harness is sketched below)
  • Cluster benchmarks
  • 1 and 10 Gigabit interfaces
  Also: prepare the porting strategy for phase 2.
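
As a minimal sketch of the benchmarking side of this work, the harness below measures the wall-clock time of a workload. It is a generic skeleton assumed for illustration, not CERN's actual benchmark suite, and the workload in main is a placeholder.

    // Generic micro-benchmark skeleton; the workload is a placeholder.
    #include <chrono>
    #include <cstdio>

    // Time one run of a workload and return elapsed wall-clock seconds.
    template <typename F>
    double time_once(F&& work) {
        auto t0 = std::chrono::steady_clock::now();
        work();
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(t1 - t0).count();
    }

    int main() {
        volatile double sink = 0.0;  // keeps the loop from being optimized away
        double s = time_once([&] {
            double acc = 0.0;
            for (long i = 0; i < 100000000L; ++i) acc += 1.0 / (i + 1.0);
            sink = acc;
        });
        std::printf("elapsed: %.3f s (sink=%f)\n", s, (double)sink);
        return 0;
    }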

  7. openlab - phase 2
  Estimated time scale: 9 months (may be subject to change!). Awaiting recruitment of 1 GRID programmer.
  • European Data Grid: integrate the openCluster alongside the EDG testbed
  • Porting and verification of the relevant software packages (a large number of RPMs)
  • Document prerequisites and understand the dependency chain (see the sketch below); decide when to use 32-bit emulation mode
  • Interoperability with WP6: integration into the existing authentication scheme
  • Interoperability with other partners
  • GRID benchmarks (as available)
  Also: prepare the porting strategy for phase 3.
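
A first step toward documenting prerequisites and mapping the dependency chain is to list each package's direct requirements. The sketch below shells out to rpm -qR (rpm's standard query of a package's requirements); the helper itself and the example package name are assumptions for illustration.

    // Hypothetical helper: print the direct requirements of one installed
    // RPM by invoking `rpm -qR <package>` and echoing each line.
    #include <cstdio>
    #include <string>

    void print_requires(const std::string& pkg) {
        std::string cmd = "rpm -qR " + pkg;
        FILE* p = popen(cmd.c_str(), "r");   // POSIX popen, declared in <cstdio>
        if (!p) { std::perror("popen"); return; }
        char line[512];
        while (std::fgets(line, sizeof line, p)) {
            std::printf("%s requires %s", pkg.c_str(), line);
        }
        pclose(p);
    }

    int main() {
        print_requires("glibc");  // example package; substitute an EDG RPM
        return 0;
    }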

  8. openlab - phase 3
  LHC Computing Grid:
  • Need to understand the software architectural choices (to be made between now and mid-2003) and the time scales
  • Need a new integration process for the selected software
  • Disadvantage: possible porting of new packages
  • Advantage: aligned with key choices for LHC deployment
  Impossible at this stage to give firm estimates for time scale and required manpower.

  9. openlab time line
  [Chart spanning End-02 to End-05, with tracks for openCluster, EDG, and LCG; milestones in sequence:]
  • Order/install the 32 nodes; systems experts in place: start phase 1
  • Complete phase 1: start phase 2
  • Order/install Madison upgrades + 32 more nodes
  • Complete phase 2: start phase 3
  • Order/install Montecito upgrades

  10. IA-64 wish list
  For IA-64 (IPF) to establish itself solidly in the market-place:
  • Better compiler technology, offering better system performance
  • A wider range of systems and processors, for instance really low-cost entry models and low-power systems
  • State-of-the-art process technology
  In short: a "commoditization" similar to IA-32's.

  11. openlab starts with…
  [Diagram: CPU servers on a multi-gigabit LAN]

  12. … and will be extended …
  [Diagram: the CPU servers and multi-gigabit LAN linked to a remote fabric across a WAN, via a Gigabit long-haul link]

  13. … step by step
  [Diagram: a storage system added alongside the CPU servers and multi-gigabit LAN, with the Gigabit long-haul WAN link to the remote fabric]

  14. Annexes
  • The potential of openlab
  • The openlab "advantage"
  • The LHC
  • Expected LHC needs
  • The LHC Computing Grid Project – LCG

  15. The openlab "advantage"
  openlab will be able to build on the following strong points:
  • CERN/IT's technical talent
  • CERN's existing computing environment
  • The size and complexity of the LHC computing needs
  • CERN's strong role in the development of GRID "middleware"
  • CERN's ability to embrace emerging technologies

  16. The potential of openlab
  • Leverage CERN's strengths: it integrates perfectly into our environment (OS, compilers, middleware, applications)
  • Integration alongside the EDG testbed
  • Integration into the LCG deployment strategy
  • Show with success that the new technologies can be solid building blocks for the LHC computing environment

  17. The Large Hadron Collider: 4 detectors
  [Diagram labels: ATLAS, CMS, LHCb]
  Huge requirements for data analysis:
  • Storage: raw recording rate of 0.1 - 1 GByte/sec, accumulating data at 5-8 PetaBytes/year (plus copies); 10 PetaBytes of disk
  • Processing: 100,000 of today's fastest PCs
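
As a rough sanity check on these figures (arithmetic added here, not from the slide): at the upper recording rate of 1 GByte/sec, a full calendar year of continuous running would amount to about 3 x 10^7 s x 1 GB/s ≈ 30 PetaBytes. With the accelerator actually delivering beam for roughly 10^7 seconds per year, and the rate often nearer the lower end of the range, the quoted 5-8 PetaBytes/year is consistent.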

  18. Expected LHC needs
  [Chart: projected LHC computing needs compared with Moore's-law growth, baseline 2000]

  19. The LHC Computing Grid Project – LCG
  LCG goal: prepare and deploy the LHC computing environment.
  • 1) Applications support: develop and support the common tools, frameworks, and environment needed by the physics applications
  • 2) Computing system: build and operate a global data analysis environment, integrating large local computing fabrics and high-bandwidth networks, to provide a service for ~6,000 researchers in ~40 countries
  This is not "yet another grid technology project"; it is a grid deployment project.
