
CERN Clusters






  1. CERN Clusters (Tim Smith, CERN/IT)

  2. Heterogeneous Clusters
     • 10 years of evolution: HP-UX, IRIX, AIX, DUX, Solaris, WNT
     • 4 years ago: Linux additions / replacements
     • 37 cluster configurations!
     • e.g. CMS
       • Interactive: Solaris, Linux
       • Batch: Solaris, HP-UX, Linux
     Tim Smith: LCW in FNAL

  3. ‘RISC’ Decommissioning

  4. The Rise and Fall of PC Clusters

  5. The Rise and Fall of PC Clusters
     [chart legend: Elonex III, TechAS, Elonex II, Siemens, HP, Elonex I, Cogestra]

  6. Component Architecture
     [diagram: a high-capacity backbone switch links the server tiers; application servers (CPU nodes) sit behind a 100/1000baseT switch, a disk server behind a 1000baseT switch, and tape servers attach to the backbone]
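The tiered layout on slide 6 can be sketched as a small data model. This is purely illustrative; the class and tier names are assumptions, not CERN software.

```python
from dataclasses import dataclass, field

@dataclass
class Switch:
    """A network switch tier in the fabric (e.g. 100/1000baseT)."""
    speed: str
    attached: list = field(default_factory=list)

@dataclass
class Backbone:
    """High-capacity backbone switch linking the server tiers."""
    tiers: list = field(default_factory=list)

# Application servers (CPU nodes) sit behind a 100/1000baseT switch,
# a disk server behind a 1000baseT switch; tape servers hang directly
# off the backbone, matching the slide's diagram.
app_tier = Switch("100/1000baseT", attached=[f"cpu{i}" for i in range(1, 6)])
disk_tier = Switch("1000baseT", attached=["disk_server"])
backbone = Backbone(tiers=[app_tier, disk_tier]
                          + [f"tape_server{i}" for i in range(1, 5)])
```

The point of the tiering is that bulk data moves between disk/tape servers and CPU nodes over the backbone, while each tier scales behind its own edge switch.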

  7. Concentrated Facilities
     • Interactive Cluster: 50 bi-processor PCs; 512 MB, 440-800 MHz
     • Batch Cluster with chaotic access: 280 bi-processor PCs; 0.1-1 GB, 440-800 MHz
     • Batch Cluster with scheduled access: 190 bi-processor PCs; 512 MB, 600-800 MHz
     • Tape and Disk server ‘Clusters’
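Since every node on slide 7 is a bi-processor PC, the aggregate CPU counts follow directly. A back-of-the-envelope tally (the totals are computed here, not stated on the slide):

```python
# Node counts from slide 7; every machine is a bi-processor (2-CPU) PC.
clusters = {
    "interactive": 50,
    "batch_chaotic": 280,
    "batch_scheduled": 190,
}

cpus = {name: nodes * 2 for name, nodes in clusters.items()}
total_cpus = sum(cpus.values())
print(cpus)        # per-cluster CPU counts
print(total_cpus)  # 1040 CPUs across the three PC clusters
```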

  8. ‘Chaotic’ Clusters
     [diagram: DNS load balancing spreads users across the lxplus interactive nodes; LSF dispatches jobs to the lxbatch nodes; both tiers reach the tape and disk servers via rfio]
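The DNS load balancing on slide 8 hides a pool of interactive nodes behind one alias, so each login lands on a different machine. A minimal round-robin stand-in (the node names and rotation policy here are illustrative; the real service can use load-aware selection):

```python
from itertools import cycle

# Hypothetical node pool behind a single DNS alias such as "lxplus";
# a load-balanced DNS lookup returns a different node per query.
NODES = [f"lxplus{n:03d}" for n in range(1, 10)]
_rotation = cycle(NODES)

def resolve(alias: str) -> str:
    """Round-robin stand-in for a load-balanced DNS lookup."""
    assert alias == "lxplus"
    return next(_rotation)

# Successive logins land on successive nodes:
print([resolve("lxplus") for _ in range(3)])
# ['lxplus001', 'lxplus002', 'lxplus003']
```

The benefit for a "chaotic" (uncoordinated, many-user) cluster is that no user needs to pick a machine: the alias stays stable while nodes are added, drained, or replaced behind it.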

  9. ‘Chaotic’ Clusters
     [diagram: as slide 8, but the lxbatch nodes are partitioned among LSF queues: public_queues, ATLAS_queues, CMS_queues, production_queues]
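Slide 9 partitions the batch nodes among per-experiment LSF queues plus a shared public pool. A sketch of the routing idea; the queue names come from the slide, but the node assignments and fallback rule are assumptions, not CERN's actual LSF configuration:

```python
# Illustrative queue-to-node map in the spirit of slide 9's LSF setup.
QUEUES = {
    "public_queues": ["lxbatch001", "lxbatch002"],
    "ATLAS_queues": ["lxbatch010", "lxbatch011"],
    "CMS_queues": ["lxbatch020", "lxbatch021"],
    "production_queues": ["lxbatch030"],
}

def submit(experiment: str) -> str:
    """Route a job to its experiment's queue if one exists, else the public pool."""
    queue = f"{experiment}_queues"
    return queue if queue in QUEUES else "public_queues"

print(submit("CMS"))    # CMS_queues
print(submit("ALICE"))  # public_queues
```

Dedicated queues give each experiment a guaranteed share of the batch capacity, while the public queues absorb everything else.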

  10. Scheduled Cluster

  11. Management Techniques I
     • KickStart & JumpStart (Linux & Solaris): system installation
     • ANIS: installation automation (bootp, dhcp, tftp, nfs)
     • SUE: system post-installation and configuration
     • ASIS: application installation (3 GB local)
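The tools on slide 11 chain together into one unattended install pipeline: network boot, OS install, post-configuration, then applications. A hypothetical sketch of that ordering (the stage descriptions are my gloss, not ANIS documentation):

```python
# Stages of the network-install sequence implied by slide 11, in order.
STAGES = [
    ("bootp/dhcp", "node broadcasts, server assigns IP and boot file"),
    ("tftp",       "node fetches kernel and installer image"),
    ("nfs",        "installer mounts the package repository"),
    ("kickstart",  "unattended OS install from a profile"),
    ("sue",        "post-install system configuration"),
    ("asis",       "application installation (3 GB local)"),
]

def install(node: str) -> list:
    """Run the stages in order, returning a log line per stage."""
    return [f"{node}: {stage}: {what}" for stage, what in STAGES]

log = install("lxbatch042")
print(log[0])  # the first step is the bootp/dhcp handshake
```

Keeping every stage automated is what makes hundreds of identical PCs manageable: reinstalling a node is cheaper than repairing its state by hand.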

  12. Management Techniques II
     • Console Mgmt
       • PCM (DEC POLYCENTER Console Manager)
       • Console concentrators
       • Cross-wiring serial ports
       • Etherlite, VACM
     • Power Mgmt: NONE
     • Monitoring: SURE, perfmon, remperf, …
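Monitoring tools like SURE work by exception: with hundreds of nodes, only threshold breaches are worth reporting. A minimal sketch of that idea, using made-up metrics and thresholds (not SURE's actual checks):

```python
# Exception-style monitoring: report only nodes breaching a threshold.
THRESHOLDS = {"load": 10.0, "disk_pct": 95.0}

def exceptions(metrics: dict) -> list:
    """Return (node, metric) pairs that breach a threshold."""
    alarms = []
    for node, values in metrics.items():
        for metric, limit in THRESHOLDS.items():
            if values.get(metric, 0) > limit:
                alarms.append((node, metric))
    return alarms

sample = {
    "lxbatch001": {"load": 3.2, "disk_pct": 40.0},
    "lxbatch002": {"load": 14.7, "disk_pct": 98.5},
}
print(exceptions(sample))
# [('lxbatch002', 'load'), ('lxbatch002', 'disk_pct')]
```

The healthy node produces no output at all, which is the property that lets one operator watch a cluster of hundreds.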
