
Computer Centre Evolutions


Presentation Transcript


  1. Computer Centre Evolutions SFT Group Meeting Tim Smith IT/FIO

  2. Evolutions: Outline • Historical perspective • Building infrastructure evolution • HW and SW management …evolving

  3. Evolution of Focus (timeline 1996 – 2000 – 2004 – 2008): Proliferation -> Consolidation -> Expansion
     • Platforms: Mainframe (IBM, Cray) • Scalable (SP2, CS2) • RISC (HP, AIX, DUX, IRIX, Solaris) • PC (Windows, Linux)
     • Clusters: Linux (CMS_WGS, ATLAS_WGS, NA48_CDR, …) • Shared (PLUS, BATCH)
     • Management: Quattor, LEMON • Infrastructure: 513 refurbishment • Acquisition: CPU, disk, tape • SysAdmin: outsourced -> insourced

  4. The Demise of Free Choice (chart, 2000–2003)

  5. Cluster Aggregation

  6. Acquisition Cycles

  7. LXBATCH Upgrade

  8. LHC Computing Needs Today • Networking: 10 – 40 Gb/s to all big centres • Aggregate to/from CERN: ≥70 Gb/s • Moore's law (based on 2000 data)
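
To put the aggregate figure in perspective, here is a back-of-envelope conversion of a sustained 70 Gb/s into daily data volume; the 70 Gb/s comes from the slide, while the assumption of a fully sustained 24-hour transfer is ours, purely for illustration.

    # Rough conversion of the quoted aggregate network rate into daily volume.
    # Assumes the link runs flat out for 24 hours, which is an illustration only.
    rate_gbps = 70                                # aggregate to/from CERN, Gb/s
    bytes_per_day = rate_gbps / 8 * 1e9 * 86_400  # bits -> bytes, seconds per day
    print(f"~{bytes_per_day / 1e12:.0f} TB/day")  # ~756 TB/day if sustained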

  9. Infrastructure Evolution
     • To meet LHC requirements…
     • More space
     • More power
       • We want to be able to use up to 2 MW (cf. 600 kW available today)
       • Requires normabarre density for ~3 MW for flexibility
     • Better layout
       • Group related services by normabarre; today's layout causes confusion during power failures
     • Spring clean! Clear the false floor void for air flow

  10. Infrastructure Evolution
     • Elements: re-activate the vault; upgrade electrical supplies (standard and UPS); air conditioning; machine room layout
     • Phases:
       • 2002: Refurbish vault
       • 2003: Clear RHS of equipment; refurbish RHS
       • 2003–2004: New substation for 513
       • 2004: Migrate LHS to RHS; refurbish LHS
       • 2005: Air conditioning upgrade
       • 2006–2008: UPS upgrade

  11. The Vault – a professional vault: CPU servers, disk servers, tape silos and servers


  15. Current Machine Room Layout – problem: normabarres run one way, services run the other…

  16. Proposed Machine Room Layout
     • 9 m double rows of racks for critical servers; aligned normabarres
     • 18 m double rows of racks: 12 Mario racks or 36 19" racks
     • 528 box PCs: 105 kW • 1440 1U PCs: 288 kW • 324 disk servers: 120 kW(?)
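
As a quick sanity check of the power figures in this layout, the per-unit consumption they imply can be divided out; only the totals come from the slide, the per-box numbers below are derived here.

    # Per-unit power implied by the proposed layout totals.
    # Counts and total watts are from the slide; the division is ours.
    configs = {
        "box PCs":      (528,  105_000),
        "1U PCs":       (1440, 288_000),
        "disk servers": (324,  120_000),   # total the slide marks with "(?)"
    }
    for name, (count, watts) in configs.items():
        print(f"{name}: ~{watts / count:.0f} W each")
    # -> roughly 200 W per PC and ~370 W per disk server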


  20. Air Conditioning – stop the B513 corridors drawing cold air from the machine room.

  21. Air Conditioning Upgrade Plans
     • Detailed air conditioning upgrade plan still in preparation. Key points as understood today:
       • Additional chiller to be installed in 2005 (once the new substation is commissioned)
       • Machine room cooling probably requires at least some cold air to be routed under the false floor; some ducting exists already on the Salève wall, more may be needed
       • Need to improve redundancy, or at least understand the impact and frequency of the different possible failures of critical elements (including the difference between failure in winter and in summer)
     • Major upgrade foreseen for winter 2004/5.

  22. Management Issues
     • Hardware management: where are my boxes, and what are they?
     • Hardware failure: #boxes  MTBF + manual intervention = problem!
     • Software consistency: operating system and managed components; experiment software?
     • State management: evolve configuration with high-level directives, not low-level actions; maintain service despite failures, or at least avoid dropping catastrophically below the expected service level
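
To make the hardware-failure point concrete, a rough failure-rate estimate follows; the node count matches the ~2100 quoted later in the talk, but the per-box MTBF is an assumed figure used only for illustration.

    # Rough failure-rate estimate: many boxes with a finite MTBF turn hardware
    # intervention into a routine daily activity.
    nodes = 2100                 # node count quoted elsewhere in the talk
    mtbf_years = 3               # assumed per-box MTBF, illustration only
    failures_per_day = nodes / (mtbf_years * 365)
    print(f"~{failures_per_day:.1f} hardware failures per day")   # ~1.9 per day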

  23. Framework for Management (diagram): desired state is held in the CDB and pushed by the Config Manager to each node's Cfg Agent and Cfg Cache; actual state is reported by each node's MonAgent to the Monitoring Manager.

  24. Framework for Management (diagram, extended): a HardWare Manager, Fault Manager and State Manager sit above the CDB and SW Rep; the Config Manager delivers XML profiles and the SW Manager delivers RPMs over http to each node's Cfg Agent / Cfg Cache and SW Agent / SW Cache, while the MonAgent feeds the Monitoring Manager.
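
The central idea of this diagram is a reconciliation loop: the Config Manager pushes the desired state from the CDB down to each node, while the MonAgent reports the actual state back to the Monitoring Manager. Below is a minimal Python sketch of such a loop; the class and function names are ours and do not correspond to the real quattor or LEMON APIs.

    # Minimal sketch of a desired-state vs. actual-state reconciliation loop.
    # NodeAgent, reconcile() and the profile dict are illustrative names only.
    from dataclasses import dataclass, field

    @dataclass
    class NodeAgent:
        name: str
        actual: dict = field(default_factory=dict)   # actual state on the node

        def report(self) -> dict:
            """What a monitoring agent would send to the monitoring manager."""
            return {"node": self.name, "state": dict(self.actual)}

        def apply(self, desired: dict) -> None:
            """What a config agent does with the profile fetched from the CDB."""
            for key, value in desired.items():
                if self.actual.get(key) != value:
                    print(f"{self.name}: setting {key} = {value}")
                    self.actual[key] = value

    def reconcile(desired_profiles: dict, nodes: list) -> None:
        """One pass: push desired state to every node, then collect actual state."""
        for node in nodes:
            node.apply(desired_profiles.get(node.name, {}))
        for node in nodes:
            print(node.report())

    # Hypothetical profile for a single batch node.
    cdb = {"lxb0001": {"kernel": "2.4.21", "cluster": "lxbatch"}}
    reconcile(cdb, [NodeAgent("lxb0001")])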

  25. ELFms (diagram): LEAF (HMS and SMS), LEMON (MSA agents reporting to OraMon) and quattor (CDB and SWRep serving each node's NCM and SPMA, with their Cfg Cache and SW Cache).

  26. Configuration Hierarchies

      structure template hardware_cpu_GenuineIntel_Pentium_III_1100;
      "vendor" = "GenuineIntel";
      "model" = "Intel(R) Pentium(R) III CPU family 1133MHz";
      "speed" = 1100;

      template hardware_diskserver_elonex_1100;
      "/hardware/cpus" = list(create("hardware_cpu_GenuineIntel_Pentium_III_1100"),
                              create("hardware_cpu_GenuineIntel_Pentium_III_1100"));
      "/hardware/harddisks" = nlist("sda", create("pro_hardware_harddisk_WDC_20"));
      "/hardware/ram" = list(create("hardware_ram_1024"));
      "/hardware/cards/nic" = list(create("hardware_card_nic_Broadcom_BCM5701"));

      structure template hardware_card_nic_Broadcom_BCM5701;
      "manufacturer" = "Broadcom Corporation NetXtreme BCM5701 Gigabit Ethernet";
      "name" = "3Com Corporation 3C996B-T 1000BaseTX";
      "media" = "GigaBit Ethernet";
      "bus" = "pci";

     Hardware template hierarchy: hardware_diskserv_elonex_1100, hardware_elonex_500, hardware_elonex_600, hardware_elonex_800, hardware_elonex_800_mem1024mb, hardware_elonex_800_mem128mb, hardware_seil_2002, hardware_seil_2002_interactiv, hardware_seil_2003, hardware_siemens_550, hardware_techas_600, hardware_techas_600_2, hardware_techas_600_mem512mb, hardware_techas_800
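
The point of these Pan templates is composition: small structure templates (a CPU, a NIC) are reused by reference inside higher-level hardware templates. The Python fragment below is a toy analogue of that resolution step, written only to illustrate the mechanism; the dictionaries and the create() helper are ours, not part of quattor or Pan.

    # Toy illustration of template composition: named building blocks are
    # resolved by reference into a complete hardware description.
    TEMPLATES = {
        "cpu_p3_1100": {"vendor": "GenuineIntel", "speed": 1100},
        "nic_bcm5701": {"manufacturer": "Broadcom", "media": "GigaBit Ethernet"},
    }

    def create(name):
        """Return a copy of a named building block (loosely like Pan's create())."""
        return dict(TEMPLATES[name])

    # A higher-level description assembled from the pieces, in the spirit of
    # hardware_diskserver_elonex_1100 above.
    diskserver = {
        "cpus": [create("cpu_p3_1100"), create("cpu_p3_1100")],
        "nics": [create("nic_bcm5701")],
        "ram_mb": 1024,
    }
    print(diskserver)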

  27. quattor installation (architecture diagram): SWRep servers hold packages (rpm, pkg) behind a Mgmt API with ACLs and serve them over http, nfs or ftp; on each client node, SPMA applies packages from its local cache according to SPMA.cfg (RPM, PKG), NCM components are run when Cdispd detects profile changes, and the CCM caches the node's profile from the CDB; the installation server drives node (re)installs with PXE handling, DHCP handling and a KS/JS generator, with registration and notification of client nodes through a Mgmt API with ACLs.
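
On the node side, the essence of SPMA's job is to compare the package list delivered in the node's profile with what is actually installed and act on the difference. The sketch below shows only that diff step; the dictionaries and the plan_changes() function are invented for illustration and are not quattor's actual profile schema or SPMA's interface.

    # Simplified sketch of desired-vs-installed package reconciliation,
    # the core step a package-manager agent automates on every node.
    def plan_changes(desired, installed):
        """Return (to_install, to_upgrade, to_remove) package name lists."""
        to_install = [p for p in desired if p not in installed]
        to_upgrade = [p for p in desired
                      if p in installed and installed[p] != desired[p]]
        to_remove  = [p for p in installed if p not in desired]
        return to_install, to_upgrade, to_remove

    # Hypothetical example: versions wanted by the profile vs. the RPM database.
    desired   = {"kernel": "2.4.21-15", "openssh": "3.6.1", "lsf": "5.1"}
    installed = {"kernel": "2.4.21-9",  "openssh": "3.6.1", "pine": "4.58"}
    print(plan_changes(desired, installed))   # (['lsf'], ['kernel'], ['pine'])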

  28. Lemon Architecture

  29. Host information

  30. Cluster information

  31. JpGraph and host reboots

  32. Connection Management!


  34. LEAF: HMS and SMS
     • HMS (Hardware Management System):
       • Tracks systems through all steps in their lifecycle, e.g. installation, moves, vendor calls, retirement
       • Handles multiple nodes at a time (e.g. racks)
       • Automatically sends install, retire etc. requests to technicians
       • PC finder to locate equipment physically
       • HMS implementation is CERN-specific, but concepts and design should be generic
     • SMS (State Management System):
       • Automated handling of high-level configuration steps, e.g. reconfigure and reboot all LXPLUS nodes for a new kernel; reallocate nodes inside LXBATCH for Data Challenges; drain and reconfigure node X for diagnosis / repair operations
       • Extensible framework – plug-ins for site-specific operations possible
       • Issues all necessary (re)configuration commands on top of quattor CDB and NCM
       • Uses a state transition engine
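
Since SMS is described as a state transition engine, here is a minimal sketch of what such an engine looks like; the state names and the transition table are illustrative guesses, not SMS's actual model.

    # Minimal state transition engine for node state management.
    # States and allowed transitions are illustrative only.
    ALLOWED = {
        ("production",  "draining"),     # stop accepting new batch jobs
        ("draining",    "maintenance"),  # once running jobs have finished
        ("maintenance", "production"),   # back into service after repair/reconfig
    }

    class NodeState:
        def __init__(self, state="production"):
            self.state = state

        def transition(self, target):
            if (self.state, target) not in ALLOWED:
                raise ValueError(f"illegal transition {self.state} -> {target}")
            print(f"node: {self.state} -> {target}")  # here SMS would drive CDB/NCM
            self.state = target

    node = NodeState()
    node.transition("draining")
    node.transition("maintenance")
    node.transition("production")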

  35. LEAF screenshots

  36. ELFms status – Quattor (I)
     • Manages (almost) all Linux boxes in the computer centre: ~2100 nodes, to grow to ~8000 in 2006–08 (LXPLUS, LXBATCH, LXBUILD, disk and tape servers, Oracle DB servers)
     • Solaris clusters, server nodes and desktops to come for Solaris 9
     • Examples:
       • KDE/Gnome security patch rollout: 0.5 GB onto 700 nodes; 15-minute time smearing from 3 lb-servers
       • LSF 4-5 transition: 10 minutes, no queues or jobs stopped (cf. last time: 3 weeks, 3 people!)
     • Starting: head nodes using Apache proxy technology for software and configuration distribution
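
The "15-minute time smearing" in the patch rollout is a load-spreading trick: each node waits a random delay before fetching, so 700 clients do not hit the three load-balanced servers at once. The sketch below illustrates the idea; the window length matches the slide, but the code itself is our illustration rather than SPMA's implementation.

    # Client-side "time smearing": spread fetches over a window so hundreds of
    # nodes do not hammer the repository servers simultaneously.
    import random
    import time

    SMEAR_WINDOW_S = 15 * 60          # 15-minute window, as quoted on the slide

    def fetch_with_smearing(fetch, window_s=SMEAR_WINDOW_S):
        delay = random.uniform(0, window_s)
        print(f"sleeping {delay:.0f}s before fetching")
        time.sleep(delay)
        return fetch()

    # Demo with a tiny window so it returns quickly.
    fetch_with_smearing(lambda: print("fetching packages from lb-server"),
                        window_s=2)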

  37. ELFms status – Quattor (II)
     • LCG-2 WN configuration components available: configuration components for RM, EDG/LCG setup, Globus; progressive reconfiguration of LXBATCH nodes as LCG-2 WNs
     • Community-driven effort to use quattor for general LCG-2 configuration:
       • Coordinated by staff from IN2P3 and NIKHEF
       • Aim is to provide a complete porting of EDG-LCFG config components to quattor
       • CERN and UAM Madrid providing generic installation instructions and site-independent packaging, as well as a Savannah development portal
     • EGEE has chosen quattor for managing their integration testbeds
     • Tier1/2 sites as well as LHC experiments evaluating quattor for managing their own farms

  38. ELFms status – LEMON
     • Smooth production running of the MSA agent and the Oracle-based repository at CERN-CC
       • 150 metrics, sampled at intervals from 30 s to 1 day; ~1 GB of monitoring data per day from ~2100 nodes
       • New sensors and metrics, e.g. tape robots, temperature, SMART disk info
     • GridICE project uses LEMON for data collection
     • Gathering experiment requirements and interfacing to grid-wide monitoring systems (MonALISA, GridICE)
     • Good interaction with, and feedback gathered from, CMS DC04; archived raw monitoring data will be used for the CMS computing TDR
     • Visualization:
       • Operators – test interface to new-generation alarm systems (LHC control alarm system)
       • Sys managers – finish status display pages (Miro's talk)
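
As a rough consistency check of the quoted monitoring volume: the metric count, node count and the ~1 GB/day figure come from the slide, while the average sampling interval and the bytes stored per sample below are assumptions made only to see whether the order of magnitude holds together.

    # Back-of-envelope check of the monitoring data volume.
    nodes    = 2100
    metrics  = 150
    interval = 300     # assumed average sampling interval, seconds
    sample_b = 20      # assumed bytes stored per sample
    gb_per_day = nodes * metrics * (86_400 / interval) * sample_b / 1e9
    print(f"~{gb_per_day:.1f} GB/day")   # ~1.8 GB/day, same order as the ~1 GB quoted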

  39. ELFms status – LEAF
     • HMS:
       • In production since late 2002 (installs only)
       • Rapid evolution – 16 production releases in 2003
       • Used successfully to move & install 100s of nodes
       • Fully integrated (LAN DB, CDB, SMS, other workflow apps)
     • SMS:
       • First production release in January (stable CDB)
       • Now used for all quattor-managed nodes (>2000)
       • All batch and interactive nodes change state automatically

  40. Questions?
