
UCL HEP Computing Status

Presentation Transcript


  1. UCL HEP Computing Status (HEPSYSMAN, RAL, 2005-04-27)

  2. Computers
  • Desktop PCs: ~50, from 733 MHz to 3.2 GHz
  • Various laptops, inc. a few iBooks
  • Group batch farm: 10 dual 2.4 GHz Dell rackmount
  • CDF batch farm: 10 dual 2.4 GHz Dell rackmount
  • ATLAS trigger testbed: 6 rackmount machines
  • Mail and web servers
  • LCG front-end nodes (CE, SE, MON)
  • Windows Terminal Server
  • Various dedicated and development machines

  3. Operating Systems
  • Desktops, farms, servers almost all SLC3
    • recently completed changeover from RH7.3
  • Windows 2000 Terminal Server
    • some problems with SAMBA
  • Windows machines for AutoCAD, hardware control
  • Laptops a mix of Linux, Windows, OSX

  4. LCG Front-End Machines
  • Dedicated service nodes: CE, SE, MON
  • Jobs go to HEP batch farm (normally)
  • Support for separate CE and batch server
    • not provided as part of standard LCG
    • relies on recipes from third parties
    • is getting easier
    • well documented for LCG 2.4.0 by Steve Traylen
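Not on the slide, but as a rough illustration of the split CE / batch-server arrangement described above: a minimal Python sketch that submits a trivial test job from the CE to a Torque/PBS server running on a separate host, using the standard "queue@server" destination syntax. The hostname batch.example.ac.uk and the queue name "short" are placeholders, and none of the recipe details from Steve Traylen's LCG 2.4.0 notes are reproduced here.

#!/usr/bin/env python
# Hypothetical check that a CE can submit to a *separate* Torque/PBS batch
# server. Hostname and queue below are placeholders, not the real UCL values.
import subprocess
import tempfile
import os

BATCH_SERVER = "batch.example.ac.uk"   # assumed: stand-alone pbs_server host
QUEUE = "short"                        # assumed: a queue defined on that server

def submit_test_job():
    """Submit a trivial job to queue@server and return the PBS job id."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write("#!/bin/sh\nhostname\n")
        path = f.name
    try:
        # '-q queue@server' routes the job to the remote batch server,
        # which is the point of the split CE / batch-server setup.
        out = subprocess.run(
            ["qsub", "-q", "%s@%s" % (QUEUE, BATCH_SERVER), path],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()      # e.g. "1234.batch.example.ac.uk"
    finally:
        os.remove(path)

if __name__ == "__main__":
    print("submitted:", submit_test_job())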

  5. Central Computing Cluster
  • UCL SRIF-funded facility
  • 96 dual 2.8 GHz P4 nodes
  • Half dedicated to LCG
  • Other half uses Sun Grid Engine (non-HEP use)
  • Managed by Information Systems
  • Hope to integrate HEP and non-HEP parts
    • use SGE info provider from LeSC
    • but need to change again for gLite?
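To show roughly what an SGE information provider such as the LeSC one does, here is a hedged Python sketch that reads queue occupancy from Grid Engine and emits a few GLUE 1.x CE-state attributes. This is an illustration only, not the LeSC provider itself: the column layout is assumed from SGE 6.x "qstat -g c" output, and the CE identifier is a placeholder.

#!/usr/bin/env python
# Sketch of an SGE -> GLUE info provider: read queue occupancy and print
# GlueCEState* attributes in LDIF-style lines. Assumptions are noted inline.
import subprocess

def queue_summary():
    """Return {queue: (used_slots, total_slots)} parsed from 'qstat -g c'."""
    out = subprocess.run(["qstat", "-g", "c"], capture_output=True,
                         text=True, check=True).stdout
    slots = {}
    for line in out.splitlines()[2:]:          # skip the two header lines
        fields = line.split()
        if len(fields) >= 6:
            # assumed columns: name, cqload, used, reserved, available, total
            slots[fields[0]] = (int(fields[2]), int(fields[5]))
    return slots

def print_glue(ce_id, used, total):
    """Emit a few GLUE 1.x CE-state attributes for one queue."""
    print("dn: GlueCEUniqueID=%s" % ce_id)
    print("GlueCEStateRunningJobs: %d" % used)
    print("GlueCEStateFreeCPUs: %d" % (total - used))
    print("GlueCEStateTotalCPUs: %d" % total)

if __name__ == "__main__":
    for queue, (used, total) in queue_summary().items():
        # ce.example.ac.uk is a placeholder CE hostname
        print_glue("ce.example.ac.uk:2119/jobmanager-sge-%s" % queue, used, total)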

  6. Storage
  • User file server with ~300 GB
  • One 1.2 TB IDE RAID for data
  • One 1.2 TB IDE RAID for backup
  • One 3.2 TB RAID for data/backup
  • Various RAIDs bought by MINOS and CDF
  • Tape drive for backup

  7. File backup
  • Up to now:
    • backup selected areas to disk (tar)
    • secondary backup to tape
  • Problem:
    • tape solution expensive
    • need to spend more money to use with SLC
  • Solution:
    • RLBackup
  • Currently keeping old disk backup + RLBackup
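For illustration, a minimal Python sketch of the old "selected areas to disk (tar)" scheme: date-stamped gzipped tarballs of a few directories written to a backup disk. The area list and the backup mount point are assumptions, and RLBackup itself is not shown.

#!/usr/bin/env python
# Minimal sketch of the tar-to-disk backup described above.
# Paths are placeholders, not the real UCL layout.
import tarfile
import time
import os

AREAS = ["/home", "/etc", "/var/spool/mail"]   # assumed backup areas
BACKUP_DIR = "/backup"                          # assumed RAID mount point

def backup():
    stamp = time.strftime("%Y-%m-%d")
    for area in AREAS:
        name = os.path.join(
            BACKUP_DIR,
            "%s-%s.tar.gz" % (area.strip("/").replace("/", "_"), stamp))
        with tarfile.open(name, "w:gz") as tar:
            tar.add(area)                       # recursive by default
        print("wrote", name)

if __name__ == "__main__":
    backup()

A cron job running a script like this against each area, plus a periodic copy to tape, matches the two-level scheme on the slide.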

  8. Changes
  • Bob Cranfield retiring later this year
  • Ben Waugh to be group “computing coordinator”
  • Gianfranco Sciacca started recently
  • Machine room moving to new building (early 2006?)
  • Moving behind campus firewall

  9. Issues
  • Desktops vs laptops
  • Management of laptops
    • OS choice
    • support
    • updates
  • Remote administration of servers and farms
  • Management and support using fractions of people's time
