
Liverpool HEP – Site Report May 2007



Presentation Transcript


  1. Liverpool HEP – Site Report May 2007
     John Bland, Robert Fay

  2. Staff Status
     • Two members of staff left in the past year:
       • Michael George (August 2006), Andy Washbrook (December 2006)
     • Replaced by two full-time HEP system administrators:
       • John Bland, Robert Fay (December 2006)
     • One full-time Grid administrator:
       • Paul Trepka
     • One part-time hardware technician:
       • Dave Muskett

  3. Current Hardware
     • Desktops
       • ~100 desktops: upgraded to Scientific Linux 4.3, Windows XP
       • Minimum spec of 2GHz x86, 1GB RAM + TFT monitor
       • 48 new machines, the rest upgraded to an equivalent spec
     • Laptops
       • ~30 laptops: mixed architectures, specs and OSes
     • Batch Farm
       • Scientific Linux 3.0.4, plus a software repository (0.7TB) and storage (1.3TB)
       • 40 dual 800MHz P3s with 1GB RAM
       • Split 30 batch / 10 interactive
       • Using Torque/PBS (a sample job script follows below)
       • Used for general analysis jobs
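
For illustration, a minimal Torque/PBS job script of the kind users would submit to such a farm. The queue name, resource limits and analysis executable are assumptions, not taken from the site's actual configuration.

    #!/bin/bash
    #PBS -N analysis_job           # job name
    #PBS -q batch                  # assumed queue name
    #PBS -l nodes=1:ppn=1          # one core on one node
    #PBS -l walltime=04:00:00      # four-hour wall-clock limit
    #PBS -j oe                     # merge stdout and stderr into one file

    cd $PBS_O_WORKDIR              # run from the directory of submission
    ./run_analysis input.dat       # hypothetical analysis executable

It would be submitted with qsub and monitored with qstat, both standard Torque client commands.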

  4. Current Hardware – continued
     • Matrix
       • 10 nodes, each with dual 2.40GHz Xeons and 1GB RAM
       • 6TB RAID array
       • Used for CDF batch analysis and data storage
     • HEP Servers
       • User file store + bulk storage via NFS (Samba front end for Windows); a minimal export sketch follows below
       • Web (Apache), email (Sendmail) and database (MySQL) servers
       • User authentication via NIS (+ Samba for Windows)
       • Quad Xeon 2.40GHz shell server and ssh server
       • Core servers have a failover spare
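
A minimal sketch of how one tree can be served over both NFS and Samba in this way; the paths, network range and group name are assumptions, not the site's real configuration.

    # /etc/exports -- NFS export of the user file store to Linux clients
    /hepstore/users  192.168.0.0/255.255.255.0(rw,sync,root_squash)

    # /etc/samba/smb.conf -- Samba front end for the Windows desktops
    [users]
        path = /hepstore/users
        read only = no
        valid users = @hepusers   ; assumed Unix group of HEP users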

  5. Current Hardware – continued
     • MAP2 Cluster
       • 960-node (Dell PowerEdge 650) cluster
       • 280 nodes shared with other departments
       • Each node has a 3GHz P4, 1GB RAM and 120GB of local storage
       • 12 racks (480 nodes) dedicated to LCG jobs
       • 5 racks (200 nodes) used for local batch processing
       • Front-end machines for ATLAS, T2K, Cockcroft
       • 13 dedicated GRID servers for the CE, SE, UI etc.
       • Each rack has two 24-port gigabit switches
       • All racks connected into VLANs via a Force10 managed switch (a host-side VLAN sketch follows below)
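
As a sketch of the VLAN mechanics only, these are the host-side commands for joining a tagged 802.1Q VLAN on a Linux node of that era; the interface name, VLAN ID and address are assumptions, and the switch-side Force10 configuration is not shown.

    modprobe 8021q                  # load the kernel 802.1Q VLAN module
    vconfig add eth0 10             # create tagged sub-interface eth0.10
    ifconfig eth0.10 192.168.10.5 netmask 255.255.255.0 up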

  6. Storage
     • RAID
       • All file stores use at least RAID5; new servers are starting to use RAID6.
       • All RAID arrays use 3ware 7xxx/9xxx controllers on Scientific Linux 4.3.
       • Arrays are monitored with 3ware's 3DM2 software (a command-line alternative is sketched below).
     • File stores
       • New user and critical software store, RAID6 + hot spare, 2.25TB
       • ~3.5TB general purpose ‘hepstores’ for bulk storage
       • 1.4TB + 0.7TB batchstore + batchsoft for the batch farm cluster
       • 1.4TB hepdata for backups and scratch space
       • 2.8TB RAID5 for the LCG storage element
       • New 10TB RAID5 for the LCG SE (2.6 kernel) with 16x750GB SATAII drives
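
Alongside 3DM2, array health can also be checked from cron with 3ware's tw_cli tool. A minimal sketch, assuming a single controller c0 and mail to root; this is not the site's actual monitoring script.

    #!/bin/bash
    # Alert if any unit on 3ware controller c0 is not in the OK state.
    STATUS=$(tw_cli info c0 | grep '^u')   # unit lines, e.g. "u0 RAID-5 OK ..."
    if echo "$STATUS" | grep -qv 'OK'; then
        echo "$STATUS" | mail -s "RAID problem on $(hostname)" root
    fi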

  7. Network
     • Topology: [diagram] a central Force10 gigabit switch connects the MAP2 cluster, the LCG servers, the offices (behind NAT) and the servers via edge switches, with a 1GB link to the WAN.

  8. Proposed Network Upgrades
     • Topology – future: [diagram] the Force10 gigabit switch gains a firewall and VLANs; 3GB aggregated links connect it to the MAP2 cluster and to the WAN, with the LCG servers, offices and servers remaining on 1GB links.

  9. Network Upgrades
     • Recently upgraded the core Force10 E600 managed switch to increase throughput and capacity.
       • Now have 450 gigabit ports (240 at line rate)
       • Used as the central departmental switch, using VLANs
     • Increasing bandwidth to the WAN to 2-3Gbit/s using link aggregation (a bonding sketch follows below)
     • Possible increase of the departmental backbone to 2Gbit/s
     • Adding a departmental firewall/gateway
     • Network intrusion monitoring with snort
     • Most office PCs and laptops are on an internal private network
     • Wireless
       • Wireless is currently provided by the Computer Services Department
       • HEP wireless is in planning
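
On the Linux side, link aggregation of that era is typically configured with the kernel bonding driver. A minimal sketch in the Red Hat/Scientific Linux style; the device names, addresses and choice of 802.3ad (LACP) mode are assumptions about the setup.

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the aggregated interface
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- repeat per slave NIC
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/modprobe.conf -- LACP (802.3ad) with link monitoring
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100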

  10. Security & Monitoring
     • Security
       • Logwatch (looking to develop filters to reduce ‘noise’)
       • University firewall + local firewall + network monitoring (snort)
       • Secure server room with swipe card access
     • Monitoring
       • Core network traffic monitored with ntop/MRTG (all traffic to be monitored after the network upgrade); an MRTG setup sketch follows below
       • sysstat used on core servers for recording system statistics
       • Rolling out system monitoring on all servers and worker nodes, using SNMP, MRTG (simple graphing) and Nagios
       • Hardware temperature monitors on the water-cooled racks, to be supplemented by software monitoring on the nodes via SNMP
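
A minimal sketch of bootstrapping MRTG traffic graphs for a switch over SNMP; the community string, hostname and paths are assumptions, not the site's real values.

    # Generate an MRTG config by walking the switch's SNMP interface table
    cfgmaker --global 'WorkDir: /var/www/html/mrtg' \
             --output /etc/mrtg/mrtg.cfg \
             public@force10.example.ac.uk

    # Build an index page for the generated graphs
    indexmaker /etc/mrtg/mrtg.cfg > /var/www/html/mrtg/index.html

    # Poll every five minutes from cron:
    # */5 * * * * root mrtg /etc/mrtg/mrtg.cfg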

  11. Printing
     • We have three group printers:
       • Monochrome laser in a shared area
       • Colour LED
       • Colour ink/Phaser
     • Accessible from Linux using CUPS with automatic queue browsing (a queue-definition sketch follows below)
     • Accessible from Windows using Samba/CUPS, with automatic driver installs
     • Large format posters are printed through university print queues.
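
A minimal sketch of defining one such CUPS queue from the command line; the queue name, printer URI and PPD model here are hypothetical.

    # Create and enable a queue for the shared monochrome laser
    lpadmin -p mono-laser -E \
            -v socket://printhost.example.ac.uk:9100 \
            -m laserjet.ppd
    lpstat -p mono-laser            # confirm the queue is up and accepting jobs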
