HEPSYSMAN Meeting 2003

Presentation Transcript


  1. HEPSYSMAN Meeting 2003: QMUL Site Report by Dave Kant (D.Kant@qmul.ac.uk)

  2. Overview: Local Computing; Grid Computing

  3. What We’ve Got: 2 dual-processor servers (450MHz, 36GB) running NIS, NFS, SAMBA, SENDMAIL, DHCP, APACHE and Mozilla. The master is rsync'd to the slave every night; the slave drives a DLT tape unit. Incremental backups go to tape every night, and tapes are cycled every 10 weeks.
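
A nightly master-to-slave mirror of this kind is typically driven from cron by a small script. The sketch below is illustrative only: the hostname, directory list and rsync options are assumptions for the example, not the actual QMUL setup.

```python
#!/usr/bin/env python3
"""Nightly mirror of the master server onto the slave (illustrative sketch).

The hostname, directory list and rsync flags are assumptions made for this
example, not the real QMUL configuration.
"""
import subprocess
import sys

SLAVE = "slave.ph.qmul.ac.uk"                 # hypothetical hostname
TREES = ["/home", "/etc", "/var/spool/mail"]  # hypothetical directories

def mirror(tree: str) -> int:
    # -a preserves ownership, permissions and timestamps;
    # --delete keeps the slave an exact copy of the master.
    cmd = ["rsync", "-a", "--delete", f"{tree}/", f"{SLAVE}:{tree}/"]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    failed = [t for t in TREES if mirror(t) != 0]
    sys.exit(1 if failed else 0)
```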

  4. Long Term Backup: 1TB IDE RAID 5 built from 8 x 160GB Maxtor HD on Promise Ultra133 TX2 controllers in a 650MHz host. Ultra133 TX2 features: slots into the 32-bit portion of a 64-bit PCI bus; 48-bit LBA (supports drives >137GB, addressing up to ~140PB). Redundancy of 1 drive: (8-1) x 160GB = 1120GB usable.
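
The capacity figures on this slide follow from straightforward arithmetic. A quick check, assuming 512-byte sectors and decimal units (1 GB = 10^9 bytes):

```python
# Back-of-the-envelope check of the figures quoted above.
drives, drive_gb = 8, 160

# RAID 5 sacrifices one drive's worth of space to parity.
usable_gb = (drives - 1) * drive_gb
print(f"RAID 5 usable capacity: {usable_gb} GB")        # 1120 GB

# 28-bit LBA addresses 2**28 sectors, the old ~137GB-per-drive ceiling;
# 48-bit LBA raises this to 2**48 sectors, roughly 144 PB (quoted on the
# slide as "up to ~140PB").
print(f"28-bit LBA limit: {2**28 * 512 / 1e9:.0f} GB")  # 137 GB
print(f"48-bit LBA limit: {2**48 * 512 / 1e15:.0f} PB") # 144 PB
```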

  5. Linux Desktops. Base platform: RedHat 7.3. Applications: VMware, OpenOffice, Mozilla. Hardware: 7 dual Athlon MP2000+ (Tyan Tiger MPX), 2 dual Intel 1000MHz, 4 dual Intel 500MHz.

  6. Queen Mary Network. The Physics network connection was upgraded to 100 Mbps in late 2001. No plans to upgrade the link to 1Gbps: too expensive at about 30K/year.

  7. EDG Testbed

  8. EDG Testbed. Five machines: CE, SE, UI, WN, LCFG server.

  9. More computing on the way. Science Research Infrastructure Fund (SRIF): 12M to QMUL. Round 1: HEP + Astronomy + other small groups awarded 1.2M for 03/04; computing facility of 100m2, 48-rack capacity, 80KW air cooling; HEP awarded 220K. Round 2: HEP + Astronomy share a further 630K for 05/06.

  10. Computing Facility. 100m2, 48-rack capacity. Overhead power trunking and cable bays; secure access. 200 Amps / 3-phase supply, with a 32 Amp circuit per 4-plug unit. FM200 gas fire-suppression system. 4 x 20KW air cooling units.

  11. More computing on the way. The HEP approach is biased towards High Throughput Computing (“as many CPUs as possible”); the Astronomy approach is biased towards High Performance Computing (“low-latency interconnects for MPI”). There may be a significant technology overlap in the future...

  12. HEP Prototype. Front End Server: dual Athlon (2.0GHz), 2GB RAM, 2 x 200GB HD in a RAID mirror, 64-bit Gigabit Ethernet cards. Worker Nodes: dual Athlon (2.0GHz), 1GB RAM, 120GB HD, Gigabit Ethernet. Storage: not yet decided, but likely to be an IDE RAID solution.

  13. HEP Prototype. Supplier: MicroDirect. 4U front end server (1.5K); 2U worker nodes (1.0K).

  14. HEP Prototype. Gigabit optical fibre (multimode 50/125); 48-port terminal server (Cyclades); NetGear Gigabit switch; power distribution box.

  15. Timescales. Next 3 months: prototype becomes part of the EDG testbed. End of 2003: an additional 32 dual nodes + 2TB storage. End of 2004: an additional 64 dual nodes + 8TB storage. 05/06: aiming towards a further 100 dual nodes and 100TB storage. LHC start: 400 CPUs + 100TB storage.
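
The LHC-start figure follows from adding up the staged purchases. A rough tally, assuming each dual node contributes 2 CPUs and ignoring the prototype and front-end hardware:

```python
# Rough check that the staged purchases add up to the LHC-start target.
stages = [
    ("End 2003", 32, 2),    # (milestone, dual nodes added, TB added)
    ("End 2004", 64, 8),
    ("05/06",   100, 100),
]

nodes = sum(n for _, n, _ in stages)
cpus = 2 * nodes                               # 2 CPUs per dual node
storage_tb = sum(tb for _, _, tb in stages)

print(f"{nodes} dual nodes -> ~{cpus} CPUs")   # 196 nodes -> ~392 CPUs
print(f"~{storage_tb} TB storage")             # ~110 TB
# Roughly the "400 CPUs + 100TB storage" quoted for LHC start.
```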
