Brunel Grid Activities Report


Presentation Transcript


  1. Brunel Grid Activities Report
  Peter van Santen
  Distributed and Grid Computing Group
  peter.van.santen@brunel.ac.uk

  2. Background
  Dept. of Electronic and Computer Engineering
  • PP group (detectors, CMS, BaBar, etc.): 7+
  • DC group (parallel systems, cluster computing, etc.): 4+
  • ‘Grid computing interest group’ formed June 2001 (common interest, ad hoc); Grid effort unfunded
  • Applied for SRIF funding as part of the London eScience Consortium (£0.75m)
  • BITLab to support collaborative projects for a range of disciplines: 48 (64) dual-processor node cluster, 4 TB storage, 1 Gbit network and 1 Gbit link

  3. BITLab (Brunel Information Technology Laboratory)
  • 1 Gbit switch
  • Evans & Sutherland 3D image generator
  • Elumens vision station
  • 1 Tbyte data store
  • Video wall and video wall server
  • 48–64 node (2 x Xeon per node) cluster, 1 Gb Cu network switch
  • Graphics workstations
  • 1 Gbit fibre to CC; 100 Mbit external (1 Gbit / 2.5 Gbit?)

  4. Resources
  Staff:
  • 5+ academic (pt effort)
  • 1 RF (CMS)
  • 2 RAs (DataTAG, BaBar)
  • 2 postgrads (DataGrid, cluster)
  Equipment (present) to support the testbed:
  • 2 x Athlon 700 MHz, VIA, RH6.2 (2.2.19)
  • 1 x dual PIII 800 MHz, 440GX, RH6.2
  • 24 x PIII 800 MHz, 815, 370SSA, RH6.2 (currently used for teaching)
  • ? x P4 1.8 GHz, P4SBA, 845, RH7.1
  • 1 x dual P4 (2 Xeon/die) 1.8 GHz, 0.5 GB, P4CD6+, 860, RH7.1, 2.4.2 (eng. rel.)

  5. Planning
  Testbed support:
  • Testbed compliance (across a range of hardware): May/July (Supermicro/Intel support)
  • Testbed active: Dec 2001 → Feb
  • Cluster prototyping: May → June (testbed)
  • Cluster operational: September → testbed
  Skill transfer (Linux, Globus, testbed, etc.):
  • 3 (pt) → 3 (pt) + 4 (ft), May/June

  6. Progress
  • Sept 2001: RH6.2 2.2.19 on VIA and 815 boards → Globus
  • Developed infrastructure to support the testbed
  • Agreement for QoS at 100 Mbit/s to external networks
  • Agreement for a subnet outside the University CC firewall
  • Successful measurements on i860 dual Prestonia (Jackson) systems (see Richard Hughes-Jones, DataGrid WP7); an illustrative sketch of this kind of test follows below
  • 2 machines (Athlon 800 MHz, 0.5 GB) having testbed software installed, housed with the main switch
  • Main effort to date unfunded → 2.5 funded posts
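
  The slide does not say how the Prestonia throughput measurements were made (see the DataGrid WP7 reference for the actual tools and results). Purely as an illustration of the kind of memory-to-memory network test involved, here is a minimal Python sketch of a TCP throughput measurement between two testbed nodes; the host name and port are hypothetical and this is not the WP7 tool.

  import socket
  import sys
  import time

  HOST = "testbed-node-1"   # hypothetical receiver host name
  PORT = 5001               # arbitrary test port
  BLOCK = 64 * 1024         # bytes per send
  SECONDS = 10              # duration of the test

  def receiver():
      """Accept one connection and discard everything received."""
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
          srv.bind(("", PORT))
          srv.listen(1)
          conn, _ = srv.accept()
          with conn:
              while conn.recv(BLOCK):
                  pass

  def sender():
      """Stream zero-filled blocks for SECONDS seconds, then report the rate."""
      payload = b"\x00" * BLOCK
      sent = 0
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
          sock.connect((HOST, PORT))
          start = time.time()
          while time.time() - start < SECONDS:
              sock.sendall(payload)
              sent += len(payload)
          elapsed = time.time() - start
      print("%.1f Mbit/s over %.1f s" % (sent * 8 / elapsed / 1e6, elapsed))

  if __name__ == "__main__":
      receiver() if "recv" in sys.argv else sender()

  Run it with the argument "recv" on the receiving node first, then without arguments on the sending node. A real measurement campaign would also vary message size and record latency and CPU load, which this sketch omits.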

  7. Status: Grid Computing Group
  • 4 academic staff active
  • 5 research staff active
  • 2 machines testbed-ready
  • 24 machines compliant (available outside teaching)
  • Higher-performance boards being evaluated
  • End of Feb: RHJ and PvS, 2-day evaluation of Plumas chipset (performance/compliance)

  8. Summary
  • Overcame infrastructure problems regarding connectivity
  • Steep learning curve for new staff
  • August start-up difficult, mainly documentation
  • Installation problems during Aug/Sept due to chipsets, etc.
  • Lack of known, publicised standards
  • In house mainly RH 7.1 and Mandrake 8.0 (Scyld PVM cluster)
  • RH6.2 compliance resolved (in most cases)
  • Funding and funded posts
  • Strategy: high-performance motherboards (cluster, etc.) in parallel with the testbed; support DataGrid, DataTAG, CMS, BaBar, etc.
  • Collaborative effort to improve technology information transfer
