

Irish Centre for High-End Computing

Bringing capability computing to the Irish research community

Andy Shearer, Director, ICHEC

HEA, 11th November



Why ICHEC?

  • Ireland lags well behind the rest of Europe in terms of installed HEC capacity

    • See www.arcade-eu.info/academicsupercomputing/comparison.html

    • Ireland now has an installed capacity of about 6,000 Gflop/s

    • Ireland has about 15-20 systems above 64 Gflop/s sustained




Why ICHEC?

  • Ireland has fewer than 2,500 processors installed

  • But … now about 1,500 kflop/s per capita (roughly 6,000 Gflop/s across a population of about four million)




High-End Computing in Ireland

DFT+U spin density. Michael Nolan (1), Dean C. Sayle (2), Stephen C. Parker (3) and Graeme W. Watson (1) – (1) TCD, (2) Cranfield University, (3) University of Bath

Marine Modelling Centre, NUI, Galway

G. Murphy et al., DIAS/CosmoGrid

D. Lloyd et al., Dept. of Biochemistry, TCD




Context and Motivation

  • Ireland’s ability to perform internationally competitive research and to attract the best computational scientists is currently hindered by a lack of high-end computational resources

  • Ireland has the research demands to justify a large-scale national facility

  • The Irish Centre for High-End Computing (ICHEC)

    • SFI-funded (€2.6M), in collaboration with the PRTLI project CosmoGrid (€700k), plus links to the TCD/IITAC cluster. CosmoGrid: Grid-Enabled Computational Physics of Natural Phenomena.

    • Initially funded for one year, with proposal to extend to a minimum of three years




Participating Institutions

National University of Ireland, Galway

Dublin City University

Dublin Institute for Advanced Studies

National University of Ireland, Maynooth

Trinity College Dublin

Tyndall Institute

University College Cork

University College Dublin

HEAnet




ICHEC’s objectives

  • Support world-class research programmes within Ireland

  • Undertake collaborative research programmes

  • Provide access to HEC facilities to students and researchers in Ireland

  • Provide HPC training

  • Increase the application of HEC and GRID technologies

  • Encourage and publicise activities conducive to establishing and sustaining a world-class HEC facility in Ireland

  • Foster collaboration with similar internationally recognised organisations




ICHEC’s Roadmap

Phase I (2005) – examine the challenges associated with the management and operations of a distributed national HEC centre

→ Hire key personnel, set up infrastructure and policies, gather user requirements,…

Phase II (2006) – bring in the personnel required to support such a system and develop large-scale new science

→ Fully utilise resources, deploy points of presence, participate in community-led initiatives, explore prospects for technology transfer activities,…

Phase III (2007/8) – ICHEC fully established as a national large-scale facility




ICHEC Hardware Resources

We have a diverse user group, so we require heterogeneous resources.

Our tender looked for:

  • Shared Memory System (20%)

  • Cluster (60%)

  • Shared File System (20%)

We were also open to imaginative solutions, as long as they were within our budget.

Acceptable responses came from Bull, IBM, Clustervision, HP and Sun.

Other vendors did not read the tender documents and/or could not deliver on time!

Of course, if we were to do this again we would do it differently – and I would hope the manufacturers would too.




Shared memory system

  • Bull NovaScale NS6320

    • 32 Itanium 2 processors @ 1.5 GHz

    • 256 GB RAM

    • 2.1 TB storage

    • 192 Gflop/s peak

    • 166 Gflop/s LINPACK

  • Bull NovaScale NS4040

    • front-end functionality

    • 4 Itanium 2 processors @ 1.5 GHz
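The programming model such a shared-memory machine naturally supports is OpenMP, which the ICHEC training programme also covers. As a rough illustration only – a minimal hypothetical sketch, not ICHEC-supplied code – a threaded dot product looks like this:

```c
/* Minimal OpenMP sketch of a threaded dot product, the kind of
 * loop-level parallelism a shared-memory system supports.
 * Hypothetical example; compile with an OpenMP-aware compiler,
 * e.g. gcc -fopenmp dotprod.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N], b[N];

int main(void)
{
    double sum = 0.0;

    for (int i = 0; i < N; i++) {   /* serial initialisation */
        a[i] = 0.5 * i;
        b[i] = 2.0;
    }

    /* Threads share a[] and b[]; each sums a chunk of the index
     * range and the reduction clause combines the partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot product = %g using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}
```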




Distributed memory cluster and filesystem

  • IBM Cluster 1350

    • 474 e326 servers

      • Dual Opteron processors @ 2.4 GHz each

    • 2.2 TB distributed memory

      • Mix of 4 GB (410) and 8 GB (64) nodes

    • 20 TB storage

      • IBM DS4500 SAN using GPFS

    • 4.55 Tflop/s peak, 2.8 Tflop/s LINPACK

      • Will feature in the upper end of the Top500

    • 15 additional nodes

      • front-end functionality and storage/cluster management

    • Interconnect initially Gigabit Ethernet




ICHEC Machine Room

  • Installation on schedule

    • Acceptance tests mainly completed…

    • Transitional Service opened 1st September




Software

  • Establishing user requirements

    • discussions with user groups

  • Limited software budget

    • Prices for national centres are higher than for academic research groups

    • NB: it is not possible to purchase every package on people’s wish lists

  • We were under pressure to purchase many packages with overlapping functionalities – these need to be rationalised

    • Preference given to packages with best scalability/performance

    • Will work with research communities to reduce this list




Software (cont.)

  • Currently shortlisted ~30 packages

  • Current list

    • Operating systems

      • SUSE Linux Enterprise 9, Bull Linux (based on Red Hat Enterprise)

    • Programming environments

      • Intel compilers, Portland Group Cluster Development Kit, DDT

    • Pre-/Post-processing, data visualisation

      • IDL, Matlab, OpenDX, VTK

    • Libraries and utilities

      • FFTW, ScaLAPACK, PETSc, (Par)METIS … (see the FFTW sketch below)
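To give a flavour of how a library like FFTW is called, here is a minimal sketch assuming FFTW 3 (illustrative only, not ICHEC-specific code; the array size and values are invented):

```c
/* Minimal FFTW 3 sketch: forward 1-D complex DFT.
 * Illustrative only; link with -lfftw3 -lm. */
#include <stdio.h>
#include <fftw3.h>

int main(void)
{
    const int n = 8;
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

    /* Create the plan first: with some flags FFTW may overwrite
     * the arrays while planning, so fill the input afterwards. */
    fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    for (int i = 0; i < n; i++) {
        in[i][0] = i;    /* real part */
        in[i][1] = 0.0;  /* imaginary part */
    }

    fftw_execute(p);

    for (int i = 0; i < n; i++)
        printf("out[%d] = %g + %gi\n", i, out[i][0], out[i][1]);

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
```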




Software (cont.)

  • Engineering software

    • CFX, Abaqus, Marc, Patran

  • Computational chemistry / materials sciences / life sciences

    • Amber, (MPI-)BLAST, CHARMM, CPMD, CRYSTAL, DL_POLY, GAMESS-UK, Gaussian03, GROMACS, MOE, Molpro, NAMD, NWChem, SIESTA, TURBOMOLE, VASP

  • Environmental sciences

    • POM

  • Expensive packages (Accelrys, etc.) not initially available

    • Decision deferred until the Phase 2 budget is approved



Scheduling Policies

  • Transitional Period, cluster

    • Day regime: weekdays, 10am – 6pm

    • Night regime: weekdays, 6pm – 10am

    • Week-end regime: Friday 6pm – Monday 10am

  • Tuning pending…

    • Scalability studies of key applications

    • User feedback

  • Gaussian users expected to use the Production region / HiMem

    • Allows acceptable turn-around time on the Bull for FEM users



Scheduling Policies (cont.)

  • Transitional Period, shared-memory system

    • Day/night/week-end regimes: same as for the cluster

  • Feedback from the user community is important to help us tune policies for the start of the full National Service

  • Need for a checkpoint/restart capability for long runs (see the sketch below)

  • Fair-share mechanism enforced by Maui

  • Mechanism to “jump the queue” for important deadlines
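Until such a capability exists at the system level, checkpoint/restart is usually handled inside the application itself. A minimal, hypothetical C sketch of the idea (the file name run.chk, the sizes and the toy computation are all invented for illustration):

```c
/* Application-level checkpoint/restart sketch: periodically dump
 * the step counter and state array to disk, and resume from the
 * file if it exists. Hypothetical code, not an ICHEC mechanism;
 * a production version would write to a temporary file and
 * rename it so a crash mid-write cannot corrupt the checkpoint. */
#include <stdio.h>
#include <string.h>

#define N 1024
#define STEPS 1000000L
#define CHECKPOINT_EVERY 10000L

static void save_checkpoint(const char *path, long step, const double *state)
{
    FILE *f = fopen(path, "wb");
    if (!f) { perror("checkpoint"); return; }
    fwrite(&step, sizeof step, 1, f);
    fwrite(state, sizeof *state, N, f);
    fclose(f);
}

static long load_checkpoint(const char *path, double *state)
{
    long step = 0;
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;                          /* no checkpoint: start fresh */
    if (fread(&step, sizeof step, 1, f) != 1 ||
        fread(state, sizeof *state, N, f) != (size_t)N) {
        step = 0;                          /* unreadable file: start fresh */
        memset(state, 0, N * sizeof *state);
    }
    fclose(f);
    return step;
}

int main(void)
{
    static double state[N];                /* zero-initialised */
    long step = load_checkpoint("run.chk", state);

    for (; step < STEPS; step++) {
        for (int i = 0; i < N; i++)        /* stand-in for the real work */
            state[i] += 1e-6;
        if ((step + 1) % CHECKPOINT_EVERY == 0)
            save_checkpoint("run.chk", step + 1, state); /* next step to run */
    }
    save_checkpoint("run.chk", step, state);             /* final state */
    return 0;
}
```

On restart the program resumes from the recorded step rather than from step 0, so a job killed at the end of a scheduling window loses at most CHECKPOINT_EVERY steps of work.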



Supporting the research community

  • ICHEC will be set up as a distributed national centre

  • The Centre will be an equally important mix of hardware and people

    • Large groups in Dublin, Galway and Cork

    • Points of presence in each of the participating institutions (Phase 2)

    • Support will be essential

  • Plans are to recruit applications scientists in the following areas:

    • engineering (FEM/CFD/LB)

    • life sciences (bioinformatics, drug design)

    • physics / chemistry (soft and hard condensed matter)

    • environmental sciences (geology/geophysics, oceanography, meteorology/climatology)

    • applied maths

    • HPC



Supporting the research community (cont.)

  • The face of the Centre will be the support scientists

    • They will serve their research community, rather than providing “free staff” to individual groups

      • Code development (community-led initiatives)

      • Making the best use of the infrastructure (parallelisation/optimisation)

      • Development of specialised material (courses, documentation)

      • Technical support for “Grand Challenge” projects

      • A persistent communication channel with ICHEC

  • Key to our success will be a close and sustained collaboration with the research community



Training activities

  • Training is seen as a way of increasing the performance and efficiency of the machines – our programme reflects the relative youth of Ireland’s HEC activity.

  • ICHEC has developed courses – the first was delivered at UCD in October.

    • An introduction to High-Performance Computing (HPC)

      • What is HPC? HPC as a tool for furthering research, etc.

      • Overview of current hardware architectures

      • Overview of common programming paradigms

      • Decomposition strategies

      • An introduction to the ICHEC national service

    • Parallel programming with MPI: an introduction (see the sketch below)

    • Parallel programming with OpenMP: an introduction

  • Specialised material to be developed in 2006, e.g.

    • HPC for engineers, HPC in bioinformatics, parallel linear algebra, etc.
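In the spirit of the MPI introduction and the decomposition-strategies material, here is a minimal, hypothetical sketch (not actual course material): a 1-D decomposition in which each rank sums its own block of a global index range and the partial results are combined with MPI_Reduce.

```c
/* Minimal MPI sketch: 1-D decomposition of a global sum.
 * Illustrative only. Compile with mpicc, run with e.g.
 * mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const long n = 1000000;         /* global problem size */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Simple block decomposition: each rank owns [lo, hi). */
    long chunk = n / size;
    long lo = rank * chunk;
    long hi = (rank == size - 1) ? n : lo + chunk; /* last rank takes the remainder */

    double local = 0.0, global = 0.0;
    for (long i = lo; i < hi; i++)
        local += (double)i;

    /* Combine the per-rank partial sums on rank 0. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum 0..%ld = %g (ranks: %d)\n", n - 1, global, size);

    MPI_Finalize();
    return 0;
}
```

The same decomposition idea carries over from this toy sum to the grids and meshes of real applications.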



Now …

  • Service started on the 1st September (Bull) and 21st September (IBM)

  • We have 60 approved projects, plus 17 from CosmoGrid

    • 38% Physical Sciences

    • 22% Engineering

    • 25% Life Sciences

    • 7% Environmental Sciences

    • 7% Other (Maths, Computer Sciences, Humanities)

  • We expect at least three times as many applications to be submitted within 3-6 months

  • Running at full capacity by the end of the year



Problems

  • Getting the community up to speed on

    • Batch submission

    • Moving from serial to parallel computing

  • Licences?

    • National software agreements

  • Firewalls and security

    • Having a myriad of security and firewall standards across HEAnet is insanity

      • DCU and TCD do not accept zip files, but have different ways around this

      • SSH proxying is configured differently at each site

      • Grid access is a nightmare…

    • A common approach via HEAnet would help

  • The last mile problem

    • Our access is fine (redundant Gigabit), but the universities throttle this back



The future … enhanced hardware

  • Technology/Architecture?

    • Massive clusters bring their own problems of reliability

    • Large-scale SMP vs. tightly coupled clusters of ‘fat’ nodes

    • Novel architectures – IBM’s BlueGene, FPGAs, or others

  • Software?

    • Massive clusters bring massive problems

      • Scalability

      • Fault tolerance – how do you checkpoint/restart across 10,000 nodes?

  • Licences?

    • How best to license software on a 10,000-node cluster?

  • How to benchmark?

In 2006/7 we will be looking for original solutions to these problems.


