
Portsmouth ICG – HPC: The "Sciama" Environment


Presentation Transcript


  1. Portsmouth ICG – HPC: The "Sciama" Environment. G Burton, Nov 10, Version 1.1

  2. SCIAMA (pronounced "shama") – SEPNet Computing Infrastructure for Astrophysical Modeling and Analysis

  3. What we need from SEPNet partners:
     • A named "superuser" – required for initial testing and initial user training
     • The local IP range, for firewall access
     • The software packages to be installed
     • An approximate number of likely users

  4. Sciama Building Blocks

  5. In the "good ol' days" things were simple…

  6. In the "good ol' days" things were simple…

  7. … then more sockets were added • The two main players are Intel and AMD • A single operating system controls both sockets

  8. … then more cores were added to the sockets. The basic building block of the Sciama cluster: the Intel Xeon X5650 2.66 GHz six-core (Westmere)

  9. Total ICG Compute Pool > 1000 Cores

  10. Sciama Basic Concept

  11. Basic Concept of Cluster

  12. A bit about storage… NB: the storage is transient - IT WILL NOT BE BACKED UP

  13. Lustre Storage – Very Large Files, High Performance

  14. Networking – Three Independent LANs

  15. Some users are at remote locations…

  16. Use of Remote Login Client

  17. ICG-HPC Stack

  18. Installed S/W – Licensed Software:
      • Intel Cluster Toolkit (compiler edition for Linux)
      • Intel Thread Checker
      • Intel VTune Performance Analyser
      • IDL (use the ICG license pool? restrict access?)
      • Matlab (use UoP floating licenses? restrict access?)

  19. Installed S/W – Will install similar to COSMOS / Universe:
      • OpenMPI, OpenMP, MPICH
      • Open-source C, C++ and Fortran compiler suites
      • Maths libs – ATLAS, BLAS, (Sca)LAPACK, FFTW
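The maths libraries listed above are linked into user code in the usual way. As an illustration only (the compile line in the comment is a generic gcc example; the exact compiler wrappers, module names and link flags on Sciama are assumptions), a minimal CBLAS call might look like this:

```c
/* ddot_example.c - minimal CBLAS usage sketch.
   Hypothetical compile line (flags/paths are assumptions, not Sciama-specific):
     gcc ddot_example.c -o ddot_example -lcblas -latlas                      */
#include <stdio.h>
#include <cblas.h>

int main(void)
{
    double x[3] = {1.0, 2.0, 3.0};
    double y[3] = {4.0, 5.0, 6.0};

    /* dot product x.y = 1*4 + 2*5 + 3*6 = 32 */
    double d = cblas_ddot(3, x, 1, y, 1);
    printf("dot = %f\n", d);
    return 0;
}
```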

  20. Running Applications on Sciama

  21. 12 cores per node • Multiple cores allow for multi-threaded applications • OpenMP is an enabler
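A minimal OpenMP sketch of what "multi-threaded within a node" means; the compile line in the comment is a generic gcc example, not a Sciama-specific command:

```c
/* omp_hello.c - each of the (up to) 12 cores on a node can run one thread.
   Hypothetical compile line: gcc -fopenmp omp_hello.c -o omp_hello          */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* OMP_NUM_THREADS (or omp_set_num_threads) controls the thread count;
       on a 12-core Sciama node, 12 is the natural upper limit. */
    #pragma omp parallel
    {
        printf("thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```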

  22. Inter-node memory sharing is not (usually) possible • This gives rise to the "distributed memory" model • Need the likes of OpenMPI (an implementation of MPI, the Message Passing Interface)

  23. The largest (sensible) job is 24 GBytes in this distributed-memory model

  24. MPI allows parallel programming in the distributed-memory model • MPI enables parallel computation • Message buffers are used to pass data between processes • A standard TCP/IP network is used
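A minimal MPI sketch of the send/receive pattern described above, assuming OpenMPI's mpicc/mpirun wrappers are available (the exact commands on Sciama are an assumption): rank 0 passes a small message buffer to rank 1 and back.

```c
/* mpi_pingpong.c - rank 0 sends a buffer to rank 1, which sends it back.
   Hypothetical build/run (exact commands on Sciama are assumptions):
     mpicc mpi_pingpong.c -o mpi_pingpong
     mpirun -np 2 ./mpi_pingpong                                             */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf[4] = {1, 2, 3, 4};
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* the message buffer travels over the interconnect (TCP/IP here) */
        MPI_Send(buf, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, 4, MPI_INT, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0: round trip complete\n");
    } else if (rank == 1) {
        MPI_Recv(buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, 4, MPI_INT, 0, 1, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```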

  25. Hybrid OpenMP and MPI programming is possible
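A minimal hybrid sketch, again assuming generic mpicc/gcc tooling rather than any Sciama-specific setup: MPI spreads processes across nodes while OpenMP threads use the cores within each node.

```c
/* hybrid.c - typically one MPI process per node, OpenMP threads per core.
   Hypothetical build: mpicc -fopenmp hybrid.c -o hybrid                     */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int rank;
    /* no MPI calls are made inside the parallel region, so plain MPI_Init
       suffices; use MPI_Init_thread if threads themselves will call MPI */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI distributes work across nodes; OpenMP uses the cores within one */
    #pragma omp parallel
    printf("MPI rank %d, OpenMP thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
```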

  26. Comparing Sciama with the Cambridge COSMOS / Universe Environments

  27. Shared Memory Model • Sciama is a distributed-memory system • The COSMOS / Universe environments are SGI Altix shared-memory systems

  28. Shared-memory models can support very large processes

  29. Shared Memory Model supports OpenMP and MPI (and hybrid) • Altix systems have an MPI Offload Engine for speeding up MPI comms

  30. Binary Compatibility • COSMOS and Universe are not binary compatible with each other (Itanium vs Xeon processors) • Universe is compatible with Sciama, but some libraries may be SGI-specific (MPI Offload Engine)
