
Hungarian GRID Projects and Cluster Grid Initiative



  1. Hungarian GRID Projects and ClusterGrid Initiative P. Kacsuk MTA SZTAKI kacsuk@sztaki.hu www.lpds.sztaki.hu

  2. Hungarian Grid projects
  • VISSZKI: clusters; Globus and Condor service
  • DemoGrid: security, Grid monitoring, data storage subsystem, applications
  • SuperGrid: supercomputers; resource scheduling, MPICH-G, P-GRADE, accounting

  3. Objectives of the VISSZKI project
  • Testing various tools and methods for creating a virtual supercomputer (metacomputer)
  • Testing and evaluating Globus and Condor
  • Elaborating a national Grid infrastructure service based on Globus and Condor by connecting various clusters

  4. Results of the VISSZKI project
  • Low-level parallel development: PVM, MW, MPI
  • Grid-level job management: Condor-G
  • Grid middleware: Globus
  • Grid fabric: local job management with Condor on each of the connected clusters
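The layering above (Condor-G on top of Globus, with Condor as the local job manager) is what a user touches through a Condor-G submit description. A minimal sketch in modern HTCondor syntax follows; the host name and file names are placeholders, not part of the VISSZKI deployment:

```
# Hypothetical Condor-G submit description: route a job through the
# Globus middleware to a remote cluster's local Condor pool.
universe      = grid
grid_resource = gt2 grid.example.hu/jobmanager-condor
executable    = my_job
output        = my_job.out
error         = my_job.err
log           = my_job.log
queue
```

Condor-G then handles Globus authentication, staging and job monitoring, so the Grid-level job looks like an ordinary Condor job to the user.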

  5. Structure of the DemoGrid project
  • GRID subsystems study and development
  • Data storage subsystem: relational database (RDB), object-oriented database (OODB), geometric database (GDB), distributed file system (DFS)
  • Monitoring subsystem
  • Security subsystem
  • Demo applications: astrophysics, human brain simulation, particle physics, car engine design
  • (Diagram: generic GRID architecture with applications, tightly/loosely coupled data decomposition, and a hardware layer of CPU, storage and network)

  6. Structure of the Hungarian Supercomputing Grid
  • NIIFI: 2 × 64-processor Sun E10000
  • ELTE: 16-processor Compaq AlphaServer
  • BME: 16-processor Compaq AlphaServer
  • SZTAKI: 58-processor cluster
  • University (ELTE, BME) clusters
  • Sites connected by the 2.5 Gb/s academic Internet backbone

  7. The Hungarian Supercomputing GRID project
  • Web-based GRID access: GRID portal
  • GRID applications
  • High-level parallel development layer: P-GRADE
  • Low-level parallel development: PVM, MW, MPI
  • Grid-level job management: Condor-G
  • Grid middleware: Globus
  • Grid fabric: Condor, PBS, LSF and Sun Grid Engine as local job managers on the clusters, Compaq AlphaServers and SUN HPC

  8. Distributed supercomputing: P-GRADE • P-GRADE (Parallel GRid Application Development Environment) • The first highly integrated parallel Grid application development system in the world • Provides: • Parallel, supercomputing programming for the Grid • Fast and efficient development of Grid programs • Observation and visualization of Grid programs • Fault and performance analysis of Grid programs

  9. Condor flocking
  • Condor/P-GRADE on the whole range of parallel and distributed systems: supercomputers (GFlops), mainframes, clusters, Grid
  • (Diagram: P-GRADE jobs flocked between the Condor pools of the individual systems)
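Flocking lets a job submitted to one Condor pool overflow into another when the local pool is busy, which is the mechanism behind the multi-cluster runs shown here. A minimal sketch of the relevant condor_config settings, with placeholder host names (the actual pool names of the demo sites are not given in the slides):

```
# Hypothetical condor_config fragment on cluster A, allowing its jobs
# to flock to cluster B's pool, and accepting jobs flocked back from B.
FLOCK_TO   = condor.cluster-b.example.hu
FLOCK_FROM = condor.cluster-b.example.hu
```

With symmetric settings on the other pool's central manager, idle jobs from either site can run on the other without any change to the submit description.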

  10. Berlin CCGrid Grid Demo workshop: flocking of P-GRADE programs by Condor
  • P-GRADE program runs at the Madison cluster
  • P-GRADE program runs at the Budapest cluster
  • P-GRADE program runs at the Westminster cluster
  • (Diagram: P-GRADE at Budapest, nodes n0/n1 and m0/m1, flocking to Madison (p0, p1) and Westminster)

  11. Next step: check-pointing and migration of P-GRADE programs
  1. P-GRADE program downloaded to London as a Condor job
  2. P-GRADE program runs at the London cluster
  3. London cluster overloaded => check-pointing
  4. P-GRADE program migrates to Budapest as a Condor job and runs at the Budapest cluster
  (Diagram: Wisconsin P-GRADE GUI, London and Budapest clusters with nodes n0, n1, m0, m1)
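The checkpoint-and-migrate step relies on Condor's standard universe, where the program is relinked against Condor's checkpoint library so its state can be saved and resumed on another machine. A minimal sketch, with placeholder file names (the slides do not show the actual job setup):

```
# Relink the application with the Condor checkpoint library first:
#   condor_compile gcc -o pgrade_app pgrade_app.c
#
# Hypothetical submit description for a checkpointable, migratable job:
universe   = standard
executable = pgrade_app
output     = pgrade_app.out
log        = pgrade_app.log
queue
```

When the executing machine becomes overloaded or unavailable, Condor checkpoints the job and restarts it from the saved state elsewhere, which is exactly the London-to-Budapest migration demonstrated above.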

  12. Further development: TotalGrid
  • TotalGrid is a complete Grid solution that integrates the different software layers of a Grid (see next slide) and provides for companies and universities:
  • exploitation of the free cycles of desktop machines in a Grid environment outside working hours
  • supercomputer capacity from the institution's existing desktops, without further investment
  • development and testing of Grid programs

  13. Layers of TotalGrid
  • P-GRADE
  • PERL-GRID
  • Condor or SGE
  • PVM or MPI
  • Internet / Ethernet

  14. Hungarian Cluster Grid Initiative
  • Goal: to connect the new clusters of the Hungarian higher-education institutions into a Grid
  • By autumn, 42 new clusters will be established at various universities of Hungary
  • Each cluster contains 20 PCs and a network server PC
  • During the day the clusters are used for education
  • At night all the clusters are connected into the Hungarian Grid over the Hungarian academic network (2.5 Gbit/s)
  • Total Grid capacity in 2002: 882 PCs
  • In 2003 a further 57 similar clusters will join the Hungarian Grid
  • Total Grid capacity in 2003: 2079 PCs
  • Open Grid: other clusters can join at any time
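The capacity figures on this slide follow directly from the cluster counts; a quick check of the arithmetic:

```python
# Verify the Cluster Grid capacity figures quoted on the slide.
PCS_PER_CLUSTER = 21  # 20 PCs + 1 network server PC

clusters_2002 = 42
clusters_2003 = clusters_2002 + 57  # 57 more clusters join in 2003

capacity_2002 = clusters_2002 * PCS_PER_CLUSTER
capacity_2003 = clusters_2003 * PCS_PER_CLUSTER

print(capacity_2002)  # 882
print(capacity_2003)  # 2079
```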

  15. Structure of the Hungarian Cluster Grid
  • 2002: 42 × 21-PC Linux clusters, 882 PCs in total
  • 2003: 99 × 21-PC Linux clusters, 2079 PCs in total
  • Clusters run TotalGrid and are connected over the 2.5 Gb/s academic Internet backbone

  16. Live demonstration of TotalGrid
  • MEANDER Nowcast Program Package
  • Goal: ultra-short-range forecasting (30 minutes) of dangerous weather situations (storms, fog, etc.)
  • Method: analysis of all available meteorological information to produce parameters on a regular mesh (10 km -> 1 km)
  • Collaborating partners:
  • OMSZ (Hungarian Meteorological Service)
  • MTA SZTAKI

  17. Structure of MEANDER
  • Inputs: first-guess data (ALADIN), SYNOP data, satellite, radar, lightning
  • Processing: CANARI analysis, delta analysis, decoding, radar-to-grid and satellite-to-grid conversion
  • Basic fields: pressure, temperature, humidity, wind
  • Derived fields: type of clouds, visibility, overcast, rainfall state
  • Visualization of the current time step: GIF for users, HAWK for meteorologists

  18. P-GRADE version of MEANDER

  19. Implementation of the Delta method in P-GRADE

  20. Live demo of MEANDER based on TotalGrid
  • P-GRADE hands the job to PERL-GRID; the netCDF input is fetched from ftp.met.hu
  • Network links: 11/5 Mbit dedicated, 34 Mbit shared and 512 kbit shared
  • PERL-GRID passes the job to Condor-PVM for parallel execution; the netCDF output is returned and visualized with HAWK

  21. On-line performance visualization in TotalGrid
  • Same setup as the live demo: P-GRADE, PERL-GRID and Condor-PVM over the 11/5 Mbit dedicated, 34 Mbit shared and 512 kbit shared links, with netCDF input from ftp.met.hu
  • GRM traces are collected during the parallel execution
  • Task of SZTAKI in the DataGrid project

  22. PROVE visualization of the delta method

  23. Conclusions
  • Already several important results that can be used both by
  • the academic world (Cluster Grid, SuperGrid)
  • commercial companies (TotalGrid)
  • Further efforts and projects are needed to make these Grids
  • more robust
  • more user-friendly
  • richer in functionality

  24. Thanks for your attention! Further information: www.lpds.sztaki.hu
