

HPC-BP Interoperability Tutorial: Running applications on interoperable Grid infrastructures, focusing on OMII-UK supported software. OGF28, Munich. Contact: s.crouch@omii.ac.uk



  1. Running applications on interoperable Grid infrastructures, focusing on OMII-UK supported software or HPC-BP Interoperability Tutorial. OGF28, Munich. s.crouch@omii.ac.uk. Steve Crouch, David Wallom, Matteo Turilli, Morris Riedel, Shahbaz Memon, Balazs Konya, Gabor Roczei, Peter Stefan, Andrew Grimshaw, Mark Morgan, Kazushige Saga, Justin Bradley, Richard Boardman

  2. Objectives • To give participants practical experience of: • Using individual middleware clients to submit jobs to HPC-BP compliant services • Using the HPC-BP interop demo framework, used for previous HPC-BP demos, to submit jobs to HPC-BP compliant services • To give participants the opportunity (and a starting point) to learn about: • Basic techniques and approaches for interoperability – what do I need, and how can I do this? • Some of the limitations of standards support across middlewares – what can’t I do?

  3. Tutorial Approach • ‘Presentation-lite’ • Learn at your own pace via online web tutorial… • …or follow my lead • Pragmatic • Generous in terms of time • Tutorial remains available after OGF28 • Ask for help!

  4. Schedule • Session 1: Using individual clients to invoke HPC-BP services • Overview of the demo + demo, Introduction to GridSAM • Download, Install and Configure GridSAM • Submit a Trivial Compute-only JSDL Job to HPC-BP Compliant Services • Download, Build and Configure the BES++ Client • Running the BES++ Client Against HPC-BP Compliant Services • Session 2: Using HPC-BP demo framework to invoke multiple HPC-BP services simultaneously • Download, Install and Configure the Demo Framework • Running the Demo Against Multiple HPC-BP Compliant Services • The Demo in Detail: Adding Another Endpoint to the Demo

  5. The Interoperability Demonstrator

  6. Background • Motivation: • Researchers are often reaching the limits of locally available resources to conduct research • They are beginning to realise the potential of using much larger-scale resources • Compute resources are becoming more numerous and available across Europe • However, using different Grid middleware deployments is traditionally difficult • Middleware clients for different deployments not compatible • Require different security policies/configuration for each

  7. Background • Possible solutions: • Maintain infrastructure that enables use of different clients for each middleware – interoperation • Not scalable - user learning curve, operation and maintenance • Each middleware supports a common service interface, enabled through adoption of accepted open standards – interoperability • Need only learn, use and maintain single client infrastructure • Still leaves security! • What can be practically achieved, in terms of interoperability, with middlewares that adopt OGF compute-related standards? • What is possible? • Limitations? • Demonstrate through proof-of-concept, client-side, application-focused demo

  8. History • Initiated by UK National Grid Service, OMII-UK and FZJ • Initially shown at OGF27, Banff, Canada, Oct 09 • SuperComputing, Nov 09 • ETSI Plugtests, FZJ, UK AHM, Dec 09 • GIN-CG, OGF28, Mar 10 • Demonstrators: David Wallom, Peter Stefan, Morris Riedel/Shahbaz Memon, Steve Crouch • Video available at http://www.omii.ac.uk/wiki/Videos

  9. Architecture: Compute-Related Standards – OGF
  • Use cases: OGSA EMS Scenarios (GFD 106), Grid Scheduling Use Cases (GFD 64)
  • Education: ISV Primer (GFD 141)
  • Agreement: WS-Agreement (GFD 107)
  • Programming interfaces: SAGA (GFD 90), DRMAA (GFD 22/133)
  • Job description: JSDL (GFD 56/136)
  • Application description: HPC Application (GFD 111), SPMD Application (GFD 115)
  • Job management: OGSA-BES (GFD 108)
  • Job parameterization: Parameter Sweep (GFD 149)
  • Accounting: Usage Record (GFD 98)
  • Information: GLUE Schema 2.0 (GFD 147)
  • File transfer profile: HPC File Staging (GFD 135)
  • HPC domain-specific profile: HPC Basic Profile (GFD 114)

  10. Standards/Data Protocols/Security Supported • Standards: • HPC Basic Profile v1.0 • OGSA BES (Basic Execution Service) v1.0 • JSDL (Job Submission Description Language) v1.0 • HPC Profile Application Extension v1.0 • HPC File Staging Profile – UNICORE, GridSAM • Data protocols: • UNICORE, ARC, BES++ – ftp • GridSAM – GridFTP • Security: • Direct middleware -> certificate CA trust (just import CAs)

  11. Participation • Currently: • DEISA/FZJ – UNICORE, SuSE, AMD 64-bit, 1 core • NorduGrid/NIIF – ARC NOX Release, Debian Linux, i686, 16 core • UK NGS/OMII-UK – GridSAM, Scientific Linux 4.7, AMD 64-bit, 256 core • NAREGI-NII/Platform Computing – BES++, 2 nodes • Coming soon: • University of Virginia Campus Grid – GENESIS2, Ubuntu Linux, i686, 8 core • POZNAN Supercomputing Centre – SMOA Computing • Platform Computing BES++ Client used as interop client

  12. Example Application: Plasma Charge Minimization • Provided by David Wallom, NGS • Undergraduate project • Total system energy minimization of point charges around the surface of a sphere • Three different applications: • Pre-processing – generate input files • Main processing – parallel distributed processing • Post-processing – choose optimal solution
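The quantity being minimized is the total electrostatic energy of unit point charges on a sphere: the sum of 1/r over all pairs of charges. A minimal sketch of that objective function (illustrative code, not the NGS application's actual implementation; the helper name is invented):

```python
import math
from itertools import combinations

def coulomb_energy(points):
    """Total energy of unit point charges: sum of 1/r_ij over all pairs."""
    return sum(1.0 / math.dist(p, q) for p, q in combinations(points, 2))

# Four charges at the vertices of a regular tetrahedron inscribed in the
# unit sphere -- the known optimal configuration for N = 4 charges.
s = 1.0 / math.sqrt(3.0)
tetrahedron = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]
print(round(coulomb_energy(tetrahedron), 6))  # 3.674235
```

The demo's "main processing" stage evaluates candidate configurations like this in parallel across endpoints, and the post-processing stage picks the lowest-energy result.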

  13. System Requirements • System requirements (for building and running): • Linux - see the Linux client pre-requisites in OMII-UK Development Kit supported platforms • Sun Java JDK 1.6 or above • C compiler - gcc and related development libraries • Lexical analyser - flex – Fast Lexical Analyser • Parser generator - bison • Soon to appear on OGF Forge – hopefully by end of week

  14. JSDL Template
  <?xml version="1.0" ?>
  <JobDefinition xmlns="http://schemas.ggf.org/jsdl/2005/11/jsdl">
    <JobDescription>
      <Application>
        <HPCProfileApplication xmlns="http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa">
          <Executable>@MINEM_INSTALL_LOCATION@/update_file</Executable>
          <Argument>input.txt</Argument>
          <Argument>output.txt</Argument>
          <Output>stdout.txt</Output>
          <Error>stderr.txt</Error>
          @OPTIONAL_WORKING_DIR_ELEMENT@
        </HPCProfileApplication>
        @OPTIONAL_JOBRESOURCE_CREDENTIAL@
      </Application>
      <DataStaging>
        <FileName>input.txt</FileName>
        <CreationFlag>overwrite</CreationFlag>
        <Source>
          <URI>@INPUT_FILE_URI@</URI>
        </Source>
        @OPTIONAL_HPCFSP_CREDENTIAL@
      </DataStaging>
      <DataStaging>
        <FileName>output.txt</FileName>
        <CreationFlag>overwrite</CreationFlag>
        <Target>
          <URI>@OUTPUT_FILE_URI@</URI>
        </Target>
        @OPTIONAL_HPCFSP_CREDENTIAL@
      </DataStaging>
      <DataStaging>
        <FileName>stdout.txt</FileName>
        <CreationFlag>overwrite</CreationFlag>
        <Target>
          <URI>@STDOUT_FILE_URI@</URI>
        </Target>
        @OPTIONAL_HPCFSP_CREDENTIAL@
      </DataStaging>
      <DataStaging>
        <FileName>stderr.txt</FileName>
        <CreationFlag>overwrite</CreationFlag>
        <Target>
          <URI>@STDERR_FILE_URI@</URI>
        </Target>
        @OPTIONAL_HPCFSP_CREDENTIAL@
      </DataStaging>
    </JobDescription>
  </JobDefinition>
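The @NAME@ markers in the template are per-endpoint placeholders filled in before submission. A minimal substitution sketch (the mechanism shown here is an assumption for illustration, not the demo framework's actual code; the fill value is invented):

```python
import re

def fill_template(template, values):
    """Replace each @NAME@ placeholder with its configured value.

    Unknown placeholders are left untouched so a missing setting is
    visible in the generated JSDL rather than silently blanked.
    """
    return re.sub(r"@(\w+)@", lambda m: values.get(m.group(1), m.group(0)), template)

jsdl = fill_template(
    "<Executable>@MINEM_INSTALL_LOCATION@/update_file</Executable>",
    {"MINEM_INSTALL_LOCATION": "/tmp/minem"},
)
print(jsdl)  # <Executable>/tmp/minem/update_file</Executable>
```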

  15. Endpoint Configuration
  # UNICORE interop config file
  endpoint_file=unicore.xml
  application_type=HPCProfileApplication
  application_type_namespace=http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa
  working_dir=
  data_mode=ftp
  data_input_base=ftp://zam1161v01.zam.kfa-juelich.de:8004/ogf27/unicore
  data_output_base=ftp://zam1161v01.zam.kfa-juelich.de:8004/ogf27/unicore
  minem_install=/tmp/minem
  myproxy=no
  hpcfsp=yes
  hpcfsp_username=interopdata
  hpcfsp_password=89zukunft()
  auth_utoken=yes
  auth_x509=yes
  auth_x509_credential=auth/client.pem
  auth_x509_keypass=not_used
  auth_x509_cert_dir=auth/certificates
  auth_utoken_username=ogf
  auth_utoken_password=ogf
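The endpoint file is plain key=value lines, so each endpoint's settings can be loaded in a few lines. A sketch of such a parser (an assumption for illustration, not the demo framework's actual loader):

```python
def load_endpoint_config(lines):
    """Parse key=value endpoint config lines into a dict.

    Skips blank lines and '#' comments; keeps empty values
    (e.g. 'working_dir=') as empty strings.
    """
    config = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = [
    "# UNICORE interop config file",
    "endpoint_file=unicore.xml",
    "data_mode=ftp",
    "working_dir=",
]
cfg = load_endpoint_config(sample)
print(cfg["data_mode"])  # ftp
```

The same function works on an open file object: `load_endpoint_config(open("unicore.conf"))`.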

  16. How it Fits Together…
  • Components: minem-interop.pl driver script, BES++ Client, MyProxy, Minem application, and endpoints running UNICORE, GridSAM, ARC and BES++ (each offering job, data and security services)
  • 1. Create Minem input files
  • 2. Generate JSDLs from template
  • 3. Upload input files (FTP)
  • 4. Submit JSDLs across middlewares
  • 5. Monitor jobs until completion
  • 6. Download output files (GridFTP)
  • 7. Select best result
  • 8. Generate/upload image to web server (FTP)
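The submit/monitor/collect core of the flow (steps 4 to 7) can be sketched as a generic loop. The endpoint client interface below (`submit`/`state`/`result`) is a hypothetical stand-in for the real middleware clients, not an actual API of BES++, GridSAM, ARC or UNICORE:

```python
import time

def run_interop_demo(endpoints, jsdl_template, fill, poll_interval=0.0):
    """Submit one job per endpoint, poll until all finish, return results.

    `endpoints` maps endpoint name -> client; each hypothetical client
    offers submit(jsdl) -> job_id, state(job_id) -> str, and
    result(job_id) -> float (the computed system energy).
    """
    jobs = {}
    for name, client in endpoints.items():
        jsdl = fill(jsdl_template, name)              # step 2: per-endpoint JSDL
        jobs[name] = client.submit(jsdl)              # step 4: submit
    results = {}
    while jobs:                                       # step 5: monitor
        for name, job_id in list(jobs.items()):
            if endpoints[name].state(job_id) == "DONE":
                results[name] = endpoints[name].result(job_id)  # step 6: collect
                del jobs[name]
        if jobs:
            time.sleep(poll_interval)
    best = min(results, key=results.get)              # step 7: lowest energy wins
    return best, results
```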

  17. The Demo…

  18. Future Work • Standards integration: • Integrate GENESIS II and SMOA Computing • Replacement of BES++ Client with SAGA • SAGA BES adapter currently in development! • Schedule across BES/non-BES endpoints (e.g. Globus) • GLUE2 (e.g. using OMII-UK Grimoires software) • Service discovery (static) • Dynamic allocation (dynamic) • Integrate CREAM-BES? • Security: ‘Static’ trust set up of security, proper VO set up? • Middleware client ‘audit’ of interoperability? • Leads to ability to configure and use different middleware HPC-BP clients… • Use of HARC for advance reservation • Clean up the code, upload to OGF Forge within GIN-CG • Participation very much an open process – if you wish to donate an HPC-BP compliant endpoint, please let me know!

  19. Future Direction • Interface: • Workflow engine integration • To replace/provide an alternative to the Perl script • Taverna2 is a good candidate • Application abstraction • Use of endpoints: • Utilise production-level deployments • Utilise production-level security [Chart: abstraction level vs. verified/increasing interoperability, progressing from 'Now' to 'Future']

  20. Dissemination • Thanks to the OMII-UK publicity machine: • HPCWire: http://www.hpcwire.com/offthewire/European-Grid-Interoperability-Goes-Global-79343767.html • SuperComputing Online: http://www.supercomputingonline.com/latest/european-interoperability-goes-global • EGEE: http://www.eu-egee.com/index.php?id=193&tx_ttnews[tt_news]=125&tx_ttnews[backPid]=65&cHash=90bb3f97cc • http://www.d4science.eu/aggregator/sources/2?page=1 • http://www.it-tude.com/grid_interoperability_eu.html • http://www.beliefproject.org/zero-in/zero-in-fourth-issue-emagazine/news • + numerous OMII-UK website articles & UK NGS articles • Just type ‘European Interoperability Goes Global’ into Google…

  21. GridSAM OMII-UK London e-Science Centre, Imperial College, London Institute of Computing Technology, Chinese Academy of Sciences (Beijing)

  22. GridSAM Overview • What is GridSAM to the resource owners? • A web service to uniformly expose a computational resource • Condor (via local or SSH submission) • Portable Batch Scheduler (PBS) (via local or SSH submission) • Globus • Sun GridEngine • Platform Load Sharing Facility (LSF) • Single machine through Fork or SSH • Acts as a client to these resources • What is GridSAM to end-users? • A means to access computational resources in an open standards-based uniform way • A set of end-user command-line tools and client-side APIs to interact with GridSAM Web Services • Submit and monitor compute jobs • Cross-protocol file transfer (gsiftp, ftp, sftp, WebDav, http, https, soon SRB, iRODS) via Commons-VFS (http://sourceforge.net/projects/commonsvfsgrid)

  23. Supported OGF Standards • OGSA Basic Execution Service (BES) v1.0 • JSDL v1.0 • HPC Basic Profile v1.0 • HPC Profile Application Extension v1.0 • HPC File Staging Profile v1.0 • HPC Common Case Profile: Activity Credential v0.1 • JSDL SPMD Application Extension v1.0

  24. GridSAM – Publications & Enabled Activities • In 2009/2010: ICHEC Bioinformatics Portal, eSysBio, NAREGI/RENKEI

  25. For Resource Owners…
  • GridSAM Service runs in Tomcat/Axis (Tomcat 5.0.23, 5.0.28 or 5.5.23; Axis v1.2.1) on Linux + Java (JDK 1.5.0+)
  • Linux, many flavours: RHEL 3, 4, 5, Fedora 7, 8, Scientific Linux 4
  • Persistence provided by one of: Hypersonic, PostgreSQL, or existing MySQL
  • X509 certificate
  • DRM (computational resource manager), one of: PBS (Torque/OpenPBS/PBSPro), LSF, Condor, Sun GridEngine, Globus, Fork, …

  26. For End-Users…
  • GridSAM Client (Axis-based; Windows/Linux + Java JDK 1.5.0+; many flavours: RHEL 3, 4, 5, Fedora 7, 8, Debian, Ubuntu, Scientific Linux 4, Windows XP, Windows Vista), or any generic BES/HPC Basic Profile client
  • Service interface, any/all of: GridSAM native interface, OGSA-BES v1.0, HPC Basic Profile v1.0
  • Transport: HTTPS/HTTP • WS-Security: X509 or User/Password
  • Submission: JSDL + MyProxy credentials • MyProxy provides a Globus-style proxy certificate (for Globus/GridFTP)
  • X509 certificate

  27. Open Community Development • GridSAM is Open Source, Open Community Development • GridSAM SourceForge project: • 99.03% activity, 1 release/month • SVN source code repository • Developer & discuss mailing lists http://sourceforge.net/projects/gridsam/

  28. Example Pipeline: GridSAM with Condor • A staged event-driven architecture • Submission pipeline is constructed as a network of stages connected by event queues • Each stage performs a specific action upon incoming events
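The staged event-driven idea above (stages joined by event queues, each performing one action) can be sketched generically. The stage actions below are illustrative, not GridSAM's actual stage classes:

```python
import queue
import threading

def make_stage(action, inbox, outbox):
    """A stage: consume events from inbox, apply action, emit to outbox."""
    def worker():
        while True:
            event = inbox.get()
            if event is None:          # sentinel: shut down and pass it on
                outbox.put(None)
                return
            outbox.put(action(event))
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

# Illustrative three-stage submission pipeline: validate -> stage-in -> submit
q1, q2, q3, done = (queue.Queue() for _ in range(4))
make_stage(lambda job: {**job, "valid": True}, q1, q2)
make_stage(lambda job: {**job, "staged": True}, q2, q3)
make_stage(lambda job: {**job, "state": "submitted"}, q3, done)

q1.put({"id": "job-1"})
q1.put(None)
result = done.get()
print(result)  # {'id': 'job-1', 'valid': True, 'staged': True, 'state': 'submitted'}
```

Because each stage runs independently and queues buffer the events between them, slow stages (e.g. file staging) back up without stalling the rest of the pipeline, which is the point of the staged event-driven design.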

  29. Planned Future Developments • For end-users: • Full support for JSDL Resource selection across PBS, Globus, Condor & Fork DRMs • JSDL Parameter Sweep Extension • Support for SRB and iRODS • For resource owners: • LCAS/LCMAPS support • Packaging option as a standalone, manually configurable web archive (WAR) file • Direct PBS deployment throughout NGS sites

  30. The tutorial begins… all you need is to go to: http://www.omii.ac.uk/wiki/HPCBPTutorial
