
WP2 Data and Compute Cloud Platform

Presentation Transcript


  1. WP2 Data and Compute Cloud Platform Marian Bubak, Piotr Nowakowski, Tomasz Bartyński, Jan Meizner, Adam Belloum, Spiros Koulouzis, Enric Sàrries, Stefan Zasada, David Chang VPH-Share (No 269978)

  2. WP2 in the VPH-Share framework
  Mission: To develop a cloud platform which will enable easy access to compute and data resources by providing methods and services for:
  1. Resource allocation management dedicated to VPH application scenarios;
  2. An execution environment for flexible deployment of scientific software on the virtualized infrastructure;
  3. Virtualized access to high-performance computing (HPC) execution environments;
  4. Cloud data access for very large objects and data transfer between services;
  5. Data reliability and integrity to ensure sound use of biomedical data;
  6. A security framework.

  3. WP2: Objectives and tasks
  • A cloud platform enabling easy access to compute and data resources
  • Scientific coordination of the development of VPH-Share cloud computing solutions (Task 2.0);
  • Providing a means by which the Cloud resources available to the Project can be managed and allocated on demand (Task 2.1);
  • Developing and deploying a platform which will manage such resources and deploy computational tasks in support of VPH-Share applications (Tasks 2.2 and 2.3);
  • Ensuring reliable, managed access to large binary objects stored in various Cloud frameworks (Tasks 2.4 and 2.5);
  • Ensuring secure operation of the platform developed in Tasks 2.1-2.5 (Task 2.6);
  • Gathering user requirements, liaising with application teams and advising on migration to Cloud computational and storage resources; testing WP2 solutions in the scope of real-world application workflows (Task 2.7);
  • Collaborating with p-medicine for the purposes of sharing experience with Cloud computing technologies (Task 2.8).

  4. Cloud computing
  • What is Cloud computing?
  • „Unlimited” access to computing power and data storage;
  • Virtualization technology (enables running many isolated operating systems on one physical machine);
  • Lifecycle management (deploy/start/stop/restart);
  • Scalability;
  • Pay-per-use accounting model;
  • However, Cloud computing isn’t:
  • …a magic platform to automatically scale your application up from your PC;
  • …a secure place where sensitive data can be stored (this is why we require security and data anonymization…).

  5. WP2 offer for workflows
  • Scale your applications in the Cloud („unlimited” computing power/reliable storage);
  • Utilize resources in a cost-effective way;
  • Install/configure each Atomic Service once, then use it multiple times in different workflows;
  • Many instances of Atomic Services can be instantiated automatically;
  • Large-scale computation can be delegated from the PC to the cloud/HPC;
  • Smart deployment: computation can be executed close to data (or the other way round);
  • Multitudes of operating systems to choose from;
  • Install whatever you want (root access to Cloud Virtual Machines).

  6. WP2: Partner Roles

  7. 3 (new) words…
  • Virtual Machine: A self-contained operating system image, registered in the Cloud framework and capable of being managed by VPH-Share mechanisms.
  • Atomic service: A VPH-Share application (or a component thereof) installed on a Virtual Machine and registered with the WP2 cloud management tools for deployment.
  • Appliance: A running instance of an atomic service, hosted in the Cloud and capable of being directly interfaced, e.g. by the workflow management tools or VPH-Share GUIs.
  (Diagram: a raw OS image becomes an atomic service once a VPH-Share application or component with external APIs is installed on it; deployed on a Cloud host, it runs as an appliance.)
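The relationship between these three terms can be captured in a few lines of illustrative code; the class and field names below are our own invention for this transcript, not the actual WP2 registry schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualMachine:
    """A self-contained OS image registered in the Cloud framework."""
    image_id: str
    os_name: str

@dataclass
class AtomicService:
    """A VPH-Share application (or component) installed on a VM and
    registered with the WP2 cloud management tools for deployment."""
    name: str
    base_image: VirtualMachine
    interfaces: List[str] = field(default_factory=list)  # external APIs

@dataclass
class Appliance:
    """A running, directly addressable instance of an atomic service."""
    service: AtomicService
    instance_id: str
    ip_address: str

# One atomic service can back many appliances:
vm = VirtualMachine("img-ubuntu-10.04", "Ubuntu 10.04")
svc = AtomicService("blood-flow-solver", vm, ["http://host/solve"])
running = [Appliance(svc, f"i-{n}", f"10.0.0.{n}") for n in range(1, 4)]
```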

  8. WP2 vision at a glance (1/3)
  • Installing a VPH-Share application in the Cloud (developer action; see the sketch below):
  • Upon the application developers’ request, the Atmosphere component (developed in T2.1 and T2.2) spawns a fresh Virtual Machine, which resides in the Cloud and contains all the features typically expected of an „out of the box” operating system (virtualized storage, standard libraries, root account, initial configuration etc.). If needed, many such VMs can be spawned, each encapsulating a single VPH-Share atomic service.
  • It is the application developers’ task to install components of their applications on these templates so that they can be wrapped as atomic services.
  • WP2 tools can then further manage the atomic services and deploy their instances (also called appliances) on Cloud resources as requested by the Workflow Composer.
  (Diagram: 1. Browse available OS templates; 2a. Create VM with selected OS; 2b. Spawn VM; 2c. Return VM IP; 4. Install required software; 5. Save as atomic service. Actors: Atmosphere UI, OS template registry (T2.1), Cloud platform.)
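As a rough illustration of this developer workflow, the sketch below drives it through a hypothetical scripting client; the AtmosphereClient class, its endpoint URL and all method names are invented here for illustration and do not describe the real Atmosphere API.

```python
import requests  # third-party HTTP library

ATMOSPHERE_URL = "https://atmosphere.example.org/api"  # hypothetical endpoint

class AtmosphereClient:
    """Illustrative wrapper around a hypothetical Atmosphere REST API."""

    def list_os_templates(self):
        # Step 1: browse the available OS templates
        return requests.get(f"{ATMOSPHERE_URL}/templates").json()

    def spawn_vm(self, template_id):
        # Steps 2a-2c: create a fresh VM from the chosen template and
        # return its IP so the developer can log in and install software
        r = requests.post(f"{ATMOSPHERE_URL}/vms", json={"template": template_id})
        return r.json()["ip"]

    def save_as_atomic_service(self, vm_ip, name):
        # Step 5: register the configured VM as a reusable atomic service
        requests.post(f"{ATMOSPHERE_URL}/atomic-services",
                      json={"vm_ip": vm_ip, "name": name})

client = AtmosphereClient()
templates = client.list_os_templates()
ip = client.spawn_vm(templates[0]["id"])
# (Step 4: ssh to `ip` and install the application component by hand)
client.save_as_atomic_service(ip, "segmentation-component")
```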

  9. WP2 vision at a glance (2/3)
  • Preparing a VPH-Share application for execution (user action):
  • The user requests the Workflow Composer (via the WP6 Master UI) to execute an application.
  • The Workflow Composer informs Atmosphere which atomic services to deploy in the Cloud so that the workflow may be executed.
  • Atmosphere takes care of deploying the required assets and returns a list of service endpoints (typically IP addresses), whereupon workflow execution may commence.
  • Atmosphere can be designed to „talk to” many different computing stacks, and thus interface both commercial and private Clouds – we are currently eyeing Eucalyptus, OpenStack, OpenNebula and Nimbus.
  • Depending on the underlying Cloud computing stack, we expect to be able to define deployment heuristics, enabling optimization of resource usage.
  (Diagram: 1. Log in to MI; 2. Execute application; 3a. Get atomic services; 3b. Spawn appliances; 3c. Return list of appliances; 4. Run workflow. Actors: VPH Master Interface, Workflow Composer tool, Atmosphere with its atomic service and Cloud resource registries, appliances on the Cloud platform (T2.2).)

  10. WP2 vision at a glance (3/3)
  • Managing binary data in Cloud storage (developer action):
  • Atmosphere will contain a registry of binary data for use by VPH-Share applications (T2.5) and assume responsibility for maintaining such data in Cloud storage.
  • Appliances may produce data and register it with Atmosphere.
  • Atmosphere will provide a query API where authorized applications may locate and retrieve Cloud-based data.
  • If required, Atmosphere may also shift binary data between Cloud storage systems (not depicted).
  • As an optional tool, we can develop a data registry browsing UI for integration with the VPH-Share Master Interface (not depicted).
  • For access to the underlying Cloud storage resources, we intend to apply tools developed in Task 2.4.
  (Diagram: 1a. Generate data; 1b. Save data in Cloud storage; 2. Inform Atmosphere, passing a handle to the stored data; 3a. Query; 3b. Retrieve handle; 4. Get data. Actors: VPH-Share applications, the Atmosphere binary data registry (T2.5), Cloud storage.)
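A minimal sketch of how producer and consumer applications might talk to such a registry; the REST paths, function names and handles below are hypothetical, since the actual T2.5 query API is yet to be defined.

```python
import requests

REGISTRY_URL = "https://atmosphere.example.org/data-registry"  # hypothetical

def register_dataset(handle, storage_url, owner):
    """Step 2: tell Atmosphere where a newly produced dataset lives."""
    requests.post(f"{REGISTRY_URL}/datasets",
                  json={"handle": handle, "url": storage_url, "owner": owner})

def locate_datasets(query):
    """Steps 3a-3b: an authorized application queries the registry and
    gets back handles/locations of matching Cloud-stored data."""
    r = requests.get(f"{REGISTRY_URL}/datasets", params={"q": query})
    return [d["url"] for d in r.json()]

# Producer side (an appliance that has just written its output):
register_dataset("mri-scan-0042", "s3://vph-share/mri/0042.dcm", "clinic-a")
# Consumer side (another application locating the data):
for url in locate_datasets("mri-scan-0042"):
    print("retrieve from", url)  # Step 4: fetch via the T2.4 access tools
```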

  11. Issue: application interfaces
  • Problem description:
  • If the appliance is to be accessed by external users, its corresponding atomic service needs a remote interface through which its functionality can be invoked. According to the Technical Annex, we expect that all such interfaces assume the form of Web Services (cf. p. 41) exposed by the hosts on which the VPH-Share appliances will reside.
  • While Atmosphere can manage atomic services, it falls to application developers to encapsulate the functionality of their applications (or parts thereof) in the form of Web Services (a minimal sketch follows below). We believe that this is a crucial task, to be addressed by Task 2.7 as soon as feasible.
  (Diagram: 1. Execute application; 2. Run calculations; 3. Prepare visualization; 4. Display output. Actors: VPH-Share Master Interface, application components and a visualization component on Cloud hosts, user host.)
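To make the encapsulation task concrete, here is a minimal sketch wrapping a command-line application as a remotely invocable service using only the Python standard library. The Technical Annex calls for Web Services, so a production wrapper would likely use a SOAP toolkit instead of this bare HTTP handler, and the vph_solver executable is a placeholder.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import subprocess

class ApplicationService(BaseHTTPRequestHandler):
    """Exposes a (hypothetical) command-line solver over HTTP so that
    workflow tools can invoke the appliance remotely."""

    def do_POST(self):
        if self.path == "/run":
            length = int(self.headers["Content-Length"])
            params = self.rfile.read(length)
            # Invoke the wrapped application; 'vph_solver' is a placeholder
            result = subprocess.run(["vph_solver"], input=params,
                                    capture_output=True)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(result.stdout)

HTTPServer(("0.0.0.0", 8080), ApplicationService).serve_forever()
```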

  12. Issue: selection of software stacks for private Cloud installations
  • Problem description:
  • Early studies suggest that we should adopt OpenNebula for private VPH-Share cloud installations due to its simple yet effective design – however, we are eager to discuss the matter with WP2 partners.
  • For larger private cloud deployments, OpenStack (or, at least, its ObjectStore) seems a good choice.
  OpenStack
  • Advantages: advanced features, including a complex computing and (object) storage engine; relatively simple design given the rich feature set; support for various hypervisors (including KVM, Xen, MS Hyper-V); IPv6 support.
  • Drawbacks: advanced networking modes require a dedicated network switch with VLANs (much like Eucalyptus); highly complex architecture and convoluted deployment; questions regarding maturity.
  Eucalyptus
  • Advantages: excellent compatibility with EC2; advanced network features; comes with its own Cloud storage engine (Walrus).
  • Drawbacks: overly complex architecture given the features it offers; advanced networking modes require a dedicated network switch with VLANs; poor Walrus functionality and performance; heavyweight software and communication protocols.
  OpenNebula
  • Advantages: simple yet effective design; standard communication protocols (SSH); lightweight technology (Ruby + bash scripts); standard shared storage (NFS, SAN + cluster FS); contextualization support.
  • Drawbacks: poor network autoconfiguration features; no dedicated storage engine; minor but irksome bugs in configuration scripts.
  Nimbus
  • Advantages: advanced storage engines (Cumulus- and LANTorrent-based); support for legacy technology (PBS); manageable architecture; EC2/S3 compatible.
  • Drawbacks: requires installation of heavyweight components on the HEAD node; largely a conglomerate of various technologies, which may cause maintenance issues.

  13. T2.0 Scientific Management
  • Main goals:
  • Overseeing the scientific progress and synchronisation of tasks
  • Interim (six-monthly) and annual reports
  • Key issues:
  • Software development should be based on top-quality research
  • Collaboration and exchange of research results within the WP, within VPH-Share, and with related projects
  • Encouraging publications (FGCS, IEEE Computer, Internet Computing, …)
  • Participation in conferences (EuroPar, e-Science, CCGrid, ICCS, …)
  • Organization of workshops
  • PhD and MSc research related to the project topics
  • Promotion of best software engineering practices and research methodologies
  • Hosting an e-mail list, wiki and telcos (one per month); managing P2P contacts and WP meetings (semiannual)
  • Partners involved and contact persons:
  • CYFRONET, Marian Bubak, Maciej Malawski ({bubak,malawski}@agh.edu.pl) and task leaders

  14. T2.1 Cloud Resource Allocation Management
  • Main goal: Multicriteria optimization of computing resource usage (private and public clouds as well as the HPC infrastructure provided by T2.3);
  • Key issues:
  • Applications (atomic services) and workflow characteristics;
  • Component Registry (T6.5);
  • Workflow execution engine interfaces (T6.5);
  • Atomic Services Cloud Facade interface (T6.3);
  • Security (T2.6);
  • Partners involved and contact persons:
  • CYFRONET (Tomasz Bartyński; t.bartynski@cyfronet.pl);
  • UCL (David Chang; d.chang@ucl.ac.uk);
  • AOSAE (Enric Sàrries; enric.sarries@atosresearch.eu).

  15. T2.1 Deployment planning
  • Atmosphere will take into account application characteristics and infostructure status to find an optimal deployment and allocation plan which will specify:
  • where to deploy atomic services (a partner’s private cloud site, public cloud infrastructure or a hybrid installation),
  • whether the data should be transferred to the site where the atomic service is deployed or the other way around,
  • how many atomic service instances should be started,
  • whether it is possible to reuse predeployed AS (instances shared among workflows).
  • The deployment plan will be based on the analysis of:
  • workflow and atomic service resource demands,
  • volume and location of input and output data,
  • load of available resources,
  • cost of acquiring resources on private and public cloud sites,
  • cost of transferring data between private and public clouds (also between „availability zones” such as US and Europe),
  • cost of using cheaper instances (whenever possible and sufficient; e.g. EC2 Spot Instances, or S3 Reduced Redundancy Storage for some noncritical (temporary) data),
  • the public cloud provider’s billing model (Amazon charges for a full hour – thus five 10-minute tasks on separate instances would cost five times as much as running them back-to-back on a single instance; see the sketch below).
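The full-hour billing effect is easy to quantify. A toy cost model (the hourly rate is made up) reproducing the factor-of-five example above:

```python
import math

HOURLY_RATE = 0.10  # hypothetical price per instance-hour

def cost(task_minutes, tasks_per_instance):
    """Cost of one instance when the provider bills every started hour."""
    minutes = task_minutes * tasks_per_instance
    return math.ceil(minutes / 60) * HOURLY_RATE

# Five 10-minute tasks, one instance each: 5 instances x 1 billed hour
separate = 5 * cost(10, 1)   # 5 * $0.10 = $0.50
# The same five tasks packed onto a single instance (50 min of work)
packed = cost(10, 5)         # 1 billed hour = $0.10
print(separate / packed)     # -> 5.0, the factor quoted above
```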

  16. T2.1 in the scope of the VPH-Share project
  • Atmosphere will:
  • receive requests from the Workflow Execution stating that a set of atomic services is required to process/produce certain data;
  • query the Component Registry to determine the relevant AS and data characteristics;
  • collect infostructure metrics;
  • analyze the available data and prepare an optimal deployment plan.
  (Diagram: 1. Inform about required AS and data (Workflow Execution, T6.5); 2. Get metadata about required appliances and data (Component Registry, T6.5); 3. Collect computing, storage and networking statistics (Cloud computing resources, Cloud storage); 4. Analyze and prepare optimal deployment (Atmosphere, T2.1).)

  17. T2.2 Cloud Execution Environment
  • Main goal: Deployment of atomic services in the Cloud according to T2.1 specifications
  • Key issues:
  • Cloud usage (public providers, private setups contributed by partners; choice of Cloud computing platform);
  • Interfacing infostructure: public Cloud providers as well as private (partner-operated) Cloud platforms built using heterogeneous resources?
  • Data Access services (T2.4);
  • Moving atomic services across the infostructure;
  • Atomic Services Invoker interface (T6.3);
  • Security (T2.6);
  • Partners involved and contact persons:
  • CYFRONET (Tomasz Bartyński; t.bartynski@cyfronet.pl);
  • UCL (David Chang; d.chang@ucl.ac.uk);
  • AOSAE (Enric Sàrries; enric.sarries@atosresearch.eu).

  18. T2.2 Deployment according to the plan from T2.1
  • T2.2 will receive a deployment plan from T2.1.
  • It will implement the deployment plan by instantiating atomic services on private and/or public Clouds, and moving data using T2.4 tools.
  • It may be required to interface public and/or private clouds built upon different platforms, so the choice of Cloud API and client-side library is important;
  • Cyfronet is currently investigating Amazon EC2, the Open Cloud Computing Interface and OpenStack Compute (Nova), plus the S3 and Swift storage APIs (see the sketch below).
  (Diagram: Atmosphere (T2.2) deploys appliances and moves data across Cloud computing resources and Cloud storage via an API.)
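One way to keep the deployment code provider-neutral is a client library that abstracts over EC2-compatible and OpenStack APIs. A minimal sketch using Apache Libcloud, offered purely as an illustration (the slide does not commit to a library, and the credentials and image/size IDs below are placeholders):

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def deploy_appliance(provider, key, secret, image_id, size_id, name):
    """Instantiate one atomic service on whichever cloud the plan selects."""
    driver = get_driver(provider)(key, secret)
    image = [i for i in driver.list_images() if i.id == image_id][0]
    size = [s for s in driver.list_sizes() if s.id == size_id][0]
    return driver.create_node(name=name, image=image, size=size)

# The same call works against a public or a private cloud driver:
node = deploy_appliance(Provider.EC2, "KEY", "SECRET",
                        "ami-12345678", "m1.small", "vph-as-1")
```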

  19. T2.2 Monitoring and scaling the infostructure
  • Atmosphere will monitor the usage of atomic services.
  • Atomic Services will be scaled (schematically, as sketched below):
  • new instances will be started for overloaded services;
  • underutilized instances will be shut down.
  (Diagram: Atmosphere (T2.2) monitors and scales atomic services through the Cloud API.)
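A schematic of this scaling rule; the thresholds and the stub functions standing in for Atmosphere's monitoring and cloud calls are all hypothetical.

```python
import random

def get_load(instance):           # stub: would query monitoring data
    return random.random()

def spawn_instance(service):      # stub: would call the cloud API
    return f"{service}-new"

def shutdown_instance(instance):  # stub: would terminate the VM
    pass

SCALE_UP_AT, SCALE_DOWN_AT = 0.80, 0.20   # hypothetical load thresholds

def rebalance(service, instances):
    """Apply the T2.2 scaling rule to one atomic service."""
    avg = sum(get_load(i) for i in instances) / len(instances)
    if avg > SCALE_UP_AT:
        instances.append(spawn_instance(service))   # overloaded: grow
    elif avg < SCALE_DOWN_AT and len(instances) > 1:
        shutdown_instance(instances.pop())          # underused: shrink
    return instances
```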

  20. T2.3 High Performance Execution Environment
  • Main goals:
  • Provide virtualised access to high performance execution environments; seamlessly provide access to high performance computing to workflows that require more computational power than clouds can provide
  • Deploy and extend the Application Hosting Environment (AHE), which provides a set of web services to start and control applications on HPC resources
  • Key issues:
  • Cloud computing provides an infrastructure on which to run so-called capacity workloads
  • Some workflows require access to high performance computing resources
  • The Cloud computing paradigm has not found wide uptake in HPC; it introduces performance overhead
  • Need to preinstall and optimise applications on HPC resources
  • Should we seek to integrate HPC access tightly with cloud computing, or treat it separately?
  • Partners involved and contact persons:
  • UCL: David Chang (d.chang@ucl.ac.uk); Stefan Zasada (stefan.zasada@ucl.ac.uk)
  • Cyfronet: Tomasz Bartyński (t.bartynski@cyf-kr.edu.pl)

  21. T2.3 High Performance Execution Environment: virtualizing access to scientific applications
  Tasks:
  • Refactor the AHE client API to provide similar/same calls as Eucalyptus/cloud APIs, so that Grid/HPC resources are accessed in a similar way to clouds (see the sketch below)
  • Integrate AHE (via its API) with the Resource Allocation Management system developed in T2.1; AHE will publish load information from HPC resources
  • HPC typically uses pre-staged applications: UCL will build, optimise and host simulation codes in AHE
  • Extend AHE to stage/access data from the cloud data facilities developed in T2.4
  • Integrate AHE/ACD with the security framework developed in T2.6
  • Application Hosting Environment:
  • Based on the idea of applications as stateful WSRF web services
  • Lightweight hosting environment for running unmodified applications on grid and local resources
  • Community model: an expert user installs and configures an application and uses the AHE to share it with others
  • Launches applications on Unicore and Globus 4 grids by acting as an OGSA-BES and GT4 client
  • Uses advance reservation to schedule HPC into workflows
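The first task, reshaping the AHE client API along cloud-API lines, is essentially an adapter. A hypothetical sketch (all class and method names invented, not AHE's real WSRF interface) of how Atmosphere could then drive HPC and cloud resources through one interface:

```python
class ComputeBackend:
    """Common interface Atmosphere could target (hypothetical)."""
    def run(self, application, inputs): ...
    def status(self, job_id): ...

class CloudBackend(ComputeBackend):
    def run(self, application, inputs):
        # would spawn an appliance via the cloud API (T2.2)
        return "appliance-7"

class AHEBackend(ComputeBackend):
    """Adapter: presents AHE's hosted application services through
    the same calls as the cloud backend."""
    def run(self, application, inputs):
        # would invoke the pre-staged application's AHE web service
        return "ahe-job-42"

def submit(backend: ComputeBackend, app, inputs):
    return backend.run(app, inputs)   # caller need not know HPC vs cloud
```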

  22. T2.4 Data Access for Large Binary Objects
  • Main goals:
  • a) Federated cloud storage: uniform access to data storage resources to transparently integrate multiple autonomous storage resources; as a result, a file system will be provided as a service, optimizing data placement, storage utilization, speed, etc.
  • b) Transport protocols: efficient data transfers for replication, migration and sharing; to avoid centralisation bottlenecks, connection services will be deployed near the data
  • Key issues:
  • Integrating functionality of the security framework (T2.6) to provide user- and application-level access control as well as ensuring storage privacy (a, b)
  • Maintaining and synchronizing multiple replicas on different loosely coupled resources (a)
  • Dealing with errors in storage systems as well as providing a uniform metadata model (a)
  • Providing an abstraction over higher-level transport protocols for fast transfers, checkpoints and parallel streaming (b)
  • Partners involved and contact persons:
  • UvA (Spiros Koulouzis; S.Koulouzis@uva.nl; Adam Belloum; A.S.Z.Belloum@uva.nl)
  (Diagram: SOAP/REST LOB federated storage access – an operations layer (management, optimization) over an abstraction layer, with cloud clients, a service client, cloud storage back-ends and connection services 1..N.)

  23. T2.4 Data Access for Large Binary Objects: Federated Cloud Storage – Goals
  • Transparently integrate multiple autonomous storage resources
  • Provide virtually limitless capacity
  • Uniform access to data storage resources, or a file-system-like view
  • Optimize:
  • Data placement based on access frequency, latency, etc.
  • Storage utilization/cost; this will have to make sure that storage space is used in the most efficient way
  • Provide a file system as a service; this will provide an intuitive and easy way to access the federated storage space
  (Diagram: LOB federated storage access – an operations layer (management, optimization) over an abstraction layer; cloud clients reading and writing data on federated cloud storage.)

  24. T2.4 Data Access for Large Binary Objects: Federated Cloud Storage – Issues
  • Clearing data handling requests with the security framework (T2.6)
  • Ensuring privacy on the storage locations
  • Obtaining authorization, authentication, access control, etc.
  • Synchronizing multiple replicas on different loosely coupled resources
  • Determining the distance between replicas
  • Dealing with errors in storage systems beyond our control
  • Providing a uniform meta-data model
  • Defining utility functions for optimising multiple targets such as space usage, access latency, cost, etc.

  25. T2.4 Data Access for Large Binary Objects: Transfer protocols – Goals
  • Interconnect resources for efficient transport
  • Investigate the state of the art of protocols designed for large-scale data transfers, such as UDT & GridFTP
  • Provide a higher-level protocol that will be capable of checkpointing and exploiting parallel streams to boost performance (see the sketch below)
  • Deploy connection services next to or near data resources
  • Use direct streaming
  • Take advantage of existing transfer protocols, such as HTTP(S)
  • Take advantage of third-party transfer abilities offered by underlying protocols, e.g. GridFTP
  (Diagram: a service client controls (e.g. checkpoints) connection services deployed near the local FS and cloud storage, which move data over UDT, HTTP(S) or GridFTP transports.)
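To make the checkpointing goal concrete, a toy sketch of a resumable chunked copy between two local files; a real connection service would apply the same pattern over UDT, GridFTP or HTTP(S) rather than the local filesystem.

```python
import os

CHUNK = 1 << 20  # 1 MiB per chunk

def resumable_copy(src_path, dst_path):
    """Copy src to dst, using dst's current size as an implicit
    checkpoint: after a failure, re-running resumes where it stopped."""
    done = os.path.getsize(dst_path) if os.path.exists(dst_path) else 0
    with open(src_path, "rb") as src, open(dst_path, "ab") as dst:
        src.seek(done)                 # skip what was already transferred
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(chunk)
            dst.flush()                # checkpoint: chunk is durable
```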

  26. T2.4 Data Access for Large Binary Objects: Transfer protocols – Issues
  • In the case of migrations, transfers, replications, etc. between different storage systems, we have to consider:
  • Appliances in the case of IaaS
  • Web services in the case of PaaS
  • They will act as connection services, enabling third-party transfers, usage of state-of-the-art transport protocols such as UDT and GridFTP, as well as torrent-like transport models
  • These connection services will have to:
  • Encrypt data while in transit from one provider to another
  • Resume failed transfers
  • Enable checkpoints in transfers to increase fault tolerance
  • Use the framework provided by Atmosphere to maintain and deploy such services

  27. T2.4 Data Access for Large Binary Objects: Interactions with other WPs & Tasks
  (Diagram: the LOB federated storage access service (SOAP/REST; operations layer with management and optimization over an abstraction layer) interacts with: Data Access and Mediation (Task 3.3) – provide access to raw data; Workflow Execution (Task 6.5); Cloud Execution Environment (Task 2.2) – move data/tasks around resources; High Performance Execution Environment (Task 2.3) – deployment/transport of system images, moving data from/to resources; Data Reliability and Integrity (Task 2.5) – integrity/checksum operations on data; Security Framework (Task 2.6) – fine-grained ACL/authorization requests.)

  28. T2.5 Data Reliability and Integrity
  • Main goals: Provide a mechanism which will keep track of binary data stored in the Cloud infrastructure and monitor its availability; advise Atmosphere when instantiating atomic services and – when required – shift/replicate data between clouds (in collaboration with T2.4);
  • Key issues: Establishing an API which can be used by applications and end users; deciding upon the supported set of cloud stacks;
  • Partners involved and contact persons:
  • CYFRONET (Piotr Nowakowski; p.nowakowski@cyfronet.pl)
  (Diagram: the T2.5 binary data registry offers operations such as Register files, Get metadata, Migrate LOBs and Get usage stats, plus end-user features (browsing, querying, direct access to data) via the VPH Master Interface and a Data Storage GUI; it relies on the T2.4 LOB federated storage access layer to store and marshal data over sample protocol stacks – Object Storage (OpenStack), Cumulus (Nimbus), Amazon S3 and Walrus (Eucalyptus) – on distributed Cloud storage.)
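A sketch of the kind of tracking this implies; the registry layout, checksum policy and fetch hook are illustrative only, not the T2.5 design.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Illustrative registry entry: handle -> (expected checksum, replica URLs)
payload = b"example LOB contents"
registry = {"mri-scan-0042": (checksum(payload),
                              ["s3://bucket/0042", "swift://store/0042"])}

def monitor(fetch):
    """Periodically confirm every registered object is still retrievable
    and uncorrupted; `fetch` stands in for the T2.4 access layer."""
    for handle, (expected, replicas) in registry.items():
        healthy = [u for u in replicas if checksum(fetch(u)) == expected]
        if len(healthy) < len(replicas):
            # advise Atmosphere to re-replicate (T2.4 would move the data)
            print(f"{handle}: {len(replicas) - len(healthy)} bad replica(s)")

monitor(lambda url: payload)   # all replicas healthy in this toy run
```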

  29. T2.5 Data Reliability and Integrity
  • Enforcing data integrity by means of:
  • access control (requires integration with T2.6);
  • an access log (requires integration with T2.4).
  • Each operation performed by T2.4 tools can be logged in the Atmosphere registry for the purposes of establishing data provenance. Moreover, Atmosphere can enforce fine-grained data access security by means of policies defined in Task 2.6.
  (Diagram: operations on registered data (requested by users or workflow management tools) pass through the VPH Master Interface to Atmosphere (T2.5), which logs access to registered data in its access log; the LOB federated storage access layer stores and marshals the data on distributed Cloud storage.)

  30. T2.6 Security Framework
  • Main goals: A policy-driven access system for the security framework: an open-source-based access control system built on fine-grained authorization policies. Components: Policy Enforcement, Policy Decision, Policy Management, and a registry of condition and effect definitions and values.
  • Key issues:
  • Security by design
  • Privacy & confidentiality of eHealthcare data
  • Expressing eHealth requirements & constraints in security policies (compliance)
  • Deploying security software in clouds (potential threats and inconsistencies).
  • Partners involved and contact persons:
  • AOSAE (Enric Sàrries; enric.sarries@atosresearch.eu);
  • CYFRONET (Jan Meizner; jan.meizner@cyfronet.pl);
  • UCL (Ali Haidar; ali.haidar@ucl.ac.uk).
  (Diagram: VPH clients reach VPH services across the Internet, passing through the VPH Security Framework on both sides.)

  31. T2.6 Security Framework
  • The Security Components will be located in the frontends of the VPH deployment. There are 3 perimeters in VPH-Share:
  • User Interfaces
  • Applications (workflow execution)
  • Infostructure
  • However, the design of the VPH-Share architecture is still not clear. To prevent man-in-the-middle (impersonation) attacks, we must know where the boundaries lie – the perimeter to secure.
  • Security by design implies the following:
  • Components must not bypass security (services/data not exposed to threats); each message must go to / come from the security framework
  • Components must be trusted (well specified, standard, “known”) with respect to the security policies
  • Administrative access to the Security Framework to modify access rights to a component (system admins can configure security for their software)
  • Relying on known designs for the whole system.

  32. T2.6 Security Framework
  • The Security Components will provide the following features:
  • Secure messaging between the frontends of each deployment/platform
  • Authentication of VPH users
  • Resource access authorization (based on access control policies)
  • Specifically, for policy-based access control, we will need the following components (sketched schematically below):
  • Policy Enforcement Point: it “executes” the policies, composing the access control request; once it gets the authorization response from the Policy Decision Point, it enforces it by allowing/denying the requested access.
  • Policy Decision Point: it analyses the access control request, the policies and the environment, and issues an access control decision: either “Permit” or “Deny”.
  • Policy Administration Point: deploys or provisions policies to the PDP; enables reconfiguration of access rights to resources.
  • Policy Information Point: provides attribute-value pairs needed for the analysis of the access control requests and policies.
  (Diagram: a VPH user’s request passes through the Policy Enforcement Point, which consults the Policy Decision Point – fed by the Policy Administration Point, a Security Token Service and the Policy Information Point – before reaching the VPH services: storage, compute and RDB services.)
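The division of labour among these components follows the standard XACML-style flow, sketched schematically here; the toy policy and attributes are invented and do not reflect the actual VPH-Share policy language.

```python
# Schematic XACML-style flow; the policy and attributes are toy examples.
POLICIES = [  # the Policy Administration Point provisions these to the PDP
    {"resource": "mri-data", "role": "clinician", "effect": "Permit"},
]

def pip_lookup(user):
    """Policy Information Point: supply attribute-value pairs."""
    return {"role": "clinician" if user == "alice" else "guest"}

def pdp_decide(user, resource):
    """Policy Decision Point: evaluate the request against the policies."""
    attrs = pip_lookup(user)
    for p in POLICIES:
        if p["resource"] == resource and p["role"] == attrs["role"]:
            return p["effect"]
    return "Deny"

def pep_invoke(user, resource, action):
    """Policy Enforcement Point: compose the request, enforce the answer."""
    if pdp_decide(user, resource) == "Permit":
        return action()
    raise PermissionError(f"{user} may not access {resource}")

print(pep_invoke("alice", "mri-data", lambda: "scan bytes"))  # permitted
```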

  33. T2.6 Security Framework
  • Deploying secure appliances in the cloud presents a challenge:
  • Many different OS templates
  • Many different VPH-Share appliances
  • The best way to address this is, in our opinion:
  • Security Components deployed in the OS templates of Atmosphere;
  • When deploying a VPH service, the Security Component is configured to “proxy” it.
  (Diagram: a developer asks Atmosphere to “create a virtual machine and install a VPH-Share service in it”; the resulting Cloud host runs the VPH-Share app (or component) behind a Secure Proxy exposing secured external APIs.)

  34. T2.7 Requirements Analysis and Integration of VPH Workflow Services with the Cloud Infrastructure
  • Main goals: Ensure integration with VPH-Share workflows and deployment of VPH-Share atomic services on Cloud and HPC resources provided by the partners
  • Key issues: Establishing workflow specification details and atomic service requirements
  • Partners involved and contact persons:
  • USFD (workflow coordination – cp. Richard Lycett; Rich.Lycett@gmail.com);
  • KCL (Cardiovascular Modeling; T5.5 – cp. TBD);
  • UPF (Neurovascular Modeling; T5.6 – cp. TBD);
  • IOR (Orthopaedic Modeling; T5.4 – cp. TBD);
  • UvA (HIV Epidemiology; T5.5 – cp. TBD);
  • CYFRONET (Marek Kasztelnik; m.kasztelnik@cyfronet.pl)
  • A preliminary questionnaire was distributed at the kickoff meeting in Sheffield; results are due by mid-May 2011.

  35. T2.8 Joint Strategy for Cloud Computing between p-medicine and VPH-Share
  • Main goals:
  • Exchange of information on Cloud computing and storage environments, with a focus on how they may support distributed medical information systems;
  • Exchange of technical knowledge pertaining to the exploitation of specific Cloud technologies;
  • Joint assessment of the applicability of Cloud platforms to storing, processing and exposing data in the context of medical applications;
  • Exchange of prototype software and detailed technical documentation thereof, with the possibility of cross-exploitation of synergies between both projects;
  • Semiannual collaboration workshops of representatives of both Projects to support the above.
  • 1st VPH-Share/p-medicine meeting: around June 15, at UvA, to discuss (among others) D2.1 and D6.1 as well as plans for platform design.
  • Partners involved and contact persons:
  • CYFRONET (cp. Marian Bubak; m.bubak@agh.edu.pl);
  • USFD (cp. TBD);
  • UvA (cp. Adam Belloum; A.S.Z.Belloum@uva.nl);
  • UCL (cp. TBD)

  36. WP2: Services

  37. Key WP2 interactions
  (Diagram: numbered interactions among the WP2 services and their neighbours – VPH Master Interface (6.4), Workflow Execution (6.5), Atomic Service Cloud Facade (6.3), Metadata Mgmt Services (4.2), Computing Service Broker (2.1), Cloud Execution Environment (2.2), HPC Execution Environment (2.3), Binary Data Access (2.4), Data Integrity Services (2.5) and WP2 Security (2.6), running over public Clouds, private Clouds and HPC infrastructure (e.g. DEISA). The interactions comprise: workflow execution requests; Atomic Service preparation requests; the AS creation and management UI; invocations of Atmosphere back-end services; a data management UI (possibly integrated with Task 6.3); sharing Atomic Service metadata (common/distributed registry); sharing LOB metadata (common/distributed registry); preparation of HPC resources; binary data processing requests; instantiation and management of Appliances based on AS templates (AS template repository not depicted); execution of computational jobs; low-level access to Cloud resources; and enactment of workflows using preinstantiated Cloud and HPC resources (Appliances).)

  38. WP2: Interactions

  39. WP2: Workflow Interactions (tbd)

  40. WP2: Measurable Objectives See DoW VPH-Share (269978) 2010-11-08 (pp. 15-17)

  41. WP2: Mapping to Global Objectives

  42. Upcoming WP2 deliverable – D2.1
  • D2.1: Analysis of the State of the Art and WP Definition (M3 – end of May 2011)
  • Requires contributions from technical developers w.r.t. solutions which will be considered for use in their tools.
  • Proposed TOC and responsible partners:
  • 1. Introduction (incl. objectives and approach) – CYFRONET
  • 2. High-level overview of WP2 (incl. a generalized view of the WP2 architecture) – CYFRONET
  • 3. Key challenges in developing a Cloud platform for VPH-Share – CYFRONET
  • 4. Targeted SOTA for:
  • Cloud Resource Allocation Management – CYFRONET
  • Cloud Application Execution – CYFRONET
  • Access to High-Performance Execution Environments – UCL
  • Access to large binary data on the Cloud – UvA
  • Data reliability and integrity – CYFRONET
  • Cloud Security frameworks – AOSAE
  • 5. Conclusions (incl. summary and references) – CYFRONET
  • As part of Section 4, we ask each contributing partner to conform to the following schema:
  • Problem statement (Why is this aspect important for VPH-Share?)
  • SOTA description (along with an in-depth discussion of the advantages and drawbacks of available technologies)
  • Recommendations for VPH-Share (Which technologies to adopt? Is it necessary to extend them? If so – why and how?)
  • Deadline for contributions is May 6 (submit to p.nowakowski@cyfronet.pl).
