Middleware Status



  1. Middleware Status Markus Schulz, Oliver Keeble, Flavia Donno Michael Grønager (ARC) Alain Roy (OSG) ~~~ LCG-LHCC Mini Review 1st July 2008 Modified for the GDB

  2. Overview
  • Discussion on the status of gLite: status summary, CCRC and issues raised, future prospects
  • OSG: Alain Roy
  • ARC: Oxana Smirnova
  • Storage and SRM: Flavia Donno

  3. gLite: Current Status
  • Current middleware stack is gLite 3.1
  • Available on SL4 32bit; clients and selected services also available on 64bit
  • Represents ~15 services; only awaiting FTS to be available on SL4 (in certification)
  • Updates are released every week
  • Updates are sets of patches to components; components can evolve independently
  • Release process includes a full certification phase
  • Includes DPM, dCache and FTS for storage management; LFC and AMGA for catalogues; WMS/LB and lcg-CE for workload; clients for WN and UI; BDII for the Information System; various other services (e.g. VOMS)

  4. gLite patch statistics New service types are introduced as patches

  5. CCRC08 May Summary
  • During CCRC we: introduced new services, handled security issues, produced the regular stream of updates, responded to CCRC-specific issues
  • The software process operated as usual: no special treatment for CCRC; priorities were reviewed twice a week in the EMT
  • Normal operation:
    - 4 updates to gLite 3.1 on 32bit (18 patches)
    - 2 updates to gLite 3.1 on 64bit (7 patches)
    - 1 update to gLite 3.0 on SL3 (1 patch)

  6. CCRC08 May Highlights
  • CCRC demonstrated that the software process was functioning; it did not reveal any fundamental problems with the stack
  • Various minor issues were promptly addressed
  • Scalability is always a concern, but it didn't affect CCRC08
  • The WMS/LB for gLite 3.1/SL4 was released
  • An implementation of Job Priorities was released (not picked up by many sites)
  • First gLite release of dCache 1.8 was made
  • WN for x86_64 was released
  • An improved LCG-CE had been released the week before: significant performance/scalability improvements; covers our resource access needs until the CREAM-CE reaches production

  7. Middleware Baseline at start of CCRC
  Component         Patch #       Status
  LCG CE            Patch #1752   Released gLite 3.1 Update 20
  FTS (T0)          Patch #1740   Released gLite 3.0 Update 42
  FTS (T1)          Patch #1671   Released gLite 3.0 Update 41
  gFAL/lcg_utils    Patch #1738   Released gLite 3.1 Update 20
  DPM 1.6.7-4       Patch #1706   Released gLite 3.1 Update 18
  • All other packages were assumed to be at the latest version
  • This led to some confusion; an explicit list will have to be maintained from now on

  8. Outstanding middleware issues
  • Scalability of lcg-CE: some improvements were made, but there are architectural limits; additional improvements are being tested at the moment; acceptable performance for the next year
  • Multiplatform support: work ongoing for full support of SL5 and Debian 4; SuSE 9 available with limited support
  • Information system scalability: incremental improvements continue; with recent client improvements the system performs well; Glue 2 will allow major steps forward (an illustrative BDII query sketch follows this slide)
  • Main storage issues: SRM 2.2 issues, consistency between SRM implementations, authorisation; covered in more detail in the storage part
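The information system referred to above is an LDAP-based BDII publishing the Glue schema. As a rough illustration of the kind of client query involved, the Python sketch below (using the python-ldap module) lists computing elements and their queue state. The endpoint name is a placeholder and the filter/attribute choices assume the Glue 1.3 schema, so treat it as a sketch rather than a recipe.

    # Sketch: query a Glue 1.3 BDII over LDAP for computing-element queue state.
    # Assumptions: python-ldap is installed; "bdii.example.org" is a placeholder
    # for a real top-level BDII; attribute names follow the Glue 1.3 schema.
    import ldap

    BDII_URI = "ldap://bdii.example.org:2170"   # hypothetical endpoint
    BASE_DN = "o=grid"                          # root of the Glue information tree

    def list_ce_queues(vo="atlas"):
        conn = ldap.initialize(BDII_URI)
        conn.simple_bind_s()                    # BDII queries are anonymous
        # Select CEs that advertise access for the given VO.
        ce_filter = ("(&(objectClass=GlueCE)"
                     "(GlueCEAccessControlBaseRule=VO:%s))" % vo)
        attrs = ["GlueCEUniqueID", "GlueCEStateWaitingJobs", "GlueCEStateFreeCPUs"]
        for dn, entry in conn.search_s(BASE_DN, ldap.SCOPE_SUBTREE, ce_filter, attrs):
            print(entry.get("GlueCEUniqueID"),
                  "waiting:", entry.get("GlueCEStateWaitingJobs"),
                  "free CPUs:", entry.get("GlueCEStateFreeCPUs"))
        conn.unbind_s()

    if __name__ == "__main__":
        list_ce_queues()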

  9. Forthcoming attractions
  • CREAM CE: in the last steps of certification (weeks); addresses scalability problems; standards based --> eases interoperability; allows parameter passing; no GRAM interface; redundant deployment may be possible within a year
  • WMS/ICE: scalability tests are starting now; ICE is required on the WMS to submit to CREAM
  • glexec: entering PPS after some security fixes; to be deployed on WNs to allow secure multi-user pilot jobs (see the wrapper sketch after this slide); requires SCAS for use on large sites
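For the glexec item above, the sketch below shows how a pilot-job wrapper might hand a user payload to glexec so that it runs under the payload owner's mapped account. The glexec path and the GLEXEC_CLIENT_CERT variable reflect typical deployments as commonly documented and are assumptions to check against the local configuration; the payload command and proxy path are invented for illustration.

    # Sketch of a pilot wrapper invoking glexec to run a payload under the
    # identity mapped from the payload owner's proxy. Paths and the environment
    # variable name are assumptions, not taken from the slides.
    import os
    import subprocess

    GLEXEC = "/usr/sbin/glexec"      # typical (assumed) install location on a WN

    def run_payload_as_user(payload_cmd, user_proxy_path):
        """Run payload_cmd under the account glexec maps user_proxy_path to."""
        env = os.environ.copy()
        # Proxy of the real payload owner; glexec uses it for the identity switch.
        env["GLEXEC_CLIENT_CERT"] = user_proxy_path
        return subprocess.call([GLEXEC] + payload_cmd, env=env)

    if __name__ == "__main__":
        rc = run_payload_as_user(["/bin/sh", "-c", "id; ./run_user_job.sh"],
                                 "/tmp/x509up_payload")   # hypothetical paths
        print("payload exit code:", rc)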

  10. Forthcoming attractions
  • SCAS: under development, stress tests started at NIKHEF; can be expected to reach production in a few months; with glexec it ensures site-wide user ID synchronization; this is a critical new service and requires deep certification
  • FTS: new version in certification, with SL4 support, handling of space tokens, and error classification
  • FTS within 6 months: split of SRM negotiations and gridFTP for improved throughput; improved logging (syslog target, streamlined format for correlation with other logs); full VOMS support; short-term changes for the "Addendum to the SRM v2.2 WLCG Usage Agreement"; Python client libraries

  11. Forthcoming attractions
  • Monitoring: Nagios / ActiveMQ; has an impact on middleware and operations, but is not middleware
  • Distribution of middleware clients: cooperation with the Application Area; currently via tar balls within the AA repository; active distribution of concurrent versions; technology for this is (almost) ready; large scale tests have to be run and policies defined
  • SL5/VDT 1.10 release: execution of the plan has started; WN (clients) expected to be ready by end of September
  • Glue2: standardization within OGF is in the final stages; not backward compatible, but much improved; work on implementation and phase-in plan started

  12. OSG
  Slides provided by Alain Roy
  • OSG Software
  • OSG Facility
  • CCRC and last 6 months
  • Interoperability and Interoperation
  • Issues and next steps

  13. OSG for the LHCC Mini Review Created by Alain Roy Edited and reviewed by others (including Markus Schulz)

  14. OSG Software Concept
  • Goal: provide software needed by VOs within the OSG, and by other users of the VDT, including EGEE and LCG
  • Work is mainly software integration and packaging: we do not do software development
  • Work closely with external software providers
  • OSG Software is a component of the OSG Facility, and works closely with other facility groups

  15. OSG Facility Concept • Engagement • Reach out to new users • Integration • Quality assurance of OSG releases • Operations • Day-to-day running of OSG Facility • Security • Operational security • Software • Software integration and packaging • Troubleshooting • Debug the hard end-to-end problems

  16. During CCRC & Last Six Months Status
  • Responded to ATLAS/CMS/LCG/EGEE update requests and fixes:
    - MyProxy fixed for LCG (threading problem); in certification now
    - Globus proxy chain length problem fixed for LCG; in certification now
    - Fixed security problem (rpath) reported by LCG
  • OSG 1.0 released: a major new release
  • Added lcg-utils for OSG software installations
  • Now use system version of OpenSSL instead of Globus's OpenSSL (more secure)
  • Introduced and supported SRM V2 storage software based on dCache, Bestman, and xrootd

  17. OSG / LCG interoperation Status
  • Big improvements in reporting: new RSV software collects information about the functioning of sites and reports it to GridView. Much improved software, better information reported.
  • Generic Information Provider (OSG-specific version): greatly improved to provide more accurate information about a site to the BDII.

  18. Ongoing Issues & Work
  • US OSG sites need more LCG client tools for data management (LFC tools, etc.)
  • Working to improve interoperability via testing between OSG's ITB and EGEE's PPS
  • gLite plans to move to a new version of the VDT: we'll help with the transition
  • We have regular communication via Alain Roy's attendance at the EMT meetings

  19. ARC Slides provided by Michael Grønager

  20. ARC middleware status for the WLCG GDB
  Michael Grønager, Project Director, NDGF
  July Grid Deployment Board, CERN, Geneva, July 9th 2008

  21. Talk Outline
  • Nordic Acronym Soup Cheat Sheet
  • ARC Introduction
  • Other Infrastructure components
  • ARC and WLCG
  • ARC Status
  • ARC Challenges
  • ARC Future

  22.–28. Nordic Acronym Soup Cheat Sheet
  • Definition of a Grid: a loosely coupled collection of resources that are made available to a loosely defined set of users from several organizational entities. There is no single-resource uptime guarantee, but as a collective whole the Grid middleware ensures that the user experiences a smooth and stable virtual resource. There is no support on a Grid: no security contact, no uptime guarantee.
  • Definition of an Infrastructure: a managed set of resources made available to a managed set of users. An infrastructure operator might very well use Grid middleware for this purpose. There is support on an Infrastructure: a security contact and guarantees for uptime, stability, etc.
  • NorduGrid [aka the "Nordu" Grid]: the Grid defined by the resources registered in the four Nordic GIISes operated by the NorduGrid Consortium partners, i.e. a real "Grid".
  • NorduGrid [aka the NorduGrid Consortium]: the group that aims to ensure continuous funding for developing ARC and to promote worldwide that ARC is the best grid middleware to use. Also the formal owner of the ARC middleware.
  • ARC [Advanced Resource Connector]: a grid middleware developed by several groups and individuals.
  • KnowARC [Grid-enabled Know-how Sharing Technology Based on ARC Services]: an EU project with the aim of adding standards-compliant services to ARC.
  • NDGF [Nordic DataGrid Facility]: a Nordic infrastructure project operating a Nordic production compute and storage grid.

  29. NDGF with ARC and dCache

  30. NDGF with ARC and dCache
  Note: the coupling between NDGF and ARC is that NDGF uses ARC (as it uses dCache) for running a stable compute and storage infrastructure for the Nordic WLCG Tier-1 and other Nordic e-Science projects. In the best open source tradition, NDGF contributes to ARC (and dCache) as payback to the community and a way of getting what we need.

  31. ARC Introduction
  • General purpose Open Source European Grid middleware
  • One of the major production grid middlewares
  • Features:
    - Supports all flavors of Linux and, in general, all Unix flavors for the resources
    - Supports PBS, SGE, LL, SLURM, Condor, LSF, and more batch systems
    - Supports data handling by the CE; no need for grid software on WNs
    - Lightweight (client is 4 MB!)
  • Used by several NGIs and RGIs: SweGrid, Swiss Grids, Finnish M-Grid, NDGF, Ukraine Grid, etc.

  32. Other Infrastructure Components
  ARC alone does not build an infrastructure...
  • ARC-CE SAM sensors running in production since 2007Q3
  • SGAS accounting with export to APEL
  • arc2bdii info-provider to site-BDII
  • Support for ARC-CE submission in the gLite-WMS "June Release"
  • Further, a strong collaboration with dCache for the storage part of the infrastructure

  33. ARC and WLCG
  ARC supports all the WLCG experiments:
  • Fully integrated with ATLAS (supported by NDGF): now using PanDA in a "fat"-pilot mode; the NorduGrid cloud has the highest job success rate
  • Adapted to ALICE (supported by NDGF)
  • Used by CMS for the CCRC (supported by NDGF): this is through gLite-WMS interoperability; also a direct ProdAgent plugin for ARC, developed for the Finnish CMS Tier-2
  • Tested to work also with LHCb

  34. ARC Status
  Current stable release 0.6.3, with support for:
  • LFC, the LCG File Catalogue (deprecating RLS); this is now deployed at NDGF
  • SRM 2.2: interacting with SRM 2.2 storage; the handy user client "ngstage" enables easy interaction with SRM-based SEs
  • Priorities and shares: support for user/VO-role based priorities; support for shares using LRMS accounts

  35. ARC Next Release
  Next stable release 0.6.4 (due in September) will support:
  • The BDII as a replacement for the Globus MDS, rendering the ARC schema
  • Optional rendering of the GLUE 1.3 schema
  • Optional scalability improvement packages

  36. ARC Challenges
  ARC has scalability issues on huge 5k+ core clusters:
  • Data upload and download are limited by the CE
  • Session and cache access is limited on NFS-based clusters; this issue is absent on clusters running e.g. GPFS
  These issues will be solved in the 0.8 release (end 2008):
  • Move towards a flexible "dCache-like" setup where more nodes can be added based on need, e.g. adding more up- and downloaders, or more NFS servers

  37. ARC Future
  • The stable branch of ARC (sometimes called ARC classic) will continue to evolve; more and more features will be added from e.g. the KnowARC project
  • The KnowARC development branch (sometimes called ARC1) adds a lot of services that adhere to standards like GLUE2, BES and JSDL; these will be incorporated into the stable ARC branch
  • Work on dynamic Runtime Environments for e.g. bioinformatics is already being incorporated
  • There will be no "migrations", but a gradual incorporation of the novel components into the stable branch

  38. SRM
  Slides provided by Flavia Donno
  • Goal of the SRM v2.2 Addendum
  • Short term solution: CASTOR, dCache, StoRM, DPM

  39. The Addendum to the WLCG SRM v2.2 Usage Agreement Flavia Donno CERN/IT LCG-LHCC Mini review CERN 1 July 2008

  40. The requirements
  • The goal of the SRM v2.2 Addendum is to provide answers to the following (requirements and priorities given by the experiments)
  • Main requirements:
    - Space protection (VOMS-awareness)
    - Space selection
    - Supported space types (T1D0, T1D1, T0D1); see the mapping sketch after this list
    - Pre-staging, pinning
    - Space usage statistics
    - Tape optimization (reducing the number of mount operations, etc.)
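The space types listed above are the WLCG storage-class labels; each TxDy code corresponds to a retention policy / access latency pair in SRM v2.2 terms. The Python snippet below simply records that commonly used mapping for reference; it is illustrative and not part of any client library.

    # Illustrative mapping of the WLCG storage-class labels on this slide to
    # SRM v2.2 retention policy / access latency, as commonly documented.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class StorageClass:
        retention_policy: str   # CUSTODIAL = tape copy kept, REPLICA = disk only
        access_latency: str     # NEARLINE = may need tape recall, ONLINE = on disk

    WLCG_STORAGE_CLASSES = {
        "T1D0": StorageClass("CUSTODIAL", "NEARLINE"),  # tape copy, disk as cache
        "T1D1": StorageClass("CUSTODIAL", "ONLINE"),    # tape copy plus pinned disk copy
        "T0D1": StorageClass("REPLICA", "ONLINE"),      # disk-only space
    }

    if __name__ == "__main__":
        for label, sc in WLCG_STORAGE_CLASSES.items():
            print(label, "->", sc.retention_policy, "/", sc.access_latency)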

  41. The document
  • The most recent version is v1.4, available on the CCRC/SSWG twiki: https://twiki.cern.ch/twiki/pub/LCG/WLCGCommonComputingReadinessChallenges/WLCG_SRMv22_Memo-14.pdf
  • 2 main parts:
    - An implementation-specific, limited-capability short-term solution that can be made available by the end of 2008
    - A detailed description of an implementation-independent full solution
  • The document has been agreed by storage developers, client developers, and the experiments (ATLAS, CMS, LHCb)

  42. The recommendations and priorities
  • 24 June 2008: WLCG Management Board approval with the following recommendations
  • Top priority is service functionality, stability, reliability and performance
  • The implementation plan for the "short term solution" can start
    - It introduces the minimal new features needed to guarantee protection from resource abuse and support for data reprocessing
    - It offers limited functionality and is implementation specific, with clients hiding differences
  • The long term solution is a solid starting point
    - It is what is technically missing/needed
    - Details and functionalities can be re-discussed with acquired experience

  43. The short-term solution: CASTOR
  • Space protection: based on UID/GID; administrative interface ready by 3rd quarter of 2008
  • Space selection: already available; srmPurgeFromSpace available in 3rd quarter of 2008
  • Space types: T1D1 provided; the pinLifeTime parameter is negotiated to be always equal to a system-defined default
  • Space statistics and tape optimization: already addressed
  • Implementation plan: available by the end of 2008

  44. The short-term solution: dCache
  • Space protection: protected creation and usage of "write" space tokens; controlling access to the tape system by DNs or FQANs
  • Space selection: based on the IP number of the client, the requested transfer protocol or the path of the file (a toy selection-rule sketch follows this slide); use of SRM special structures for more refined selection of read buffers
  • Space types: T1D0 + pinning provided; releasing pins will be possible for a specific DN or FQAN
  • Space statistics and tape optimization: already addressed
  • Implementation plan: available by the end of 2008
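To make the space-selection bullet above concrete, here is a toy Python rule evaluator in the spirit of choosing a read buffer by client IP, transfer protocol or file path. It is not dCache code; the networks, paths and pool-group names are invented for the example.

    # Toy illustration of selection by client IP, protocol and path, in the
    # spirit of the dCache behaviour described above. NOT dCache code; all
    # networks, paths and pool-group names are invented.
    import ipaddress

    RULES = [
        # (predicate, pool group) - first match wins
        (lambda req: req["protocol"] == "gsiftp"
                     and ipaddress.ip_address(req["client_ip"])
                         in ipaddress.ip_network("192.168.0.0/16"),
         "lan-read-pools"),
        (lambda req: req["path"].startswith("/pnfs/example.org/data/atlas/"),
         "atlas-read-pools"),
    ]
    DEFAULT_POOL_GROUP = "default-read-pools"

    def select_read_pool_group(request):
        """Pick a read buffer (pool group) for an incoming transfer request."""
        for predicate, pool_group in RULES:
            if predicate(request):
                return pool_group
        return DEFAULT_POOL_GROUP

    if __name__ == "__main__":
        req = {"client_ip": "192.168.12.34", "protocol": "gsiftp",
               "path": "/pnfs/example.org/data/atlas/file.root"}
        print(select_read_pool_group(req))    # -> lan-read-pools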

  45. The short-term solution: DPM
  • Space protection: support a list of VOMS FQANs for the space write permission check, rather than just the current single FQAN
  • Space selection: not available, not necessary at the moment
  • Space types: only T0D1
  • Space statistics and tape optimization: space statistics already available; tape optimization not needed
  • Implementation plan: available by the end of 2008

  46. The short-term solution: StoRM
  • Space protection: spaces in StoRM will be protected via DN- or FQAN-based ACLs; StoRM is already VOMS-aware
  • Space selection: not available
  • Space types: T0D1 and T1D1 (no tape transitions allowed in WLCG)
  • Space statistics and tape optimization: space statistics already available; tape optimization will be addressed
  • Implementation plan: available by November 2008

  47. The short-term solution: client tools (FTS, lcg-utils/gfal)
  • Space selection: the client tools will pass both the SPACE token and SRM special structures for refined selection of read/write pools (see the request sketch after this slide)
  • Pinning: client tools will internally extend the pin lifetime of newly created copies
  • Space types: the same SPACE token might be implemented as T1D1 for CASTOR, or as T1D0 + pinning in dCache; the clients will perform the needed operations to release copies in both types of spaces transparently
  • Implementation plan: available by the end of 2008
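As a sketch of what "passing the SPACE token and SRM special structures" amounts to, the snippet below models the relevant inputs of srmPrepareToPut and srmExtendFileLifeTime with field names following the SRM v2.2 interface. The classes are illustrative stand-ins, not the actual gfal/lcg-utils or FTS API, and the SURL and token values are invented.

    # Illustrative request structures a client could populate when writing into
    # a space and later extending a pin. Field names follow the SRM v2.2
    # interface; the classes themselves are NOT a real client library.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TransferParameters:                     # SRM "special structures"
        access_pattern: str = "TRANSFER_MODE"     # hint for read/write buffer selection
        connection_type: str = "WAN"
        protocols: List[str] = field(default_factory=lambda: ["gsiftp"])

    @dataclass
    class PrepareToPutRequest:                    # mirrors srmPrepareToPut inputs
        surls: List[str]
        target_space_token: Optional[str] = None  # the SPACE token passed through
        desired_pin_lifetime: int = 86400         # seconds; may be extended later
        desired_file_storage_type: str = "PERMANENT"
        transfer_parameters: TransferParameters = field(default_factory=TransferParameters)

    @dataclass
    class ExtendFileLifeTimeRequest:              # mirrors srmExtendFileLifeTime inputs
        surls: List[str]
        new_pin_lifetime: int                     # seconds, for a newly created copy

    if __name__ == "__main__":
        put = PrepareToPutRequest(
            surls=["srm://se.example.org/dpm/example.org/home/atlas/file.root"],
            target_space_token="ATLASDATADISK")   # hypothetical token name
        extend = ExtendFileLifeTimeRequest(put.surls, new_pin_lifetime=7 * 86400)
        print(put.target_space_token, extend.new_pin_lifetime)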

  48. Summary
  gLite:
  • Handled the CCRC load; the release process was not affected by CCRC
  • The improved lcg-CE can bridge the gap until we move to CREAM
  • Scaling for core services is understood
  OSG:
  • Several minor fixes and improvements
  • User-driven release of OSG 1.0: SRM-2 support, lcg data management clients
  • Improved interoperation and interoperability
  • Middleware ready for show time

  49. Summary
  ARC:
  • ARC 0.6.3 has been released: stability improvements, bug fixes, LFC (file catalogue) support; will be maintained
  • ARC 0.8 release planning started; will address several scaling issues for very large sites (>10k cores)
  • ARC-1 prototype exists
  SRM 2.2:
  • Agreement has been reached on the SRM v2.2 Addendum
  • Short term plan implementation can start; affects CASTOR, dCache, StoRM, DPM and clients; clients will hide differences
  • Expected to be in production by the end of the year
  • Long term plan can still reflect new experiences
