
NSF Middleware Initiative: Update and Overview of Release 2

Fall 2002 Internet2 Member Meeting. NSF Middleware Initiative: Update and Overview of Release 2. Alan Blatecky, National Science Foundation; John McGee, USC/ISI; Ken Klingenstein, Internet2 & University of Colorado-Boulder; Mary Fran Yafchak, Southeastern Universities Research Association; Ann West, EDUCAUSE & Internet2



Presentation Transcript


  1. Fall 2002 Internet2 Member Meeting NSF Middleware Initiative: Update and Overview of Release 2 Alan Blatecky, National Science Foundation John McGee, USC/ISI Ken Klingenstein, Internet2 & University of Colorado-Boulder Mary Fran Yafchak, Southeastern Universities Research Association Ann West, EDUCAUSE & Internet2

  2. Session Topics • NSF Middleware Initiative Overview • GRIDS Center • NMI-EDIT • NMI Integration Testbed • NMI Outreach and Participation

  3. NSF Middleware Initiative • Purpose • To design, develop, deploy, and support a reusable, expandable set of middleware functions and services that benefit applications in a networked environment

  4. NMI Organization • Core NMI Team • GRIDS Center • ISI, NCSA, U Chicago, UCSD & U Wisconsin • EDIT Team (Enterprise and Desktop Integration Technologies) • EDUCAUSE, Internet2 & SURA • Grants for R&D • Year 1 -- 9 grants • Year 2 -- 9 grants

  5. A Vision for Middleware • To allow scientists and engineers to transparently use and share distributed resources, such as computers, data, and instruments • To develop effective collaboration and communications tools, such as Grid technologies, desktop video, and other advanced services, to expedite research and education • To develop a working architecture and approach that can be extended to Internet users around the world • Middleware is the stuff that makes “transparently use” happen, providing persistency, consistency, security, privacy and capability

  6. NMI Goals • a) facilitate scientific productivity, • b) increase research collaboration through shared data, computing, code, facilities and applications, • c) support the education enterprise, • d) encourage the participation of industry, government labs and agencies for more extensive development and wider adoption and deployment, • e) establish a level of persistence and availability so that other application developers and disciplines can take advantage of the middleware, • f) encourage and support the development of standards and open source approaches, and • g) enable scaling and sustainability to support the larger research and education communities.

  7. NMI Process • Flow: experimental software & research applications → early implementations (Grid services, directories, authentication, etc.) → middleware testbeds (experimental, beta, scaling & “hardening”), drawing in research & education early adopters → consensus (disciplines, communities, industries) → dissemination & support → middleware deployment

  8. First Deliverables: NMI Release 1 • Software • (Globus, Condor, Network Weather Service, KX.509, CPM, Pubcookie) • Object Classes • (eduPerson, eduOrg, commObject) • White Papers (Shibboleth, video directories, etc) • Best Practices (Directories, LDAP) • Policies (campus certificates, account management) • Services (certificate profile registry) www.nsf-middleware.org

  9. NMI Release 2 • Release 2 shipped on Oct-25-2002 • New versions • Globus Toolkit, Condor-G, Network Weather Service, Pubcookie, etc • New components and best practices • OpenSAML 1.0, Shibboleth 1.0, etc • GSI-OpenSSH, Gridconfig Tools • LDAP Analyzer, Metadirectory Practices for Enterprise Directories, etc • Two releases each year: April/October • Release being adopted by projects, agencies • International interest in releases

  10. 3rd Year Program of NMI • Program Announcement in process • March 3, 2003 Proposal deadline • $7M available for FY’03 • http://www.nsf-middleware.org

  11. GRIDS Center Overview • John McGee • USC, Information Sciences Institute • mcgee@isi.edu

  12. GRIDS Center, Part of the NSF Middleware Initiative • One of two NMI teams, the GRIDS Center (Grid Research, Integration, Development & Support) • In late 2001, GRIDS was created to: • Define, develop, deploy, and support an integrated national middleware infrastructure for 21st Century S&E • Create robust, tested, packaged, & documented middleware for S&E, including large NSF projects (e.g., NEES, GriPhyN, TeraGrid) • Work with the middleware research community to evolve the architecture & integrate other components • Provide dedicated operations capability for 24x7 support and monitoring of Grid infrastructure

  13. GRIDS Center Participants • The Information Sciences Institute (ISI), University of Southern California (Carl Kesselman) • The University of Chicago (Ian Foster) • The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (Randy Butler) • The San Diego Supercomputer Center (SDSC) at the University of California at San Diego (Phil Papadopoulos) • The University of Wisconsin at Madison (Miron Livny)

  14. Elements of Grid Computing • Resource sharing as a fundamental pursuit • Computers, storage, sensors, networks • Sharing is always conditional, based on issues of security, trust, policy, negotiation, payment, etc. • Coordinated problem solving • Beyond client-server: distributed data analysis, computation, collaboration, etc. • Dynamic, multi-institutional “virtual organizations” • Community overlays on classic org structures • Large or small, static or dynamic

  15. Grid-Oriented Projects in eScience

  16. Grid Applications • Science portals • Help scientists overcome steep learning curves of installing and using new software • Distributed computing • High-speed workstations and networks as aggregated computational resources • Large-scale data analysis • Computer-in-the-loop instrumentation • Grids permit quasi-real-time analysis of data from telescopes, synchrotrons, and electron microscopes • Collaborative work • Grids enable collaborative problem formulation, data analysis, and discussion

  17. Grid Portals

  18. Mathematicians Solve NUG30 • Looking for the solution to the NUG30 quadratic assignment problem • An informal collaboration of mathematicians and computer scientists • Condor-G delivered 3.46E8 CPU seconds in 7 days (peak 1009 processors) across 8 sites in the U.S. and Italy • Solution: 14,5,28,24,1,3,16,15,10,9,21,2,4,29,25,22,13,26,17,30,6,20,19,8,18,7,27,12,11,23 • MetaNEOS: Argonne, Iowa, Northwestern, Wisconsin
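For readers unfamiliar with Condor-G, the sketch below shows roughly how one worker task in a master-worker computation like NUG30 might be submitted. It is a minimal illustration, not the actual MetaNEOS setup: the gatekeeper contact, executable, and file names are hypothetical placeholders.

```python
# Minimal sketch of a Condor-G submission (hypothetical names throughout).
import subprocess

# A classic Condor-G submit description: "universe = globus" routes the
# job through a Globus gatekeeper to a remote site's batch system.
submit_description = """\
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager-pbs
executable      = nug30_worker
arguments       = --subproblem 17
output          = worker_17.out
error           = worker_17.err
log             = nug30.log
queue
"""

with open("nug30_17.sub", "w") as f:
    f.write(submit_description)

# condor_submit is Condor's standard command-line submission tool.
subprocess.run(["condor_submit", "nug30_17.sub"], check=True)
```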

  19. Home Computers Evaluate AIDS Drugs • Community = • 1000s of home computer users • Philanthropic computing vendor (Entropia) • Research group (Scripps) • Common goal = advance AIDS research

  20. Sloan Digital Sky Survey Analysis • Driving question: what is the size distribution of galaxy clusters? • Answered with the Chimera Virtual Data System + the iVDGL Data Grid (many CPUs) [figure: galaxy cluster size distribution]

  21. iVDGL: International Virtual Data Grid Laboratory [map of Tier0/1, Tier2, and Tier3 facilities connected by 10 Gbps, 2.5 Gbps, 622 Mbps, and other links]

  22. Network for Earthquake Engineering Simulation • NEESgrid: US national infrastructure to couple earthquake engineers with experimental facilities, databases, computers, and each other • On-demand access to experiments, data streams, computing, archives, collaboration • NEESgrid is a partnership of Argonne, Michigan, NCSA, UIUC, USC

  23. The 13.6 TF TeraGrid: Computing at 40 Gb/s [architecture diagram: site resources and HPSS/UniTree archives at each site, linked through external networks] • NCSA/PACI: 8 TF, 240 TB • SDSC: 4.1 TF, 225 TB • Caltech • Argonne • TeraGrid: NCSA, SDSC, Caltech, Argonne • www.teragrid.org

  24. Grids and Industry • Grid computing has much in common with major industrial thrusts to decentralize (e.g., B2B, P2P, ASP, etc.) • Sharing issues are not adequately addressed by existing technologies • Companies like IBM, Platform Computing and Microsoft are now substantively involved with the open-source Grid community (e.g., OGSA, which combines Web services and Grid services)

  25. GRIDS Software for NMI • The GRIDS Center Software Suite in the current release (NMI-R2) is a package of: • Globus Toolkit™ • Condor-G • Network Weather Service • KX.509 & KCA • GSI-OpenSSH • Gridconfig Tools • Grid Packaging Tools • For RedHat 7.2/7.3 on IA32, Solaris 8 on 32-bit Sparc
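These components interlock around GSI (Grid Security Infrastructure) single sign-on. As a rough illustration of that pattern, not taken from the talk (the host name is a placeholder):

```python
# Sketch of the GSI single sign-on pattern the suite supports.
import subprocess

# grid-proxy-init (Globus Toolkit) creates a short-lived X.509 proxy
# credential, prompting once for the user's private-key passphrase.
subprocess.run(["grid-proxy-init"], check=True)

# With the proxy in place, gsissh (GSI-OpenSSH) logs in to remote Grid
# resources without any further passwords.
subprocess.run(["gsissh", "grid-node.example.edu", "hostname"], check=True)
```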

  26. Enterprise and Desktop Integration Technologies (EDIT) Consortium • Ken Klingenstein • Director, Internet2 Middleware Initiative • kjk@internet2.edu

  27. NMI-EDIT Consortium • Enterprise and Desktop Integration Technologies Consortium • Internet2 – primary on grant and research • EDUCAUSE – primary on outreach • Southeastern Universities Research Association (SURA) – primary on NMI Integration Testbed • Grant funding is ~$1.2 million a year: • about ½ to short-term partial hiring of campus IT staff to develop and document required standards, best practices, etc. • about ½ to testbeds, dissemination and training sessions • Almost all funding passed through to campuses for work

  28. NMI-EDIT: Goals • Much as at the network layer, create a ubiquitous common, persistent and robust core middleware infrastructure for the R&E community • In support of inter-institutional and inter-realm collaborations, provide tools and services (e.g. registries, bridge PKI components, root directories) as required

  29. NMI-EDIT: Objectives • Foster the development of campus enterprise middleware to leverage both the academic and administrative missions. • Coordinate a common substrate across higher-ed middleware implementations that would permit inter-institutional efforts such as Grids, digital libraries, and collaboratories to scale and leverage it. • In some instances, build collaboration tools for particularly important inter-institutional and government interactions, such as web services, PKI and video. • Ensure that distinctive higher-ed requirements, from privacy and academic freedom to multi-realm portals, are served in the marketplace.

  30. A Map of Middleware Land

  31. NMI-EDIT: Core Middleware Scope • Identity and Identifiers – namespaces, identifier crosswalks, real world levels of assurance • Authentication – campus technologies and policies, inter-realm interoperability via PKI, Kerberos • Directories – enterprise directory services architectures and tools, standard object classes, inter-realm and registry services • Authorization – permissions and access controls, delegation, privacy management • Integration Activities – common management tools, use of virtual, federated and hierarchical organizations
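Directories and standard object classes are the most concrete of these items. As a small, hypothetical sketch of what a standard object class buys (the server URI, base DN, and user below are placeholders; the attribute names are from the eduPerson schema):

```python
# Sketch: querying an enterprise directory for eduPerson attributes
# using python-ldap. All names below are hypothetical placeholders.
import ldap

conn = ldap.initialize("ldap://directory.example.edu")
conn.simple_bind_s()  # anonymous bind, for illustration only

# Because eduPerson standardizes attribute names, the same query works
# against any campus directory that implements the object class.
results = conn.search_s(
    "dc=example,dc=edu",  # base DN
    ldap.SCOPE_SUBTREE,
    "(eduPersonPrincipalName=jdoe@example.edu)",
    ["eduPersonPrincipalName", "eduPersonAffiliation", "mail"],
)
for dn, attrs in results:
    print(dn, attrs)
```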

  32. NMI-EDIT: Organization • Overall technical direction set by MACE • Middleware Architecture Committee for Education (MACE) • Bob Morgan, University of Washington, Chair • Campus IT architects and representatives from Grids and International Communities • Directions set via • NSF and NMI management team • Internet2 Network Planning and Policy Advisory Council • PKI and Directory Technical Advisory Boards • Internet2 members

  33. Sample NMI-EDIT Process (Directories) • MACE-DIR Working Group prioritizes needed materials • Subgroups established: • revision of basic documents (LDAP Recipe) • new best practices in groups and metadirectories • standards development for eduPerson 1.5 and eduOrg 1.0 • Subgroups work in an enhanced IETF approach: scenarios, requirements, architectures, recommended-standards stages • Working group deliverables announced; input and conference-call review/feedback processes start; work groups reconvene as needed • Process takes around 4-6 months, depending on the product • 6-8 people drive the process, with 15-50 schools participating

  34. NMI-EDIT: Participants • Higher Ed – 15-20 leadership institutions, with 50 more campuses represented as members of working groups; readership around 2000 institutions • Corporate - (IBM/Metamerge, Microsoft, SUN, Liberty Alliance, DST, MitreTek, Radvision, Polycom, EBSCO, Elsevier, OCLC, Baltimore) • Government – NSF, NIST, NIH, Federal CIO Council • International – Terena, JISC, REDIRIS, AARnet, SWITCH

  35. A Few Year-One NMI-EDIT Milestones • Sept 1, 2001 – Grant awarded • Oct 2001– eduPerson 1.0 finalized; outreach begins with multiple workshops • Jan 2002 – HEBCA tested; first CAMP workshop held • Feb 2002 – PKI Lite CP/CPS; e-Gov and Management and Leadership Best Practice Awards • April 2002 – Shibboleth alpha ships; NMI testbed selected; NIST/NIH PKI workshop • May 2002 – NMI release, with eduPerson 1.5, pubcookie, KX.509, groups and metadirectories, video white papers • June 2002 – affiliated directories begins; Base CAMP; testbed kickoff • July 2002 – Shibboleth alpha v 2 ships; Advanced CAMP • August 2002 – LDAP Analyzer testing begins; Shibboleth pilot-sites selected; Work with content providers begins • September 2002 – Grant renewed; supplemental grant awarded for outreach; Shibboleth beta ships

  36. NMI-EDIT: Release 1 Deliverables • Software: KX.509 and KCA, Certificate Profile Maker, Pubcookie • Object Classes: eduPerson 1.0, eduPerson 1.5, eduOrg 1.0, commObject 1.0 • Service: Certificate Profile Registry

  37. NMI-EDIT: Release 1 Deliverables • Conventions and Practices • Practices in Directory Groups 1.0, LDAP Recipe 2.0 • Metadirectory Practices for the Enterprise Directory in Higher Education 1.0 • White Papers • Shibboleth Architecture v5 • Policies • Campus Certificate Policy for use at the Higher Education Bridge Certificate Authority (HEBCA) • Lightweight Campus Certificate Policy and Practice Statement (PKI-Lite) • Sample Campus Account Management Policy

  38. NMI-EDIT: Release 1 Deliverables • Works in Progress • Role of Directories in Video-on-Demand • Resource Discovery for Videoconferencing • Directory Services Architecture for Video and Voice Conferencing over IP (commObject)

  39. NMI-EDIT: Release 2 New/Revised Deliverables • Software: Programs and Libraries • OpenSAML 1.0 • Shibboleth 1.0 • Pubcookie 3.0 • Directory Schemas • eduPerson • eduOrg

  40. NMI-EDIT: Release 2 New/Revised Deliverables • Conventions and Practices • LDAP Recipe • Metadirectory Practices for Enterprise Directories • Practices in Directory Groups • Architectures • Inter-domain Data Exchange (Draft) • Services • LDAP Analyzer

  41. The pieces fit together… • Campus infrastructure: name space, identifiers, directories; enterprise authentication and authorization • Inter-realm infrastructure: edu object classes; inter-realm exchange of attributes • Upperware: Grids, digital libraries, video

  42. NMI Integration Testbed • Mary Fran Yafchak • Testbed Manager, Southeastern Universities Research Association • maryfran@sura.org

  43. NMI Integration Testbed • Focus on the integration of released middleware components with real life use and conditions • Elements: Sites, Manager, Workshop • Integration is the point - could think of it as… • Where “EDIT” meets “GRIDS” • Where enterprise needs meet research needs • Where NMI components meet reality

  44. NMI Integration Testbed • Planning and management by SURA • Participating Sites: • University of Alabama at Birmingham • University of Alabama in Huntsville • University of Florida • Florida State University • Georgia State University • Texas Advanced Computing Center (U Texas/Austin) • University of Virginia • University of Michigan

  45. NMI Integration Testbed [diagram: core testbed sites (UAB, UAH, UFL, FSU, GSU, UMich, TACC, UVA) linked to NMI participation as users and contributors, with future expansion toward implementers, developers, supporters, and target communities]

  46. NMI Integration Testbed - Recent Activities • Testbed Kickoff June 10 - 12, 2002 at GSU • Site Integration Plans completed in July 2002 • Testing of Release 1 completed • Press release & Web site announced 9/4/02 • See http://www.nsf-middleware.org/testbed • Open Testbed BoF here at I2 Members’ Meeting • Wednesday, October 30, 11:45AM-1:15PM

  47. NMI Integration Testbed - Some Highlights from the Sites • Twenty-six very real institutional projects and applications “on board” for NMI integration - with more to come... • Ten projects targeting increased access to their existing or planned scientific grids (including the emerging TeraGrid) through NMI Globus • Five sites actively implementing enterprise-scale directories, with centralized authentication and integrated applications • Active PKI efforts, from PKI Lite to PKI “heavy” (maintaining HIPAA/FERPA compliance) • New collaborative tools also represented, such as click-to-dial desktop video conferencing and shared calendaring

  48. NMI Integration Testbed - From R1 to R2 • Summarizing evaluation results from R1 - to be made available on the Testbed Web site • Working with Outreach to disseminate lessons learned thus far • R2-specific Component Testing Guidelines under development • Testbed Sites actively refreshing site plans and project sets with respect to R2 • R2 evaluation soon to be underway...

  49. NMI Integration Testbed - Potential for Expansion • Already on our minds... • Increase opportunities for both sponsored and unsponsored participation • Define a role and means of involvement for international participants • Define a role and means of involvement for corporate participants • Develop “hot topic” or application-specific testbeds • E.g., Digital Libraries, Digital Video, Medical middleware, Discipline-specific grids

  50. NMI Outreach and Participation • Ann West • NMI-EDIT Outreach, EDUCAUSE/Internet2/Michigan Tech • awest@educause.edu
