
ANABAS

ANABAS. Use of Grids in DoD Applications. Geoffrey Fox, Alex Ho. SAB Briefing, November 16, 2005.



Presentation Transcript


  1. ANABAS Use of Grids in DoD Applications Geoffrey Fox, Alex Ho. SAB Briefing, November 16, 2005

  2. General Message I • Our proof of concept demonstrates many of the NCOW core enterprise services (CES) implemented as Grid services built on top of the WS-* Web service industry specifications. • We illustrate the use of the Grid of Grids architecture to integrate heterogeneous systems. The papers describe how all CES can be implemented with Grid technology, as proposed in the Phase II SBIR. • Adherence to standards with a common wire protocol (SOAP) means that all service implementations are interoperable, so one can take services from multiple sources; Anabas/Indiana University only has to implement some of the key Grid services.

  3. General Message II: Why Grids • Web services give us interoperability, but Grids are essential as we aim at information management • Grids are the key idea for managing complexity by applying uniform policies and building managed systems • Grid of Grids allows one to build out the management in a modular fashion • Uniform Grid messaging handles complex networks with managed QoS, such as real-time constraints • Managed services and messaging give scalability and performance (later slide)

  4. DoD Core Services and WS-* plus GS-* I

  5. DoD Core Services and WS-* and GS-* II

  6. Major Conclusions I • One can map 7.5 out of 9 NCOW and GiG core capabilities onto the Web Service (WS-*) and Grid (GS-*) architecture and core services • The analysis of Grids in the NCOW document is inaccurate (it confuses Grids with Globus and considers only early activities) • There are some “mismatches” on both the NCOW and Grid sides • GS-*/WS-* do not cover collaboration and miss some messaging • NCOW does not have, at the core level, system metadata and resource/service scheduling and matching • Higher-level services of importance include GIS (Geographical Information Systems), sensors, and data mining

  7. Major Conclusions II • Criticisms of Web services in a recent paper by Birman seem to be addressed by Grids, or reflect the immaturity of initial technology implementations • NCOW does not seem to have any analysis of how to build its systems on WS-*/GS-* technologies in a layered fashion; it does have a layered service architecture, so this can be done • It agrees with the service-oriented architecture approach • It seems to have no process for agreeing to WS-*/GS-* or setting other standards for CES • Grid of Grids allows modular architectures and natural treatment of legacy systems

  8. Performance • Reduction of message delay jitter to a millisecond. • Dynamic metadata access latency reduced from seconds to milliseconds using the Web service context service. • The messaging is distributed, with each low-end Linux node capable of supporting 500 users at a total bandwidth of 140 Mbits/sec with over 20,000 messages per second. • Systematic use of redundant fault-tolerance services supports strict user QoS requirements, and the fault-tolerant Grid enterprise bus supports collaboration and information sharing at a cost that scales logarithmically with the number of simultaneous users and resources. • Supporting N users at the 0.5 Mbits/sec level each would require roughly (N/500)·log(N/500) messaging servers to achieve full capability.
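The server-count estimate in the last bullet can be sketched numerically. This is a hypothetical helper, not the authors' code; the 500-users-per-server figure and the logarithmic fault-tolerance factor come from the slide, while the log base is unspecified there, so the natural logarithm is assumed here:

```python
import math

def messaging_servers(n_users: int, users_per_server: int = 500) -> int:
    """Estimate broker count as (N/500) * log(N/500), per the slide's
    scaling claim for the fault-tolerant messaging fabric.
    Natural log assumed; the slide does not state a base."""
    groups = n_users / users_per_server
    if groups <= 1:
        return 1  # a single server handles up to 500 users
    return math.ceil(groups * math.log(groups))

# 5,000 users -> 10 * ln(10) ~ 23.03 -> 24 servers
print(messaging_servers(5000))
```

The ceiling rounds up to whole servers; the point of the formula is that the redundancy overhead grows only logarithmically with the user population.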

  9. Script I: Data Mining and GIS Grid • This will show a set of Open Geospatial Consortium (OGC) compatible services implementing a GIS (Geographical Information System) Grid supporting streaming of feature and map data. • Intrinsic features of a region are supplemented here by features coming from a data-mining code that is filtering data to predict likely earthquake positions. • This uses discovery, metadata, database, workflow, messaging, data transformation, and simulation (data-mining) services. • Note that the OGC-compatible WFS (Web Feature Service) plays the role of a domain-specific service interface to a database • This is used by Los Alamos for DHS simulations, replacing the data mining with critical infrastructure simulations
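A WFS acting as a service interface to a database is queried with an ordinary OGC GetFeature request over HTTP. A minimal sketch of how a client might build such a request is below; the endpoint URL, the `fault` feature-type name, and the bounding box are all hypothetical, since the slides do not give the real SERVOGrid service details:

```python
from urllib.parse import urlencode

# Hypothetical WFS endpoint; the real SERVOGrid URL is not given in the slides.
WFS_ENDPOINT = "http://example.org/wfs"

# Standard OGC WFS 1.0.0 GetFeature key-value parameters.
params = {
    "service": "WFS",
    "version": "1.0.0",
    "request": "GetFeature",
    "typeName": "fault",                 # hypothetical feature type for fault data
    "bbox": "-124.0,32.0,-114.0,42.0",   # rough California bounding box
}
url = WFS_ENDPOINT + "?" + urlencode(params)
print(url)
```

The response would be GML feature data, which a WMS or data-mining filter can then consume downstream in the pipeline the slide describes.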

  10. [Architecture diagram: Data Mining and GIS Grid. WMS clients send requests over SOAP/HTTP to a WMS that consults UDDI and dispatches to WFS1–WFS3, backed by databases holding NASA and USGS features and SERVOGrid faults; a Data Mining Grid supplies additional features.]

  11. [Architecture diagram: Data Mining Grid. A PI data-mining filter feeds WFS4 through a SOAP pipeline; WS-Context, UDDI, HPSearch workflow, and NaradaBrokering system services coordinate the GIS Grid databases (NASA/USGS features, SERVOGrid faults) and WFS3.]

  12. Hot-spot calculations (areas of increased earthquake probability within the forecast time) are re-plotted on the map as features.

  13. Script I: Google Map Grid Service • This first demo also illustrates how the Google map system can be wrapped as a Grid itself, front-ended by an OGC Web Map Service. • This is used in Grid of Grids fashion, with Google linked to traditional (NASA) Web Map Services. • Illustrates how linking NCOW to commodity Grid technology allows access to major IT resources • Google’s 100,000 computers • DoD MSRC, DoE, and NSF supercomputers
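The Grid of Grids linkage above amounts to a WMS front end that routes each requested map layer to the appropriate backend. A minimal routing sketch follows; the backend URLs and layer names are illustrative, not taken from the actual system:

```python
# Hypothetical layer-to-backend routing for a Grid-of-Grids WMS front end:
# the "google" layer goes to the wrapped Google Map grid, everything else
# to a traditional OGC WMS such as NASA's.
BACKENDS = {
    "google": "http://example.org/google-wms",  # wrapper around Google Maps
    "nasa":   "http://example.org/nasa-wms",    # traditional OGC WMS
}

def route_getmap(layer: str) -> str:
    """Pick the backend WMS for a requested layer (default: nasa)."""
    return BACKENDS.get(layer, BACKENDS["nasa"])

print(route_getmap("google"))
print(route_getmap("terrain"))
```

Because both backends speak the same OGC WMS interface, the client never needs to know whether a layer is served by commodity infrastructure or a traditional map server.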

  14. Real-Time GPS and Google Maps Subscribe to a live GPS station. Position data from SOPAC are combined with Google map clients. Select and zoom to a GPS station location; click icons for more information.

  15. Script II: Collaborative Grid Service • This demonstrates how streams can be formed from messages and managed in a uniform way, whether they carry maps or video. Collaboration is achieved by multicasting the input or output streams to Grid services. • Our messaging infrastructure handles all multicasting (in software) transparently to the services • First we demonstrate collaborative maps using “shared input ports” on Web services
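The “shared input port” pattern, where one participant's input events are multicast to every collaborator's service instance, can be sketched with a minimal in-process publish/subscribe broker. This is a toy stand-in for the NaradaBrokering fabric the slides describe, and all names are illustrative:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Minimal topic-based publish/subscribe broker (in-process stand-in
    for the distributed messaging infrastructure in the slides)."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Multicast: every subscriber's handler sees the same event.
        for handler in self._subs[topic]:
            handler(event)

# Each participant's map service subscribes to the shared "input port"
# topic, so one user's pan/zoom event updates every collaborator's view.
broker = Broker()
views = []
for user in ("alice", "bob"):
    broker.subscribe("map/input", lambda ev, u=user: views.append((u, ev["center"])))

broker.publish("map/input", {"center": (35.0, -120.0), "zoom": 8})
print(views)  # both participants received the same input event
```

The key design point from the slide survives even in this sketch: the map services themselves contain no multicast logic, since fan-out lives entirely in the messaging layer.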

  16. Collaborative Google Maps with faults from WFS

  17. Script III: Collaboration Grid • Collaboration uses the basic Grid services (metadata, discovery, workflow, security) plus the XGSP stream management services. • Complex collaboration scenarios correspond to additional services for particular shared applications and to gateways, in Grid of Grids fashion, to H.323, SIP, and other protocols. Annotation, record, replay, whiteboards, codec conversion, and audio and video mixing become services. • We demonstrate MPEG-4 transcoding and video mixing services • The only Grid Web-service-based collaboration environment • Use of Grids ensures scalability and performance

  18. [Architecture diagram: Collaboration Grid. XGSP media services and gateways (H.323/SIP) connect through NaradaBrokers to audio and video mixers, a transcoder, thumbnail generation, record/replay/annotate services, shared Web services, shared display, and a whiteboard; WS-Context, WS-Security, UDDI, and HPSearch provide system services.]

  19. [Screenshot: GlobalMMCS SWT client showing GIS TV, chat, video mixer, and webcam streams.]

  20. [Screenshot: e-Annotation players for archived and real-time streams, whiteboard players, and archived/real-time stream lists.]
