
Computational Grid


Presentation Transcript


  1. Computational Grid Carl Kesselman, Hongsuda Tangmunarunkit, David Okaya, Kim Olsen

  2. Outline
     • SCEC Grid Infrastructure
     • Computational Pathway
     • Interaction plan with Pathway I and AI researchers

  3. Computational Grid Status
     • We are currently using USCGrid (since August 2002)
        • Hpc: a cluster of Linux machines
        • Almaak: a 64-CPU shared-memory machine
        • Terra: an 8-CPU shared-memory machine
     • Any researcher with a USC account can access USCGrid (provided permission is set up in advance)

  4. Computational Grid Plan (I)
     • Goal: enable grid access from anywhere
     • Client side:
        • Set up SCEC researchers at other universities (e.g., Kim Olsen at UCSB) to use the grid
        • They need to obtain NCSA or NPACI accounts and certificates
     • The ultimate goal is to enable SCEC grid access through a web browser

  5. Computational Grid Plan (II)
     • Goal: expand computational resources
     • Server side: gridify resources at
        • Pittsburgh Supercomputing Center
           • We already have 2 allocations on 2 platforms
        • San Diego Supercomputer Center
           • We have negotiated SCEC access to SDSC resources
           • We plan to submit a proposal for resource allocation and usage

  6. Current Computation Pathway on the Grid
     [Diagram: pre-computed CVM data (from CMU) and other inputs sit in storage and are transferred via GridFTP to USCGrid, where the UCSB parallel code runs; results are transferred back to storage via GridFTP.]
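The GridFTP staging step in the diagram could be scripted as shown below. This is a minimal sketch that only builds the `globus-url-copy` command line; the host names and file paths are hypothetical placeholders, not real SCEC endpoints.

```python
# Sketch of staging a pre-computed CVM file between grid hosts with GridFTP.
# Host names and paths below are hypothetical, not real SCEC endpoints.

def gridftp_copy_command(src_host, src_path, dst_host, dst_path):
    """Build a globus-url-copy invocation for a host-to-host transfer."""
    src_url = f"gsiftp://{src_host}{src_path}"
    dst_url = f"gsiftp://{dst_host}{dst_path}"
    return ["globus-url-copy", src_url, dst_url]

cmd = gridftp_copy_command(
    "storage.example.edu", "/scec/cvm/precomputed.dat",
    "hpc.example.edu", "/scratch/scec/precomputed.dat",
)
print(" ".join(cmd))
```

In a real workflow this command would be executed after `grid-proxy-init` has established the user's grid credentials.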

  7. Computation Pathway Plan (Next Quarter)
     [Diagram: inputs (lat, long, depth, etc.) feed the CVM code (Harold's) and the UCSB parallel code on USCGrid; output goes to storage and to a local visualization display.]
     • Note: we need to be able to access the CVM code through the command line
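Command-line access to the CVM code, as the note above calls for, might look like the wrapper below. The executable name `cvm_query` and its argument order are assumptions for illustration, not the CVM code's real interface.

```python
# Hypothetical wrapper for a CVM point query from the command line.
# "cvm_query" and its argument order are assumptions, not the real CVM interface.

def cvm_query_command(lat, lon, depth_km):
    """Build a command line asking the CVM for material properties at a point."""
    return ["cvm_query", f"{lat:.4f}", f"{lon:.4f}", f"{depth_km:.1f}"]

print(" ".join(cvm_query_command(34.0522, -118.2437, 5.0)))
```

A scriptable interface like this is what lets the grid workflow invoke the CVM automatically instead of through an interactive session.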

  8. Computational Pathway Plan (in 12 months)
     [Diagram: inputs (lat, long, depth, etc.) feed multiple velocity models (CVM1 … CVMn) and the UCSB and other parallel codes on SCECGrid; output data is cached on a cluster and sent to a distributed visualization display with basic functionalities.]

  9. Interaction with Pathway II
     • Facilitate interactions among different components
     • Agree on input/output formats for
        • CVM models
        • Other physics-based models (e.g., UCSB, SDSU)
     • Define metadata/ontology for
        • Simulation codes: owners, version number, inputs, outputs, etc.
        • Data sets: codes, parameter sets, number of time steps, coordinates, architecture, etc.
     • Parallelize the CVM code to run on the grid
        • The speedup is proportional to the number of CPUs
     • Generate 4D data for validation/visualization
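The metadata records proposed above could be sketched as simple typed structures. The field names follow the slide's lists; the example values are illustrative assumptions, not real catalog entries.

```python
from dataclasses import dataclass

# Minimal sketch of the proposed metadata records; field names follow the
# slide's lists, and the example values below are illustrative, not real.

@dataclass
class SimulationCodeRecord:
    name: str
    owners: list      # who maintains the code
    version: str      # version number
    inputs: list      # expected input products
    outputs: list     # produced output products

@dataclass
class DataSetRecord:
    code: str               # which simulation code produced it
    parameters: dict        # parameter set used
    num_time_steps: int
    coordinate_system: str
    architecture: str       # platform the data was generated on

example = SimulationCodeRecord(
    name="UCSB parallel code",
    owners=["Kim Olsen"],
    version="1.0",
    inputs=["source model", "CVM velocity model"],
    outputs=["4D ground-motion volume"],
)
print(example.name)
```

Records like these are what a grid data-discovery service would index so that runs and their outputs can be found and reproduced later.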

  10. Other Plans
     • Interaction with Pathway I
        • Run Pathway I simulations on the grid
        • Enable parallel executions of Pathway I code with different earthquake locations and times
        • Requires data management (e.g., data discovery, metadata)
     • Interaction with AI
        • Intelligent workflow manager: intelligently generate execution plans to be scheduled on appropriate grid resources
        • Define an ontology for grid resources and policies
        • Define metadata/ontology for the current simulation codes and data (requires interaction with Pathway II)
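Fanning Pathway I runs out over different earthquake locations and times, as planned above, amounts to enumerating one independent grid job per parameter combination. A minimal sketch, with illustrative parameter values:

```python
from itertools import product

# Sketch of a parameter sweep over earthquake locations and origin times;
# the values and job-spec fields are illustrative assumptions.

def sweep_jobs(latitudes, longitudes, origin_times):
    """Enumerate one independent job spec per (lat, lon, time) combination."""
    return [
        {"lat": lat, "lon": lon, "origin_time": t}
        for lat, lon, t in product(latitudes, longitudes, origin_times)
    ]

jobs = sweep_jobs([33.9, 34.1], [-118.3, -118.1], ["2003-01-01T00:00:00"])
print(len(jobs))  # 2 x 2 x 1 = 4 independent runs
```

Because the runs share no state, a workflow manager can schedule each job spec on whichever grid resource is available.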
