
TeraGrid: Logical Site Model


Presentation Transcript


  1. TeraGrid: Logical Site Model. Chaitan Baru, Data and Knowledge Systems, San Diego Supercomputer Center

  2. National Science Foundation TeraGrid • Prototype for Cyberinfrastructure (the “lower” levels) • High Performance Network: 40 Gb/s backbone, 30 Gb/s to each site • National reach: SDSC, NCSA, CIT, ANL, PSC • Over 20 teraflops of compute power • Approx. 1 PB of rotating storage • Expanding by 2-3 sites in Fall 2003

  3. Services/Software View of Cyberinfrastructure

  4. SDSC Focus on Data: A Cyberinfrastructure “Killer App” • Over the next decade, data will come from everywhere: scientific instruments, experiments, sensors and sensornets, new devices (personal digital devices, computer-enabled clothing, cars, …) • And it will be used by everyone: scientists, consumers, educators, the general public • The SW environment will need to support unprecedented diversity, globalization, integration, scale, and use [Slide graphic: data from sensors, instruments, simulations, and analysis]

  5. Prototype for Cyberinfrastructure

  6. SDSC Machine Room Data Architecture • .5 PB disk • 6 PB archive • 1 GB/s disk-to-tape • Support for DB2/Oracle • Enable SDSC to be the grid data engine [Slide diagram: WAN (30 Gb/s) and LAN (multiple GbE, TCP/IP) connecting Blue Horizon and Power 4 hosts with local disk (50 TB); a 4 TF Linux cluster with GPFS disk (100 TB) on a SAN (2 Gb/s, SCSI; SCSI/IP or FC/IP at 30 MB/s per drive, 200 MB/s per controller); a Sun F15K; a database engine with DBMS disk (~10 TB); data mining and visualization engines; and HPSS with an FC disk cache (400 TB) in front of tape silos (6 PB, 32 tape drives, 1 GB/s disk to tape)]

  7. The TeraGrid Logical Site View • Ideally, applications and users would like to see: • One single computer • Global everything: filesystem, HSM, database system • With the highest possible performance • We will get there in steps • Meanwhile, the TeraGrid Logical Site View provides a uniform view of sites • A common abstraction supported by every site

  8. Logical Site View • The Logical Site View is currently provided simply as a set of environment variables • It can easily evolve into a set of services • This is the minimum required to enable a TG application to make easy use of TG storage resources (a minimal consumer is sketched below) • However, for “power” users, we also anticipate the need to expose the mapping from logical to physical resources at each site • This enables applications to take advantage of site-specific configurations and obtain optimal performance
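  A minimal sketch of what this environment-variable view could look like to an application, assuming only the TG_* names listed on slide 11; the helper logical_site_view below is hypothetical and not part of any TG interface:

    import os

    def logical_site_view():
        """Return this site's logical-to-physical mapping from TG_* variables."""
        keys = [
            "TG_NODE_SCRATCH", "TG_CLUSTER_SCRATCH", "TG_GLOBAL_SCRATCH",
            "TG_CLUSTER_HOME", "TG_GLOBAL_HOME", "TG_STAGING", "TG_PFS",
        ]
        # Only variables the site actually sets appear in the view.
        return {k: os.environ[k] for k in keys if k in os.environ}

  The same dictionary, extended with site-specific variables such as TG_PFS_GPFS, is one way the logical-to-physical mapping could be exposed to “power” users.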

  9. Basic Data Operations • The Data WG has stated two minimum requirements: • The ability for a user to transfer data from any TG storage resource to memory on any TG compute resource, possibly via an intermediate storage resource • The ability to transfer data between any two TG storage resources (see the transfer sketch below)
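  As an illustration of the second requirement (an assumption for this sketch, not a Data WG specification), a storage-to-storage transfer could be scripted over GridFTP's globus-url-copy client; the URLs and the optional intermediate hop are invented for the example:

    import subprocess

    def transfer(src_url, dst_url, via=None):
        """Copy src_url to dst_url, optionally staging via an intermediate URL."""
        hops = [src_url, via, dst_url] if via else [src_url, dst_url]
        for a, b in zip(hops, hops[1:]):
            # globus-url-copy is the GridFTP command-line client.
            subprocess.run(["globus-url-copy", a, b], check=True)

    # e.g. transfer("gsiftp://siteA/data/in.dat", "gsiftp://siteB/scratch/in.dat",
    #               via="gsiftp://siteA/staging/in.dat")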

  10. Logical Site View [Slide diagram: the “network” connects the site’s resources, each fronted by a staging area: compute clusters with scratch space, a DBMS with collection management, and an HSM]

  11. Environment Variables • TG_NODE_SCRATCH • TG_CLUSTER_SCRATCH • TG_GLOBAL_SCRATCH • TG_SITE_SCRATCH…? • TG_CLUSTER_HOME • TG_GLOBAL_HOME • TG_STAGING • TG_PFS • TG_PFS_GPFS, TG_PFS_PVFS, TG_PFS_LUSTRE • TG_SRB_STAGING
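  For example, a job could pick its working directory by walking the scratch hierarchy and falling back when a level is not advertised; the preference order here is an assumption for illustration, not TG policy:

    import os

    def pick_scratch():
        """Prefer node-local scratch, then cluster-wide, then global."""
        for var in ("TG_NODE_SCRATCH", "TG_CLUSTER_SCRATCH", "TG_GLOBAL_SCRATCH"):
            path = os.environ.get(var)
            if path:
                return path
        raise RuntimeError("no TG scratch space advertised at this site")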

  12. Issues Under Consideration • Suppose a user wants to run computation, C, on data, D • The TG middleware should automatically figure out • Whether C should move to where D is, or vice versa • Whether data, D, should be pre-fetched, or “streamed” • Whether output data should be streamed to persistent storage, or staged via intermediate storage • Whether prefetch/staging time ought to be “charged” to the user or not
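  To make the first decision concrete, a toy cost model might compare estimated transfer times; everything below (the function, its parameters, the queue-wait term) is hypothetical, not something the TG middleware specifies:

    def move_data_to_compute(data_bytes, code_bytes, net_bw_bytes_per_s,
                             queue_wait_s_at_data_site):
        """True if shipping data D to the compute site looks cheaper than
        shipping computation C (plus queueing) to the data site."""
        cost_move_data = data_bytes / net_bw_bytes_per_s
        cost_move_code = code_bytes / net_bw_bytes_per_s + queue_wait_s_at_data_site
        return cost_move_data < cost_move_code

    # On a 30 Gb/s TG link (~3.75 GB/s), 1 TB moves in roughly 270 s, so a
    # long queue at the data site can tip the decision either way.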
