
National Science Foundation Directions

Jim Kasdorf

Director, Special Projects

Pittsburgh Supercomputing Center

Höchstleistungsrechenzentrum Stuttgart

October 5, 2009



Disclaimer

Nothing in this presentation represents an official view or position of the U.S. National Science Foundation, the Pittsburgh Supercomputing Center, or Carnegie Mellon University.

The “Tracks”: 2005
  • Track 1: PF System for 2011, $200M
  • Tracks 2A, 2B, 2C, 2D: one each year, $30M
Track 2A

NSF Awards TACC $59 Million for Unprecedented High Performance Computing System

University of Texas, Arizona State University, Cornell University and Sun Microsystems to deploy the world’s most powerful general-purpose computing system on the TeraGrid

09/28/2006, Marcia Inger

AUSTIN, Texas: The National Science Foundation (NSF) has made a five-year, $59 million award to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin to acquire, operate and support a high-performance computing system that will provide unprecedented computational power to the nation’s research scientists and engineers.

Track 2A: Texas Advanced Computing Center
  • Sun / AMD / InfiniBand: Capacity System
  • Proposed peak: 421 TF
  • Final peak: 529 TF
Track 2B
  • University of Tennessee
  • Principal Investigator: Thomas Zacharia, Professor, Electrical Engineering and Computer Science

Petaflop NSF Computing:

The Track2B system at UT/ORNL


Phil Andrews, NICS Director

Buddy Bland (I stole everything from him!)

November 13, 2007

Reno, Nevada

Timeline Synopsis (Predictions are always hard, especially about the future! – Yogi Berra)

Phase-0: Early access to DoE Cray system, Jaguar (Now)

Phase-1: ~40TF NSF Cray System (Valentine’s Day ‘08)

Phase-1a: ~170TF NSF Cray System (Mid-May ‘08)

Phase-2: ~1PF NSF Cray System (1H’09)

Phase-3: (possible) >1PF NSF Cray System (’10)

Track 2C

NSB-08-54 May 8, 2008

  • SUBJECT: Major Actions and Approvals at the May 6-7, 2008 Meeting
  • 4. The Board authorized the NSF Director, at his discretion, to make an award to the Mellon Pitt Carnegie (MPC) Corporation for support of the proposal entitled "Transforming Science through Productive Petascale Computing."
Track 2C

Silicon Graphics Declares Bankruptcy and Sells Itself for $25 Million, by Erik Schonfeld, April 1, 2009

Sadly, this is no April Fool’s joke. Silicon Graphics, the high-end computer workstation and server company founded by Jim Clark in 1982, today declared bankruptcy and sold itself to Rackable Systems for $25 million plus the assumption of “certain liabilities.” In its bankruptcy filing, SGI listed debt of $526 million.

Track 2C


Rumor: SGI breaks off NSF petaflops deal with Pittsburgh


About a year ago, the National Science Foundation worked with PSC to prepare for a 1 PetaFlop system to be deployed there and integrated into the TeraGrid, a large global supercomputing network used for academic and public research. The result was an SGI UltraViolet system, approximately 197 cabinets, 100,000 cores, and all of it for the low price of $30 million.

Well, that was with the old SGI. News now is that the new SGI has found other customers willing to pay higher, “more reasonable” prices for these same cabinets, and has decided not to honor the original offer. Legally, they don’t have to honor it, but it puts PSC and the NSF in a tight spot, as they now have $30 million that’s supposed to magically turn into a 1 PF supercomputer, and won’t.

Track 2D: Split into four parts
  • Data-intensive, high-performance computing system
  • Experimental high-performance computing system of innovative design
  • Experimental, high-performance grid test-bed
  • Pool of loosely coupled grid-computing resources.
Track 2D / Data

San Diego Supercomputer Center / UCSD

“Flash Gordon”

  • Appro / Intel / ScaleMP
  • Flash Memory
  • 200 TF Peak
Track 2D / Experimental

Keeneland: National Institute for Experimental Computing

  • Georgia Tech + University of Tennessee and Oak Ridge National Laboratory
  • Initially HP + NVIDIA Tesla
  • 2012: New technology, 2 PF peak
Track 2D / Experimental Grid Test Bed

FutureGrid: Indiana University

A testbed to address complex research challenges in computer science related to the use and security of grids and clouds.

  • A geographically distributed set of heterogeneous computing systems
  • A data management system to hold both metadata and a growing library of software images
  • A dedicated network allowing isolatable, secure experiments
Track 1 Proposals: Rumors

State of California

  • ~1M core IBM Blue Gene (not HPCS system)
  • Sited at Livermore

PSC, et al: ?? (not HPCS system)

Oak Ridge National Laboratory

  • Cray Cascade (HPCS system)

NCSA, et al

  • IBM PERCS (HPCS system)
Track 1: Blue Waters / NCSA

(Rumored specs – “The Register”)

  • IBM POWER7 @ 4 GHz
  • 38,900 eight-core chips, 10 PF peak
  • 620 TB memory
  • 1.3 PB/s interconnect
  • 26 PB storage
  • Exabyte archive
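
A quick back-of-envelope check (a minimal Python sketch; the flops-per-cycle figure is an assumption, since the slides do not give it) reproduces the rumored ~10 PF peak from the chip count and clock, and gives the memory-to-compute ratio:

  # Back-of-envelope check of the rumored Blue Waters figures.
  # Assumption not in the slides: each POWER7 core retires 8
  # double-precision flops per cycle (4 FMA pipelines x 2 flops each).
  chips = 38_900
  cores_per_chip = 8
  clock_hz = 4.0e9              # 4 GHz, from the rumored specs
  flops_per_core_per_cycle = 8  # assumed

  peak_flops = chips * cores_per_chip * clock_hz * flops_per_core_per_cycle
  print(f"peak = {peak_flops / 1e15:.2f} PF")    # -> 9.96 PF, i.e. ~10 PF

  memory_bytes = 620e12         # 620 TB, from the rumored specs
  print(f"bytes/flop = {memory_bytes / peak_flops:.3f}")  # -> 0.062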
TeraGrid Phase III: eXtreme Digital Resources for Science and Engineering (XD)
  • High-Performance Computing and Storage Services: Four to six nodes, Track 2 and its successors
  • High-Performance Remote Visualization and Data Analysis Services – up to two
  • Technology Audit and Insertion Service
  • Advanced User Support Service
  • Training, Education and Outreach Service
XD (continued)
  • Coordination and Management Service
    • Design the XD grid architecture
    • Manage its implementation
    • Coordinate regular reporting of XD activities to NSF
    • Manage accounting, authorization, authentication, allocation, and security services
    • Coordinate XD component services
    • Maintain a responsive, user-centric operational posture for XD
    • Coordinate service providers that provide access to physical resources, to maintain an XD network that meets the needs of the user community
XD Status
  • Preliminary proposals reviewed
  • More planning needed before full proposals
    • Coordination and Management Services
    • Advanced User Support Services
    • Training, Education and Outreach Services
  • Technology Audit and Insertion Services
    • Full proposals June 2009
XD Remote Visualization and Data
  • September 28, 2009
  • AUSTIN, Texas — The National Science Foundation (NSF) has awarded a $7 million grant to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for a three-year project that will provide a new computing resource and the largest, most comprehensive suite of visualization and data analysis (VDA) services to the open science community.
  • The new compute resource, "Longhorn," will provide unprecedented VDA capabilities and will enable the national and international science communities to interactively visualize and analyze datasets of near petabyte scale (a quadrillion bytes or 1,000 terabytes) for scientists to explore, gain insight and develop new knowledge.
XD Remote Visualization and Data
  • University of Tennessee: Center for Remote Data Analysis and Visualization (RDAV)
    • with: Oak Ridge National Laboratory (ORNL), Lawrence Berkeley National Laboratory, the University of Wisconsin and the National Center for Supercomputing Applications
    • SGI shared-memory system, “Nautilus”: 1,024 cores, 4,096 GB memory, and 16 graphics processing units (NVIDIA GT300 Fermi)
TeraGrid Extension
  • To bridge to XD
  • One year: $30M
What’s Next: Planning!

NSF Advisory Committee for Cyberinfrastructure (ACCI)

ACCI Task Forces Update



Jim Bottum

September 22, 2009

Task Force Introduction

Timeline: 12-18 months or less from June 2009

Led by the NSF Advisory Committee on Cyberinfrastructure

Co-led by NSF PDs (OCI)

Membership from the community

Includes other agencies: DOE, EU, etc.


Program recommendations

We then go back and develop programs

Task Force Leads

Chair – Jim Bottum

Consultant – Paul Messina

NSF – Ed Seidel, Carmen Whitson, Jose Munoz

Task Forces & ACCI Leads

Campus Bridging

Craig Stewart, Indiana University

Software Infrastructure

David Keyes, Columbia University

Data & Visualization

Shenda Baker, Harvey Mudd College


High Performance Computing (HPC)

Thomas Zacharia, U of Tennessee, ORNL

Grand Challenge Communities

Tinsley Oden, U of Texas

Learning & Workforce Development

Diana Oblinger, EDUCAUSE


TFs are functionally interdependent

TF leaders talk regularly with each other and with NSF

Monthly conference calls with TF chairs, co-chairs, Paul Messina, and the NSF team

TF chairs and ACCI members: please work with ADs! This is NSF-wide!

Wiki site: public; anyone can contribute

NSF team will cycle through each TF

Joint workshops between TFs encouraged

Software Infrastructure Charge

Identify specific needs and opportunities across the spectrum of scientific software infrastructure

Design responsive approaches

Address the issue of institutional barriers

Campus Bridging Charge

Identification of best practices for

general process of bridging to national infrastructure

interoperable identification and authentication

Dissemination of and use of shared data collections

Vetting and sharing definitive, open use educational materials

Suggest common elements of software stacks widely usable across nation/world to promote interoperability/economy of scale

Recommend policy documents that any research university should have in place

Identify solicitations to support this work

Data & Visualization Charge

Examine the increasing importance of data, its development cycle(s), and their integral relationships within the exploration, discovery, research, engineering, and education aspects

Address the increasing interaction and interdependencies of data within the context of a range of computational capacities to catalyze the development of a system of science and engineering data collections that is open, extensive and evolvable

Emphasis will be on identifying the requirements for digital data cyberinfrastructure that will enable significant progress in multiple fields of science, engineering, and education – including visualization and interdisciplinary research and cross-disciplinary education

HPC Charge

To provide specific advice on the broad portfolio of HPC investments that NSF could consider to best advance science and engineering over the next five to ten years. Recommendations:

should be based on input from the research community and from experts in HPC technologies

should include hardware, software, and human expertise

should encompass both

infrastructure to support breakthrough research in science and engineering, and

research on the next generation of hardware, software, and training.

Grand Challenge Communities Charge

Which grand challenges require prediction, and which do not?

What are the generic computational and social technologies that belong to OCI and are applicable to all grand challenges?

How can OCI make the software and other technical investments that are useful and cut across communities?

What are the required investments in data, as well as the institutional components needed for GCCs?

How can we help communities that do not yet know what they need or how to work together to work effectively (outreach)?

Grand Challenge Communities Charge (2)

How to conceive of and enable grand challenge communities that make use of cyberinfrastructure.

What type of CI is needed (hardware, networking, software, data, social science knowledge, etc.).

How to deal with the issues of data gathering and interoperability for both static and dynamic, real-time problems.

What open scientific issues transcend NSF Directorates?

Can we develop a more coherent architecture, including data interoperability, a software environment people can build on, applications to be built on this environment, common institutional standards, etc.?

Learning & Workforce Development Charge

Foster the broad deployment and utilization of CI-enabled learning and research environments;

Support the development of new skills and professions needed for full realization of CI-enabled opportunities;

Promote broad participation of underserved groups, communities and institutions, both as creators and users of CI;

Stimulate new developments and continual improvements of CI-enabled learning and research environments;

Facilitate CI-enabled lifelong learning opportunities ranging from the enhancement of public understanding of science to meeting the needs of the workforce seeking continuing professional development;

Support programs that encourage faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research in computational science and computational science curriculum development;

Support the development of programs that connect K-12 students and educators with the types of computational thinking and computational tools that are being facilitated by cyberinfrastructure.


Task force charges and membership reviewed at June ACCI meeting

NSF staff leads assigned to each TF (staffing still ramping up over summer)

Workshops held or being planned

GCC and Software Infrastructure TFs drafting a recommendation regarding CS&E program

Das Ende (The End)

Jim Kasdorf