National Science Foundation Directions
Jim Kasdorf, Director, Special Projects, Pittsburgh Supercomputing Center
Höchstleistungsrechenzentrum Stuttgart
October 5, 2009

Disclaimer
Nothing in this presentation represents an official view or position of the U.S. National Science Foundation, the Pittsburgh Supercomputing Center, or Carnegie Mellon University.
NSF Awards TACC $59 Million for Unprecedented High Performance Computing System
University of Texas, Arizona State University, Cornell University and Sun Microsystems to deploy the world’s most powerful general-purpose computing system on the TeraGrid
09/28/2006, Marcia Inger
AUSTIN, Texas: The National Science Foundation (NSF) has made a five-year, $59 million award to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin to acquire, operate and support a high-performance computing system that will provide unprecedented computational power to the nation’s research scientists and engineers.
The Track2B system at UT/ORNL
Phil Andrews, NICS Director
(I stole everything from him!)
November 13, 2007
Phase-0: Early access to DoE Cray system, Jaguar (Now)
Phase-1: ~40TF NSF Cray System (Valentine’s Day ‘08)
Phase-1a: ~170TF NSF Cray System (Mid-May ‘08)
Phase-2: ~1PF NSF Cray System (1H’09)
Phase-3: (possible) >1PF NSF Cray System (’10)
NSB-08-54 May 8, 2008
Silicon Graphics Declares Bankruptcy and Sells Itself for $25 Million, by Erik Schonfeld, April 1, 2009
Sadly, this is no April Fool’s joke. Silicon Graphics, the high-end computer workstation and server company founded by Jim Clark in 1982, today declared bankruptcy and sold itself to Rackable Systems for $25 million plus the assumption of “certain liabilities.” In its bankruptcy filing, SGI listed debt of $526 million.
Rumor: SGI breaks off NSF petaflops deal with Pittsburgh
About a year ago, the National Science Foundation worked with PSC to prepare for a 1 PetaFlop system to be deployed there and integrated into the TeraGrid, a large global supercomputing network used for academic and public research. The result was an SGI UltraViolet system, approximately 197 cabinets, 100,000 cores, and all of it for the low price of $30 million.
Well, that was with the old SGI. News now is that the new SGI has found other customers willing to pay higher, “more reasonable” prices for these same cabinets, and has decided not to honor the original offer. Legally, it doesn’t have to, but this puts PSC and the NSF in a tight spot: they now have $30 million that’s supposed to magically turn into a 1 PF supercomputer, and won’t.
San Diego Supercomputer Center / UCSD
Keeneland: National Institute for Experimental Computing
FutureGrid: Indiana University
Testbed to address complex research challenges in computer science related to the use and security of grids and clouds.
State of California
PSC, et al: ?? (not HPCS system)
Oak Ridge National Laboratory
NCSA, et al
(Rumored specs – “The Register”)
NSF Advisory Committee for Cyber Infrastructure - ACCI
September 22, 2009
Timeline: 12-18 months or less from June 2009
Led by NSF Advisory Committee on Cyberinfrastructure
Co-led by NSF PDs (OCI)
Membership from community
Include other agencies: DOE, EU, etc.
We then go back and develop programs
Chair – Jim Bottum
Consultant – Paul Messina
NSF – Ed Seidel, Carmen Whitson, Jose Munoz
Craig Stewart, Indiana University
David Keyes, Columbia University
Data & Visualization
Shenda Baker, Harvey Mudd College
Thomas Zacharia, U of Tennessee, ORNL
Grand Challenge Communities
Tinsley Oden, U of Texas
Learning & Workforce Development
Diana Oblinger, EDUCAUSE
TFs are functionally interdependent
TF leaders talk regularly with each other, NSF
Monthly conference calls with TF chairs, co-chairs, Paul M, NSF team
TF Chairs and ACCI members: please work with ADs! This is NSF wide!
Public; anyone can contribute to this
NSF team will cycle through each TF
Joint workshops between TFs encouraged
Identify specific needs and opportunities across the spectrum of scientific software infrastructure
Design responsive approaches
Address issue of institutional barriers
Identification of best practices for
general process of bridging to national infrastructure
interoperable identification and authentication
Dissemination and use of shared data collections
Vetting and sharing definitive, open use educational materials
Suggest common elements of software stacks widely usable across nation/world to promote interoperability/economy of scale
Recommended policy documents that any research university should have in place
Identify solicitations to support this work
Examine the increasing importance of data, its development cycle(s), and their integral relationships within the exploration, discovery, research, engineering, and education aspects
Address the increasing interaction and interdependencies of data within the context of a range of computational capacities to catalyze the development of a system of science and engineering data collections that is open, extensive and evolvable
Emphasis will be toward identifying the requirements for digital data cyberinfrastructure that will enable significant progress in multiple fields of science and engineering and education – including visualization and inter-disciplinary research and cross-disciplinary education
To provide specific advice on the broad portfolio of HPC investments that NSF could consider to best advance science and engineering over the next five to ten years. Recommendations:
should be based on input from the research community and from experts in HPC technologies
should include hardware, software and human expertise
infrastructure to support breakthrough research in science and engineering and
research on the next generation of hardware, software, and training.
Which grand challenges require prediction and which do not
What are the generic computational and social technologies that belong to OCI and are applicable to all grand challenges
How can OCI make the software and other technical investments that are useful and cut across communities
What are the required investments in data as well as institutional components needed for GCC’s
How can we help communities (outreach) that do not yet know what they need, or how to work together, work effectively
How to conceive of and enable grand challenge communities that make use of cyberinfrastructure.
What type of CI is needed (hardware, networking, software, data, social science knowledge, etc.).
How to deal with the issues of data gathering and interoperability for both static and dynamic, real-time problems.
What open scientific issues transcend NSF Directorates
Can we develop a more coherent architecture including data interoperability, a software environment people can build on, applications to be built on this environment, common institutional standards, etc.
Foster the broad deployment and utilization of CI-enabled learning and research environments
Support the development of new skills and professions needed for full realization of CI-enabled opportunities;
Promote broad participation of underserved groups, communities and institutions, both as creators and users of CI;
Stimulate new developments and continual improvements of CI-enabled learning and research environments;
Facilitate CI-enabled lifelong learning opportunities ranging from the enhancement of public understanding of science to meeting the needs of the workforce seeking continuing professional development;
Support programs that encourage faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research in computational science and computational science curriculum development;
Support the development of programs that connect K-12 students and educators with the types of computational thinking and computational tools that are being facilitated by cyberinfrastructure.
Task force charges and membership reviewed at June ACCI meeting
NSF staff leads assigned to each TF (staffing still ramping up over summer)
Workshops held or being planned
GCC and Software Infrastructure TFs drafting a recommendation regarding CS&E program