
Information Centric Super Computing

This presentation explores the shift towards information-centric supercomputing and the need for investment in scientific information management and visualization tools. It examines the changing nature of problems demanding supercomputing and the implications for systems design. The goal is to optimize human attention as a resource and improve information quality in the digital age.



Presentation Transcript


  1. Information Centric Super Computing. Jim Gray, Microsoft Research, gray@microsoft.com. Talk at http://research.microsoft.com/~gray/talks, 20 May 2003. Presentation to the Committee on the Future of Supercomputing of the National Research Council's Computer Science and Telecommunications Board.

  2. Committee Goal … assess the status of supercomputing in the United States, including the characteristics of relevant systems and architecture research in government, industry, and academia and the characteristics of the relevant market. The committee will examine key elements of context--the history of supercomputing, the erosion of research investment, the needs of government agencies for supercomputing capabilities--and assess options for progress. Key historical or causal factors will be identified. The committee will examine the changing nature of problems demanding supercomputing (e.g., weapons design, molecule modeling and simulation, cryptanalysis, bioinformatics, climate modeling) and the implications for systems design. It will seek to understand the role of national security in the supercomputer market and the long-term federal interest in supercomputing.

  3. Summary: It’s the Software… • SuperComputing is Information centric • Scientific computing is Beowulf computing • Scientific computing is becoming Info-centric. • Adequate investment in files/OS/networking • Underinvestment in Scientific Information management and visualization tools. • Computation Grid moves too much data; DataGrid (or App Grid) is the right concept

  4. Thesis • Most new information is digital (and old information is being digitized) • A Computer Science Grand Challenge: • Capture • Organize • Summarize • Visualize this information • Optimize Human Attention as a resource • Improve information quality

  5. Information Avalanche • The Situation • We can record everything • Everything is a LOT! • The Good news • Changes science, education, medicine, entertainment,…. • Shrinks time and space • Can augment human intelligence • The Bad News • The end of privacy • Cyber Crime / Cyber Terrorism • Monoculture • The Technical Challenges • Amplify human intellect • Organize, summarize and prioritize information • Make programming easy

  6. Super Computers • IntraNets: Wal-Mart, Federal Reserve, Amex; 1 Tflops; all more than 1PB • Ones you and others use every day: Google, Inktomi,…; AOL, MSN, Yahoo!; Hotmail,…; eBay, Amazon.com,…; all are more than 10 Tops; all more than 1PB • They are ALL Information Centric

  7. Q: How can I recognize a SuperComputer? A: It costs 10M$. Gordon Bell’s Seven Price Tiers: 10$: wrist watch computers (sensors); 100$: pocket/palm computers (phone/camera); 1,000$: portable computers (tablet); 10,000$: personal computers (workstation); 100,000$: departmental computers (closet); 1,000,000$: site computers (glass house); 10,000,000$: regional computers (glass castle SC). A Super Computer / “Mainframe” costs more than 1M$ and must be an array of processors, disks, and comm ports.

  8. Computing is Information Centric (that’s why they call it IT) • Programs capture, organize, abstract, filter, and present Information to people. • Networks carry Information. • File is the wrong abstraction: Information is typed / schematized: words, pictures, sounds, arrays, lists,… • Notice that none of the examples on the previous slide serve files; they serve typed information (see the sketch below). • Recommendation: Increase Research investments ABOVE the OS level: Information Management/Visualization
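
The “file is the wrong abstraction” point is easy to make concrete: a file hands you an opaque byte array, while schematized information carries its types with it. A minimal Python sketch, where the record layout and field names are invented for illustration:

```python
import struct
from dataclasses import dataclass

# A file gives you bytes; the schema lives, at best, in someone's head.
raw = struct.pack("<10sdd", b"M31", 10.684708, 41.268750)

@dataclass
class SkyObject:
    """A typed record: the schema travels with the data."""
    name: str
    ra_deg: float   # right ascension, degrees
    dec_deg: float  # declination, degrees

def parse(buf: bytes) -> SkyObject:
    """Recover typed information from the untyped byte array."""
    name, ra, dec = struct.unpack("<10sdd", buf)
    return SkyObject(name.rstrip(b"\x00").decode(), ra, dec)

print(parse(raw))  # SkyObject(name='M31', ra_deg=10.684708, dec_deg=41.26875)
```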

  9. Summary: It’s the Software… • Computing is Information centric • Scientific computing is Beowulf computing • Scientific computing is becoming Info-centric. • Adequate investment in files/OS/networking • Underinvestment in Scientific Information management and visualization tools. • Computation Grid moves too much data; DataGrid (or App Grid) is the right concept

  10. Anecdotal Evidence: Everywhere I go I see Beowulfs • Clusters of PCs (or high-slice-price micros) • True: I have not visited the Earth Simulator, but… Google, MSN, Hotmail, Yahoo, NCBI, FNAL, Los Alamos, Cal Tech, MIT, Berkeley, NARO, Smithsonian, Wisconsin, eBay, Amazon.com, Schwab, Citicorp, Beijing, CERN, BaBar, NCSA, Cornell, UCSD, and of course NASA and Cal Tech

  11. (skip) Super Computing: The Top 10 of the Top 500

  12. (skip) Seti@Home: the world’s most powerful computer • 61 TF is the sum of the top 4 of the Top 500. • 61 TF is 9x the number 2 system. • 61 TF is more than the sum of systems 2..10

  13. (skip) And… • Google: 10k cpus, 2PB,… as of 2 years ago; 40 Tops • AOL, MSN, Hotmail, Yahoo!, …: all ~10K cpus, all have ~1PB…10PB storage • Wal-Mart is a PB poster child • Clusters / Beowulf everywhere you go.

  14. Scientific == Beowulf (clusters) • Scientific / Beowulf / Grid computing is 70’s-style computing: process / file / socket; byte arrays with no data schema or semantics; batch job scheduling; manual parallelism (MPI); poor or no Information management support; poor or no Information visualization toolkits (a sketch of that style follows below) • Recommendation: Increase investment in Info-Management; Increase investment in Info-Visualization
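
What “manual parallelism (MPI)” means in practice: the programmer partitions the data and moves raw values around by hand, with no schema and no data management underneath. A minimal sketch of that style using mpi4py (assuming mpi4py is installed; run under mpirun):

```python
# Run with: mpirun -n 4 python sum_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# The programmer splits the work by hand...
n = 1_000_000
local_sum = sum(range(rank * n // size, (rank + 1) * n // size))

# ...and ships untyped values around by hand.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("sum 0..n-1 =", total)
```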

  15. Summary: It’s the Software… • Computing is Information centric • Scientific computing is Beowulf computing • Scientific computing is becoming Info-centric. • Adequate investment in files/OS/networking • Underinvestment in Scientific Information management and visualization tools. • Computation Grid moves too much data; DataGrid (or App Grid) is the right concept

  16. The Evolution of Science • Observational Science: the scientist gathers data by direct observation, then analyzes the Information • Analytical Science: the scientist builds an analytical model and makes predictions • Computational Science: simulate the analytical model; validate the model and make predictions • Science-Informatics (Information Exploration Science): Information is captured by instruments or generated by a simulator; processed by software; placed in a database / files; the scientist analyzes the database / files

  17. How Are Discoveries Made? (Adapted from a slide by George Djorgovski) • Conceptual Discoveries: e.g., Relativity, QM, Brane World, Inflation… Theoretical, may be inspired by observations • Phenomenological Discoveries: e.g., Dark Matter, QSOs, GRBs, CMBR, Extrasolar Planets, Obscured Universe… Empirical, inspire theories, can be motivated by them [Diagram: New Technical Capabilities → Observational Discoveries ↔ Theory → Phenomenological Discoveries] • Phenomenological discoveries: explore parameter space; make new connections (e.g., multi-) • Understanding of complex phenomena requires complex, information-rich data (and simulations?)

  18. The Information Avalanche: both comp-X and X-info are generating Petabytes • Comp-Science generating an Information avalanche: comp-chem, comp-physics, comp-bio, comp-astro, comp-linguistics, comp-music, comp-entertainment, comp-warfare • Science-Info generating an Information avalanche: bio-info, astro-info, text-info, …

  19. Information Avalanche Stories • Turbulence: 100 TB simulation, then mine the Information • BaBar: grows 1TB/day; 2/3 simulation Information, 1/3 observational Information • CERN: LHC will generate 1GB/s, 10 PB/y • VLBA (NRAO) generates 1GB/s today • NCBI: “only ½ TB” but doubling each year; a very rich dataset • Pixar: 100 TB/Movie

  20. Astro-Info: World Wide Telescope http://www.astro.caltech.edu/nvoconf/ http://www.voforum.org/ • Premise: Most data is (or could be) online • The Internet is the world’s best telescope: • It has data on every part of the sky • In every measured spectral band: optical, x-ray, radio,… • As deep as the best instruments (2 years ago) • It is up when you are up. The “seeing” is always great (no working at night, no clouds, no moons, no…) • It’s a smart telescope: links objects and data to the literature on them.

  21. Why Astronomy Data? [Figure: the same sky in many spectral bands: ROSAT ~keV, DSS optical, IRAS 25μm, 2MASS 2μm, GB 6cm, WENSS 92cm, NVSS 20cm, IRAS 100μm] • It has no commercial value • No privacy concerns • Can freely share results with others • Great for experimenting with algorithms • It is real and well documented • High-dimensional data (with confidence intervals) • Spatial data • Temporal data • Many different instruments from many different places and many different times • But it’s the same universe, so comparisons make sense & are interesting • Federation is a goal • There is a lot of it (petabytes) • Great sandbox for data mining algorithms • Can share across company & university researchers • Great way to teach both Astronomy and Computational Science

  22. Summary: It’s the Software… • Computing is Information centric • Scientific computing is Beowulf computing • Scientific computing is becoming Info-centric. • Adequate investment in files/OS/networking • Underinvestment in Scientific Information management and visualization tools. • Computation Grid moves too much data; DataGrid (or App Grid) is the right concept

  23. What X-info Needs from us (CS) (not drawn to scale) [Diagram: Scientists pose science data & questions; Miners supply data mining algorithms; Plumbers build the database to store data and execute queries; question & answer and visualization tools sit on top]

  24. Data Access is hitting a wall: FTP and GREP are not adequate • You can GREP 1 MB in a second; you can GREP 1 GB in a minute; you can GREP 1 TB in 2 days; you can GREP 1 PB in 3 years • Oh, and 1PB is ~5,000 disks • At some point you need indices to limit search, and parallel data search and analysis: this is where databases can help • You can FTP 1 MB in 1 sec; you can FTP 1 GB / min (= 1 $/GB); … 2 days and 1K$; … 3 years and 1M$
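
The round numbers on this slide follow from a single rate assumption. A sketch that reproduces the TB and PB figures (the ~10 MB/s sequential scan/transfer rate is my assumption; the slide’s own numbers are rounded):

```python
def human(seconds: float) -> str:
    """Render a duration in the largest sensible unit."""
    for unit, div in [("years", 3.15e7), ("days", 86_400),
                      ("minutes", 60), ("seconds", 1)]:
        if seconds >= div:
            return f"{seconds / div:.1f} {unit}"
    return f"{seconds:.1f} seconds"

RATE = 10e6  # bytes/sec: ~10 MB/s, a 2003-era sequential disk or WAN rate
for name, size in [("1 MB", 1e6), ("1 GB", 1e9), ("1 TB", 1e12), ("1 PB", 1e15)]:
    print(f"GREP/FTP {name}: {human(size / RATE)}")
# 1 TB -> ~1.2 days, 1 PB -> ~3.2 years: past a terabyte you need
# indices and parallel search, not a sequential scan.
```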

  25. Next-Generation Data Analysis • Looking for: • Needles in haystacks: the Higgs particle • Haystacks: Dark matter, Dark energy • Needles are easier than haystacks • Global statistics have poor scaling • Correlation functions are N², likelihood techniques N³ • As data and processing grow at the same rate, we can only keep up with N log N • A way out? • Discard the notion of optimal (data is fuzzy, answers are approximate) • Don’t assume infinite computational resources or memory • Requires a combination of statistics & computer science • Recommendation: invest in data mining research, both general and domain-specific.
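
The “we can only keep up with N log N” claim deserves a worked example: if the dataset and the hardware both double each generation, an N log N algorithm’s runtime stays nearly flat while an N² one doubles every generation. A small sketch of that arithmetic:

```python
import math

def relative_time(cost, n0=1e6):
    """Runtime relative to today, assuming data size and hardware
    speed both double once per generation."""
    for d in (0, 5, 10):  # generations of simultaneous doubling
        n, speedup = n0 * 2 ** d, 2 ** d
        print(f"  after {d:2} doublings: {cost(n) / (cost(n0) * speedup):8.1f}x")

print("N log N (e.g., tree-based correlation estimates):")
relative_time(lambda n: n * math.log2(n))
print("N^2 (naive pair-counting correlation function):")
relative_time(lambda n: n * n)
# N log N creeps up to ~1.5x; N^2 is 1,024x slower after ten doublings.
```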

  26. Analysis and Databases • Statistical analysis deals with: • Creating uniform samples • Data filtering & censoring bad data • Assembling subsets • Estimating completeness • Counting and building histograms • Generating Monte-Carlo subsets • Likelihood calculations • Hypothesis testing • Traditionally these are performed on files • Most of these tasks are much better done inside a database, close to the data • Move Mohamed to the mountain, not the mountain to Mohamed (see the sketch below) • Recommendation: Invest in database research: extensible databases (text, temporal, spatial, …), data interchange, parallelism, indexing, query optimization
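
A concrete version of “move Mohamed to the mountain”: compute the histogram inside the database so that only bin counts cross the wire, not the raw records. A minimal sketch with Python’s built-in sqlite3 (the table and columns are invented):

```python
import random
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE obs (mag REAL, flagged INTEGER)")
db.executemany("INSERT INTO obs VALUES (?, ?)",
               [(random.gauss(20, 2), random.random() < 0.05)
                for _ in range(100_000)])

# Filtering, censoring, and histogramming run next to the data;
# only ~25 (bin, count) rows come back instead of 100,000 records.
histogram = db.execute("""
    SELECT CAST(mag AS INTEGER) AS bin, COUNT(*) AS n
    FROM obs
    WHERE NOT flagged              -- censor bad data in the query
    GROUP BY bin ORDER BY bin
""").fetchall()
for bin_, n in histogram:
    print(f"mag {bin_:3d}: {n}")
```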

  27. Goal: Easy Data Publication & Access • Augment FTP with data query: Return intelligent data subsets • Make it easy to • Publish: Record structured data • Find: • Find data anywhere in the network • Get the subset you need • Explore datasets interactively • Realistic goal: • Make it as easy as publishing/reading web sites today.

  28. Data Federations of Web Services • Massive datasets live near their owners: • Near the instrument’s software pipeline • Near the applications • Near data knowledge and curation • Super Computer centers become Super Data Centers • Each Archive publishes a web service • Schema: documents the data • Methods on objects (queries) • Scientists get “personalized” extracts • Uniform access to multiple Archives • A common global schema → Federation

  29. Web Services: The Key? • Web SERVER: given a url + parameters, returns a web page (often dynamic) • Web SERVICE: given an XML document (soap msg), returns an XML document; tools make this look like an RPC: F(x,y,z) returns (u,v,w) • Distributed objects for the web, plus naming, discovery, security,… • Internet-scale distributed computing [Diagram: your program → http → Web Server → web page; your program → soap → Web Service → data returned as an object in XML, in your address space]
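
In code, the server/service distinction is small: the HTTP plumbing is the same, but the payload is a typed XML document rather than a page for a browser. A toy sketch using only the Python standard library (the message shape is invented; real stacks like SOAP/.NET generate this plumbing for you):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import xml.etree.ElementTree as ET

class AddService(BaseHTTPRequestHandler):
    """XML document in, XML document out: F(x, y) returns u."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        req = ET.fromstring(body)          # e.g. <call><x>2</x><y>3</y></call>
        u = float(req.findtext("x")) + float(req.findtext("y"))
        resp = ET.Element("result"); resp.text = str(u)
        out = ET.tostring(resp)            # <result>5.0</result>
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(out)

if __name__ == "__main__":
    # Client-side tools wrap the POST so this looks like a local call f(2, 3).
    HTTPServer(("localhost", 8080), AddService).serve_forever()
```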

  30. The Challenge • This has failed several times before; understand why. • Develop: • Common data models (schemas) • Common interfaces (class/method) • Build useful prototypes (nodes and portals) • Create a community that uses and evolves the prototypes.

  31. Grid and Web Services Synergy • I believe the Grid will be many web services • IETF standards provide: • Naming • Authorization / Security / Privacy • Distributed Objects: Discovery, Definition, Invocation, Object Model • Higher-level services: workflow, transactions, DB,… • Synergy: commercial Internet & Grid tools

  32. Summary: It’s the Software… • Computing is Information centric • Scientific computing is Beowulf computing • Scientific computing is becoming Info-centric. • Adequate investment in files/OS/networking • Underinvestment in Scientific Information management and visualization tools. • Computation Grid moves too much data; DataGrid (or App Grid) is the right concept

  33. Recommendations • Increase Research investments ABOVE the OS level: Information Management/Visualization • Invest in database research: extensible databases (text, temporal, spatial, …), data interchange, parallelism, indexing, query optimization • Invest in data mining research, both general and domain-specific

  34. Stop Here • Bonus slides on Distributed Computing Economics

  35. Distributed Computing Economics • Why is Seti@Home a great idea? • Why is Napster a great deal? • Why is the Computational Grid uneconomic? • When does computing on demand work? • What is the “right” level of abstraction? • Is the Access Grid the real killer app? Based on: Distributed Computing Economics, Jim Gray, Microsoft Tech Report, March 2003, MSR-TR-2003-24 http://research.microsoft.com/research/pubs/view.aspx?tr_id=655

  36. Computing is Free • Computers cost 1k$ (if you shop right) • So 1 cpu-day == 1$ (amortizing the machine over about three years) • If you pay the phone bill (and I do), Internet bandwidth costs 50…500 $/Mbps/month (not including routers and management) • So 1GB costs 1$ to send and 1$ to receive
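
The “1 cpu-day == 1$” and “1$/GB” equivalences are simple amortization; a sketch of the arithmetic (the three-year machine life and the ~300 $/Mbps/month mid-range rent at full utilization are my assumptions):

```python
MACHINE_COST = 1_000            # $, a commodity PC (2003)
LIFETIME_DAYS = 3 * 365         # assume three-year depreciation
print(f"1 cpu-day ~ ${MACHINE_COST / LIFETIME_DAYS:.2f}")   # ~0.91 -> 'about 1$'

RENT = 300                      # $/Mbps/month, mid-range of the slide's 50...500
bytes_per_mbps_month = 1e6 / 8 * 86_400 * 30   # what 1 Mbps moves in a month
print(f"1 GB sent ~ ${RENT * 1e9 / bytes_per_mbps_month:.2f}")  # ~0.93 -> 'about 1$'
```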

  37. Why is Seti@Home a Good Deal? • Sending 300 KB costs 3e-4$ • User computes for ½ day: benefit 5e-1$ • ROI: 1500:1

  38. Why is Napster a Good Deal? • Sending 5 MB costs 5e-3$ • ½ a penny per song • Both sender and receiver can afford it. • The same logic powers web sites (Yahoo!,…): • 1e-3$/page-view advertising revenue • 1e-5$/page-view cost of serving the web page • 100:1 ROI
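
Both ROIs (slides 37 and 38) fall out of the equivalences on slide 36; a sketch:

```python
DOLLAR_PER_GB_SENT = 1.0        # from slide 36
DOLLAR_PER_CPU_DAY = 1.0

# Seti@Home: ship 300 KB of work units, get half a cpu-day back.
cost = 300e3 / 1e9 * DOLLAR_PER_GB_SENT      # 3e-4 $
benefit = 0.5 * DOLLAR_PER_CPU_DAY           # 5e-1 $
print(f"Seti@Home ROI ~ {benefit / cost:,.0f}:1")   # ~1,667:1, the slide's '1500:1'

# Napster song: 5 MB at 1 $/GB is half a penny, cheap for both ends.
print(f"song cost ~ ${5e6 / 1e9:.3f}")

# Web page: ad revenue 1e-3 $/view vs 1e-5 $/view serving cost.
print(f"page-view ROI ~ {1e-3 / 1e-5:.0f}:1")
```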

  39. The Cost of Computing: Computers are NOT free! • The capital cost of a TPC-C system is mostly storage and storage software (database) • IBM 32 cpu, 512 GB ram, 2,500 disks, 43 TB (680,613 tpmC @ 11.13 $/tpmC, available 11/08/03) http://www.tpc.org/results/individual_results/IBM/IBMp690es_05092003.pdf • A 7.5M$ super-computer • Total Data Center Cost: 40% capital & facilities, 60% staff (includes app development)

  40. Computing Equivalents: 1$ buys • 1 day of cpu time • 4 GB ram for a day • 1 GB of network bandwidth • 1 GB of disk storage • 10 M database accesses • 10 TB of disk access (sequential) • 10 TB of LAN bandwidth (bulk)

  41. Some consequences • Beowulf networking is 10,000x cheaper than WAN networking; factors of 10⁵ matter. • The cheapest and fastest way to move a Terabyte cross country is sneakernet: 24 hours = 4 MB/s; 50$ shipping vs 1,000$ WAN cost. • Sending the 10PB of CERN data via the network is silly: buy disk bricks in Geneva, fill them, ship them. TeraScale SneakerNet: Using Inexpensive Disks for Backup, Archiving, and Data Exchange. Jim Gray, Wyman Chong, Tom Barclay, Alex Szalay, Jan vandenBerg. Microsoft Technical Report, May 2002, MSR-TR-2002-54 http://research.microsoft.com/research/pubs/view.aspx?tr_id=569

  42. How Do You Move A Terabyte?

  Context      Speed (Mbps)   Rent ($/month)   $/Mbps   $/TB Sent   Time/TB
  Home phone   0.04           40               1,000    3,086       6 years
  Home DSL     0.6            70               117      360         5 months
  T1           1.5            1,200            800      2,469       2 months
  T3           43             28,000           651      2,010       2 days
  OC3          155            49,000           316      976         14 hours
  OC192        9,600          1,920,000        200      617         14 minutes
  100 Mbps     100                                                  1 day
  Gbps         1,000                                                2.2 hours

  Source: TeraScale Sneakernet, Microsoft Research, Jim Gray et al.
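
The Time/TB column is just size over line rate, and the slide-41 sneakernet comparison drops out of the same arithmetic; a sketch reproducing a few rows (link speeds as given in the table):

```python
TERABYTE_BITS = 8e12

links_mbps = {"Home DSL": 0.6, "T1": 1.5, "T3": 43, "OC3": 155, "OC192": 9_600}
for name, mbps in links_mbps.items():
    hours = TERABYTE_BITS / (mbps * 1e6) / 3600
    print(f"{name:>8}: {hours:10,.1f} hours per TB")
# -> T3 ~2 days, OC3 ~14 hours, OC192 ~14 minutes, matching the table.

# Sneakernet: ship the disks overnight for ~50 $ instead of renting
# an OC3 at 49,000 $/month to land the same terabyte in ~14 hours.
```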

  43. Computational Grid Economics • To the extent that the computational grid is like Seti@Home or ZetaNet or Folding@home or…, it is a great thing. • To the extent that the computational grid is MPI or data analysis, it fails on economic grounds: move the programs to the data, not the data to the programs. • The Internet is NOT the cpu backplane. • The USG should not hide this economic fact from the academic/scientific research community.

  44. Computing on Demand • Was called outsourcing / service bureaus in my youth; CSC and IBM did it. • It is not a new way of doing things: think payroll. Payroll is a standard outsource. • Now we have Hotmail, Salesforce.com, Oracle.com,… • Works for standard apps. • Airlines outsource reservations. Banks outsource ATMs. • But Amazon, Amex, Wal-Mart,… can’t outsource their core competence. • So, COD works for commoditized services.

  45. What’s the right abstraction level for Internet-Scale Distributed Computing? • Disk block? No, too low. • File? No, too low. • Database? No, too low. • Application? Yes, of course. • Blast search • Google search • Send/Get eMail • Portals that federate astronomy archives (http://skyQuery.Net/) • Web Services (.NET, EJB, OGSA) give this abstraction level.

  46. Access Grid • Q: What comes after the telephone? • A: eMail? • A: Instant messaging? • Both seem like retro technology: text & emoticons. • The Access Grid could revolutionize human communication. • But it needs a new idea. • Q: What comes after the telephone?

  47. Distributed Computing Economics • Why is Seti@Home a great idea? • Why is Napster a great deal? • Why is the Computational Grid uneconomic? • When does computing on demand work? • What is the “right” level of abstraction? • Is the Access Grid the real killer app? Based on: Distributed Computing Economics, Jim Gray, Microsoft Tech Report, March 2003, MSR-TR-2003-24 http://research.microsoft.com/research/pubs/view.aspx?tr_id=655

  48. Turbulence, an old problem • Observational: described 5 centuries ago by Leonardo • Theoretical: the best minds have tried and… “moved on”: • Lamb: …“When I die and go to heaven…” • Heisenberg, von Weizsäcker: …some attempts • Partial successes: Kolmogorov, Onsager • Feynman: “…the last unsolved problem of classical physics” (Image adapted from the ASCI ASCP gallery, http://www.cacr.caltech.edu/~slombey/asci/fluids/turbulence-volren.med.jpg)

  49. Simulation: Comp-Physics • How does the turbulent energy cascade work? • Direct numerical simulation of “turbulence in a box” • Pushing comp-limits along specific directions: • 8192², but only two-dimensional (Ref: Chen & Kraichnan) • Three-dimensional (512³ to 4,096³), but only static information (Ref: Cao, Chen et al.) Slide courtesy of Charles Meneveau @ JHU

  50. Data-Exploration: Physics-Info • We can now “put it all together”: • Large scale range, scale-ratio O(1,000) • Three-dimensional in space • Time-evolution and Lagrangian approach (follow the flow) • Turbulence database: • Create a 100 TB database of O(2,000) consecutive snapshots of a 1,024³ turbulence simulation • Mine the database to understand flows in detail Slide courtesy of Charles Meneveau, Alex Szalay @ JHU
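
The 100 TB figure is consistent with straightforward accounting; a sketch (the field count and precision are my assumptions, not stated on the slide):

```python
GRID_POINTS = 1_024 ** 3    # one 1,024^3 snapshot
SNAPSHOTS = 2_000           # O(2,000) consecutive time steps
FIELDS = 4                  # assume 3 velocity components + pressure
BYTES_PER_VALUE = 8         # assume double precision

raw = GRID_POINTS * SNAPSHOTS * FIELDS * BYTES_PER_VALUE
print(f"raw data ~ {raw / 1e12:.0f} TB")  # ~69 TB; ~100 TB with indices and overhead
```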
