
Goal of the Committee

Presentation Transcript


  1. Goal of the Committee. The goal of the committee is to assess the status of supercomputing in the United States, including the characteristics of relevant systems and architecture research in government, industry, and academia and the characteristics of the relevant market. The committee will examine key elements of context (the history of supercomputing, the erosion of research investment, the needs of government agencies for supercomputing capabilities) and assess options for progress. Key historical or causal factors will be identified. The committee will examine the changing nature of problems demanding supercomputing (e.g., weapons design, molecular modeling and simulation, cryptanalysis, bioinformatics, climate modeling) and the implications for systems design. It will seek to understand the role of national security in the supercomputer market and the long-term federal interest in supercomputing.

  2. Gordon Bell, Microsoft Bay Area Research Center, San Francisco
  • NRC-CSTB Future of Supercomputing Committee, 22 May 2003
  • NRC Brooks-Sutherland Committee, 11 March 1994
  • NRC OSTP Report, 18 August 1984

  3. Outline
  • Community re-Centric Computing vs. Computer Centers
  • Background: where we are and how we got here. Performance(t). Hardware trends and questions.
  • If we didn't have general-purpose computation centers, would we invent them?
  • Atkins Report: past and future concerns... be careful what we wish for
  • Appendices: NRC Brooks-Sutherland '94 comments; CISE (gbell) concerns re. supercomputing centers ('87); CISE (gbell) //ism goals ('87); NRC Report '84 comments
  • Bottom line, independent of the question: it has been and always will be the software and apps, stupid! And now it's the data, too!

  4. Community re-Centric Computing...
  • Goal: enable technical communities to create their own computing environments for personal, data, and program collaboration and distribution.
  • Design based on technology trends, especially networking, apps program maintenance, databases, & providing web and other services.
  • Many alternative styles and locations are possible:
  • Service from existing centers, including many state centers
  • Software vendors could be encouraged to supply apps services
  • NCAR-style center built around data and apps
  • Instrument-based databases, both central & distributed, when multiple viewpoints create the whole
  • Wholly distributed, with many individual groups

  5. Community re-Centric Computing: Time for a major change
  Community Centric:
  • Community is responsible; planned & budgeted as resources; responsible for its infrastructure
  • Apps are community centric; computing is integral to sci./eng.
  • In sync with technologies: 1-3 Tflops/$M; 1-3 PBytes/$M to buy smallish Tflops & PBytes. New scalables are "centers" fast.
  • Community can afford: dedicated to a community
  • Program, data & database centric; may be aligned with instruments or other community activities
  • Output = web service; an entire community demands real-time web service
  Centers Centric:
  • Center is responsible; computing is "free" to users
  • Provides a vast service array for all; runs & supports all apps; computing grant disconnected
  • Counter to technology directions: more costly; large centers operate at a dis-economy of scale; based on unique, fast computers
  • Center can only afford: divvy cycles among all communities
  • Cycles centric, but politically difficult to maintain highest power vs. more centers
  • Data is shipped to centers, requiring expensive, fast networking
  • Output = diffuse among general-purpose centers; are centers ready or able to support on-demand, real-time web services?

  6. Background: scalability at last
  • Q: How did we get to scalable computing and parallel processing?
  • A: Scalability evolved from a two-decade-old vision and plan starting at DARPA & NSF, now picked up by DOE & the rest of the world.
  • Q: What should be done now?
  • A: Realize that scalability, the web, and now web services change everything. Redesign to get with the program!
  • Q: Why do you seem to want to de-center?
  • A: Besides the fact that user demand has been and is totally de-coupled from supply, I believe technology doesn't necessarily support users or their mission, and that centers are potentially inefficient compared with a more distributed approach.

  7. Steve Squires & Gordon Bell at our "Cray" at the start of DARPA's SCI program, c. 1984. Twenty years later: clusters of killer micros become the single standard.

  8. RIP: Lost in the search for parallelism
  ACRI, Alliant, American Supercomputer, Ametek, Applied Dynamics, Astronautics, BBN, CDC, Cogent, Convex (> HP), Cray Computer, Cray Research (> SGI > Cray), Culler-Harris, Culler Scientific, Cydrome, Dana/Ardent/Stellar/Stardent, Denelcor, Encore, Elexsi, ETA Systems, Evans and Sutherland Computer, Exa, Flexible, Floating Point Systems, Galaxy YH-1, Goodyear Aerospace MPP, Gould NPL, Guiltech, Intel Scientific Computers, International Parallel Machines, Kendall Square Research, Key Computer Laboratories (searching again), MasPar, Meiko, Multiflow, Myrias, Numerix, Pixar, Parsytec, nCube, Prisma, Pyramid, Ridge, Saxpy, Scientific Computer Systems (SCS), Soviet Supercomputers, Supertek, Supercomputer Systems, Suprenum, Tera (> Cray Company), Thinking Machines, Vitesse Electronics, Wavetracer

  9. A brief, simplified history of HPC
  • The Cray formula, the smP, evolves for Fortran: '60-'02 (US: '60-'90)
  • 1978: VAXen threaten computer centers...
  • NSF response: the Lax Report. Create 7 Cray centers, 1982-
  • SCI: DARPA searches for parallelism using killer micros
  • Scalability found: "bet the farm" on clusters of micros. Users "adapt": MPI, the lowest-common-denominator programming model, is found. >'95. Result: EVERYONE gets to re-write their code!!
  • Beowulf clusters form by adopting PCs and Linus's Linux to create the cluster standard! (In spite of funders.) >1995
  • ASCI: DOE gets petaflops clusters, creating an "arms" race!
  • "Do-it-yourself" Beowulfs negate computer centers since everything is a cluster and shared power is nil. >2000
  • 1997-2002: Tell Fujitsu & NEC to get "in step"!
  • High-speed nets enable peer-to-peer & Grid or Teragrid
  • Atkins Report: Spend $1B/year, form more and larger centers, and connect them as a single center...

  10. The Virtuous Economic Cycle drives the PC industry... & Beowulf. [Cycle diagram; elements: standards, attracts suppliers, competition, volume, greater availability @ lower cost, utility/value, innovation, creates apps, tools, training, attracts users; DOJ noted as an outside force.]

  11. Lessons from Beowulf (courtesy Thomas Sterling, Caltech)
  • An experiment in parallel computing systems, '92
  • Established vision: low-cost, high-end computing
  • Demonstrated effectiveness of PC clusters for some (not all) classes of applications
  • Provided networking software
  • Provided cluster management tools
  • Conveyed findings to the broad community (tutorials and the book)
  • Provided a design standard to rally the community!
  • Standards beget books, trained people, software... a virtuous cycle that allowed apps to form
  • Industry began to form beyond a research project

  12. Technology: peta-bytes, -flops, -bps. We get no technology before its time
  • Moore's Law 2004-2012: 40X
  • The big surprise will be the 64-bit micro
  • 2004: O(100) processors = 300 GF PAP, $100K
  • 3 TF/$M, not a diseconomy of scale for large systems
  • 1 PF => $330M, but 330K processors; other paths
  • Storage: 1-10 TB disks; 100-1000 disks
  • Internet II killer app -- NOT Teragrid
  • Access Grid, new methods of communication
  • Response time to provide web services
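A minimal sketch of the price/performance arithmetic behind slide 12, assuming the 2004 price point the slide gives (roughly 100 processors = 300 GF peak for $100K) and simple linear scaling; the exact rounding of the slide's $330M / 330K figures is not specified there.

```python
# Worked check of the slide-12 price/performance arithmetic.
# Assumption (from the slide): ~100 processors deliver 300 GF peak for $100K in 2004.
gflops_per_node = 300 / 100                           # ~3 GF peak per processor
tf_per_million_dollars = (300e9 / 100e3) / 1e12 * 1e6  # = 3 TF per $M

petaflop = 1e15
nodes_needed = petaflop / (gflops_per_node * 1e9)              # ~333,000 processors
cost_millions = petaflop / (tf_per_million_dollars * 1e12)     # ~$333M

print(f"{tf_per_million_dollars:.1f} TF/$M, "
      f"{nodes_needed:,.0f} processors, ~${cost_millions:.0f}M for 1 PF")
```

Run as-is it prints roughly "3.0 TF/$M, 333,333 processors, ~$333M for 1 PF", which matches the slide's rounded 330K-processor, $330M estimate.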

  13. [Chart: Performance metrics as a function of time, 1987-2009, with growth-rate trend lines at 110%, 100%, and 60% per year; ES marked.]

  14. Perf(PAP) = c x 1.6**(t-1992), with c = 128 GF/$300M; the '94 prediction was c = 128 GF/$30M.
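A small sketch of the trend line on this slide. Which constant to use is the slide's own open question; the code below defaults to the '94-prediction constant (128 GF for the reference budget) as an assumption, and the other curve is obtained by passing a constant one tenth as large.

```python
# Sketch of the slide's peak-performance trend, Perf(t) = c * 1.6**(t - 1992).
# Default constant is an assumption taken from the '94 prediction on the slide
# (128 GF for the reference budget); pass c_gflops=12.8 for the 128 GF/$300M curve.
def peak_gflops(year: int, c_gflops: float = 128.0) -> float:
    """Peak-announced performance (GF) purchasable for the reference budget."""
    return c_gflops * 1.6 ** (year - 1992)

for year in (1992, 1996, 2001, 2004):
    print(year, f"{peak_gflops(year):,.0f} GF")
```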

  15. [Chart: Performance (TF) vs. cost ($M) of non-central and centrally distributed systems; centers (old-style supers) and the centers' allocation range are marked.]

  16. [Chart: National Semiconductor Technology Roadmap (feature size), with the 1 Gbit memory generation marked.]

  17. Disk Density Explosion (courtesy Richie Lary)
  • Magnetic disk recording density (bits per mm2) grew at 25% per year from 1975 until 1989.
  • Since 1989 it has grown at 60-70% per year.
  • Since 1998 it has grown at >100% per year; this rate will continue into 2003.
  • Factors causing accelerated growth: improvements in head and media technology, improvements in signal-processing electronics, and lower head flying heights.
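For readers who want to sanity-check these compound growth rates (and the per-decade factors quoted on the next slide), a one-liner suffices: a rate r per year multiplies density by (1 + r)**n over n years.

```python
# Quick compound-growth check of the areal-density rates quoted above.
def growth_factor(rate_per_year: float, years: int) -> float:
    """Total multiplier after `years` of compounding at `rate_per_year`."""
    return (1.0 + rate_per_year) ** years

print(f"25%/yr, 1975-1989 (14 yrs): {growth_factor(0.25, 14):.0f}x")   # ~23x
print(f"60%/yr over a decade:       {growth_factor(0.60, 10):.0f}x")   # ~110x
print(f"100%/yr over a decade:      {growth_factor(1.00, 10):.0f}x")   # 1024x
```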

  18. [Chart: National Storage Roadmap 2000, annotated with growth trend lines of 100x/decade (= 100%/year) and ~10x/decade (= 60%/year).]

  19. Disk / Tape Cost Convergence (courtesy of Richard Lary)
  • A 3½" ATA disk could cost less than an SDLT cartridge in 2004, if disk manufacturers maintain the 3½", multi-platter form factor.
  • Volumetric density of disk will exceed tape in 2001.
  • A "Big Box of ATA Disks" could be cheaper than a tape library of equivalent size in 2001.

  20. Disk Capacity / Performance Imbalance (courtesy of Richard Lary)
  • Capacity growth is outpacing performance growth: [chart, 1992-2001] capacity grows 140x in 9 years (73%/yr) vs. performance 3x in 9 years (13%/yr).
  • The difference must be made up by better caching and load balancing.
  • Actual disk capacity may be capped by the market; a shift to smaller disks is already happening for high-speed disks.
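The annualized rates on this chart follow directly from the 9-year factors; a short check, in case the reader wants to rederive them:

```python
# Annualized rate implied by an overall factor F over n years: F**(1/n) - 1.
def annual_rate(total_factor: float, years: int) -> float:
    return total_factor ** (1.0 / years) - 1.0

print(f"capacity:    {annual_rate(140, 9):.0%}/yr")  # ~73%/yr
print(f"performance: {annual_rate(3, 9):.0%}/yr")    # ~13%/yr
```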

  21. Re-Centering to Community Centers
  • There is little rational support for general-purpose centers.
  • Scalability changes the architecture of the entire cyberinfrastructure. No need to have a computer bigger than the largest parallel app. They aren't super.
  • The world is substantially data driven, not cycles driven.
  • Demand is de-coupled from supply planning, payment, or services.
  • Scientific / engineering computing has to be the responsibility of each of its communities.
  • Communities form around instruments, programs, databases, etc.
  • Output is a web service for the entire community.

  22. Radical machines will come from the low-cost 64-bit explosion
  • Today's desktop has trumped, and will increasingly trump, yesteryear's super simply due to the explosion in memory size.
  • Pittsburgh Alpha: 3D MRI skeleton computing/viewing is a desktop problem given a large memory.
  • Tony Jamieson: "I can model an entire 747 on my laptop!"
  • Grand Challenge (c2002) problems become desktop (c2004) tractable. I don't buy the problem-growth mantra: 2x res. > 2**4 (yrs.)

  23. Centers aren't very super...
  • Top-100 rankings: Pittsburgh: 6; NCAR: 10; LSU: 17; Buffalo: 22; FSU: 38; San Diego: 52; NCSA: 82; Cornell: 88; Utah: 89. 17 universities world-wide are in the top 100.
  • Massive upgrade is continuously required.
  • Large memories: machines aren't balanced and haven't been. Bandwidth: 1 Byte/flop vs. 24 Bytes/flop.
  • File Storage > Databases
  • Since centers' systems have >4-year lives, they start out obsolete and overpriced... and then get worse.

  24. Centers: The role going forward
  • The US builds scalable clusters, NOT supercomputers. Scalables are 1 to n commodity PCs that anyone can assemble. Unlike the "Crays", all are equal.
  • Use is allocated in small clusters. Problem parallelism sans ∞// has been elusive (limited to 1K). There is no advantage in having a computer larger than a //able program.
  • User computation can be acquired and managed effectively: computation is divvied up in small clusters, e.g. 128 nodes, that individual groups can acquire and manage.
  • The basic hardware evolves and doesn't especially favor centers: 64-bit architecture; 512 Mb x 32/DIMM = 2 GB x 4/system = 8 GB; systems >>16 GB. (A center's machine will be obsolete, by memory / balance rules.)
  • 3-year timeframe: 1 TB disks at $0.20/TB.
  • Last-mile communication costs are not decreasing to favor centers or grids.
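The commodity-memory chain on this slide (512 Mbit parts, 32 chips per DIMM, 4 DIMMs per system, figures as given there) works out as follows; this is just the slide's arithmetic spelled out, not an additional claim about any particular system.

```python
# Arithmetic behind the slide's commodity-memory point:
# 512 Mbit DRAM chips, 32 chips/DIMM, 4 DIMMs/system (figures from the slide).
MBIT = 1 << 20
chip_bits = 512 * MBIT
dimm_bytes = chip_bits * 32 // 8      # 2 GiB per DIMM
system_bytes = dimm_bytes * 4         # 8 GiB per commodity system
print(dimm_bytes / 2**30, "GiB/DIMM;", system_bytes / 2**30, "GiB/system")
```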

  25. Review the bidding
  • 1984: The Japanese are coming. CMOS and killer micros. Build // machines. 40+ computers were built & failed, based on CMOS and/or micros. No attention to software or apps.
  • 1994: Parallelism and Grand Challenges. Converge to Linux clusters (constellations: nodes > 1 processor) & MPI. No noteworthy middleware software to aid apps & replace Fortran; e.g. HPF failed. Whatever happened to the Grand Challenges??
  • 2004: Teragrid has potential as a massive computer and massive research. We have and will continue to have the computer clusters to reach a <$300M petaflop. Massive review and re-architecture of centers and their function: instrument/program/data/community centric (CERN, Fermi, NCAR, Celera).

  26. Recommendations
  • Give careful advice on the Atkins Report. (It is just the kind of plan that is likely to fly.)
  • Community-centric computing: community/instrument/data/program centric (Celera, CERN, NCAR).
  • Small number of things to report.
  • Forget about hardware for now... it's scalables. The die has been cast.
  • Support training, apps, and any research to ease apps development.
  • Databases represent the biggest gain. Don't grep, just reference it.

  27. The End

  28. Atkins Report: Be careful of what you ask for. Suggestions (gbell)
  • Centers should be re-centered in light of data versus flops.
  • Overall re-architecture based on user need & technology.
  • Highly distributed structure aligned with users who plan their facilities.
  • Very skeptical of "gridized" projects, e.g. Teragrid, GGF.
  • Training in the use of databases is needed! It will get more productivity than another generation of computers.
  • The Atkins Report recommends $1.02 billion per year for research, buying software, and spending $600M to build and maintain more centers that are certain to be obsolete and non-productive.

  29. Summary to Atkins Report, 2/15/02 15:00, gbell
  • Same old concerns: "I don't have as many flops as users at the national labs."
  • Many facilities should be distributed, with build-it-yourself Beowulf clusters to get extraordinary cycles and bytes.
  • Centers need to be re-centered; see Bell & Gray, "What's Next in High Performance Computing?", Comm. ACM, Feb. 2002, pp. 91-95.
  • Scientific computing needs re-architecting based on networking, communication, computation, and storage. Centrality versus distribution depends on costs and the nature of the work, e.g. instrumentation that generates lots of data. (The last-mile problem is significant.)
  • A FedEx'd hard drive is low cost. The cost of a hard drive < network cost. The net is very expensive!
  • Centers' flops and bytes are expensive. Distributed is likely to be less so.
  • Many sciences need to be reformulated as distributed computing/databases.
  • Network costs (last mile) are a disgrace. A $1 billion boondoggle with NGI, Internet II.
  • Grid funding: not in line with the COTS or IETF model. Another very large SW project!
  • Give funding to scientists in joint grants with tool builders; e.g. the WWW came from a user.
  • Database technology is not understood by users and computer scientists.
  • Training, tool funding, & combined efforts, especially when large & distributed.
  • Equipment, problems, etc. are dramatically outstripping capabilities!
  • It is still about software, especially in light of scalable computers that require reformulation into a new, // programming model.

  30. Atkins Report: the critical challenges
  1) Build real synergy between computer and information science research and development, and its use in science and engineering research and education;
  2) capture the cyberinfrastructure commonalities across science and engineering disciplines;
  3) use cyberinfrastructure to empower and enable, not impede, collaboration across science and engineering disciplines;
  4) exploit technologies being developed commercially and apply them to research applications, as well as feed back new approaches from the scientific realm into the larger world;
  5) engage social scientists to work constructively with other scientists and technologists.

  31. Atkins Report: Be careful of what you ask for
  • Fundamental research to create advanced cyberinfrastructure ($60M);
  • research on the application of cyberinfrastructure to specific fields of science and engineering research ($100M);
  • acquisition and development of production-quality software for cyberinfrastructure and supported applications ($200M);
  • provisioning and operations (including computational centers, data repositories, digital libraries, networking, and application support) ($600M);
  • archives for software ($60M).

  32. NRC Review Panel on High Performance Computing, 11 March 1994. Gordon Bell

  33. Position: Summary recommendations
  • Dual use: exploit parallelism with in situ nodes & networks. Leverage the workstation & mP industrial HW/SW/app infrastructure!
  • No Teraflop before its time -- it's Moore's Law.
  • It is possible to help fund computing: heuristics from federal funding & use (~50 computer systems and 30 years).
  • Stop "Duel Use", the genetic engineering of State Computers:
  • 10+ years: nil payback, mono use, poor, & still to come
  • the plan for apps porting to monos will also be ineffective -- apps must leverage, be cross-platform & self-sustaining
  • let "Challenges" choose apps, not mono-use computers
  • "industry" offers better computers & these are jeopardized
  • users must be free to choose their computers, not funders
  • next-generation State Computers "approach" industry
  • 10 Tflop ... why?

  34. [Diagram: Principal computing environments circa 1994. More than four networks support mainframes, minis, UNIX servers, workstations & PCs, with >4 interconnect & comm. standards (POTS & 3270 terminals, WAN comm. standards, 2 LAN standards, proprietary clusters): a POTS net switching 3270 (& PC) terminals into the '50s IBM & proprietary mainframe world (mainframe clusters); a wide-area inter-site network; the '70s proprietary mini world and '90s UNIX mini world with ASCII & PC terminals and minicomputer clusters; the '80s UNIX distributed workstation & server world (X terminals, UNIX workstations, NFS servers, compute & database uni- & mP servers, multiprocessor servers operated as traditional minicomputers); and the late-'80s LAN-PC world (Novell & NT servers; PCs running DOS, Windows, NT) on Ethernet and Token Ring LANs with gateways, bridges, routers, and hubs.]

  35. [Diagram: Computing environments circa 2000. Legacy mainframe & minicomputer servers & terminals; a wide-area global ATM network and a local & global data-comm world; ATM & Local Area Networks (also 10-100 Mb/s point-to-point Ethernet) connecting terminals, PCs, workstations, & servers; NT, Windows & UNIX person servers; centralized & departmental scalable uni- & mP servers (UNIX & NT); multicomputers built from multiple simple servers; NFS, database, compute, print, & communication servers; TC = TV + PC at home via CATV or ATM; platforms: x86, PowerPC, SPARC, etc.; universal high-speed data service using ATM or ??]

  36. Beyond Dual & Duel Use Technology: Parallelism can & must be free!
  • HPCS, corporate R&D, and technical users must have the goal of designing, installing and supporting parallel environments using and leveraging every in situ workstation & multiprocessor server as part of the local ... national network.
  • Parallelism is a capability that all computing environments can & must possess -- not a feature to segment "mono use" computers.
  • Parallel applications become a way of computing utilizing existing, zero-cost resources -- not a subsidy for specialized ad hoc computers.
  • Apps follow pervasive computing environments.

  37. Computer genetic engineering & species selection has been ineffective
  • Although Problem x Machine scalability using SIMD for simulating some physical systems has been demonstrated, given extraordinary resources, the efficacy of larger problems to justify cost-effectiveness has not. Hamming: "The purpose of computing is insight, not numbers."
  • The "demand side" Challenge users have the problems and should be the drivers. ARPA's contractors should re-evaluate their research in light of driving needs.
  • Federally funded "Challenge" apps porting should be to multiple platforms, including workstations & compatible multis that support // environments, to insure portability and understand mainline cost-effectiveness.
  • Continued "supply side" programs aimed at designing, purchasing, supporting, sponsoring, & porting of apps to specialized State Computers, including programs aimed at 10 Tflops, should be re-directed to networked computing.
  • Users must be free to choose and buy any computer, including PCs & WSs, WS clusters, multiprocessor servers, supercomputers, mainframes, and even highly distributed, coarse-grain, data-parallel MPP State computers.

  38. [Chart: The teraflop. Performance as a function of time; Bell Prize results marked.]

  39. We get no Teraflop before its time: it's Moore's Law!
  • Flops = f(t, $), not f(t); technology plans, e.g. BAA 94-08, ignore $s!
  • All flops are not equal: peak announced performance (PAP) or real app performance (RAP).
  • Flops_CMOS,PAP* < C x 1.6**(t-1992) x $; C = 128 x 10**6 flops / $30,000
  • Flops_RAP = Flops_PAP x 0.5 for real apps; 1/2 of PAP is a great goal.
  • Flops_supers = Flops_CMOS x 0.1; improvement of supers is 15-40%/year; higher cost is f(need for profitability, lack of subsidies, volume, SRAM).
  • '92-'94: Flops_PAP/$ = 4K; Flops_supers/$ = 500; Flops_vsp/$ = 50M (1.6 G @ $25)
  • *Assumes primary & secondary memory size & costs scale with time: memory at $50/MB in 1992-1994 violates Moore's Law; disks at $1/MB in 1993, size must continue to increase at 60%/year.
  • When does a Teraflop arrive if only $30 million** is spent on a super? 1 Tflop_CMOS PAP in 1996 (x7.8) with 1 GFlop nodes!!!, or 1997 if RAP. 10 Tflop_CMOS PAP will be reached in 2001 (x78), or 2002 if RAP.
  • How do you get a teraflop earlier? **A $60 - $240 million Ultracomputer reduces the time by 1.5 - 4.5 years.
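The arrival years on this slide follow from the formula above; here is the same arithmetic as a short sketch, using only the constants the slide itself gives (C = 128 x 10**6 flops / $30,000, 1.6x growth per year, a $30M budget).

```python
import math

# Arrival-time arithmetic for the slide's formula:
# Flops_PAP ~ C * 1.6**(t - 1992) * dollars, with C = 128e6 flops / $30,000.
C = 128e6 / 30_000          # peak flops per dollar in 1992

def year_reached(target_flops: float, budget_dollars: float) -> float:
    """Year when `budget_dollars` buys `target_flops` of peak (PAP) performance."""
    growth = target_flops / (C * budget_dollars)   # required growth factor over 1992
    return 1992 + math.log(growth) / math.log(1.6)

print(f"1 TF on $30M:  {year_reached(1e12, 30e6):.1f}")   # ~1996 (a 7.8x factor)
print(f"10 TF on $30M: {year_reached(1e13, 30e6):.1f}")   # ~2001 (a 78x factor)
```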

  40. Re-engineering HPCS
  • The genetic engineering of computers has not produced a healthy strain that lives more than one 3-year computer generation. Hence no app base can form. No inter-generational MPPs exist with compatible networks & nodes. All parts of an architecture must scale from generation to generation! An architecture must be designed for at least three 3-year generations!
  • It is a high price to support a DARPA U. to learn computer design -- the market is only $200 million and the R&D is billions -- competition works far better.
  • The inevitable movement of standard networks and nodes cannot or need not be accelerated; these best evolve by a normal market mechanism driven by users.
  • Dual use of networks & nodes is the path to widescale parallelism, not weird computers. Networking is free via ATM. Nodes are free via in situ workstations. Apps follow pervasive computing environments.
  • Applicability was small and is getting smaller very fast, with many experienced computer companies entering the market with fine products, e.g. Convex/HP, Cray, DEC, IBM, SGI & SUN, that are leveraging their R&D, apps, apps, & apps.
  • Japan has a strong supercomputer industry. The more we jeopardize ours by mandating use of weird machines that take away from use, the weaker it becomes.
  • MPP won, mainstream vendors have adopted multiple CMOS. Stop funding! Environments & apps are needed, but are unlikely because the market is small.

  41. Recommendations to HPCS
  • Goal: by 2000, massive parallelism must exist as a by-product that leverages a widescale national network & workstation/multi HW/SW nodes.
  • Dual use, not duel use, of products and technology, or the principle of "elegance": one part serves more than one function. Network companies supply networks; node suppliers use ordinary workstations/servers with existing apps, leveraging $30 billion x 10**6 of R&D.
  • Fund high-speed, low-latency networks for a ubiquitous service as the base of all forms of interconnection, from WANs to supercomputers (in addition, some special networks will exist for small-grain problems).
  • Observe heuristics in future federal program funding scenarios ... eliminate direct or indirect product development and mono-use computers. Fund Challenges, who in turn fund purchases, not product development.
  • Funding or purchase of apps porting must be driven by Challenges, but build on binary-compatible workstation/server apps to leverage nodes, and be cross-platform based to benefit multiple vendors & have cross-platform use.
  • Review the effectiveness of State Computers, e.g. need, economics, efficacy. Each committee member might visit 2-5 sites using a >>// computer. Review // program environments & the efficacy to produce & support apps.
  • Eliminate all forms of State Computers & recommend a balanced HPCS program: nodes & networks, based on industrial infrastructure. Stop funding the development of mono computers, including the 10 Tflop. It must be acceptable & encouraged to buy any computer for any contract.

  42. Gratis advice for HPCC* & BS*
  • D. Bailey warns that scientists have almost lost credibility....
  • Focus on a Gigabit NREN with low-overhead connections that will enable multicomputers as a by-product.
  • Provide many small, scalable computers vs. large, centralized ones.
  • Encourage (revert to) & support not-so-grand challenges.
  • Grand Challenges (GCs) need explicit goals & plans -- disciplines fund & manage (demand side)... HPCC will not.
  • Fund balanced machines/efforts; stop starting Viet Nams (efforts that are rat holes that you can't get out of).
  • Drop the funding & directed purchase of state computers.
  • Revert to university research -> company & product development.
  • Review the HPCC & GCs program's output ...
  • *High Performance Cash Conscriptor; Big Spenders

  43. Scalability: The Platform of HPCS
  • The law of scalability. Three kinds: machine, problem x machine, & generation (t).
  • How do flops, memory size, efficiency & time vary with problem size?
  • What's the nature of problems & work for the computers?
  • What about the mapping of problems onto the machines?

  44. Disclaimer
  This talk may appear inflammatory ... i.e. the speaker may have appeared "to flame". It is not the speaker's intent to make ad hominem attacks on people, organizations, countries, or computers ... it just may appear that way.

  45. Backups

  46. Funding Heuristics (50 computers & 30 years of hindsight)
  1. The demand side works, i.e., "we need this product/technology for x"; the supply side doesn't work! "Field of Dreams": build it and they will come.
  2. Direct funding of university research resulting in technology and product prototypes that is carried over to start up a company is the most effective -- provided the right person & team are backed and have a transfer avenue. (a) Forest Baskett > Stanford to fund various projects (SGI, SUN, MIPS). (b) Transfer to large companies has not been effective. (c) Government labs... rare; it's an accident if something emerges.
  3. A demanding & tolerant customer or user who "buys" products works best to influence and evolve products (e.g., CDC, Cray, DEC, IBM, SGI, SUN). (a) DOE labs have been effective buyers and influencers ("Fernbach policy"); it is unclear whether labs are effective product, apps, or process developers. (b) Universities were effective at influencing computing in timesharing, graphics, workstations, AI workstations, etc. (c) ARPA, per se, and its contractors have not demonstrated a need for flops. (d) Universities have failed ARPA in defining work that demands HPCS -- hence are unlikely to be very helpful as users in the trek to the teraflop.
  4. Direct funding of "large scale projects" is risky in outcome, long-term, training, and other effects. ARPAnet established an industry after it escaped BBN!

  47. Funding Heuristics - 2
  5. Funding product development, targeted purchases, and other subsidies to establish "State Companies" in a vibrant and overcrowded market is wasteful, likely to be wrong, and likely to impede computer development (e.g. by having to feed an overpopulated industry). Furthermore, it is likely to have a deleterious effect on a healthy industry (e.g. supercomputers). A significantly smaller universe of computing environments is needed. Cray & IBM are givens; SGI is probably the most profitable technical vendor; HP/Convex are likely to be a contender, & others (e.g., DEC) are trying. No state co. (Intel, TMC, Tera) is likely to be profitable & hence self-sustaining.
  6. University-company collaboration is a new area of government R&D. So far it hasn't worked, nor is it likely to unless the company invests. It appears to be a way to help a company fund marginal people and projects.
  7. CRADAs, or cooperative research and development agreements, are very closely allied to direct product development and are equally likely to be ineffective.
  8. Direct subsidy of software apps or the porting of apps to one platform, e.g., EMI analysis, is a way to keep marginal computers afloat. If government funds apps, they must be ported cross-platform!
  9. Encourage the use of computers across the board, but discourage designs from those who have not used or built a successful computer.

  48. Scalability: The Platform of HPCS & why continued funding is unnecessary
  • Mono-use machines, aka MPPs, have been, are, and will be doomed.
  • The law of scalability. Four scalabilities: machine, problem x machine, generation (t), & now spatial.
  • How do flops, memory size, efficiency & time vary with problem size? Does insight increase with problem size?
  • What's the nature of problems & work for monos? What about the mapping of problems onto monos?
  • What about the economics of software to support monos?
  • What about all the competitive machines? e.g. workstations, workstation clusters, supers, scalable multis, attached processors?

  49. Special, mono-use MPPs are doomed... no matter how much fedspend!
  • Special because they have non-standard nodes & networks -- with no apps. Having not evolved to become mainline, events have overtaken them. It's special purpose if it's only in Dongarra's Table 3.
  • Flop rate, execution time, and memory size vs. problem size show limited applicability to very large scale problems that must be scaled to cover the inherent, high overhead.
  • Conjecture: a properly used supercomputer will provide greater insight and utility because of the apps and generality -- running more, smaller-sized problems with a plan produces more insight.
  • The problem domain is limited & now they have to compete with: supers (do scalars, fine grain, and work, and have apps); workstations (do very long grain, are in situ, and have apps); workstation clusters (have identical characteristics and have apps); low-priced ($2 million) multis (are superior, i.e. shorter grain, and have apps); scalable multiprocessors formed from multis (in the design stage).
  • Mono useful (>>//) -- hence illegal, because they are not dual use. Duel use -- only useful to keep a high budget intact, e.g. the 10 TF.

  50. The Law of Massive Parallelism is based on application scale
  • There exists a problem that can be made sufficiently large such that any network of computers can run it efficiently, given enough memory, searching, & work -- but this problem may be unrelated to any other problem.
  • A ... any parallel problem can be scaled to run on an arbitrary network of computers, given enough memory and time.
  • Challenge to theoreticians: How well will an algorithm run?
  • Challenge for software: Can a package be scalable & portable?
  • Challenge to users: Do larger scale, faster, longer run times increase problem insight, and not just flops?
  • Challenge to HPCC: Is the cost justified? If so, let users do it!
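An illustrative sketch, not taken from the slides, of the scaling arithmetic the "law" relies on: if the communication/synchronization overhead per node is roughly fixed, then growing the per-node problem size drives parallel efficiency toward 1, regardless of how many nodes the network has. The overhead constant below is an arbitrary assumed value for illustration.

```python
# Scaled-efficiency sketch: efficiency = useful work / (useful work + overhead).
# With fixed per-node overhead, efficiency -> 1 as per-node problem size grows.
def efficiency(work_per_node: float, overhead_per_node: float) -> float:
    """Fraction of time each node spends on useful work."""
    return work_per_node / (work_per_node + overhead_per_node)

OVERHEAD = 1.0e-3  # assumed fixed comm/sync cost per node per step (arbitrary units)
for work in (1e-3, 1e-2, 1e-1, 1.0):
    print(f"work/node = {work:g}: efficiency = {efficiency(work, OVERHEAD):.2%}")
```

This is the optimistic direction of the argument; the slide's challenges (to theoreticians, software, users, and HPCC) are precisely about whether such scaled-up problems still deliver insight per dollar.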
