
Super Computers ---Parallel Computers

CS147 Lecture 20. Prof. Sin-Min Lee, Department of Computer Science.


Presentation Transcript


  1. CS147 Lecture 20: Super Computers ---Parallel Computers. Prof. Sin-Min Lee, Department of Computer Science

  2. By 1960, at the age of 34, Seymour Cray had established his reputation for genius in designing high-performance computers. He had completed the design of the Control Data 1604, the first fully transistorized computer, and had begun the design of the first system to earn the title of supercomputer: the CDC 6600, which was also the first major system to employ three-dimensional packaging and an instruction set that would later be referred to as RISC.

  3. Even as a child, Seymour was a problem solver. His sister tells the story of how, as a young boy, he rigged a Morse code connection between his bedroom and hers so that they could communicate after lights-out. His father became aware of the late-night clicking and told Seymour to shut the system down because it was bothering the rest of the household. Seymour's solution was to convert the clickers to lights and continue communicating with his sister.

  4. Robert Frost, "The Road Not Taken": "I shall be telling this with a sigh / Somewhere ages and ages hence: / Two roads diverged in a wood, and I-- / I took the one less traveled by, / And that has made all the difference."

  5. Seymour liked to work with fundamental and simple tools, generally only a piece of paper and a pencil. But he admitted that some of his work required more sophisticated tools. Once, when told that Apple Computer had bought a CRAY to simulate its next Apple computer design, Seymour remarked, "Funny, I am using an Apple to simulate the CRAY-3." His selection of people for his projects also reflected fundamentals. Once asked why he often hires new graduates to help him with early R&D work, he replied, "Because they don't know that what I'm asking them to do is impossible, so they try."

  6. Since the first supercomputer, the Cray-1, was installed at Los Alamos National Laboratory in 1976, computational speed has leaped 500,000-fold: the Cray-1 was capable of 80 megaflops (80 million floating-point operations per second), while today's leader computes about 40 teraflops (40 teraflops ÷ 80 megaflops = 500,000). The Blue Gene/L machine to be completed next year will be roughly five million times faster than the Cray-1.

  7. June 2004 Top 10 (the leader computes 40 trillion calculations per second, i.e. 40 teraflops):
  1: Earth Simulator Center, Japan
  2: Intel Itanium2 Tiger4 1.4 GHz, Quadrics
  3: ASCI Q - AlphaServer SC45, 1.25 GHz
  4: Blue Gene/L DD1 Prototype (0.5 GHz PowerPC 440 w/Custom)
  5: PowerEdge 1750, P4 Xeon 3.06 GHz, Myrinet
  6: eServer pSeries 690 (1.9 GHz Power4+)
  7: Riken Super Combined Cluster
  8: Blue Gene/L DD2 Prototype (0.7 GHz PowerPC 440)
  9: Integrity rx2600 Itanium2 1.5 GHz, Quadrics
  10: Dawning 4000A, Opteron 2.2 GHz, Myrinet

  8. November 2004

  9. Blue Gene/L's peak theoretical performance is expected to be 360 teraflops, and the system will fit into 64 full racks. It will also cut down on the heat generated by its massive power consumption, a big problem for supercomputers. The final machine will help scientists work out the safety, security and reliability requirements for the US's nuclear weapons stockpile without the need for underground nuclear testing. IBM's senior vice president of technology and manufacturing, Nick Donofrio, believes that by 2006 Blue Gene will be capable of petaflop computing: 1,000 trillion operations a second.

  10. NASA to build 10,000-processor Linux computer (IDG News Service, 7/28/04). The National Aeronautics and Space Administration (NASA) has given the green light to a project that will build the largest-ever supercomputer based on Silicon Graphics Inc.'s (SGI) 512-processor Altix computers. Called Project Columbia, the 10,240-processor system will be used by researchers at the Advanced Supercomputing Facility at NASA's Ames Research Center in Moffett Field, California.

  11. Scientists will use Columbia to design equipment, simulate future space missions and model weather patterns. A portion of the US$160 million system will also be made available to other government agencies and educational facilities, said Bill Thigpen, manager of Project Columbia. "We need to look at working with other agencies to provide them with access to this system because it is a unique system," he said. What makes Project Columbia unique is the size of the multiprocessor Linux systems, or nodes, that it clusters together. It is common for supercomputers to be built of thousands of two-processor nodes, but the Ames system uses SGI's NUMAlink switching technology and ProPack Linux operating system enhancements to connect 512-processor nodes, each of which will have more than 1,000 GB of memory.

  12. "We use a very large single-system image," said Jeff Greenwald, senior director of server product marketing with SGI. "The other guys come with a very thin node cluster, and try to screw them all together." The Altix nodes will use Intel Corp.'s Itanium 2 microprocessors, and the entire 20-node system is expected to be fully assembled by year's end, he said. SGI has used this large-node technology to build a number of smaller Altix systems with between 3,000 and 6,000 processors, but Project Columbia will be the largest to date, Greenwald said

  13. The Earth Simulator has held on to the top spot since June 2002. It is dedicated to climate modelling and the simulation of seismic activity.

  14. SINGAPORE (CNN) -- A group of researchers from Singapore has created a computer chip that has the power of 100 standard computers. The group of five, all working at Ngee Ann Polytechnic, will commercialize their development by January and sell it to the pharmaceutical industry, where they say the invention will save time and money. Lead researcher Darran Nathan, 24, explains that unlike standard computer chips, which execute instructions from software, his does its processing directly in hardware.

  15. "An ordinary computer chip will interpret instructions from the software and execute a command," he says. "Our chip is a reconfigurable chip, which means it downloads an actual file to the chip and rewires it according to subsequent processing done in the hardware." Nathan says the process is highly technical but, put simply, is a computer chip that works at a speed of 100 standard computers combined. He says the super chip was originally created with the telecommunications industry in mind, but soon after work on the project began two years ago, they realized the benefits would be much more useful to life sciences.

  16. "It is 100 times quicker than your standard computer. Most people do not need such a powerful computer, but in the area of designing and developing drugs, it is hugely important," says Nathan. "It basically means getting essential drugs on the street quicker, at a cheaper cost." Nathan says the device will cost between US$30,000 and US$61,000, and its key point of difference between other supercomputers is its small size. The team, which calls itself Project Proteus, after the shape-shifting Greek god, are aged between 24 and 27. Last week they showcased their chip at the Global Entrepolis convention in Singapore where Mr Nathan says they received a lot of positive feedback.

  17. A supercomputer at $5.2 million: Virginia Tech's 1,100-node Mac G5 supercomputer

  18. The Virginia Polytechnic Institute and State University has built a supercomputer from a cluster of 1,100 dual-processor Macintosh G5 computers. Based on preliminary benchmarks, Big Mac is capable of 8.1 teraflops. The Mac supercomputer is still being fine-tuned, and the full extent of its computing power will not be known until November, but the 8.1-teraflop figure would make Big Mac the world's fourth-fastest supercomputer.

  19. Big Mac's cost relative to similar machines is as noteworthy as its performance. The Apple supercomputer was constructed for just over US$5 million, and the cluster was assembled in about four weeks. In contrast, the world's leading supercomputers cost well over $100 million to build and require several years to construct. The Earth Simulator, which clocked in at 38.5 teraflops in 2002, reportedly cost up to $250 million.

  20. October 28, 7:30pm - 9:00pm, Santa Clara Ballroom. Srinidhi Varadarajan, Ph.D. Dr. Srinidhi Varadarajan is an Assistant Professor of Computer Science at Virginia Tech. He was honored with the NSF Career Award in 2002 for "Weaving a Code Tapestry: A Compiler Directed Framework for Scalable Network Emulation." He has focused his research on building a distributed network emulation system that can scale to emulate hundreds of thousands of virtual nodes.

  21. Parallel Computers
  • Two common types:
    • Cluster
    • Multi-Processor

  22. Cluster Computers

  23. Clusters on the Rise. Using clusters of small machines to build a supercomputer is not a new concept. Another of the world's top machines, housed at the Lawrence Livermore National Laboratory, was constructed from 2,304 Xeon processors; the machine was built by Utah-based Linux Networx. Clustering technology has meant that traditional big-iron leaders like Cray (Nasdaq: CRAY) and IBM have new competition from makers of smaller machines. Dell (Nasdaq: DELL), among other companies, has sold high-powered computing clusters to research institutions.

  24. Cluster Computers
  • Each computer in a cluster is a complete computer by itself
    • CPU
    • Memory
    • Disk
    • etc.
  • Computers communicate with each other via some interconnection bus (see the message-passing sketch below)
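To make the interconnect idea concrete, here is a minimal sketch of cluster-style communication in C using MPI; the slides do not name a message-passing library, so the choice of MPI and every name in the sketch are assumptions.

/* Sketch: nodes in a cluster cooperating over the interconnect via MPI.
 * Each node computes a partial result locally, then the results are
 * combined with a single collective call. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which node am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many nodes in all? */

    double partial = (double)(rank + 1);   /* stand-in for local work */
    double total = 0.0;

    /* Combine every node's partial result on node 0. */
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d nodes = %.1f\n", size, total);

    MPI_Finalize();
    return 0;
}

Built and launched with an MPI toolchain (e.g. mpicc and mpirun), each copy of the program runs on a different node, and only the MPI calls cross the interconnect.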

  25. Cluster Computers
  • Typically used where one computer does not have enough capacity to do the expected work
    • Large servers
  • Cheaper than building one GIANT computer

  26. Although not new, supercomputing clustering technology is still impressive. It works by farming out chunks of data to individual machines, and it works better for some types of computing problems than others. For example, a cluster would not be ideal to compete against IBM's Deep Blue supercomputer in a chess match; in that case, all the data must be available to one processor at the same moment -- the machine operates much in the same way as the human brain handles tasks. However, a cluster would be ideal for processing seismic data for oil exploration, because that computing job can be divided into many smaller tasks (sketched below).
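As a toy illustration of "divided into many smaller tasks", the sketch below splits a long run of seismic-style samples into one contiguous chunk per machine; all numbers and names are illustrative, not from the article.

/* Sketch: dividing a data-parallel job into one chunk per machine.
 * Each machine would process its half-open range [start, end)
 * independently, which is what makes the job cluster-friendly. */
#include <stdio.h>

int main(void) {
    long n = 1000000;    /* total seismic samples (illustrative) */
    int machines = 8;    /* nodes in the cluster (illustrative)  */

    for (int m = 0; m < machines; m++) {
        long start = n * m / machines;
        long end   = n * (m + 1) / machines;
        printf("machine %d processes samples [%ld, %ld)\n", m, start, end);
    }
    return 0;
}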

  27. Cluster Computers
  • Need to break up work among the computers in the cluster
  • Example: Microsoft.com search engine
    • 6 computers running SQL Server
    • Each has a copy of the MS Knowledge Base
    • Search requests come to one computer
      • Sends each request to one of the 6
      • Attempts to keep all 6 busy (a round-robin sketch follows below)
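The slide does not say how the front-end computer picks among the six servers; rotating through them round-robin is one simple policy that keeps them evenly busy, sketched here with hypothetical names.

/* Hypothetical round-robin dispatcher for the 6-server search example.
 * The real Microsoft.com policy is not described on the slide. */
#include <stdio.h>

#define SERVERS 6

static int next_server = 0;

/* Return the server that should handle the next request. */
int dispatch(void) {
    int target = next_server;
    next_server = (next_server + 1) % SERVERS;  /* rotate through 0..5 */
    return target;
}

int main(void) {
    for (int req = 0; req < 12; req++)
        printf("request %2d -> server %d\n", req, dispatch());
    return 0;
}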

  28. The Virginia Tech Mac supercomputer should be fully functional and in use by January 2004. It will be used for research into nanoscale electronics, quantum chemistry, computational chemistry, aerodynamics, molecular statics, computational acoustics and the molecular modeling of proteins.

  29. Multiprocessors. [Diagram: several CPUs attached to a shared bus, along with memory, an I/O port, and a device controller.]

  30. Multiprocessors
  • Systems designed to have 2 to 8 CPUs
  • The CPUs all share the other parts of the computer
    • Memory
    • Disk
    • System bus
    • etc.
  • CPUs communicate via memory and the system bus (see the shared-memory sketch below)
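"Communicating via memory" can be shown in miniature with threads: the sketch below uses two POSIX threads as stand-ins for two CPUs of a multiprocessor, both updating one location in shared memory under a mutex. The thread library and all names are my choice, not the slides'.

/* Sketch: CPUs in a multiprocessor communicate through shared memory.
 * Two threads stand in for two CPUs; a mutex serializes their access
 * to the shared counter so no update is lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;  /* shared memory both "CPUs" can see */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                  /* read-modify-write of shared data */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* prints 2000000 */
    return 0;
}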

  31. Multiprocessors
  • Each CPU shares memory, disks, etc.
  • Cheaper than clusters
  • Not as good performance as clusters
  • Often used for
    • Small servers
    • High-end workstations

  32. Multiprocessors
  • OS automatically shares work among available CPUs
  • On a workstation…
    • One CPU can be running an engineering design program
    • Another CPU can be doing complex graphics formatting

  33. Specialized Processors
  • Vector Processors
  • Massively Parallel Computers

  34. Vector Processors

for (i = 0; i < n; i++) {
    array1[i] = array2[i] + array3[i];
}

This is an array (vector) operation

  35. Vector Processors
Special instructions to operate on vectors (arrays)
  • A vector instruction specifies
    • Starting addresses of all 3 arrays
    • Loop count
  • Saves for-loop overhead
  • Can more efficiently access memory
  • Also known as SIMD computers
    • Single Instruction, Multiple Data
  (a SIMD sketch in C follows below)
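Classic vector machines had dedicated vector instructions; on commodity x86 CPUs the same single-instruction-multiple-data idea survives as SIMD intrinsics. The sketch below redoes the slide's loop with SSE intrinsics so that one instruction adds four floats at a time; the slides do not mention SSE, so this mapping is an assumption.

/* Sketch: the slide's array addition using x86 SSE SIMD intrinsics.
 * Each _mm_add_ps performs four float additions in one instruction. */
#include <xmmintrin.h>   /* SSE intrinsics */

void vec_add(float *array1, const float *array2, const float *array3, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 b = _mm_loadu_ps(&array2[i]);          /* load 4 floats  */
        __m128 c = _mm_loadu_ps(&array3[i]);
        _mm_storeu_ps(&array1[i], _mm_add_ps(b, c));  /* 4 adds at once */
    }
    for (; i < n; i++)          /* scalar cleanup for leftover elements */
        array1[i] = array2[i] + array3[i];
}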

  36. Vector Processors
  • Until the 1990s, the world's fastest supercomputers were implemented as vector processors
  • Now, vector processors are typically special peripheral devices that can be installed on a "regular" computer

  37. Massively Parallel Computers
  • IBM ASCI Purple
    • Cluster of 196 computers
    • Each computer has
      • 64 CPUs
      • 256 gigabytes of RAM
      • 10,000 GB of disk

  38. Massively Parallel Computers
  • How will ASCI Purple be used?
    • Simulation of molecular dynamics
    • Research into repairing damaged DNA
    • Analysis of seismic waves
      • Earthquake research
    • Simulation of star evolution
    • Simulation of weapons of mass destruction

  39. According to the article, the supercomputer, powered by 2,200 IBM G5 processors, has been initially rated at 7.41 trillion operations per second. The final number could be much higher, according to school officials, but even if it is not, the machine would rank as the #4 fastest supercomputing cluster in the world, behind Japan's US$250M Earth Simulator, currently the world's fastest computer; Lawrence Livermore's US$10-15M cluster system, made up of 2,304 Intel Xeon processors; and "Pacific Blue," which IBM recently installed at the Lawrence Livermore laboratories for $94 million.
