
Interconnection Networks and Clusters



  1. Interconnection Networks and Clusters by Onur Ozyer, School of Electrical Engineering and Computer Science, University of Central Florida

  2. Outline • Interconnection Networks • Network Topology • Centralized Switching • Distributed Switching • Clusters • Case Study: Google • Case Study: Cluster Project at UCF • References

  3. Interconnection Networks • Connecting components within a computer. • Connecting computers to build an information network. (Figure: end users on either side of interconnection networks, joined by internetworking)

  4. Interconnection Networks Message format: Header | Data | Checksum • Bandwidth: the rate at which bits can be transmitted onto the link • Transmission Delay = Message Size / Bandwidth • Propagation Delay: time for a signal to propagate over the link • Total Delay = Processing Delay + Transmission Delay + Propagation Delay
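The slide's delay model can be sketched in Python; the message size, link rate, link length, and signal speed below are illustrative assumptions, not figures from the slides:

```python
# Total delay = processing + transmission + propagation, per the slide.
def total_delay(msg_bytes, bandwidth_bps, link_m,
                processing_s=0.0, signal_mps=2e8):
    transmission = (msg_bytes * 8) / bandwidth_bps  # message size / bandwidth
    propagation = link_m / signal_mps               # time to cross the link
    return processing_s + transmission + propagation

# Example: a 1000-byte message over a 100 Mbit/s link of 100 m
# -> 80 us of transmission delay + 0.5 us of propagation delay
```

Note that for short messages on fast links, processing delay at the endpoints often dominates both terms.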

  5. Interconnection Media • Twisted-pair wires • Category 3 ~ 10 Mbit/s • Category 5 ~ 100 Mbit/s • Cat 5 (gigabit) ~ 1000 Mbit/s • Coaxial cable ~ 10 Mbit/s • Fiber optics ~ 100 Mbit/s – 1 Gbit/s (one way): light source (laser diode or LED), fiber-optic cable, light detector

  6. Network Topology - Centralized Switching A) Crossbar: any node can be connected to any node (fully connected). • n^2 switches • Low contention • Routing: a) source routing, b) destination routing (Figure: 4×4 crossbar connecting P0–P3)
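A quick sanity check on the n^2 figure (function name is mine):

```python
# An n-node crossbar needs one crosspoint switch per (source, destination)
# pair, hence n * n switches, as the slide states.
def crossbar_switches(n):
    return n * n

# The slide's 4-node example (P0..P3) needs 16 crosspoints.
```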

  7. Network Topology - Switch Boxes (Figure: the four 2×2 switch-box settings: straight, swap, lower broadcast, upper broadcast)

  8. Network Topology - Centralized Switching B) Omega Network: nodes connected through stages of switch boxes; each switch box contains 4 switches. • Fewer switches: (n/2) lg n switch boxes • More contention (blocking) (Figure: omega network connecting P0–P3)
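The slide's switch-box count can be checked with a small sketch (the power-of-two restriction is a standard assumption for omega networks):

```python
import math

# Omega-network switch-box count, per the slide: (n/2) * lg(n) boxes
# (each holding 4 switches), versus n^2 switches for a full crossbar.
def omega_switch_boxes(n):
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    return (n // 2) * int(math.log2(n))

# n = 64: 192 switch boxes, versus 64 * 64 = 4096 crossbar switches
```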

  9. Network Topology - Centralized Switching C) Fat Tree: nodes and switches form a tree, with bandwidth added higher in the tree. • Multiple paths (load balancing, failure recovery) • Doubling the number of nodes adds one more level of switches (Figure: end users at the leaves, switches above)
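The doubling rule implies logarithmic depth; a sketch, assuming a binary fat tree:

```python
import math

# Fat-tree depth sketch (binary fat tree assumed): doubling the number of
# end nodes adds one level of switches, so depth grows as ceil(lg n).
def fat_tree_levels(n_nodes):
    return math.ceil(math.log2(n_nodes))

# 8 nodes -> 3 levels; 16 nodes -> 4 levels
```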

  10. Network Topology - Distributed Switching Distributed switching: each node has its own switch. Ring network: a sequence of nodes connected in a loop. • Worst-case message path: n/2 switches (average n/4 with bidirectional links) • Simultaneous message transfers on the ring • Token rings
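Hop counts on a ring follow directly from taking the shorter direction around; a sketch (function name is mine):

```python
# Hop count on a bidirectional ring of n nodes: take the shorter way
# around, so a message crosses at most n/2 switches (the slide's figure).
def ring_hops(n, src, dst):
    d = abs(src - dst) % n
    return min(d, n - d)

# n = 8: worst case is 4 hops (= n/2), e.g. node 0 to node 4
```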

  11. Network Topology - Distributed Switching • d-dimensional array: n = k_(d-1) × ... × k_0 nodes, each described by a d-vector of coordinates (i_(d-1), ..., i_0) • d-dimensional k-ary mesh: N = k^d, so k = N^(1/d); each node is described by a d-vector of radix-k coordinates • d-dimensional k-ary torus (k-ary d-cube): a mesh with wraparound links (Figure: 3D cube, 2D grid, 2D torus)
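The radix-k coordinate mapping can be sketched as follows (names are mine):

```python
# Coordinate sketch for a d-dimensional k-ary mesh: there are N = k**d
# nodes, and a node id maps to a d-vector of radix-k coordinates
# (i_0, ..., i_{d-1}) by repeated division.
def mesh_coords(node_id, k, d):
    coords = []
    for _ in range(d):
        coords.append(node_id % k)   # next radix-k digit
        node_id //= k
    return coords

# 2D 4-ary mesh (N = 16): node 7 sits at coordinates [3, 1]
```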

  12. Network Topology - Hypercubes • Also called binary n-cubes; number of nodes N = 2^d • O(log N) hops between any two nodes • Good bisection bandwidth • Cost: complexity; out-degree is d • Bisection BW: the bandwidth between two equal logical halves of the network (Figure: 0-D through 5-D hypercubes)
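The O(log N) hop count follows from bit-wise routing; a sketch (function name is mine):

```python
# Hypercube routing sketch: label the N = 2**d nodes with d-bit numbers;
# neighbors differ in exactly one bit, so the hop count between two nodes
# is the number of differing bits, at most d (hence O(log N) hops).
def hypercube_hops(a, b):
    return bin(a ^ b).count("1")

# 4-D hypercube (N = 16): node 0 to node 15 takes 4 hops (the diameter)
```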

  13. Network Topology - Distributed Switching

Topology      Degree     Diameter        Ave Dist       Bisection BW
1D Array      2          N - 1           N/3            1
1D Ring       2          N/2             N/4            2
2D Mesh       4          2(N^(1/2) - 1)  (2/3)N^(1/2)   N^(1/2)
2D Torus      4          N^(1/2)         (1/2)N^(1/2)   2N^(1/2)
k-ary n-cube  2n         nk/2            nk/4           nk/4
Hypercube     n = log N  n               n/2            N/2
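As a spot-check of one row, a sketch for the 2D torus (names are mine; a square torus with side k = N^(1/2) is assumed):

```python
import math

# Spot-check of the table's 2D-torus row: degree 4, diameter N^(1/2)
# (k/2 hops per dimension, two dimensions), bisection BW 2 * N^(1/2).
def torus_2d_metrics(N):
    side = math.isqrt(N)             # k = N^(1/2) for a square torus
    return {"degree": 4, "diameter": side, "bisection": 2 * side}

# N = 64: diameter 8, bisection bandwidth 16
```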

  14. Network Topology - Real World

  15. Network Topology - Distributed Switching Problems • 2D mapping of 3D topologies • The internal speed of a switch is constant, so its bandwidth can become a bottleneck

  16. Cluster vs. Multiprocessors A cluster is the coordinated use of interconnected computers in a machine room. Challenges for clustering: • The I/O bus is slower and has more conflicts than the memory bus • Administration overhead • Low memory-usage efficiency ... but memory cost is going down

  17. Cluster vs. Multiprocessors Advantages • Fault isolation: failed machines are easy to replace • Scalability: expandable without stopping the application • Low cost: large-scale multiprocessors cost more • Increasing communication bandwidth • Separate address spaces limit error contamination • Hotmail, Google, Inktomi, AOL, Amazon, and Yahoo use clustered computers

  18. Case Study - Google • Stores and indexes the Web, combining more than 15,000 commodity-class PCs with 1 petabyte (= 1,000,000 GB) of disk storage • 1 query ≈ 100 MB of data + 10^6 CPU cycles • About 1,000 queries/s at peak • Crawls the Web and updates its indexes every 4 weeks • 3 colocation sites (2 in California, 1 in Virginia) • Service time < 0.5 s
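A back-of-envelope calculation from the slide's numbers shows why the data must be spread across many disks and memories (variable names are mine):

```python
# Peak aggregate data touched per second, from the slide's figures:
# ~1,000 queries/s, each touching ~100 MB of index/document data.
peak_qps = 1000
mb_per_query = 100
aggregate_mb_per_s = peak_qps * mb_per_query  # 100,000 MB/s across the cluster
```

No single machine can stream 100 GB/s, which is why the index and documents are partitioned across thousands of PCs.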

  19. Case Study - Google • Each site has a 2,488 Mbit/s connection to the Internet • Sites are linked to sister sites for emergencies • Each site has two switches of 128 × 1 Gbit/s Ethernet links, connected to the racks • 40 racks per site, 80 PCs per rack • PCs range from a 533 MHz Intel Celeron to a 1.4 GHz Intel Pentium III, each with an 80 GB hard disk, running Linux
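The per-site totals implied by these figures (variable names are mine):

```python
# Per-site sizing from the slide: 40 racks x 80 PCs, 80 GB of disk each.
racks_per_site = 40
pcs_per_rack = 80
pcs_per_site = racks_per_site * pcs_per_rack      # 3,200 PCs per site
disk_per_site_gb = pcs_per_site * 80              # 256,000 GB of raw disk
```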

  20. Google - How It Works 1. A search query enters the Google cluster at a Google Web Server (GWS), which consults the index servers, document servers, spell checker, and ad server to assemble the result page. (Figure: Google cluster with multiple GWSs, index servers, document servers, spell checker, and ad server)

  21. Cluster Project at UCF

Parts Ordered                                     Cost
(135) AMD T-Bird 900 MHz processors               $24,975.00
(135) ASUS A7V motherboards                       $20,925.00
(15)  Asante Interstack 8000 switch, hub, card    $12,880.25
(15)  Asante Interstack 8000 switch, hub          $12,778.00
(144) hard disks, (5) RAID controllers            $15,881.48
(128) ATI Rage Pro AGP video cards                $4,480.00
(150) Netgear 10/100 NICs                         $2,589.00
(135) PC133 256 MB DIMMs                          $15,120.00
Cases                                             Selection pending
Misc. (racks, cables, UPS, etc.)                  $2,000.00
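Summing the priced line items above gives the committed hardware budget (the cases, still pending selection, are excluded):

```python
# Total of the slide's priced line items, in order; cases are pending.
costs = [24975.00, 20925.00, 12880.25, 12778.00,
         15881.48, 4480.00, 2589.00, 15120.00, 2000.00]
total = sum(costs)  # $111,628.73 committed so far
```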

  22. References • J. L. Hennessy and D. A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann, San Mateo, CA, 2001. • J. F. Kurose and K. W. Ross. Computer Networking: A Top-Down Approach Featuring the Internet, 2nd edition. Addison-Wesley, 2002. • A. DeCegama. The Technology of Parallel Processing, 1989. • L. A. Barroso, J. Dean, and U. Hölzle. Web Search for a Planet: The Google Cluster Architecture. IEEE Micro, 2003. • http://www.seecs.ucf.edu/cluster/index.html
