
Beowulf – Cluster Nodes & Networking Hardware



  1. Beowulf – Cluster Nodes & Networking Hardware Garrison Vaughan

  2. Cluster Nodes • Beowulf clusters can be built with off-the-shelf components. This allows you to fine-tune each node of the cluster to your specifications. • Using custom rack-mountable nodes is encouraged to save space when building your cluster, but this makes it slightly more complicated to keep the nodes cool and may drive up the cost of the project.

  3. Node Hardware • On a tight budget, dual-processor motherboards are recommended, since the price/performance curve falls off for boards with four or more processors. The downside is that each node's network card comes under more stress. • Get as much memory per node as possible; memory is an important area not to cut corners on. • Graphics cards do not need to be top of the line unless your project makes heavy use of GPU acceleration.

  4. Benchmarking Hardware Configuration • A good suite of tools for benchmarking clusters is the Beowulf Performance Suite. It consists of benchmarks such as STREAM, NAS, UnixBench, NetPIPE, and Bonnie. • It's good to run different tests against different configurations and architectures so you can make the best decision for your project.
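
The idea behind a STREAM-style memory test from the suite above can be sketched in a few lines. This is only an illustrative toy that times one memory copy, not the real STREAM benchmark:

```python
# Toy sketch of a STREAM-style "copy" memory-bandwidth measurement.
# Illustrative only -- the real STREAM benchmark is a tuned C program.
import time

def copy_bandwidth(n=10_000_000):
    src = bytearray(n)                 # n bytes to copy
    start = time.perf_counter()
    dst = bytes(src)                   # one pass: read n bytes, write n bytes
    elapsed = time.perf_counter() - start
    mb_moved = 2 * n / 1e6             # bytes read + bytes written, in MB
    return mb_moved / elapsed          # MB/s

print(f"copy: {copy_bandwidth():.0f} MB/s")
```

Running a sketch like this on each candidate node configuration gives a rough, comparable number, which is the same workflow the real suite automates.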

  5. Node Software • Unix and Unix-like operating systems are ideal for Beowulf cluster nodes. Linux in particular is used very often since it is open source. • Libraries such as PVM and MPI can be used for parallel processing across the Linux nodes.
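
The SPMD (single program, multiple data) style that MPI encourages can be illustrated with a toy model. Python's multiprocessing module stands in for an MPI runtime here; with real MPI each "rank" would be a process running on a different cluster node:

```python
# Conceptual sketch of SPMD parallelism as used with MPI.
# multiprocessing is a stand-in for MPI here, not MPI itself.
from multiprocessing import Pool

def worker(args):
    rank, size, data = args
    # Each rank sums its own slice of the data
    # (analogous to scattering data and doing local work).
    return sum(data[rank::size])

def parallel_sum(data, size=4):
    with Pool(size) as pool:
        partials = pool.map(worker, [(r, size, data) for r in range(size)])
    # Combining the partial results plays the role of a reduce step.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum(data))  # matches the serial sum of the data
```

The same scatter/compute/reduce shape is what MPI programs express with calls like scatter and reduce, spread over physical nodes instead of local processes.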

  6. Network Topology • Possible Network Topologies • Shared multi-drop passive cable • Tree structure of hubs and switches • Custom complicated switching technology • One big switch.

  7. Network Topology • Direct wire - Two machines can be connected directly by an Ethernet cable (usually a Cat 5e cable) without needing a hub or a switch. With multiple NICs per machine we can build larger networks, but then routing tables must be specified to allow packets to get through, and the machines end up doing double duty as routers. • Hubs and repeaters - Every node is visible to every other node, and the CSMA/CD protocol is still used. A hub/repeater receives signals, cleans and amplifies them, and redistributes them to all nodes. • Switches - A switch accepts packets, interprets the destination address, and sends each packet down the segment that has the destination node. This allows half the machines to communicate directly with the other half (subject to the bandwidth constraints of the switch hardware). Multiple switches can be connected in a tree or sometimes other schemes. The root switch can become a bottleneck, so it can be a higher-bandwidth switch.
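
The routing-table idea in the direct-wire case can be shown with a toy model. Node names and the table layout are made up for illustration; on a real cluster the kernel's routing table (managed with the `route` or `ip` tools) does this job:

```python
# Toy model of static routing in a direct-wire chain A <-> B <-> C.
# B has two NICs and forwards packets between A and C, doing
# double duty as a router. All names here are illustrative.
links = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}

# Static routing table: (current node, destination) -> next hop.
routes = {("A", "C"): "B", ("C", "A"): "B"}

def deliver(src, dst):
    """Follow next hops until the packet reaches dst; return the path."""
    path, node = [src], src
    while node != dst:
        # Directly wired neighbors need no table entry.
        nxt = dst if dst in links[node] else routes[(node, dst)]
        path.append(nxt)
        node = nxt
    return path

print(deliver("A", "C"))  # A's packet is forwarded through B
```

Without the `routes` entries, A and C have no way to reach each other even though both are wired to B, which is exactly why the slide says routing tables must be specified.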

  8. Switches Continued Switches can be managed or unmanaged. Managed switches are more expensive, but they also allow many useful configurations. Here are some examples. • Port trunking - Allows up to 4 ports to be treated as one logical port. For example, this would allow a 4 Gbit/s connection between two Gigabit switches. • Linux channel bonding - Channel bonding means bonding multiple NICs together into one logical network connection. This requires the network switch to support some form of port trunking. • Switch meshing - Allows up to 24 ports between switches to be treated as a single logical port, creating a very high-bandwidth connection. Useful for creating custom complicated topologies. • Stackable, high-bandwidth switches - Stackable switches with a special high-bandwidth interconnect between the switches. For example, Cisco has 24-port Gigabit stackable switches with a 32 Gbit/s interconnect. Up to 8 such switches can be stacked together. All the stacked switches can be controlled by one switch and managed as a single switch. If the controlling switch fails, the remaining switches hold an election and a new controlling switch is elected. BayStack also has stackable switches with a 40 Gbit/s interconnect.
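
The round-robin idea behind channel bonding and port trunking can be sketched as a toy model. Real bonding is done by the Linux kernel's bonding driver, not in user code, and the NIC names below are illustrative:

```python
# Toy model of round-robin channel bonding: frames are spread across
# several 1 Gbit/s NICs that together appear as one logical link.
# The Linux bonding driver does this in the kernel; this is a sketch.
from itertools import cycle

def bond(frames, nics):
    """Assign each frame to a NIC in round-robin order."""
    assignment = {nic: [] for nic in nics}
    for frame, nic in zip(frames, cycle(nics)):
        assignment[nic].append(frame)
    return assignment

nics = ["eth0", "eth1", "eth2", "eth3"]   # four bonded gigabit ports
out = bond(list(range(8)), nics)
print(out)                                 # frames balanced across NICs
print(f"aggregate: {len(nics)} Gbit/s")    # 4 x 1 Gbit/s logical link
```

Because frames for one logical connection arrive on several physical ports, the switch on the other end must support port trunking to reassemble them into one link, which is the dependency the slide points out.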

  9. Cluster Network
