This presentation examines the efficiency of file I/O and memory-mapped files in cluster computing environments, analyzing factors such as memory latency, bandwidth, and network hardware. It includes benchmarks of PCI, SCSI, Ethernet, and several network configurations, and investigates the impact of NIC drivers on throughput and CPU utilization.
Microbenchmarking IHPCL clusters
Neil Bright, 3/26/2001
File I/O vs. mmap

File I/O:
• fd = open()
• read(fd, buffer, size)
• x = buffer[4]
• close(fd)

mmap:
• fd = open()
• buffer = mmap(fd, size)
• close(fd)
• x = buffer[4]
• munmap(buffer, size)
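The two call sequences above can be sketched as a runnable comparison. This is a minimal illustration, not the benchmark code from the talk; it uses Python's `os` and `mmap` wrappers around the same POSIX calls, and the file path and offsets are arbitrary:

```python
import mmap
import os
import tempfile

def byte_via_read(path, offset):
    """Plain file I/O: read() copies data through a user-space buffer."""
    fd = os.open(path, os.O_RDONLY)
    try:
        data = os.read(fd, offset + 1)   # read enough bytes to cover the offset
        return data[offset]
    finally:
        os.close(fd)

def byte_via_mmap(path, offset):
    """Memory-mapped I/O: the page is faulted in on first access."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        buf = mmap.mmap(fd, size, prot=mmap.PROT_READ)
    finally:
        os.close(fd)                     # mapping stays valid after close(fd)
    try:
        return buf[offset]               # x = buffer[4] in the slide
    finally:
        buf.close()                      # munmap(buffer, size)

# Demo on a throwaway file: both paths return the same byte.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"microbenchmark")
    path = f.name
print(byte_via_read(path, 4), byte_via_mmap(path, 4))
os.remove(path)
```

The key difference being benchmarked: `read()` pays for an explicit copy into a user buffer, while `mmap()` defers the work to page faults on first touch.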
• PCI 2.0, 33 MHz, 32-bit: 132 Mbytes/sec
• PCI 2.0, 66 MHz, 64-bit: 528 Mbytes/sec
• SCSI Ultra160: 160 Mbytes/sec
• Quantum Atlas 10K II (Ultra160 SCSI disk): 60 Mbytes/sec
• Gigabit Ethernet: 125 Mbytes/sec
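The bus figures above are simple arithmetic: theoretical peak = clock rate × bus width in bytes. A quick sketch of that check (my own helper, not from the slides):

```python
def bus_peak_mb_per_s(clock_mhz, width_bits):
    """Theoretical peak bandwidth: clock (MHz) * width (bytes) = Mbytes/sec."""
    return clock_mhz * (width_bits // 8)

print(bus_peak_mb_per_s(33, 32))   # PCI 2.0, 33 MHz, 32-bit -> 132
print(bus_peak_mb_per_s(66, 64))   # PCI 2.0, 66 MHz, 64-bit -> 528

# Gigabit Ethernet: 1000 Mbit/sec over 8 bits per byte -> 125 Mbytes/sec
print(1000 // 8)
```

Note the disk (60 Mbytes/sec) is far below its SCSI bus limit (160 Mbytes/sec): the media transfer rate, not the bus, is the bottleneck.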
Network Hardware
Configurations tested: Myrinet on Linux 2.2, Gigabit Ethernet on Linux 2.2, Gigabit Ethernet on Linux 2.4.
Metrics:
• TCP Bandwidth
• UDP Bandwidth
• TCP Latency
TCP Bandwidth

                                         Throughput   CPU utilization (%)
                                         (Mb/sec)     Sender   Receiver
Linux 2.2.16-3smp, gm 1.2.3              256.09       48.37    54.33
Linux 2.2.16-3smp, e1000 3.0.7           212.23       28.20    27.46
Linux 2.4.3-pre4-smp-pae, e1000 3.0.7    273.70       2.09     0.35

Confidence Intervals                     Throughput   Local CPU   Remote CPU
Linux 2.2.16-3smp, gm 1.2.3              0.00%        0.00%       0.00%
Linux 2.2.16-3smp, e1000 3.0.7           0.00%        0.00%       0.00%
Linux 2.4.3-pre4-smp-pae, e1000 3.0.7    3.50%        50.60%      537.30%

8,192 byte messages, 57,344 byte buffers
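The slides don't name the benchmark tool, but a TCP bandwidth measurement of this shape (fixed message size, fixed socket buffer size) can be sketched as a loopback sender/receiver pair. This is an illustrative sketch only; the 8,192-byte message and 57,344-byte buffer sizes come from the slide, while the message count and addresses are my own choices:

```python
import socket
import threading
import time

MSG_SIZE = 8192     # message size from the slide
BUF_SIZE = 57344    # socket buffer size from the slide
N_MSGS = 2000       # ~16 MB total; arbitrary run length

def _receiver(srv):
    """Drain all bytes the sender transmits."""
    conn, _ = srv.accept()
    remaining = MSG_SIZE * N_MSGS
    while remaining > 0:
        chunk = conn.recv(min(BUF_SIZE, remaining))
        if not chunk:
            break
        remaining -= len(chunk)
    conn.close()

def measure_throughput_mbps():
    """Time N_MSGS fixed-size sends over loopback; return Mbit/sec."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    t = threading.Thread(target=_receiver, args=(srv,))
    t.start()

    cli = socket.socket()
    cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
    cli.connect(srv.getsockname())
    msg = b"\0" * MSG_SIZE

    start = time.perf_counter()
    for _ in range(N_MSGS):
        cli.sendall(msg)
    cli.close()
    t.join()
    elapsed = time.perf_counter() - start
    srv.close()
    return MSG_SIZE * N_MSGS * 8 / elapsed / 1e6

print(f"{measure_throughput_mbps():.1f} Mb/sec over loopback")
```

A real cross-machine run would also need the CPU-utilization sampling the table reports, which tools of the era (and today, e.g. netperf) obtain from OS counters on both ends.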
Impact of driver on performance

                                    Throughput   CPU utilization (%)
                                    (Mb/sec)     Sender   Receiver
Linux 2.2.16-3smp, e1000 2.0.6      167.67       35.81    51.19
Linux 2.2.16-3smp, e1000 2.5.14     186.55       31.70    48.30
Linux 2.2.16-3smp, e1000 3.0.7      212.23       28.20    27.46

8,192 byte messages, 57,344 byte buffers