
The development in Network Performance


Presentation Transcript


  1. The development in Network Performance and its impact on the computing model of tomorrow

  2. GRID And High Performance Networks

  3. The Grid • Named after the power grid • Sometimes referred to as the information power grid • Like the power grid, the Grid should be powered by large installations, not individual generators

  4. Philosophy • The current Internet only allows access to information • The Grid should provide access to any desired resource • CPUs/supercomputers • Storage • Applications

  5. Performance Improvement since 1988

  6. Rules of the Game: Copenhagen–Stockholm • 1988: latency 40 ms, bandwidth 64 kb/s • 2005: latency 10 ms, bandwidth 10 Gb/s • Networking is much better

  7. Is There an Improvement? Whether we have an improvement depends on which clock we watch!

  8. Is There an Improvement? Whether we have an improvement depends on which clock we watch! (diagram: the CPU)

  9. Is There an Improvement? Whether we have an improvement depends on which clock we watch! (diagram: the CPU with its outer clock, wall time, and its inner clock, CPU cycles)

  10. Rules of the Game: Copenhagen–Stockholm • Using the inner clock • 1988: 1 B costs 0.8 M CPU cycles, 1 GB costs 2 T CPU cycles • 2005: 1 B costs 39 M CPU cycles, 1 GB costs 3 G CPU cycles • Latency is much worse • But bandwidth is much better

  11. Rules of the Game: Hard Drive–Memory • Using the inner clock • 1988: 1 B costs 1 M CPU cycles, 1 GB costs 1 G CPU cycles • 2005: 1 B costs 13 M CPU cycles, 1 GB costs 38 G CPU cycles • Hard drives are also much worse

  12. Development as seen from the CPU

  13. Why Grid? • The network bandwidth is now here

  14. File Access Using High Performance Networks

  15. Transparent Remote File Access • Huge input files incur a number of problems: • Download time vs. total execution time • Job execution on the resource is delayed • Storage requirements on resources • Often only small scattered fragments of input files are needed • How about automatic on-demand download of needed data?

  16. Example
      int fd = open("inputfile", O_RDONLY);
      ssize_t i;
      while ((i = read(fd, buffer, 2000)) > 0) {
          /* process buffer */
      }
  User applications need not be recompiled or rewritten!

  17. Communication Protocol • HTTP supports a "Range" header in GET requests:
      GET /inputfile HTTP/1.1
      Host: MiG_server.imada.sdu.dk
      Range: bytes=2000-3000
  • There is no range support in PUT requests • To support writing to remote files, a custom web server was developed

  18. Overriding file-access • Override a subset of file manipulating routines: • open, close, read, write, seek, dup, sync, etc. • Preload this library using the LD_PRELOAD environment variable • Requires user apps to be dynamically linked • Forward local file access to the native file system using the dlfcn library
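A minimal sketch of such an interposition library for open(), assuming a hypothetical "grid://" prefix to mark remote files (the real library described above intercepts many more calls and actually performs the on-demand range transfers):

```c
/* Sketch of an LD_PRELOAD interposer for open().
 * Build:  gcc -shared -fPIC -o remote_io.so remote_io.c -ldl
 * Use:    LD_PRELOAD=./remote_io.so ./user_app
 * The "grid://" convention is hypothetical, for illustration only. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* hypothetical convention: remote Grid files start with "grid://" */
static int is_remote_path(const char *path) {
    return strncmp(path, "grid://", 7) == 0;
}

int open(const char *path, int flags, ...) {
    /* forward local access to the native libc open() via dlfcn */
    int (*real_open)(const char *, int, ...) =
        (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    mode_t mode = 0;
    if (flags & O_CREAT) {          /* open() is variadic: fetch mode */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    if (is_remote_path(path)) {
        /* a real library would set up on-demand HTTP range
         * transfers here; this sketch only logs the interception */
        fprintf(stderr, "intercepted remote open: %s\n", path);
    }
    return real_open(path, flags, mode);
}
```

Because the library is loaded ahead of libc, the dynamic linker resolves the application's open() to this wrapper, which is why the slide notes that user applications must be dynamically linked.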

  19. Efficient Access • Simple solution: general purpose block size based on n/2-analysis • Advanced solution: depends on the user application: • The nature of the application (sequential vs non-sequential file access) • The block size used in the application • Introduce prefetching (1 block read-ahead) • Adjust the block size dynamically based on the prefetching and the time taken to transfer a block
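The dynamic block-size adjustment could be sketched as a simple feedback rule: after each prefetched block, compare the transfer time with the time the application spent processing the previous block, and grow or shrink the next request. The doubling/halving policy and the size limits below are illustrative, not from the talk:

```c
/* Illustrative dynamic block-size policy for prefetched remote reads. */
#include <stddef.h>

enum { MIN_BLOCK = 4 * 1024, MAX_BLOCK = 4 * 1024 * 1024 };

size_t next_block_size(size_t current, double transfer_ms, double compute_ms) {
    if (transfer_ms < compute_ms && current * 2 <= MAX_BLOCK)
        return current * 2;   /* network keeps up: fetch bigger blocks */
    if (transfer_ms > compute_ms && current / 2 >= MIN_BLOCK)
        return current / 2;   /* application is stalling on the net: back off */
    return current;
}
```

With one block of read-ahead, the block size converges toward the point where transfer and computation overlap, which is the balance the slide is after.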

  20. Experiments • 4 experiments: • Overhead: read a one byte file • I/O intensive application: Checksum a 1 GB file • I/O balanced application: Process a 1 GB file • Partial file traversal: Search a 360 MB B+ tree for a random key • 3 test setups: • Local execution • Copy model • Remote access model

  21. Baseline Performance: 100 Mb net

  22. Latency tests

  23. Checksum

  24. Balanced

  25. B+ Tree

  26. Display Using High Performance Networks

  27. True End of the PC? • If we can eliminate the disk, we eliminate >60% of the errors in the PC • But perhaps we don't need the PC at all • The average PC utilizes less than 5% of its capacity (source: Intel) • The reality is that the PC is • much too powerful most of the time • not nearly powerful enough the rest of the time • So why not eliminate the PC?

  28. Bandwidth for Remote Users • A graphics-intensive user • Screen size: 1600x1400 • Frequency: 50 Hz • Color depth: 32 b • Compression 1:10 • Required bandwidth: 0.33 Gb/s • Translates into 30 users per 10 Gb/s line

  29. Bandwidth for Remote Users • A typical user • Screen size: 1280x1024 • Frequency: 30 Hz • Color depth: 24 b • Compression 1:100 • Required bandwidth: 0.008 Gb/s • Translates into 1138 users per 10 Gb/s line

  30. World of Tomorrow? (diagram: users connected through the Grid to resources and disks)

  31. The Grid Terminal

  32. Grid terminal

  33. But Haven't We Seen This Before? • Is this not just another thin client? • No! • Thin clients work against dedicated servers • Grid has no single point of failure • And Grid has competition

  34. Low latency Using High Performance Networks

  35. Distributed Shared Memory

  36. DSM Test – the problem…

  37. The Results

  38. The Future

  39. Conclusion and Predictions • No reason to expect any change in the development of performance • Relative to the CPU, network latency will keep getting worse • But bandwidth is limited only by demand • Grid will allow users to ignore computer maintenance and backups • Even individual home users will join Grid
