
Parallel processing for AstroWise


Presentation Transcript


  1. Parallel processing for AstroWise. Some considerations for the implementation of parallel processing. AstroWise pre kick-off meeting.

  2. Current Developments • Beowulf for the masses • OSCAR • User setup of Linux cluster infrastructure • Add-on software: PBS, PVM, MPI, C3 • Includes Itanium support • www.OpenClustergroup.org

  3. Current Developments • In-a-Box initiative • NCSA Alliance layered software • Cluster-in-a-Box (CiB) • Grid-in-a-Box (GiB) • Display Wall-in-a-Box (DBox) • Access Grid-in-a-Box (AGiB)

  4. Cluster-in-a-Box • Builds on OSCAR • Simplifies installing and running Linux clusters • Compatible with the Alliance's production clusters • Software foundation for Grid toolkits and Display Walls

  5. Display Wall-in-a-Box • Tiled display wall • WireGL, VNC, NCSA Pixel Blaster • Building instructions

  6. Example Sites • NIKHEF D0 Farm • D0 Monte Carlo production

  7. Nodes • Dell Precision Workstation 220 • CPU: dual Pentium III 800 MHz / 256 kB cache each • Memory: 512 MB PC800 ECC RDRAM • Hard disk: 40 GB (7200 rpm) ATA-66 • CD-ROM: 20-48x EIDE • Floppy drive: 3.5" 1.44 MB • Graphics card: Diamond Viper V770D, 32 MB AGP • Network: integrated 3Com Fast Ethernet card (10/100 TPO) with Wake-on-LAN functionality

  8. Farm Server • Dell Precision Workstation 620 • CPU: dual Pentium III Xeon 1 GHz / 256 kB cache each • Memory: 512 (2 x 256) MB PC800 ECC RDRAM • Hard disk: 72.8 GB (10,000 rpm) U160/M SCSI • CD-ROM: 20-48x EIDE • Floppy drive: 3.5" 1.44 MB • Graphics card: ELSA Synergy Force • Monitor: Dell Mainstream 17" (15.9" VIS) FST • Network: integrated 3Com 10/100 Mb Ethernet controller with Wake-on-LAN functionality

  9. File Server • ELONEX EIDE server • CPU: dual Pentium III 700 MHz • Memory: 512 (4 x 128) MB SDRAM DIMM ECC • Floppy drive: 3.5" 1.44 MB • Hard disk: 2 x 10 GB IDE (system disks) • RAID controller: 3ware 3W5800L, 8 ports • Data disks: 16 x IBM DTLA-307075 75 GB EIDE (total of 1.2 TB) • Network: Gigabit Netgear GA620 • Graphics card: integrated Cirrus Logic GD5480, 2 MB • Monitor: 17" flat screen

  10. Switch • 3Com SuperStack II Switch 3300 • 24 x 10/100Base-TX (autosensing) • 1 x 1 Gbps matrix port for stacking • Optional high-speed slot: 1000Base-SX, 100Base-FX, or matrix module (4 x 1 Gbps)

  11. Lorentz Institute • 64 AMD 1300 MHz rack-mount PCs • Monte Carlo simulation • See tour • Additional conditions • Power consumption • Heat production

  12. Configuration Considerations • Motherboard • FSB speed (200, 266 MHz) • Memory: Rambus DRAM • CPU support > 1.5 GHz • Pentium 4 (SMP?) • Itanium

  13. Parallel implementation • Single OS, multiple CPUs • MOSIX • Fork and forget • Does load balancing, adaptive resource allocation • Cluster of machines • MPI, PVM (message passing) • PVFS (Parallel Virtual File System) • PBS (Portable Batch System) • Maui (job scheduling)
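
To illustrate the "fork and forget" model: on a single-system-image cluster such as MOSIX, ordinary forked processes can be migrated and load-balanced across nodes without any explicit message passing in the application. The sketch below is generic Unix Python, not AstroWise or MOSIX-specific code; process_exposure and the exposure list are hypothetical names used only for illustration.

    import os

    def process_exposure(path):
        # Placeholder for a per-exposure reduction step (hypothetical).
        print("[pid %d] processing %s" % (os.getpid(), path))

    exposures = ["exp001.fits", "exp002.fits", "exp003.fits"]   # hypothetical inputs

    children = []
    for path in exposures:
        pid = os.fork()
        if pid == 0:                    # child: do the work and exit
            process_exposure(path)
            os._exit(0)
        children.append(pid)            # parent: note the child and keep forking

    for pid in children:                # parent collects all children
        os.waitpid(pid, 0)

On a plain SMP box the children share one machine; the attraction of the MOSIX approach is that the same fork-based script would spread over the cluster transparently.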

  14. Network • Latency • Overhead in packing data • Grand copy and wait (4k x 2k in 4 s at 100 Mb/s) • Bandwidth • Physical capacity of connections • 200 kpix/s • Switching technology, server at > 1 Gb/s
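
A back-of-the-envelope check of the quoted transfer time, assuming 4 bytes per pixel and roughly 65% effective throughput on a nominal 100 Mb/s Fast Ethernet link (both figures are assumptions, not taken from the slides), lands near the ~4 s per 4k x 2k frame mentioned above:

    pixels          = 4096 * 2048        # one 4k x 2k detector frame
    bytes_per_pixel = 4                  # assumed 32-bit pixels
    frame_bytes     = pixels * bytes_per_pixel

    link_bits_per_s = 100e6              # Fast Ethernet, nominal
    efficiency      = 0.65               # assumed packing/protocol overhead

    seconds = frame_bytes * 8 / (link_bits_per_s * efficiency)
    print("%.0f MB per frame, ~%.1f s per transfer" % (frame_bytes / 1e6, seconds))

This prints roughly "34 MB per frame, ~4.1 s per transfer", which shows why the server-side links need to run at > 1 Gb/s once several nodes pull frames at the same time.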

  15. Parallelization • Simple scripting • Rendezvous problem • Load balancing • Code level • MPI programming • How deep? • Loops • Matrix splitting • Sparse array coding
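
A minimal sketch of what script-level parallelization with matrix splitting could look like: the image is cut into row blocks, each block is handed to a worker process, and the results are reassembled afterwards (the rendezvous step). The function names and the per-block operation are illustrative assumptions, not AstroWise code.

    from multiprocessing import Pool
    import numpy as np

    def flatfield_block(args):
        block, flat = args
        return block / flat                        # hypothetical per-block operation

    def process_image(image, flat, nworkers=4):
        blocks = zip(np.array_split(image, nworkers, axis=0),
                     np.array_split(flat, nworkers, axis=0))   # matrix splitting
        with Pool(nworkers) as pool:
            results = pool.map(flatfield_block, blocks)
        return np.vstack(results)                  # rendezvous: reassemble the image

    if __name__ == "__main__":
        img  = np.random.rand(2048, 4096)
        flat = np.ones_like(img)
        print(process_image(img, flat).shape)

Going deeper than this (parallel loops inside the algorithms, MPI calls in the code itself) buys finer load balancing at the cost of the recoding and debugging effort listed under Costs below.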

  16. Granularity • Fine: small tasks, frequent communication, many processes • Coarse: large tasks, much computational work, infrequent communication

  17. Costs • Programmer's time: analyze, recode • Complicated debugging • Loss of portability • Total CPU time greater with parallel execution • Initialize & terminate tasks • Communications among tasks • Replication of code, more memory

  18. Reading/Writing data • One process distributes data • Lots of I/O, no local disk? • Each process can read/write • Read/write their own file • Read/write own section of a big file • Master/slave based on dataflow
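
One way to realize "each process writes its own section of a big file" is to pre-size the file and have every worker seek to its own byte offset, so the writes never overlap and no master process is needed for the output. The file name, block size, and worker count below are illustrative assumptions, not part of any AstroWise design.

    from multiprocessing import Process

    BLOCK = 1024 * 1024                    # 1 MB section per worker (assumed)

    def write_section(filename, rank):
        data = bytes([rank]) * BLOCK       # stand-in for this worker's output
        with open(filename, "r+b") as f:
            f.seek(rank * BLOCK)           # jump to this worker's own section
            f.write(data)

    if __name__ == "__main__":
        nworkers = 4
        with open("bigfile.dat", "wb") as f:
            f.truncate(nworkers * BLOCK)   # pre-size the shared file
        procs = [Process(target=write_section, args=("bigfile.dat", r))
                 for r in range(nworkers)]
        for p in procs: p.start()
        for p in procs: p.join()

Whether this or a master/slave scheme is preferable depends on the dataflow and on whether the nodes have local disks, as noted above.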

  19. Issues • Parallelization • On what level, at what cost • Should be serializable as well • What kind of cluster • Cluster of machines • Single OS (large SMP) • Grid connectivity
