
Volunteer Computing with BOINC





  1. Volunteer Computing with BOINC
  David P. Anderson, Space Sciences Laboratory, University of California, Berkeley

  2. High-throughput computing
  • Goal: finish lots of jobs in a given time
  • Paradigms: supercomputing, cluster computing, grid computing, cloud computing, volunteer computing

  3. Cost of 1 TFLOPS-year
  • Cluster: $145K (computing hardware; power/AC infrastructure; network hardware; storage; power; sysadmin)
  • Cloud: $1.75M
  • Volunteer: $1K - $10K (server hardware; sysadmin; web development)

  4. Performance
  • Current: 500K people, 1M computers; 6.5 PetaFLOPS (3 from GPUs, 1.4 from PS3s)
  • Potential: 1 billion PCs today, 2 billion in 2015; GPUs approaching 1 TFLOPS each
  • How to get 1 ExaFLOPS: 4M GPUs * 0.25 availability
  • How to get 1 Exabyte: 10M PC disks * 100 GB
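The back-of-the-envelope estimates on this slide can be checked with a few lines of arithmetic, assuming roughly 1 TFLOPS per GPU as the slide suggests:

```python
# Check of the slide's ExaFLOPS and Exabyte estimates.
TFLOPS_PER_GPU = 1.0        # slide: GPUs "approaching 1 TFLOPS"
gpus = 4_000_000            # 4M volunteered GPUs
availability = 0.25         # fraction of time a volunteer host is usable

# 1 EFLOPS = 1e6 TFLOPS
exaflops = gpus * TFLOPS_PER_GPU * availability / 1_000_000
print(f"Sustained compute: {exaflops} EFLOPS")  # 1.0

disks = 10_000_000          # 10M volunteered PC disks
gb_per_disk = 100           # 100 GB donated per disk

# 1 EB = 1e9 GB
exabytes = disks * gb_per_disk / 1_000_000_000
print(f"Storage: {exabytes} EB")                # 1.0
```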

  5. History of volunteer computing
  • Applications, 1995: distributed.net, GIMPS
  • Applications, 2000: SETI@home, Folding@home
  • Applications, 2005 to now: Climateprediction.net, Predictor@home, IBM World Community Grid, Einstein@home, Rosetta@home, ...
  • Middleware: academic (Bayanihan, Javelin, ...); commercial (Entropia, United Devices, ...); BOINC

  6. The BOINC computing ecosystem
  • Projects (e.g. LHC@home, Climateprediction.net, IBM World Community Grid) compete for volunteers
  • Volunteers attach to projects to make their contributions count
  • Goal: an optimal equilibrium between projects and volunteers

  7. What apps work well?
  • Bags of tasks: parameter sweeps, simulations with perturbed initial conditions, compute-intensive data analysis
  • Native, legacy, Java, and GPU apps; soon: VM-based apps
  • Job granularity: minutes to months
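A "bag of tasks" parameter sweep is just a set of fully independent jobs, one per parameter combination. The sketch below illustrates that structure; the names (`make_job`, the `--temperature`/`--seed` flags) are hypothetical, and a real BOINC project would hand each job to the server's job-submission machinery rather than a Python dict:

```python
# Illustrative "bag of tasks" sweep: each (temperature, seed) pair
# becomes one independent job description. Not part of the BOINC API.
from itertools import product

def make_job(temperature, seed):
    """Describe one independent work unit of the sweep."""
    return {
        "name": f"sweep_T{temperature}_s{seed}",
        "args": ["--temperature", str(temperature), "--seed", str(seed)],
    }

temperatures = [280, 290, 300, 310]  # e.g. perturbed initial conditions
seeds = range(3)                     # replicate runs per setting

jobs = [make_job(t, s) for t, s in product(temperatures, seeds)]
print(len(jobs), "independent jobs")  # 12
```

Because the jobs share no state, they can be scattered across volunteer hosts in any order, which is exactly why this class of application maps well onto volunteer computing.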

  8. Data size issues
  • Can handle moderately data-intensive apps
  • Volunteer side: commodity Internet, ~1 Mbps (450 MB/hr); possibly sporadic, non-dedicated, underutilized
  • Institution side: ~1 Gbps; non-dedicated, underutilized
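The slide's 450 MB/hr figure follows directly from the 1 Mbps link speed:

```python
# Check of the slide's bandwidth figure: a ~1 Mbps commodity
# Internet link sustained for an hour.
link_mbps = 1                                # megabits per second
bytes_per_sec = link_mbps * 1_000_000 / 8    # 125,000 bytes/s
mb_per_hour = bytes_per_sec * 3600 / 1_000_000
print(f"{mb_per_hour:.0f} MB/hr")            # 450
```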

  9. Example projects
  • Einstein@home
  • Climateprediction.net
  • Rosetta@home
  • IBM World Community Grid
  • GPUGRID.net

  10. Creating a volunteer computing project
  • Set up a server
  • Port applications; develop graphics
  • Develop software for job submission and result handling
  • Develop a web site
  • Ongoing: publicity and volunteer communication; system and DB admin (Linux, MySQL)

  11. How many CPUs will you get?
  • Depends on PR efforts and success, and on the project's public appeal
  • 12 projects have > 10,000 active hosts
  • 3 projects have > 100,000 active hosts

  12. Security
  • Code signing: protects volunteers even if a project's server is hacked
  • Client: account-based sandbox

  13. Organizational issues
  • Creating a volunteer computing project has startup costs and requires diverse skills
  • This limits its use by individual scientists and research groups
  • A better model: umbrella projects
  • Institutional: Lattice, VTU@home
  • Corporate: IBM World Community Grid
  • Community: AlmereGrid

  14. Summary
  • Volunteer computing is an important paradigm for high-throughput computing: price/performance and performance potential
  • Low technical barriers to entry (due to BOINC)
  • Organizational structure is critical
  • Use GPUs if developing a new app
