

David P. Anderson

Space Sciences Laboratory

U.C. Berkeley

22 Oct 2009

BOINC: The Year in Review

Volunteer computing

  • Throughput is now 10 PetaFLOPS

    • mostly Folding@home

  • Volunteer population is constant

    • 330K BOINC, 200K Folding@home

  • Volunteer computing still unknown in

    • HPC world

    • scientific computing world

    • general public

ExaFLOPS

  • Current PetaFLOPS breakdown:

  • Potential: ExaFLOPS by 2010

    • 4M GPUs * 1 TFLOPS * 0.25 availability
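The ExaFLOPS estimate above is simple arithmetic; a quick back-of-envelope check (the three factors are the slide's own figures):

```python
# Check the slide's estimate: 4 million volunteer GPUs,
# ~1 TFLOPS each, available about 25% of the time.
n_gpus = 4_000_000
flops_per_gpu = 1e12       # 1 TFLOPS
availability = 0.25

total_flops = n_gpus * flops_per_gpu * availability
print(total_flops / 1e18)  # 1.0 -> one ExaFLOPS
```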

Projects

  • No significant new academic projects

    • but signs of life in Asia

  • No new umbrella projects

  • AQUA@home: D-Wave Systems

  • Several hobbyist projects

BOINC funding

  • Funded into 2011

  • New NSF proposal

Facebook apps

  • Progress thru Processors (Intel/GridRepublic)

    • Web-only registration process

    • lots of fans, not so many participants

  • BOINC Milestones


Research

  • Host characterization

  • Scheduling policy analysis

    • EmBOINC: project emulator

  • Distributed applications

    • Volpex

  • Apps in VMs

  • Volunteer motivation study

Fundamental changes

  • App versions now have dynamically-determined processor usage attributes (#CPUs, #GPUs)

  • Server can have multiple app versions per (app, platform) pair

  • Client can have multiple versions per app

  • An issued job is linked to an app version

Scheduler request

  • Old (CPU only)

    • requested # seconds

    • current queue length

  • New: for each resource type (CPU, NVIDIA, ...)

    • requested # seconds

    • current high-priority queue length

    • # of idle instances
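The old-vs-new request contents can be sketched as follows. Field names here are illustrative, not the actual XML tags of the BOINC scheduler protocol; the point is that the new request carries the three quantities above once per resource type:

```python
# Illustrative sketch of the scheduler-request change (field names are
# assumptions, not the real protocol's XML tags).
old_request = {
    "work_req_seconds": 86400,  # requested # seconds (CPU only)
    "queue_seconds": 43200,     # current queue length
}

# New: one entry per resource type (CPU, NVIDIA, ...)
new_request = {
    "cpu":    {"req_seconds": 86400,  "hp_queue_seconds": 43200, "idle_instances": 0},
    "nvidia": {"req_seconds": 172800, "hp_queue_seconds": 0,     "idle_instances": 1},
}
```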

Scheduler reply

  • Application versions include

    • resource usage (# CPUs, # GPUs)

    • FLOPS estimate

  • Jobs specify an app version

  • A given reply can include both CPU and GPU jobs for a given application
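A sketch of that reply structure (names and numbers are illustrative, not the real reply XML): two versions of the same app, one CPU and one GPU, with each job bound to one of them.

```python
# Illustrative reply shape: one app, two versions (CPU and GPU),
# and jobs that each reference a specific app version.
reply = {
    "app_versions": [
        {"id": 1, "app": "astro", "ncpus": 1.0, "ngpus": 0.0, "flops_est": 3e9},
        {"id": 2, "app": "astro", "ncpus": 0.1, "ngpus": 1.0, "flops_est": 5e10},
    ],
    "jobs": [
        {"name": "wu_1001", "app_version_id": 1},  # CPU job
        {"name": "wu_1002", "app_version_id": 2},  # GPU job, same app
    ],
}
```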

Client: work fetch policy

  • When? From which project? How much?

  • Goals

    • maintain enough work

    • minimize scheduler requests

    • honor resource shares

  • per-project “debt”
Work fetch for GPUs: goals

  • Queue work separately for different resource types

  • Resource shares apply to aggregate

    Example: projects A and B have the same resource share.
    A has CPU and GPU jobs; B has only GPU jobs.
    Then A should get most of the CPU time and B correspondingly more of the GPU time, so that their total work is equal.
Work fetch for GPUs

  • For each resource type

    • per-project backoff

    • per-project debt

      • accumulate only while not backed off

  • A project’s overall debt is weighted average of resource debts

  • Get work from project with highest overall debt
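The policy above can be sketched in a few lines. The weights and debt values are made-up examples (not BOINC's actual constants); what the sketch shows is the weighted average over resource debts and the "highest overall debt wins" choice:

```python
# Sketch of debt-based work fetch: combine per-resource debts into an
# overall debt per project, then fetch from the project with the highest.
def overall_debt(resource_debts, weights):
    """Weighted average of one project's per-resource debts."""
    total_w = sum(weights.values())
    return sum(weights[r] * d for r, d in resource_debts.items()) / total_w

# Example debts (positive = project is owed work).
projects = {
    "A": {"cpu": 100.0, "nvidia": -50.0},
    "B": {"cpu": -100.0, "nvidia": 80.0},
}
weights = {"cpu": 1.0, "nvidia": 2.0}  # e.g. weight GPUs by peak speed

best = max(projects, key=lambda p: overall_debt(projects[p], weights))
print(best)  # "B": (-100 + 2*80)/3 = 20 beats A's (100 - 2*50)/3 = 0
```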

Client: job scheduling

  • GPU job scheduling

    • client allocates GPUs

    • GPU prefs

  • Multi-thread job scheduling

    • handle a mix of single-, multi-thread jobs

    • don’t overcommit CPUs
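A minimal sketch of the "don't overcommit CPUs" rule when mixing single- and multi-thread jobs (greedy and simplified; not the client's actual scheduler):

```python
# Greedy sketch: run jobs in priority order, skipping any job whose
# CPU demand would push total usage past the number of CPUs.
def schedule(jobs, ncpus):
    """jobs: list of (name, cpus_needed), highest priority first."""
    running, used = [], 0
    for name, need in jobs:
        if used + need <= ncpus:   # never overcommit
            running.append(name)
            used += need
    return running

jobs = [("mt_job", 4), ("st_job1", 1), ("st_job2", 1), ("mt_job2", 4)]
print(schedule(jobs, 6))  # ['mt_job', 'st_job1', 'st_job2']; mt_job2 would overcommit
```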

GPU odds and ends

  • Default install is non-service

  • Dealing with sporadic usability

    • e.g. Remote Desktop

  • Multiple non-identical GPUs

  • GPUs and anonymous platform

Other client changes

  • Proxy auto-detection

  • Exclusive app feature

  • Don’t write state file on each checkpoint

Screensaver

  • Screensaver coordinator

    • configurable

  • New default screensaver

  • Intel screensaver

Scheduler/feeder

  • Handle multiple app versions per platform

  • Handle requests for multiple resources

    • app selection

    • completion estimate, deadline check

  • Show specific messages to users

    • “no work because you need driver version N”

  • Project-customized job check

    • jobs need different # of GPU processors

  • Mixed locality and non-locality scheduling

Server

  • Automated DB update

  • Protect admin web interface

Manager

  • Terms of use feature

  • Show only projects supporting platform

    • need to extend for GPUs

  • Advanced view is keyboard navigable

  • Manager can read cookies (Firefox, IE)

    • web-only install


  • Enhanced wrapper

    • checkpointing, fraction done

  • PyMW: master/worker Python system

Community contributions

  • Pootle-based translation system

    • projects can use this

  • Testing

    • alpha test project

  • Packaging

    • Linux client, server packages

  • Programming

    • lots of flames, little code

What didn’t get done

  • Replace runtime system

  • Installer: deal with “standby after X minutes”

  • Global shutdown switch

Things on hold

  • BOINC on mobile devices

  • Replace Simple GUI

Important things to do

  • New system for credit and runtime estimation

    • we have a design!

  • Keep track of GPU availability separately

  • Steer computers with GPUs towards projects with GPU apps

  • Sample CUDA app

BOINC development

  • Let us know if you want something

  • If you make changes of general utility:

    • document them

    • add them to trunk