
A Task Development Infrastructure



  1. A Task Development Infrastructure
     Rob Kolstad, Ph.D., USA Computing Olympiad

  2. USA/USACO Background
     • Population: 307M
     • Area: 9.8M km² (low density → expensive to gather)
     • 29,500 high schools
     • 15M high school students + 0.3M ‘home schooled’
     • 3.2M seniors graduated in 2009
     • ~160 USACO participants → 0.001% (pitiful)
     • 6-24 coaches, 1-2 administrators, all volunteer effort

  3. USACO Annual Task Budget
     • Qualification round (sets bronze/silver/gold level)
     • Six contests × ~3 divisions × 3 tasks/contest → ~54 tasks
     • Bonus contests: 1-2 contests × 3 tasks/contest → 3-6 tasks
     • Camp contenders: 6 contests × 3 tasks/contest → 18 tasks
     • Camp 2nd tier: 4 contests × 3 tasks/contest → ~12 tasks
     • Total: 75+ new tasks annually (arithmetic sketched below)
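
The "75+" total follows from straightforward arithmetic over the tiers above; a minimal sketch, taking each range at its low end:

    # Per-tier arithmetic from the slide (ranges taken at their low ends)
    regular   = 6 * 3 * 3   # six contests x ~3 divisions x 3 tasks/contest -> ~54
    bonus     = 1 * 3       # 1-2 bonus contests x 3 tasks/contest          -> 3-6
    contender = 6 * 3       # camp contender contests                       -> 18
    second    = 4 * 3       # camp 2nd-tier contests                        -> ~12
    print(regular + bonus + contender)            # 75 at the low ends
    print(regular + bonus + contender + second)   # 87, hence "75+"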

  4. USACO Task Properties
     • Clear/complete text
     • Easy to edit/repair text
     • Printable/web-able
     • Task ratings and evaluation from multiple coaches
     • Algorithm type
     • Solution time
     → More…

  5. USACO Task Properties, II
     • Test data
     • Validator (mechanically created!?; sketched below)
     • Easy to manipulate (add, remove, reorder)
     • Feedback style (one case? many? full?)
     • Scoring: multiple tests per case
     • Task style (batch, reactive, output-file)
     • Solutions (with execution, of course)
     • Answer-grader & output format-checker
     • Analysis text
     • Translations
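
To make the validator item concrete, here is a minimal sketch for a hypothetical task whose input is a count N followed by N bounded integers. The format and limits are invented for illustration and are not USACO's actual validator code:

    import sys

    # Hypothetical input format (invented for illustration):
    #   line 1: N (1 <= N <= 100000)
    #   line 2: N space-separated integers, each in [-10^9, 10^9]
    def validate(stream):
        lines = stream.read().split("\n")
        n = int(lines[0])
        assert 1 <= n <= 100_000, "N out of range"
        values = lines[1].split()
        assert len(values) == n, "wrong number of values"
        for v in values:
            x = int(v)                      # raises ValueError on non-integers
            assert -10**9 <= x <= 10**9, "value out of range"

    if __name__ == "__main__":
        validate(sys.stdin)
        print("OK")   # reaching this line means the test file is well-formed

A validator like this runs over every test file, so "easy to manipulate (add, remove, reorder)" stays safe: any malformed case fails loudly before a contest ships.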

  6. Task Text Contents (sketched as a record below)
     • Names (short & full)
     • Author & owner
     • Presentation order
     • Difficulty
     • Text body
     • Input & output formats
     • Input & output samples/explanations
     • Limits on time/memory
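
Taken together, these fields amount to one task record. The sketch below is hypothetical; the field names and defaults are invented and are not USACO's actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class TaskText:
        short_name: str                 # e.g. "fence"
        full_name: str                  # e.g. "Fence Painting"
        author: str
        owner: str                      # coach currently responsible for the task
        order: int                      # presentation order within the contest
        difficulty: int                 # coach-assigned rating
        body: str                       # problem statement text
        input_format: str
        output_format: str
        samples: list = field(default_factory=list)  # (input, output, explanation)
        time_limit_s: float = 1.0
        memory_limit_mb: int = 256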

  7. USACO Milieu
     • Distributed coaching staff
     • Annually from 6 to 24 coaches
     • Distributed across the USA, Canada & other continents
     • Distributed across almost all timezones
     • Medium-speed network link (1.5 Mb/sec down; 0.7 Mb/sec up)
     • Small number of servers → requires web-style development

  8. Automation
     • Easy navigation of tasks and contests
     • Front page with pool of tasks, including status
     • Contests
     • Tasks
     • Configuration (start time, duration, etc.)
     • Status
     • Standard task requirements
     • Sandbox, queuer, runner, checker (toy runner sketch below)
     • Matches contest & grading environments
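
For the sandbox/queuer/runner/checker pipeline, a toy runner might look like the following. This is a sketch under invented names, not the USACO grading code; a production system would add OS-level sandboxing (the slide's "sandbox") and memory limits:

    import subprocess

    def run_case(cmd, input_path, expected_path, time_limit_s=1.0):
        """Run one solution on one test case and grade its output."""
        with open(input_path) as fin:
            try:
                result = subprocess.run(cmd, stdin=fin, capture_output=True,
                                        text=True, timeout=time_limit_s)
            except subprocess.TimeoutExpired:
                return "TIME_LIMIT_EXCEEDED"
        if result.returncode != 0:
            return "RUNTIME_ERROR"
        with open(expected_path) as fexp:
            expected = fexp.read()
        # Token-wise comparison tolerates whitespace differences; tasks with
        # multiple valid answers would use a custom checker instead.
        if result.stdout.split() == expected.split():
            return "ACCEPTED"
        return "WRONG_ANSWER"

    # Usage: verdict = run_case(["./solution"], "1.in", "1.out")

Keeping the development-side runner identical to the contest and grading environments (the last bullet above) is what lets coaches trust that a task behaving well in development will behave the same way on contest day.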

  9. Challenges/Requirements
     • Security/authentication
     • Multiple task development systems (!)
     • Paradigms for contest export and administration
     • Functionality with reliability
     • Meaningful user interfaces
     • Easy to navigate
     • “Shiny” (currently a failure)
     • Efficient use of bandwidth

  10. Current Status
     • In production for more than half a decade
     • Able to produce 12-20 contests/year with strict deadlines
     • Much reduced stress (due to checking tools)
     • Lower skill level required for task and contest creation
     • Perhaps reduced staff/time requirements as well
     • Vanishingly small number of complaints/clarifications

  11. Current Deficiencies
     • Linux task timing is not reliably repeatable (measurement sketch below)
     • Web displays are not consistent in their layout
     • Web displays for the list of tasks and list of contests are becoming unwieldy due to size
     • The entire web layout is not “shiny” and “slick”
     • Can’t make one-off custom contests for training (yet)
     • Training pages not fully integrated
     • Features like tracking solving times not yet implemented
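
The timing-repeatability point is easy to observe directly. The following is a hypothetical measurement harness, not part of the USACO system; on a loaded Linux host the min-max spread it reports can exceed the safety margin a time limit is meant to provide:

    import subprocess, time

    def time_runs(cmd, runs=5):
        """Wall-clock the same command several times and report the spread."""
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            subprocess.run(cmd, stdout=subprocess.DEVNULL)
            samples.append(time.perf_counter() - start)
        return min(samples), max(samples)

    # Usage: low, high = time_runs(["./solution"]); print(high - low)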

  12. Futures
     • Configurable contests for individual training
     • ‘Programming bees’
     • Modern, consistent web interface (better layout, CSS, better use of color, improved ‘confirmers’)

  13. Conclusion
     • The system enables creation of a large volume of good-quality contests (high-quality contests have better test data and perhaps better analyses)
     • The system virtually eliminates administrative time expenditures and errors
     • The system is now maintainable, repairable, and expandable
     • Basically: it works for us and can serve as a model for anyone who needs this functionality
