
Scalability Tools: Automated Testing (30 minutes)



Presentation Transcript


  1. Scalability Tools: Automated Testing (30 minutes). Agenda: Overview; Hooking up your game (external tools, internal game changes); Applications & Gotchas (engineering, QA, operations; production & management); Summary & Questions.

  2. Review: controlled tests & actionable results are useful for many purposes: (1) repeatable tests, using N synchronized game clients; (2) high-level, actionable reports for many audiences. [Diagram: test clients drive the game; reports flow to programmer, development director, and executive.]

  3. TSO case study: developer efficiency. Automated testing accelerates large-scale game development & helps predictability: strong test support gets you a better game earlier, while weak test support produces the "oops" at the ship date. [Chart: % complete over time, from project start to target launch, comparing strong vs. weak test support.]

  4. Measurable targets & projected trends give you actionable progress metrics, early enough to react. [Chart: any test metric (e.g. # clients) over time, from the first passing test to now, projected against the target at any milestone (e.g. Alpha); a projected shortfall is the early "oops".]

  5. Success stories • Many game teams work with automated testing • EA, Microsoft, any MMO, … • Automated testing has many highly successful applications outside of game development • Caveat: there are many ways to fail…

  6. How to succeed • Plan for testing early (it is a non-trivial system with architectural implications) • Fast, cheap test coverage is a major change in production; be willing to adapt your processes • Make sure the entire team is on board (deeper integration leads to greater value) • Kearneyism: “make it easier to use than not to use”

  7. Automated testing components [diagram]:
     • Test Manager: test selection/setup; startup & control of N clients (any game)
     • Scriptable Test Client(s): emulated user play sessions; repeatable, sync’ed test I/O; multi-client synchronization; real-time probes into the running game
     • Report Manager: raw data collection; aggregation/summarization; alarm triggers; collection & analysis
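To make the component diagram concrete, here is a minimal Python sketch of how a Test Manager might start N scripted clients and hand their raw results to a Report Manager. The `run_client` function and the `TestManager`/`ReportManager` classes are hypothetical illustrations, not the actual TSO tooling.

```python
from concurrent.futures import ProcessPoolExecutor

def run_client(client_id: int, script: str) -> dict:
    """One emulated user play session: run a script, return raw results."""
    # A real scriptable client would drive the game's presentation layer here.
    return {"client": client_id, "script": script, "passed": True, "metrics": {}}

class TestManager:
    """Test selection/setup, startup & control of N clients."""
    def __init__(self, num_clients: int):
        self.num_clients = num_clients

    def run(self, script: str) -> list:
        with ProcessPoolExecutor(max_workers=self.num_clients) as pool:
            futures = [pool.submit(run_client, i, script)
                       for i in range(self.num_clients)]
            return [f.result() for f in futures]

class ReportManager:
    """Raw data collection, aggregation/summarization, alarm triggers."""
    def summarize(self, results: list) -> str:
        failed = [r for r in results if not r["passed"]]
        if failed:  # an alarm trigger would fire here
            return f"ALARM: {len(failed)}/{len(results)} clients failed"
        return f"OK: all {len(results)} clients passed"

if __name__ == "__main__":
    results = TestManager(num_clients=4).run("enterLot_smoke")
    print(ReportManager().summarize(results))
```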

  8. Input systems for automated testing: scripted, algorithmic, and recorded inputs all drive the game code. Multiple test applications are required, but each input type differs in value per application; scripting gives the best coverage.

  9. Hierarchical automated testing: unit, subsystem, system. Multiple levels of testing give you • faster ways to work with each level of code • incremental testing that avoids noise & speeds defect isolation

  10. Input (scripted test clients): a pseudo-code script of how users play the game, and what the game should do in response.
      Command steps: createAvatar [sam]; enterLevel 99; buyObject knife; attack [opponent]; …
      Validation steps: checkAvatar [sam exists]; checkLevel 99 [loaded]; checkInventory [knife]; checkDamage [opponent]; …
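As an illustration only, the pseudo-code above could look like the following Python test; the `client` object and its method names are hypothetical stand-ins for a game's scriptable test-client API.

```python
def test_buy_knife_and_attack(client):
    # Command steps: drive the game the way a player would.
    client.create_avatar("sam")
    client.enter_level(99)
    client.buy_object("knife")
    client.attack("opponent")

    # Validation steps: assert what the game should have done in response.
    assert client.avatar_exists("sam")
    assert client.level_loaded(99)
    assert "knife" in client.inventory()
    assert client.damage("opponent") > 0
```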

  11. Scripted players, implementation: a script engine feeds commands into the game's presentation layer, which drives the same game logic and game state as the GUI does. The build can run as a null-view test client, as a normal game client, or load both.

  12. Test-specific input & output via a data-driven test client gives maximum flexibility. Input API: reusable scripts & data (load, regression). Output API: key game states; pass/fail; responsiveness; script-specific logs & metrics.
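A hedged sketch of the output side: wrap each scripted step so that pass/fail and responsiveness are logged for later aggregation. The `TestOutput` class and its log format are assumptions, not part of the original material.

```python
import json
import time

class TestOutput:
    """Append one JSON record per scripted step for later aggregation."""
    def __init__(self, log_path: str):
        self.log_path = log_path

    def timed(self, step_name: str, action, *args):
        """Run one scripted step, logging its latency and outcome."""
        start = time.perf_counter()
        try:
            result = action(*args)
            passed = True
        except Exception:
            result, passed = None, False
        record = {
            "step": step_name,
            "passed": passed,
            "responsiveness_ms": round((time.perf_counter() - start) * 1000, 1),
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return result
```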

  13. A presentation layer is often unique to a game • Some automation scripts should read just like QA test scripts for your game • TSO examples (exposed to the null-view client): routeAvatar, useObject; buyLot, enterLot; socialInteraction (makeFriends, chat, …)
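One possible shape for such a layer, sketched in Python: a thin wrapper that exposes game-specific verbs so scripts read like QA test steps. The `transport` object and its `send` method are hypothetical; the verb names come from the TSO examples above.

```python
class TSOPresentationLayer:
    """Game-specific verbs that a script engine (or GUI) can call."""
    def __init__(self, transport):
        self.transport = transport  # null-view or GUI client underneath

    def route_avatar(self, avatar, destination):
        self.transport.send("routeAvatar", avatar=avatar, dest=destination)

    def use_object(self, obj):
        self.transport.send("useObject", object=obj)

    def buy_lot(self, lot_id):
        self.transport.send("buyLot", lot=lot_id)

    def enter_lot(self, lot_id):
        self.transport.send("enterLot", lot=lot_id)

    def social_interaction(self, kind, target):
        self.transport.send("socialInteraction", kind=kind, target=target)
```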

  14. Repeatable debugging & benchmarking: match the test data to the job.
      • Mock data: repeatable tests in development, faster load, edge conditions
      • Random input (data sets): edge cases & real-world performance
      • Real data: the unpredictable user element finds different bugs

  15. Common gotchas • Not designing for testability (retrofitting is expensive) • Blowing the implementation (brittle code; addressing perceived needs, not real needs) • Using automated testing incorrectly (testing the wrong thing at the wrong time; not integrating with your processes; poor testing methodology)

  16. Testing the wrong thing at the wrong time. Build Acceptance Tests (BAT): stabilize the critical path for your team; keep people working by keeping critical things from breaking. Final Acceptance Tests (FAT): detailed tests to measure progress against milestones; “is the game done yet?” tests need to be phased in. Applying detailed testing while the game design is still shifting and the code is still incomplete introduces noise and the need to keep re-writing tests.
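One way to keep BAT and FAT suites separate is simple tagging; the sketch below uses pytest markers as an assumed framework (the talk does not prescribe one), and the marker names `bat` and `fat`, the `client` fixture, and the test bodies are invented for illustration.

```python
import pytest
# Markers would be registered in pytest.ini to silence unknown-marker warnings.

@pytest.mark.bat  # Build Acceptance Test: must pass before anyone takes the build
def test_login_and_enter_lot(client):
    client.login("sniff_user")
    client.enter_lot(1)

@pytest.mark.fat  # Final Acceptance Test: phased in as milestones approach
def test_full_social_scenario(client):
    client.make_friends("sam", "pat")
    assert client.friendship_level("sam", "pat") > 0

# Run only the stabilizing subset on every check-in:   pytest -m bat
# Run the milestone suite nightly or per milestone:    pytest -m fat
```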

  17. More gotchas: poor testing methodology & tools • Case 1: recorders • Load & regression were needed; not understanding maintenance cost • Case 2: completely invalid test procedures • Distorted view of what really worked (GIGO) • Case 3: poor implementation planning • Limited usage (nature of tests led to high test cost & programming skill required) • Case 4: not adapting development processes • Common theme: no senior engineering analysis committed to the testing problem

  18. Automated Testing for Online Games. Agenda: Overview; Hooking up your game (external tools, internal game changes); Applications (engineering, QA, operations; production & management); Summary & Questions.

  19. Automated testing: strengths • Repeat massive numbers of simple, easily measurable tasks • Mine the results • Do all the above, in parallel, for rapid iteration “The difference between us and a computer is that the computer is blindingly stupid, but it is capable of being stupid many, many millions of times a second.” Douglas Adams (1997 SCO Forum)

  20. Semi-automated testing is best for game development. Manual testing covers creative bug hunting, visuals; judgment calls, playability; reacting to change; evaluating autoTest results. Automation covers the testing requirements of rote work (“does door108 still open?”), scale, repeatability, accuracy, and parallelism. Integrate the two for best impact.

  21. Plan your attack with stakeholders (retire risk early: QA, Production, Management) • Tough shipping requirements, e.g. scale, reliability; regression costs • Development risk: cost/risk of engineering & debugging; impact on content creation • Management risk: schedule predictability & visibility

  22. Automation focus areas (Larry’s “top 5”):
      • Critical path stability: keeps the team going forward
      • Non-determinism: gets in the way of everything
      • Content regression: massive, recurring $$
      • Performance: scale is hard to get right
      • Compatibility & install: improves life for you & the user

  23. Yikes, that all sounds very expensive! • Yes, but remember, the alternative costs are higher and do not always work • Costs of QA for a 6-player game (you need at least 6 testers at the same time): testers; consoles, TVs, disks & network; non-determinism • MMO regression costs: yikes² (10s to 100s of testers; a 10-year code life cycle; constant release iterations)

  24. Stability: keep the team working! (TSO use case: critical path analysis.) Test case: can an avatar sit in a chair? The critical path runs login() → create_avatar() → buy_house() → enter_house() → buy_object() → use_object(); failures anywhere on the critical path block access to much of the game.
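A sketch of that chain as a single test, assuming a hypothetical `client` API: each step depends on the previous one, so the first failure pinpoints where the critical path is blocked.

```python
CRITICAL_PATH = [
    ("login",         lambda c: c.login("path_user")),
    ("create_avatar", lambda c: c.create_avatar("sam")),
    ("buy_house",     lambda c: c.buy_house()),
    ("enter_house",   lambda c: c.enter_house()),
    ("buy_object",    lambda c: c.buy_object("chair")),
    ("use_object",    lambda c: c.use_object("chair")),  # "can an avatar sit in a chair?"
]

def run_critical_path(client):
    for name, step in CRITICAL_PATH:
        try:
            step(client)
        except Exception as err:
            # Everything after this step is unreachable for the whole team.
            return f"BLOCKED at {name}: {err}"
    return "critical path OK"
```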

  25. Prevent critical path code breaks that take down your team: candidate code must pass a sniff test before check-in; passing code becomes safe code for development, while failures return pass/fail status plus diagnostics to the submitter.
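A minimal sketch of such a sniff-test gate, assuming a `make test_client` build step and the `bat` marker from the earlier sketch; the real pipeline would be specific to your build and check-in tooling.

```python
import subprocess
import sys

def sniff_test() -> bool:
    """Build the candidate code and run the critical-path (BAT) tests."""
    build = subprocess.run(["make", "test_client"])
    if build.returncode != 0:
        print("sniff test: build failed")
        return False
    tests = subprocess.run([sys.executable, "-m", "pytest", "-m", "bat", "-q"])
    return tests.returncode == 0

if __name__ == "__main__":
    if sniff_test():
        print("PASS: safe to check in")
        sys.exit(0)
    print("FAIL: see diagnostics above; do not check in")
    sys.exit(1)
```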

  26. Stability & non-determinism (monkey tests): continual repetition of critical path unit tests, run against the code repository, compilers, and reference servers.

  27. AutoTest addresses non-determinism • Detection & reproduction of race condition defects: even low-probability errors are exposed with sufficient testing (random, structured, load, aging) • Measurability of race condition defects: e.g. “occurs x% of the time, over 400 test runs”
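A sketch of how a race condition becomes measurable: repeat the same scripted action many times and report the failure rate. The `client.enter_lot`/`leave_lot` calls are hypothetical stand-ins for the monkey test shown on the next slides.

```python
def measure_flakiness(client, runs: int = 400) -> float:
    """Repeat one scripted action and report how often it fails."""
    failures = 0
    for _ in range(runs):
        try:
            client.enter_lot(1)
            client.leave_lot()
        except Exception:
            failures += 1
    rate = 100.0 * failures / runs
    print(f"enterLot failed {failures}/{runs} runs ({rate:.1f}%)")
    return rate
```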

  28. Monkey test: enterLot()

  29. Monkey test: 3 * enterLot()

  30. Four different behaviors in thirty runs!

  31. Content testing (areas) • Regression • Error detection • Balancing / tuning • This topic is a tutorial in and of itself • Content regression is a huge cost problem • Many ways to automate it (algorithmic, scripted & combined, …) • Differs wildly across game genres

  32. Content testing (more examples) • Light mapping, shadow detection • Asset correctness / sameness • Compatibility testing • Armor / damage • Class balances • Validating against old userData • … (unique to each game)
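As one example of automating the asset correctness / sameness checks listed above, here is a sketch that hashes every asset and diffs against a stored baseline; the directory layout and JSON baseline format are assumptions.

```python
import hashlib
import json
from pathlib import Path

def hash_assets(asset_dir: str) -> dict:
    """Map each asset file to a content hash."""
    return {
        str(p.relative_to(asset_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(asset_dir).rglob("*")) if p.is_file()
    }

def diff_against_baseline(asset_dir: str, baseline_file: str) -> list:
    """Report assets that changed or disappeared since the baseline build."""
    current = hash_assets(asset_dir)
    baseline = json.loads(Path(baseline_file).read_text())
    changed = [name for name, digest in current.items() if baseline.get(name) != digest]
    missing = [name for name in baseline if name not in current]
    return changed + [f"MISSING: {m}" for m in missing]
```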

  33. Automated Testing for Online Games (One Hour). Agenda: Overview; Hooking up your game (external tools, internal game changes); Applications (engineering, QA, operations; production & management); Summary & Questions.

  34. Summary: automated testing • Start early & make it easy to use • Strongly impacts your success • The bigger & more complex your game, the more automated testing you need • You need commitment across the team • Engineering, QA, management, content creation

  35. Q&A & other resources
      • My email: larry.mellon_@_emergent.net
      • More material on automated testing for games:
        • http://www.maggotranch.com/mmp.html: last year’s online engineering slides, this year’s slides, and talks on automated testing & scaling the development process
        • www.amazon.com: “Massively Multiplayer Game Development II”, with chapters on automated testing and automated metrics systems
        • www.gamasutra.com: Dag Frommhold, Fabian Röken; a lengthy article on applying automated testing in games
        • Microsoft: various groups & writings
      • From outside the gaming world:
        • Kent Beck: anything on test-driven development
        • http://www.martinfowler.com/articles/continuousIntegration.html#id108619: continual integration testing
        • Amazon & Google: inside & outside our industry
