Testing Battlelog, LiU Guest lecture

Guest lecture given in the Software Testing course at the Department of Computer and Information Science, Linköping University

Micael Sjölund

May 07, 2014

Transcript

  1. “There are two hard things in computer science: cache invalidation, naming things, and off-by-one errors.” – Unknown
  2. AGENDA → BACKGROUND → ESN STUDIO → COMPANION → BATTLELOG → TECH → RELEASES → CORE IDEAS → TOOLS → TESTING → FUNCTIONAL → LOAD
  3. STUDIO FOUNDED IN 2003, JOINED ELECTRONIC ARTS IN 2012. CREATES COMPANION PRODUCTS AND COMPANION TECHNOLOGY
  4. COMPANION OPPORTUNITY – 9AM TRACKING – 3PM LEADERBOARDS & FORUMS – 5PM CHAT & MISSIONS – LUNCH TABLET COMMANDER! – 11PM BATTLE REPORTS
  5. WEB TECH • Benefits of web tech – Rapid UI development – Lots of tools – Update whenever (server-side UI) – Runs on many platforms – Organizational • The industry is moving in the web direction
  6. RECAP – BACKGROUND 1. The companion extends the core game loop 2. Web tech is great for 2D UI
  7. AGENDA → BACKGROUND → ESN → COMPANION → BATTLELOG → TECH → RELEASES → CORE IDEAS → TOOLS → TESTING → FUNCTIONAL → LOAD
  8. MANUAL TESTING • Answers questions that automated tests cannot answer – Is the product fun? – Does the product look good? – Does the product add value to the game? • Essential for the team – Focus on the relevant issues – Using your own product • Many methods: exploratory testing, focused click tests, etc.
  9. TESTING UI • Verifying that the front-end behaves as intended • Tests the product end-to-end • Great for regression testing (UI doesn’t change often) • We use Selenium (http://docs.seleniumhq.org/)
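A minimal sketch of the kind of Selenium-driven end-to-end check described on the slide above. The URL, form field names, and assertion are hypothetical placeholders, not Battlelog's actual pages or markup:

```python
# Drive a real browser against the running site and assert on what the
# user actually sees. URL and selectors below are illustrative only.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")                        # assumed test URL
    driver.find_element(By.NAME, "email").send_keys("tester@example.com")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # End-to-end assertion: the landing page shows the user's profile
    assert "Profile" in driver.page_source
finally:
    driver.quit()
```

Because such a test exercises the whole stack through a real browser, it is slower than a unit test but well suited to the regression role the slide mentions.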
  10. AUTOMATED TESTING • Unit/Integration tests – Keep them close to the development loop – We use them for our “black box” functionality
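As a sketch of what "close to the development loop" means in practice: a unit test like the one below runs in seconds on every change. The function under test, compute_rank, is a made-up example, not actual Battlelog code:

```python
import unittest

def compute_rank(score):
    """Toy 'black box' rule: one rank per 1000 points, capped at 100."""
    return min(score // 1000, 100)

class ComputeRankTest(unittest.TestCase):
    def test_rank_scales_with_score(self):
        self.assertEqual(compute_rank(2500), 2)

    def test_rank_is_capped(self):
        self.assertEqual(compute_rank(10**9), 100)

if __name__ == "__main__":
    unittest.main()
```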
  11. RECAP – FUNCTIONAL TESTING 1. Use manual testing to rate fun/cool/nice/value within the team 2. Keep automated testing real for the user 3. Keep automated testing inside the dev process
  12. SIMULATE LOAD • We need to be able to simulate launch-week users • Five requirements 1. Easy to code 2. Realistic behaviour 3. Easy to run 4. Needs to be scalable 5. Monitor results
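One way to meet requirements 1–3 (easy to code, realistic behaviour, easy to run) is to script user behaviour in a load tool such as Locust; the tool choice and endpoints below are illustrative assumptions, not necessarily what was used for Battlelog:

```python
from locust import HttpUser, task, between

class CompanionUser(HttpUser):
    # Think time between actions, so the traffic looks like real users
    wait_time = between(1, 5)

    @task(3)
    def view_battle_report(self):
        self.client.get("/battlereport/latest")                # placeholder endpoint

    @task(1)
    def post_forum_reply(self):
        self.client.post("/forum/reply", json={"body": "gg"})  # placeholder endpoint
```

Running many such simulated users from several load-generator machines addresses the scalability requirement; monitoring the results (requirement 5) is covered on the next slide.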
  13. 5. MONITOR RESULTS • Find issues fast, fix fast – Same team runs tests and fixes issues • Metrics – PSU (Peak Simultaneous Users) – RPS (Requests Per Second) – Fail % – Response time
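A small sketch of how the metrics named above can be derived from raw request records; the (timestamp, status, latency) tuples are made-up sample data:

```python
from statistics import mean

# (timestamp_s, http_status, latency_ms): illustrative sample data only
requests = [(1.2, 200, 85), (1.4, 200, 110), (1.9, 500, 430), (2.1, 200, 95)]

duration = max(t for t, _, _ in requests) - min(t for t, _, _ in requests)
rps = len(requests) / duration
fail_pct = 100 * sum(1 for _, status, _ in requests if status >= 500) / len(requests)
avg_latency_ms = mean(latency for _, _, latency in requests)

print(f"RPS={rps:.1f}  fail%={fail_pct:.2f}  avg latency={avg_latency_ms:.0f} ms")
```

PSU is tracked separately, as the highest number of concurrently active users observed during the run.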
  14. GOALS 1. Set clear goals 2. For a given number of users, define a max fail % and a max response time 3. Example: for 1000 users, a fail percentage below 0.3% and a maximum response time of 200 ms.
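A sketch of turning the slide's example goal into an automatic pass/fail check; the result dictionary and its keys are assumed to come from the load tool's summary output and are illustrative:

```python
# Goal from the slide: for 1000 users, below 0.3% failures and at most
# 200 ms response time. The keys of `result` are hypothetical.
GOAL = {"users": 1000, "max_fail_pct": 0.3, "max_response_ms": 200}

def meets_goal(result):
    return (result["users"] >= GOAL["users"]
            and result["fail_pct"] < GOAL["max_fail_pct"]
            and result["response_ms"] <= GOAL["max_response_ms"])

print(meets_goal({"users": 1000, "fail_pct": 0.12, "response_ms": 180}))  # True
```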
  15. RECAP – LOAD TESTING 1. We need to simulate realistic user behavior 2. Integrate implementation of load tests in the development workflow 3. Optimizations should be done as a core part of the testing effort
  16. AGENDA → BACKGROUND → ESN → COMPANION → BATTLELOG → TECH → RELEASES → CORE IDEAS → TOOLS → TESTING → FUNCTIONAL → LOAD
  17. RELEASING UPDATES • BATTLELOG IS MOSTLY WEB-BASED • BF4 LAUNCH WEEK – OVER 7 ZERO-DOWNTIME UPDATES – CONTAINING OVER 450 COMMITS • WE UPDATE ABOUT ONCE A WEEK
  18. IN-HOUSE • Do as much as possible in-house • We do our own – Deployments – Operations – Performance testing
  19. TOOLS 1. ZERO DOWNTIME UPDATES 2. FEATURE ROLLOUT 3. DARK LAUNCHING 4. KILL SWITCHES 5. INSTANT CONFIG CHANGES 6. METRICS 7. MAINTENANCE CONSOLE
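A sketch of how feature rollout (2) and kill switches (4) can be driven from instantly changeable config (5); the config store and hashing scheme are illustrative assumptions, not the actual Battlelog implementation:

```python
import hashlib

# In practice this would live in a config store that can be changed
# instantly without a deploy; a plain dict stands in for it here.
CONFIG = {
    "battle_reports_v2": {"enabled": True,  "rollout_pct": 10},   # gradual rollout
    "forum_search":      {"enabled": False, "rollout_pct": 100},  # kill switch: off
}

def feature_enabled(feature, user_id):
    flag = CONFIG.get(feature, {"enabled": False, "rollout_pct": 0})
    if not flag["enabled"]:        # kill switch overrides everything
        return False
    # Stable per-user bucket so the same user always gets the same decision
    bucket = int(hashlib.md5(f"{feature}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_pct"]

print(feature_enabled("battle_reports_v2", user_id=42))
```

Dark launching (3) fits the same pattern: the new code path runs for a slice of traffic while its output stays hidden from users.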
  20. RECAP – TOOLS 1. ZERO DOWNTIME UPDATES 2. FEATURE ROLLOUT 3. DARK LAUNCHING 4. KILL SWITCHES 5. INSTANT CONFIG CHANGES 6. METRICS 7. MAINTENANCE CONSOLE
  21. RECAP – RELEASES 1. “Release early, release often” 2. “Failure recovery is more important than failure avoidance” 3. “Do as much as possible in house”
  22. AGENDA → BACKGROUND → ESN → COMPANION → BATTLELOG → TECH → RELEASES → CORE IDEAS → TOOLS → TESTING → FUNCTIONAL → LOAD
  23. KEY TAKEAWAYS 1. Know your product – Embedded system vs. entertainment – Box product vs. service 2. Pick the right tools – Manual vs. automated testing – Developers vs. end-users for testing