More Better Quality Coverage

From StirTrek 2017.

A talk on getting great quality coverage: testing the right things at the right time, and how to think about properly splitting automated versus exploratory testing.


Jim Holmes

May 05, 2017

Transcript

  1. 7.

    • Customer needs • Feature fit • Solves the scenarios we know about • Secure • Performant • Works well in production • Plays nicely with our craptistic data
  2. 17.

    Release Planning => Are scope and basic function right? What integrations? What User Acceptance criteria? What scenarios and data needed?
  3. 18.

    Iteration Planning => Do we know “Why?” Do we know (sort of) “How?” Good acceptance criteria? Good data?
  6. 26.

    System must take into account • Model production rules • Inventory rules • Capacity restrictions
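The constraints on that slide (model production rules, inventory rules, capacity restrictions) are the kind of logic that is cheapest to cover with unit tests when it lives in plain functions. A minimal sketch, assuming hypothetical names and shapes for the rules (none of these come from the talk):

```typescript
// Hypothetical types -- the real model/inventory/capacity rules from the
// talk are not shown, so everything here is illustrative only.
interface ModelConfig {
  modelYear: number;
  plannedUnits: number;
}

interface PlantCapacity {
  maxUnitsPerYear: number;
}

// A capacity restriction expressed as a pure function is easy to unit test
// in isolation, before any UI or integration work exists.
function withinCapacity(config: ModelConfig, capacity: PlantCapacity): boolean {
  return config.plannedUnits <= capacity.maxUnitsPerYear;
}

// Quick check with plain assertions (any test runner would work the same way).
const plan: ModelConfig = { modelYear: 2019, plannedUnits: 5000 };
const plant: PlantCapacity = { maxUnitsPerYear: 4500 };
console.assert(!withinCapacity(plan, plant), "over-capacity plan should be flagged");
```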
  8. 29.

    Stakeholder: “I want to create, edit and view future years’ model configs. I want to use it on web and mobile.”
  9. 33.

    • Riskiest part of biz idea? • Biggest value for biz idea? • Customer audience? • Target platform(s)? • Integrations with existing systems? • Architectural impacts?
  10. 34.

    “Our basic config system’s data access is unstable, and we have data consistency/accuracy errors.”
  11. 35.

    “You’re asking for a wide range of mobile device support--that explodes our testing and development effort.”
  12. 36.

    “You said you want to scale to support concurrent access by all of China. We currently have six people who do this task.”
  13. 39.

    Considerations • What platforms? • What’s reasonable load? • How secure? • What’s biz value? • What happens if info is leaked?
  14. 43.

    Initial design ideas • Use central data mart • Pull existing inventory data • Kendo UI for grid
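As a rough illustration of those design ideas (a Kendo UI grid over existing inventory data), the binding might look something like the sketch below. The endpoint URL, element id, and field names are assumptions for illustration, not details from the talk:

```typescript
// Assumes jQuery and the Kendo UI scripts are already loaded on the page.
declare const $: any;

$("#configGrid").kendoGrid({
  dataSource: {
    transport: {
      // Hypothetical read endpoint pulling existing inventory/config data.
      read: { url: "/api/inventory/model-configs", dataType: "json" }
    },
    pageSize: 50
  },
  columns: [
    { field: "modelYear", title: "Model Year" },
    { field: "configName", title: "Configuration" },
    { field: "plannedUnits", title: "Planned Units" }
  ],
  sortable: true,
  pageable: true
});
```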
  15. 47.

    Considerations • Business scenarios • Acceptance criteria • Infrastructure / integration points • Data and environment needs • Performance needs • Security
  16. 48.

    Outcomes • Concerns of perf on client systems • NOT testing Kendo Grid • Significant test data requirements • Comfortable with security approach
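“NOT testing Kendo Grid” is the interesting outcome: the vendor widget is trusted, so test effort goes on the team’s own code that feeds it. A minimal sketch of that split, with hypothetical names:

```typescript
// Our code: shape raw inventory rows into what the grid will display.
// The vendor grid itself is the vendor's responsibility, not ours to re-test.
interface InventoryRow {
  modelYear: number;
  configName: string;
  plannedUnits: number;
}

function rowsForYear(rows: InventoryRow[], year: number): InventoryRow[] {
  return rows.filter(r => r.modelYear === year);
}

// Unit test targets the transformation, not the widget rendering.
const rows: InventoryRow[] = [
  { modelYear: 2018, configName: "Base", plannedUnits: 100 },
  { modelYear: 2019, configName: "Sport", plannedUnits: 250 },
];
console.assert(rowsForYear(rows, 2019).length === 1, "only 2019 configs expected");
```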
  17. 53.

    • Why are we building this? • Do we have test data yet? • Environments ready?
  18. 55.

    Outcomes • Most, not all, test data ready • What’s not ready can be tested later • Dependencies in place • Good to move forward
  21. 59.

    Dev-Tester Collaboration Example • “This use case isn’t clear!” • “What about this edge case?” • “Sweet, now I understand REST!”
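The “now I understand REST!” moment pairs well with a concrete call. Here is a minimal sketch of the sort of request a dev might walk a tester through; the endpoint and payload shape are assumptions, not from the talk:

```typescript
// POST a new future-year model config to a hypothetical REST endpoint.
async function createModelConfig(config: { modelYear: number; configName: string }) {
  const response = await fetch("/api/model-configs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(config),
  });
  if (!response.ok) {
    throw new Error(`Create failed with HTTP ${response.status}`);
  }
  // A typical REST API echoes back the created resource, including its id.
  return response.json();
}
```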
  22. 60.

    Considerations • What isn’t clear? • Did we miss something? • Tests in the right place • Integration, Unit, JS in UI, functional
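“Tests in the right place” is worth one concrete example: push rule logic down into a unit-testable function so the functional/UI layer only has to verify the wiring. A sketch under assumed names (the actual validation rules are not in the deck):

```typescript
// Validation lives in a plain function, so a fast unit test covers the rule.
function isValidFutureModelYear(year: number, currentYear: number): boolean {
  return Number.isInteger(year) && year > currentYear;
}

// Unit level: no browser, server, or test data needed.
console.assert(isValidFutureModelYear(2019, 2017) === true);
console.assert(isValidFutureModelYear(2016, 2017) === false);

// Integration/functional tests then only confirm the UI calls this rule and
// surfaces the error message -- they don't re-check every year value.
```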
  23. 61.

    Outcomes • New use case discovered, resolved • Added to test data • BUILT AND SHIPPED WORKING STUFF!
  24. 62.

    UAT

  25. 63.

    Total focus on Quality • No wasted effort • UAT focuses only on gaps • Earlier efforts pay off