


You don't know what you don't know: Quality through collaboration

A talk mixing testing (coverage), results of testing, and ensemble programming/testing, with a twist of AI changing things.


Maaret Pyhäjärvi

April 14, 2026


Transcript

  1. © 2026 CGI Inc. 1 You don’t know what you

    don’t know Quality through collaboration Maaret Pyhäjärvi April 2026
  2. © 2026 CGI Inc. 2 The real quality challenge is

    that organizations suffer from well-maintained illusions.
  3. © 2026 CGI Inc. 3 How Would You Test This?

    Raster Reveal by James Lyndsay https://www.workroom-productions.com/raster-reveal/
  4. © 2026 CGI Inc. 4 Product is my external imagination

    External imagination: better ideas on…
    what the problems are
    dimensions of coverage
    unknown unknowns
    one-upping non-human
    recalling expected
  5. © 2026 CGI Inc. 5 Accidental Learning by being intentional

    about learning. Fooled by the unknown unknowns. You cannot know what you don’t know, but you recognize it when you see it.
  6. © 2026 CGI Inc. 6 Solo – Pair – Ensemble

    Getting the best out of everyone into the work we are doing
  7. © 2026 CGI Inc. 7 A social software development (incl.

    testing) approach In Ensemble programming, we connect the software creation work the team does through a single computer the whole team shares. All ideas flow through someone else’s hands, and get reviewed and improved through continuous, structured collaboration.
  8. © 2026 CGI Inc. 8 Roles and rules

    Designated Navigator: brains. Driver: hands. Navigator(s): voices.
    No decisions on the keyboard
    Navigate on the highest level of abstraction
    Navigate the navigator
    Learning or contributing
    The best out of everyone into the work we’re doing
    Bias to action
    Rotate every 3–15 minutes
  9. © 2026 CGI Inc. 9 Comparison

    Strong Style: “I have an idea… please take the keyboard.”
    Traditional: “I have an idea… give me the keyboard.”
  10. © 2026 CGI Inc. 10 MINDSET (see: Carol Dweck)

    Mindset shows in willingness to try new things, planning to evolve our strategies, dealing with setbacks, and feelings about being wrong.
    Ability – fixed: static, like height; growth: can grow, like a muscle
    Goal – fixed: look good; growth: to learn
    Challenge – fixed: avoid; growth: seek and embrace
    Failure – fixed: defines your identity; growth: provides information
    Effort – fixed: for those with no talent; growth: path to mastery
    Reaction to challenge – fixed: helplessness; growth: resilience
  11. © 2026 CGI Inc. 11 If you want to go

    fast, go alone. If you want to go far, go together.
  12. © 2026 CGI Inc. 12 AGE OF AI Actionable feedback

    that challenges well-maintained illusions has never been more important.
  13. © 2026 CGI Inc. 13 Stakeholders happy, even delighted

    Quality information good – team’s output
    Quality information less than good – team’s output – results gap. Surprise!
    Results gap on a team that thinks Testers == Testing: pick up the pizza boxes…
    ”Find (some of) What Others May Have Missed”
  14. © 2026 CGI Inc. 14 Results gap – NEED TO DO BETTER

    Human tester: 45 (62%)
    Human tester with AI: 73 (100%)
    AI with human: 40 (55%)
    AI without human: 4 (5%)
    Traditional testers: 13.5 (18%)
    Future talent ’26: 11.5 (16%)
    https://github.paeuinsource.ent.cgi.com/AI-Exchange/result-benchmark-playwright-agents-todo
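The percentages on the slide read as each result normalized against the best one (73, shown as 100%). A quick check of that assumption, with the counts taken from the slide:

```python
# Result counts from the benchmark slide. The percentages appear to be
# each count divided by the best result (73), which the slide marks 100%.
results = {
    "Human tester": 45,
    "Human tester with AI": 73,
    "AI with human": 40,
    "AI without human": 4,
    "Traditional testers": 13.5,
    "Future talent '26": 11.5,
}

baseline = max(results.values())  # 73
percentages = {name: round(count / baseline * 100) for name, count in results.items()}
print(percentages)
# Matches the slide: 62, 100, 55, 5, 18, 16
```

Every rounded value lands on the slide's figure, which supports reading the table as "share of the best result" rather than absolute coverage.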
  15. © 2026 CGI Inc. 15 Experiential learning case example

    AN EXAMPLE WOULD BE GOOD, RIGHT ABOUT HERE
    What we did:
    Started with a repo with application code
    Tested remote control by creating a list of participants while doing introductions
    Generated a list of features by access to code
    Generated a list of bugs (11) by access to code
    Installed Playwright and Playwright agents
    Generated a list of bugs (21) by access to the UI
    Combined the two lists of bugs to identify overlap
    Generated test cases with Playwright agents
    Compared the list of features to test cases for coverage analysis
    Automated a failing test for bug number 3
    Fixed the bug by requesting a fix for bug number 3
    Ran the test automation to see it passing
    Refactored the automated tests to POM
    While doing all this, very meta, we learned testing and tools by testing:
    Learned to discuss our intent and options
    Discussed why Selenium WebDriver BiDi and Vibium is comparable to (but better than) Playwright and Playwright agents
    Discussed availability of MCPs as a way of extending what we could get it to do
    Discovered and named a lot of the GitHub Copilot functionality, from model selection to GitHub Copilot Agent Mode to Playwright agents, to context window and premium request consumption
    Discussed how the experience with contemporary exploratory testing is very different from what participants had learned to expect from exploratory testing
    AI as external imagination and task expansion
    Intentionally bug-filled test target by Christine Pinto
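The final refactoring step moves selectors and interactions out of test bodies into a page object. A minimal sketch of that Page Object Model shape, using a stub page instead of a real Playwright page so it runs without a browser; the names `TodoPage`, `add_item`, and the selectors are illustrative, not from the talk's repo:

```python
# Page Object Model sketch. StubPage stands in for a browser page and
# records interactions; with real Playwright you would pass its Page and
# call page.fill()/page.click() against the live application instead.

class StubPage:
    """Records every interaction so the pattern is checkable offline."""
    def __init__(self):
        self.actions = []
        self.items = []

    def fill(self, selector, text):
        self.actions.append(("fill", selector, text))

    def click(self, selector):
        self.actions.append(("click", selector))


class TodoPage:
    """Page object: tests talk to this API, never to raw selectors."""
    NEW_ITEM = "input.new-todo"
    ADD_BUTTON = "button.add"

    def __init__(self, page):
        self.page = page

    def add_item(self, text):
        # Selector knowledge lives here, in one place.
        self.page.fill(self.NEW_ITEM, text)
        self.page.click(self.ADD_BUTTON)
        self.page.items.append(text)

    def visible_items(self):
        return list(self.page.items)


# The test now reads as intent, not selector plumbing:
page = StubPage()
todo = TodoPage(page)
todo.add_item("buy milk")
assert todo.visible_items() == ["buy milk"]
assert ("click", "button.add") in page.actions
```

The payoff the walkthrough was after: when the UI changes, only the page object's selectors change, and the generated tests keep reading at the level of user intent.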
  16. © 2026 CGI Inc. 16 Smarter handoffs in value chain:

    task expansion in talent
    AI-ENHANCED APPLICATION TESTING
    01 Tasks of Testing: from logistics of test targets to information, test cases and issue reports
    Document tests with test automation
    Report bugs with a fix for review
    Second pair of eyes for quality perspective through reviews
    Find issues first, test cases follow
    Start earlier, continue further
    CGI Test Intelligence Mesh – Agentic toolkit to level up quality results in talent
  17. © 2026 CGI Inc. 18 Insights you can act on

    Founded in 1976, CGI is among the largest IT and business consulting services firms in the world. We are insights-driven and outcomes-focused to help accelerate returns on your investments. Across hundreds of locations worldwide, we provide comprehensive, scalable and sustainable IT and business consulting services that are informed globally and delivered locally. cgi.com