Slide 1

Slide 1 text

1 Human Intelligence Software Testing (HIST): a textbook example of what makes a testing approach bad
Iosif Itkin, co-CEO & co-founder, Exactpro

Slide 2

Slide 2 text

2 Opening Statements: a textbook example of what makes a test approach bad
Propositions:
1. HIST is mostly fluff
2. HIST uses strawman comparisons and misrepresents
3. HIST recommendations lead to bad testing
Rebuttal: HIST (Human Intelligence Software Testing) defines a new era of disciplined, intelligent, and risk-aware testing, where quality is validated with evidence and metrics, not assumptions.

Slide 3

Slide 3 text

3 Sources of Information
Ruslan Desyatnikov is the Founder & CEO of QA Mentor, Inc., a global, multi-award-winning software testing company serving over 450 clients across 28 countries. With 28 years of experience, he has built and led large-scale QA organizations at Citi and HSBC, conducted 118+ QA Audits and transformation projects, authored multiple proprietary QA methodologies, and developed a suite of e-learning courses that have trained thousands of testers worldwide.
● I only studied one report commissioned by one of the 450 clients
● I only studied one video about one of the 118+ QA Audits
● I only studied one course on static ROI out of a suite of e-learning courses
● I do not have access to any of the proprietary artifacts produced by Ruslan over his 28-year career
My analysis relies exclusively on what is publicly accessible: the HIST website, the published press release, interviews, and posts or comments authored by Ruslan for his substantial LinkedIn audience. Ruslan himself has clarified that only around 10% of HIST's actionable recommendations are shared in the public domain.
I proceed under the assumption that what has been published is both representative and reflective of the accumulated thought leadership behind HIST. With this in mind, I will provide a structured review of HIST as it appears in the public domain, and I welcome further discussion on any individual post, comment, or article.

Slide 4

Slide 4 text

4 Definitions give clarity, references give credibility
Examples:
A methodology defines a new era when it introduces a non-trivial conceptual shift, validates it empirically, spreads widely enough to alter common practice, endures over time, and reshapes the surrounding ecosystem.
Exploratory Testing is an approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project.

Slide 5

Slide 5 text

5 HIST claims discipline and structure, yet consistently lacks both
● No peer-reviewed publications from HIST
● No references to third-party materials it criticizes
● No coherent definitions of the terms it uses
● No proper attribution of widely adopted practices and ideas
● No attempt to build constructively on top of existing work
● Language that is deliberately unclear, overdramatic, and pandering

Slide 6

Slide 6 text

6 What Textbook?
A textbook example of what makes a test approach bad
When Richard Rumelt's Good Strategy/Bad Strategy was published in 2011, it immediately struck a chord, calling out as bad strategy the mish-mash of pop culture, motivational slogans and business buzz speak so often and misleadingly masquerading as the real thing. Since then, his original and pragmatic ideas have won fans around the world and continue to help readers to recognise and avoid the elements of bad strategy and adopt good, action-oriented strategies that honestly acknowledge the challenges being faced and offer straightforward approaches to overcoming them. Strategy should not be equated with ambition, leadership, vision or planning; rather, it is coherent action backed by an argument. For Rumelt, the heart of good strategy is insight into the hidden power in any situation, and into an appropriate response - whether launching a new product, fighting a war or putting a man on the moon. Drawing on examples of the good and the bad from across all sectors and all ages, he shows how this insight can be cultivated with a wide variety of tools that lead to better thinking and better strategy, strategy that cuts through the hype and gets results.
https://en.wikipedia.org/wiki/Richard_Rumelt

Slide 7

Slide 7 text

7 The Essence of HIST
The Hallmarks of BAD Strategy:
● Fluff - using words of no consequence
● Mistaking goals for strategy
● The unwillingness or inability to choose
● Bad strategic objectives
● Failure to face the challenge
In HIST:
● HIST is mostly fluff
● Every nice thing is strategic in HIST
● HIST is all the good things and no trade-offs
● Metrics are focused on irrelevant things
● HIST values pandering over software testing

Slide 8

Slide 8 text

8 1. HIST is mostly fluff
A hallmark of mediocrity and BAD strategy is unnecessary complexity - a flurry of FLUFF masking an absence of substance. A hallmark of true insight is making a complex subject understandable.
FLUFF is a restatement of the obvious, combined with a generous sprinkling of buzzwords that masquerade as expertise. FLUFF is using words of no consequence.
Human Intelligence
Human First
Purpose-Driven
Human Centric

Slide 9

Slide 9 text

9 Book example of FLUFF by Richard Rumelt
Here is a quote from a major retail bank's internal strategy memoranda: "Our fundamental strategy is one of customer-centric intermediation." Intermediation means that the company accepts deposits and then lends out the money. In other words, it is a bank. The buzz phrase "customer centric" could mean that the bank competes by offering better terms and service, but an examination of its policies does not reveal any distinction in this regard. The phrase "customer-centric intermediation" is pure fluff. Remove the fluff and you learn that the bank's fundamental strategy is being a bank.

Slide 10

Slide 10 text

10 HIST is people-centric human-first human-centered purpose-driven, human-in-the-loop philosophy strategic approach
HIST is a full structured methodology, more than a methodology, not just another methodology, a mindset, not an anti-Agile movement for those who refuse to settle for shallow coverage, a revolutionary people-centric human-first strategic approach, a discipline, a philosophy, a pioneering framework, a strategy, a system, a way of thinking and working and leading with empathy, is end-to-end, a human-centered quality assurance, a new-era of disciplined and intelligent and risk-aware testing where quality is validated with evidence and metrics and not assumptions, not just testing, the future of testing, and at the same time a deliberate return to what testing was always meant to be, that begins with the human mind, challenges outdated industry practices, restores the critical thinking and discipline and risk-awareness and human judgement in QA, and leadership roles often diminished by over-automation, Agile dogma, or blind reliance on tools, combines human critical thinking, intuition, creativity, and adaptability with meaningful documentation, smart automation through codeless and scriptless tools, AI-driven solutions with human-in-the-loop philosophy, monitored through clear metrics and KPIs, covers the entire landscape of quality, helps QA speak the language of the business, shapes the future of QA, doesn't reject Agile, brings discipline back into Agile environments and human insight that Agile forgot and QA leadership back from the dead, blends discipline with agility and flexibility with methodical structure, fits within Agile and heals broken processes, emphasizes the value of domain knowledge and business logic and user workflows and reusability over constant script creation, values structure and not just bureaucracy, puts critical thinking in the hands of a tester, puts leadership back where it belongs, promotes smart and lightweight automation, takes a stand for thoughtful and skilled and purpose-driven testing, repositions human reasoning, reminds us that behind every software product, there are real people, users with expectations, testers with judgment, and teams with responsibility, doesn't reject certifications, gives us intelligence and helps to make sure things are clear from the start.

Slide 11

Slide 11 text

11 Exercise: Spot executive language
I've seen where Agile Testing, RST, Exploratory Testing, Context-Driven Testing, BDD/TDD, and others succeed, and where they routinely fail when applied without strong human intelligence oversight (or discipline approach behind controlled by metrics)
Strong Human Intelligence Oversight (trans. from FLUFF): When smart, responsible people do something, they can succeed. When stupid, negligent people do something, they routinely fail.
FLUFF - a restatement of the obvious, combined with a generous sprinkling of buzzwords that masquerade as expertise
Evidence level: I've seen things you people wouldn't believe

Slide 12

Slide 12 text

12 1. HIST is mostly fluff
The Hallmarks of BAD Strategy:
● Fluff - using words of no consequence
● Mistaking goals for strategy
● The unwillingness or inability to choose
● Bad strategic objectives
● Failure to face the challenge
In HIST:
● HIST is mostly fluff
● Every nice thing is strategic in HIST
● HIST is all the good things and no trade-offs
● Metrics are focused on irrelevant things
● HIST values pandering over software testing

Slide 13

Slide 13 text

13 Exercise: Spot words of no consequence
Human Intelligence in Services isn't a Fallback, It's the Foundation
Human Intelligence in Lunar Landing isn't a Fallback, It's the Foundation

Slide 14

Slide 14 text

14 Exercise: Replace HIST with any other word
Try:
● Humans
● Intelligence
● QA Mentor
● Ruslan Desyatnikov
● Software Testing
● Testers
● Virtue
Most of the time, the sentence about HIST will make more sense after a simple replacement.
Examples from online discussions:
● AI generates, but HIST ensures relevance, impact, and value
● You can shift left all you want but without HIST, you're shifting chaos left
Some statements are more challenging:
● HIST begins with the human mind
● HIST gives us intelligence

Slide 15

Slide 15 text

15 Exercise: Spot the restatement of the obvious (without proper attribution)
Proposition 1. HIST is mostly fluff

Slide 16

Slide 16 text

16 Fluff is masking the absence of substance
Is it OK for the text to be mostly fluff? It depends on the context.
For marketing pandering? ● Yes!
For a technical methodology description? ● No!
Proposition 1. HIST is mostly fluff

Slide 17

Slide 17 text

17 Try to dig the answers to the following questions out of the FLUFF:
Are you expected to report the vanity test case count in HIST?
Where is "Investigative Testing" on the scripted-exploratory continuum scale?
What is HIST's stance on each of the four values in the Agile Manifesto?
Are metrics involving pass rates embraced by HIST?
Do you prioritise static review or dynamic testing if some version of the target application is already available?
When was HIST used for the first time on the client side?

Slide 18

Slide 18 text

18 A HIST team doesn't brag about "number of test cases run." Instead, they report: "92% of business-critical scenarios validated."
Translate from fluff: Business Critical Scenarios

Slide 19

Slide 19 text

19 Our glorious "business-critical scenarios validated %" reporting vs. their meaningless bragging about the number of test cases run.
If you focus too much on the use of executive language, you eventually stop understanding what you are trying to say.
Exercise: Find something that can be called a scenario, but not a test case.

Slide 20

Slide 20 text

20 Things that we will find in the HIST description
The Hallmarks of BAD Strategy:
● Fluff - using words of no consequence
● Failure to face the challenge
● Mistaking goals for strategy
● Bad strategic objectives
● The unwillingness or inability to choose
In HIST:
● HIST is mostly fluff
● HIST values pandering over software testing
● Every nice thing is strategic in HIST
● Metrics are focused on irrelevant things
● HIST is all the good things and no trade-offs

Slide 21

Slide 21 text

21 How to spot mistaking goals for strategy?
A strategy is a mixture of policy and action designed to surmount a high-stakes challenge. The essence of strategy is choosing what not to do.
In executive language, the word "strategic" is often abused to mean important, valuable, prestigious, high-level or well-funded. This is a strong indication that nice-sounding goals are being mistaken for strategy.

Slide 22

Slide 22 text

22 Exercise: count the number of times the word "strategic" is used in the comparison table between HIST and Exploratory Testing
Answer: 5 times
HIST Test Documentation is "Strategic and documented"

Slide 23

Slide 23 text

23 2. HIST uses strawman comparisons and misrepresents
Strawman:
1. Oversimplifies, exaggerates, or misquotes
2. Presents an argument as more extreme or radical
3. Ignores nuance, focuses only on a small fragment
4. Attacks a claim that was never made

Slide 24

Slide 24 text

24 2. HIST uses strawman comparisons and misrepresents
According to the Wikipedia entry, a strawman is "the fallacy of refuting an argument different from the one actually under discussion, while not recognizing or acknowledging the distinction" - in other words, responding to a caricature rather than the real stance - https://lnkd.in/dN3h-mnq
Strawman Spotting Checklist:
● Is the position being oversimplified, exaggerated, or misquoted? (Example: summarizing a nuanced argument in crude terms to tear it down)
● Is the argument presented as more extreme or radical than it actually is? (Example: portraying a conditional safeguard as an absolute mandate)
● Does it ignore nuance or focus only on a small fragment of the argument? (Example: cherry-picking one clause while ignoring the caveats)
● Is it attacking a claim that was never made? (Example: refuting something the speaker didn't express)
A consistent absence of references, definitions, and direct quotations is a legitimate signal of a strawman argument.
A Strawman is arguing with a Caricature

Slide 25

Slide 25 text

25 This is what the HIST Press Release says about YOU:
● Never prioritize based on real business risks and customer impact
● Never adapt your testing style based on domain, environment, and shifting project needs
● Never choose the right tools, only the trendiest ones
● Never create meaningful, real-world test scenarios
● Never ask tough questions or challenge assumptions
Proposition 2. HIST uses strawman comparisons and misrepresents

Slide 26

Slide 26 text

26 (1) This is where Human Intelligence Software Testing (HIST) reframes the conversation. HIST teaches that coverage must be anchored in business value, not vanity metrics.
(2) With HIST, testers shift their mindset: From --> "Did we test everything?"
(3) Testing every permutation blindly doesn't guarantee protection. In fact, it often leads to redundancy, more effort, more noise and little added value.
For a proper rebuttal, the opposition must explain why (1) and (2) are not examples of strawman arguments. Not a subject of the debate proposition, but clarifying (3) would be useful. As would explaining why (4) is not an ad hominem argument.
(4)

Slide 27

Slide 27 text

27 "You did very poor review of this section because you did not provide a single example of your claim!"
This comment is about this block of text:
"HIST vs Automation-Centric Frameworks. Modern frameworks often prioritize automation above all else. Approaches like BDD, TDD, and CI/CD pipelines are designed for speed and tooling, not thoughtful validation" (definition by HIST)

Slide 28

Slide 28 text

28 For the record, I have studied, evaluated, and analyzed all of these methodologies for many years. I know them well, and I am fully qualified to compare and contrast them with HIST

Slide 29

Slide 29 text

29 This man is not qualified to explain BDD https://dannorth.net/blog/introducing-bdd

Slide 30

Slide 30 text

30 This man is not focusing on requirements

Slide 31

Slide 31 text

31 This book is irrelevant for Agile Testing comparisons

Slide 32

Slide 32 text

32 Agile Testing Perspective

Slide 33

Slide 33 text

33 These three need strong human intelligence oversight
Business Analyst, Developer, Software Tester
Given Everything is Broken
When Time is Up
Then Build Back Better
https://johnfergusonsmart.com/three-amigos-requirements-discovery/

Slide 34

Slide 34 text

34 "Approaches like BDD, TDD, and CI/CD pipelines are designed for speed and tooling, not thoughtful validation. The focus shifts to coverage metrics instead of actual quality. In HIST, automation is intelligent, intentional, and always human-approved."
We are so far past the singularity…
Do you think HIST comparisons might be biased?

Slide 35

Slide 35 text

35 HIST comes in not to replace Agile, but to complete it
Imagine you have invented something to complete or replace "Agile".
How would you explain it to people in a way that is easy to understand?

Slide 36

Slide 36 text

36 Four values and twelve principles
We follow these principles:
Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
Business people and developers must work together daily throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity--the art of maximizing the amount of work not done--is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.
At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Slide 37

Slide 37 text

37 HIST Manifesto is Virtue Signaling
Exercise: Describe the value of the things on the right. Draw a caricature of the person who says we value: blind reliance on automation, thoughtless execution, shallow, detached coverage, automation for automation's sake, unchecked speed and shortcuts.
Choosing between one Good option and one Bad option is not a trade-off. It is not a strategy. It is classic pandering and fluff.

Slide 38

Slide 38 text

38 Things that we will find in the HIST description
The Hallmarks of BAD Strategy:
● Fluff - using words of no consequence
● Mistaking goals for strategy
● The unwillingness or inability to choose
● Bad strategic objectives
● Failure to face the challenge
In HIST:
● HIST is mostly fluff
● Every nice thing is a strategic goal
● HIST is all the good things and no trade-offs
● Metrics are focused on irrelevant things
● HIST values pandering over software testing

Slide 39

Slide 39 text

39 HIST comes in not to replace Agile, but to complete it
And soon, Agile wasn't just a method, it became a religion.
● Ruslan says other methods are like religions
But here's the thing: no one ever clearly defined what "working" actually meant
● Ruslan says others fail to define their terms
Executives throwing around buzzwords like "fail fast," "MVP," and "iterate" like they invented the terms
● Ruslan says others use buzzwords
No structure. No ownership. Just chaos hiding behind a Kanban board.
● Ruslan mentions Kanban once, Scrum teams and masters a few times, no links to any source material
HIST gives us intelligence
● Read it again: "human intelligence software testing gives us intelligence"
For those who ask, "Why draft test cases if we haven't seen the feature yet?" you can simply say, "Because HIST instructs and demands this."
● Processes over individuals and interactions?
"Well user stories can change till the last day of sprint". Really? Not in my quality world.
● Following the plan over responding to change?

Slide 40

Slide 40 text

40 The Discipline That Can Save Our Testing Craft
Exercise: Evaluate the level of academic discipline in the bibliography list on the left.
https://www.satisfice.com/exploratory-testing

Slide 41

Slide 41 text

41 "Investigative Testing" vs Exploratory Testing
"Investigative Testing" defines test data and expected outcomes up front.
"Investigative Testing is the natural evolution of our craft."
Be honest… Investigative Testing is scripted checking. Thank you

Slide 42

Slide 42 text

42 HIST on Metrics
A methodology description can be understood as a specification of a process or a set of practices. The ability to identify contradictions, ambiguities, or unclear outcomes in requirements specifications is an important skill for software testers.
Exercise:
Can you provide evidence of the HIST stance on "standard metrics", "most metrics" and "metrics involving test case counts or pass rates" from the above table?
Do you think using 100+ KPIs is a good indicator of the ability to make a choice?
Do you think HIST materials address any questions that were already raised on metrics 20-30+ years ago?

Slide 43

Slide 43 text

43 Cem Kaner on Metrics

Slide 44

Slide 44 text

44 Bad Objectives

Slide 45

Slide 45 text

45 The only purpose of HIST Metrics is to promote HIST
The Hallmarks of BAD Strategy:
● Fluff - using words of no consequence
● Mistaking goals for strategy
● The unwillingness or inability to choose
● Bad strategic objectives
● Failure to face the challenge
In HIST:
● HIST is mostly fluff
● Every nice thing is strategic in HIST
● HIST is all the good things and no trade-offs
● Metrics are focused on irrelevant things
● HIST values pandering over software testing

Slide 46

Slide 46 text

46 3. HIST recommendations lead to bad testing
"New-school bullshit can be particularly effective because many of us don't feel qualified to challenge information that is presented in quantitative form."
HIST recommendations are based on fake data.

Slide 47

Slide 47 text

47 ● 30 minutes per defect is an industry standard fix time for static testing
● Performing an ROI Calculation showcases how much money was saved
● This helps to SELL static testing to the management teams
Model:
Requirements - 0.33
Design - 1
Code - 3
Unit Test - 5
Testing - 10
Release - 26
"I created a model for calculating ROI on static testing, something I have never seen documented or standardized anywhere. How can that possibly be labeled "generic"?"
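For illustration only, here is a minimal sketch of the arithmetic such a model seems to imply, assuming the figures above are relative cost-to-fix multipliers per phase and taking the quoted 30 minutes per defect at face value. The hourly rate, the defect count, and the helper name claimed_savings are hypothetical; this is not HIST's actual formula, which is not publicly documented.

# Illustrative sketch: arithmetic implied by the phase-cost model above,
# assuming the figures are relative cost-to-fix multipliers per phase.
# The 30-minute fix time comes from the slide; the hourly rate and the
# defect count below are hypothetical inputs, not figures from HIST.

RELATIVE_FIX_COST = {
    "Requirements": 0.33,
    "Design": 1,
    "Code": 3,
    "Unit Test": 5,
    "Testing": 10,
    "Release": 26,
}

STATIC_FIX_HOURS = 0.5   # the "30 minutes per defect" claim for static testing
HOURLY_RATE = 50.0       # hypothetical blended cost per hour

def claimed_savings(defects_found: int, escaped_to: str) -> float:
    """Money 'saved' by fixing defects during a requirements review instead of
    letting them escape to a later phase, under the multiplier model."""
    cost_now = defects_found * STATIC_FIX_HOURS * HOURLY_RATE
    cost_later = cost_now * RELATIVE_FIX_COST[escaped_to] / RELATIVE_FIX_COST["Requirements"]
    return cost_later - cost_now

if __name__ == "__main__":
    # e.g. 20 defects caught in review that would otherwise have reached system testing
    print(f"claimed savings: ${claimed_savings(20, 'Testing'):,.0f}")

Whatever inputs are chosen, the output is only as credible as the multipliers and the 30-minute assumption fed into it, which is exactly what the proposition challenges.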

Slide 48

Slide 48 text

48 3. HIST recommendations lead to bad testing The Hallmarks of the BAD strategy ● Fluff - using words of no consequence ● Mistaking goals for strategy ● The unwillingness or inability to choose ● Bad strategic objectives ● Failure to face the challenge ● HIST is mostly fluff ● Every nice thing is strategic in HIST ● HIST is all the good things and no trade-offs ● Metrics are focused on irrelevant things ● HIST values pandering over software testing

Slide 49

Slide 49 text

49

Slide 50

Slide 50 text

50 Recommendations drawn from a caricature of executives
Source: images from Ruslan's LinkedIn posts
Link to comments

Slide 51

Slide 51 text

51 Software Testing is an Information Service
The possibilities are endless, but time and resources are limited. We need a way to identify and prioritise requirements for our software. The purpose of our software is to provide an information service - detecting and describing defects in software.
The value of software testing is measured across the following three main dimensions:
● Information Quality - testing better (better at detecting and describing defects)
● Speed - testing faster
● Cost - testing cheaper
Improvements across any of these dimensions bring value to our organization.
We use the following conceptual framework for evaluating the value of the information service:
● Accuracy
● Relevance
● Interpretability
● Accessibility
A Conceptual Framework of Data Quality: http://mitiq.mit.edu/Documents/Publications/TDQMpub/14_Beyond_Accuracy.pdf

Slide 52

Slide 52 text

52 Writing Better Defect Reports
Condense (Say it clearly but briefly)
Accurate (Is it really a defect? Could it be user error, setup problem etc.?)
Neutralize (Just the facts, no zingers, no humor, no emotion)
Precise (Explicitly what is the problem?)
Isolate (What has been done to isolate the problem?)
Generalize (What has been done to understand how general the problem is?)
Re-create (What are the essentials in creating/triggering this problem?)
Impact (What is the impact if the bug were to surface in customer env.?)
Debug (What does the developer need to debug this?)
Evidence (What will prove the existence of the error? documentation?)
https://www.stickyminds.com/sites/default/files/presentation/file/2013/T7_02STRER.pdf
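To make the checklist concrete, here is a minimal sketch of a defect report structured around those ten headings. The field names and the render helper are invented for illustration; they are not taken from the 2002 presentation or from HIST.

from dataclasses import dataclass, fields

@dataclass
class DefectReport:
    # One field per heading in the 2002 checklist; names are illustrative.
    summary: str       # Condense: say it clearly but briefly
    confirmed: str     # Accurate: really a defect, not user error or a setup problem?
    facts: str         # Neutralize: just the facts, no zingers, no emotion
    problem: str       # Precise: explicitly, what is the problem?
    isolation: str     # Isolate: what was done to isolate the problem?
    generality: str    # Generalize: how general is the problem?
    steps: str         # Re-create: essentials for triggering the problem
    impact: str        # Impact: consequences if it surfaces in a customer environment
    debug_info: str    # Debug: what the developer needs to debug this
    evidence: str      # Evidence: logs, screenshots, documents proving the error

def render(report: DefectReport) -> str:
    # Render as plain text, one labelled section per checklist item.
    return "\n".join(f"{f.name}: {getattr(report, f.name)}" for f in fields(report))

Any representation would do; the structure simply mirrors the checklist one-to-one, which is the point the next slide makes about how long these questions have been asked.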

Slide 53

Slide 53 text

53 Writing Better Defect Reports
This is a presentation from 2002 (23 years before the invention of HIST): https://www.stickyminds.com/sites/default/files/presentation/file/2013/T7_02STRER.pdf
Impact
What is the impact if the bug were to surface in the customer environment? The impact of some bugs is self-evident. For example, "entire system crashes when I hit the enter key." Some bugs are not so obvious. For example, you may discover a typo on a window. This may seem very minor, even trivial unless you point out that every time someone uses your product this is the first thing they see and the typo results in an offensive word. In this case, even though it is just a typo it may be something that absolutely must be fixed prior to shipping the product. Make your best judgment. If you think it is possible that this defect will not get sufficient priority then state the potential impact and sell the defect. Don't oversell, but make sure the readers of the defect have an accurate understanding of the probable impact on the customer.

Slide 54

Slide 54 text

54 An information service is discipline and structure. HIST is neither.
● No peer-reviewed publications from HIST
● No references to third-party materials it criticizes
● No coherent definitions of the terms it uses
● No proper attribution of widely adopted practices and ideas
● No attempt to build constructively on top of existing work
● Language that is deliberately unclear, overdramatic, and pandering
Quality is business value. HIST is empty slogans.

Slide 55

Slide 55 text

55 Conclusion: HIST is a textbook example of a BAD test approach
The Hallmarks of BAD Strategy:
● Fluff - using words of no consequence
● Mistaking goals for strategy
● The unwillingness or inability to choose
● Bad strategic objectives
● Failure to face the challenge
In HIST:
● HIST is mostly fluff
● Every nice thing is strategic in HIST
● HIST is all the good things and no trade-offs
● Metrics are focused on irrelevant things
● HIST values pandering over software testing

Slide 56

Slide 56 text

56 Thank You!
About Exactpro
Exactpro is an independent provider of AI-enabled software testing services for financial sector organisations. Our clients are exchanges, post-trade platform operators, and banks across 20 countries. Our area of expertise comprises protocol-based testing of matching engines, market data, market surveillance, clearing and settlement systems, and payments APIs. We help our clients to decrease time to market, maintain regulatory compliance, and improve scalability, latency and operational resiliency.
Exactpro is involved in a variety of transformation programmes related to large-scale cloud and DLT implementations at systemically important organisations. Founded in 2009, the Exactpro Group is headquartered in the UK and operates delivery centres in Georgia, Sri Lanka, Armenia, Lithuania and the UK, and representative offices in the US, Canada, Italy and Australia.
Are you considering improving quality, time to market, regulatory compliance, reducing costs or latency? If so, visit us at exactpro.com.
Contact us