
The 7 pitfalls of AI

This slide deck discusses some of the open questions that the massive AI push has left unanswered. It starts with a brief look at the market mechanics that make the oftentimes aggressive and fear-mongering narratives surrounding AI easier to understand. It then moves on to different areas such as the anthropomorphization of AI, the incomplete abstraction, the burnout factory, the AI vampire, and more. It looks at the potential issues in each of those areas and ends each one with the questions we still need to answer if we want to use AI at scale in a sustainable way.

The slides also make clear that useful answers for the people affected cannot be expected from the tech companies or the AI free riders, as they have different incentives. Thus, it is up to the people affected to add their voices to the currently quite one-sided discussion. At the end, I added a brief look ahead, discussing some aspects that point to the future of AI and how things could evolve.

As so often, the voice track is missing, but I hope that the slides still give you a few ideas to ponder.


Uwe Friedrichsen

May 06, 2026


Transcript

  1. The seven pitfalls of AI A glance at some usually

    neglected aspects Uwe Friedrichsen – codecentric AG – 2022-2026
  2. “AI is the future! Don’t ask questions! Go all-in or

    you’ll be left behind!” — The unknown AI advocate
  3. [Diagram] Tech companies: their core interest is a high

    valuation; the technology is merely a means to push that valuation; they influence the market via promises and FUD. Free riders (“thought leaders”, “influencers”, etc.): find a topic to improve their own visibility and amplify the messages. The market: conferences, media, the internet. Skeptics: look at the technology, look at the promises, ask questions. https://pluralistic.net/2025/12/05/pop-that-bubble/ https://www.wheresyoured.at/the-enshittifinancial-crisis/
  4. Observations • The messages are strongly biased • Profiteers only

    try to push their valuation, visibility, or profit • The technology is irrelevant • People are irrelevant • It is only about money, fame, and power • Clarke’s 3rd law works as a fire accelerant https://pluralistic.net/2025/12/05/pop-that-bubble/ https://www.wheresyoured.at/the-enshittifinancial-crisis/
  5. Anthropomorphization of AI • AI is not human, nor does

    it have human intelligence • Especially, AI is not “the best of human and computer united” • AI errors are very different from human errors • AI does not learn from its mistakes • Humans are flawed too, but differently
  6. [Diagram] Complete (closed) abstraction: the high-level implementation works

    only with the higher-level abstraction; the abstraction tooling maps the abstraction onto the low-level interface and low-level implementation of the target environment. Incomplete (leaky) abstraction: the high-level implementation has to work with both the higher-level abstraction and the low-level implementation of the target environment.
  7. [Diagram] Complete (closed) abstractions: machine code (1GL) is

    implemented directly by the computer; assembly language (2GL): the assembler implements assembler code as machine code; 3GL: the compiler implements 3GL code (plus its ecosystem) as machine code.
  8. [Diagram] Incomplete (leaky) abstractions: 4GL on top of

    3GL (the 4GL tooling implements 4GL code as 3GL code); MDA on top of 3GL (the MDA tooling implements models as 3GL code); human language on top of 3GL: the developer instructs an AI agent, which generates 3GL code that the developer still implements, reviews, and corrects (“human in the loop”).
  9. The incomplete abstraction • “Human in the loop” means an

    incomplete abstraction • Significantly increases mental load of developer • Needs to navigate two abstraction levels at once • Needs to understand how AI transforms instructions • Needs to navigate a foreign (inconsistent) mental model • Needs to fix shortcomings of incomplete abstraction • Can easily lead to mental overload • Especially if combined with increased throughput demands
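The mental-load problem above can be made concrete with a minimal Python sketch (the class and method names are mine for illustration, not from the deck): a toy wrapper exposes a convenient high-level interface, but the moment a corner case appears, the caller has to drop back down to raw SQL, so they must keep both mental models active at once.

```python
import sqlite3

class TinyOrm:
    """A hypothetical 'higher-level abstraction' over SQLite."""
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    def create(self, table, *cols):
        # High-level path: the caller never sees SQL ...
        names = ", ".join(f"{c} TEXT" for c in cols)
        self.conn.execute(f"CREATE TABLE {table} ({names})")

    def raw(self, sql, params=()):
        # ... until the abstraction leaks: joins, performance hints, or
        # vendor-specific features push the caller back down to SQL.
        return self.conn.execute(sql, params).fetchall()

db = TinyOrm()
db.create("users", "name")
db.conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
# Now the developer has to navigate both abstraction levels at once:
rows = db.raw("SELECT name FROM users WHERE name LIKE ?", ("A%",))
print(rows)  # [('Ada',)]
```

The same pattern applies to “human in the loop” AI coding: the prompt is the high-level interface, and the generated 3GL code is the low level the developer still has to understand, review, and fix.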
  10. “If you do not get the results you expect from

    your AI setup, you are doing it wrong.” — The average AI advocate
  11. “If you do not get the results you expect from

    your AI setup, you are doing it wrong.” — The average AI advocate
  12. Technology in its infancy • Remember • setting UNIX process

    priorities and nice levels? • working around the 640KB memory barrier of DOS? • providing SQL query hints for relational databases? • bypassing Docker container disk drivers for throughput? • Immature technologies require a lot of “arcane” knowledge • Eventually the technologies mature and the “arcane” knowledge becomes irrelevant
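The first bullet can still be tried today. A small sketch of the kind of “arcane” knob the slide means, here reading and lowering a process's scheduling priority via the POSIX nice value (Unix-only; unprivileged processes can only raise the nice value, i.e. lower their priority):

```python
import os

# Read the current nice value of this process (0 is the usual default).
current = os.nice(0)

# Lower our own priority by 5 (a higher nice value means less CPU preference).
lowered = os.nice(5)

print(current, lowered)
```

Exactly as the slide argues: on a modern multi-core machine with a fair scheduler, most developers never need to touch this knob again.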
  13. We need a lot of “arcane” knowledge for AI because

    currently it is still a highly immature technology
  14. Eventually, the tooling will mature, and almost all the currently

    “indispensable” knowledge will become irrelevant
  15. The great AI brain reset • AI advocates currently “rediscover”

    software engineering • “We need detailed, non-ambiguous requirements” • “We need a solid, thought-out architecture” • “We need testers who are independent from development” • “We need context information about the problem to solve” • We know this for decades • It would have improved the situation of millions of developers • But there was never time/budget for it • With AI, this is sold as groundbreaking new knowledge
  16. Side note: We never got the time and budget for

    good software engineering. Why do you think AI will change this?
  17. “If you do not get the results you expect from

    your AI setup, you are doing it wrong.” — The average AI advocate
  18. “Friendly fire” • Management and business only care about “productivity”

    • They do not care how we do our work • If we do not get the results we expect and dare to say so • our own IT peers blame us for doing it wrong • our own IT peers tell us we are “falling behind” • our own IT peers tell us it is only our personal fault • We have created an atmosphere of uncertainty, fear, and anxiety
  19. We need to figure out better ways to deal with

    AI until the technology matures
  20. “Since our team adopted AI, expectations have tripled, stress has

    tripled, and actual productivity has only gone up by maybe 10%.” https://stepto.net/blog/ai-developer-burnout-machine-speed-2026
  21. “In our in-progress research, we discovered that AI tools didn’t

    reduce work, they consistently intensified it.” https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
  22. AI Fatigue Statistics 2025 • 65% of software engineers

    report that AI tool integration has increased their daily stress rather than reduced it • 52%: more than half of engineers say reviewing AI-generated code is more mentally tiring than writing the equivalent code themselves; speed-to-commit improved, but mental cost per commit went up • 29%: nearly one in three engineers has considered leaving their role because of pressure related to AI tool adoption, pace expectations, or feeling unable to keep up; this is not a niche problem, it is a workforce-level signal https://clearing-ai.com/stats.html
  23. “Ninety-six percent of C-suite executives expected AI to improve productivity.

    Seventy-seven percent of employees reported it actually increased their workload.” https://www.alexcloudstar.com/blog/ai-brain-fry-developer-burnout-2026/
  24. The burnout factory (1/2) • Unrealistic efficiency gain expectations •

    Only possible with closed abstraction • Code review still demanded due to code quality concerns • The productivity trap • AI is expected to magically fix all organizational dysfunctions • AI does not fix anything but amplifies current behavior • Disconnect between management and people affected • Responsibility for success shifted to developers
  25. The burnout factory (2/2) • Review fatigue • Human inability

    to keep focus if problems occur only rarely • Especially relevant in “human in the loop” settings • Deskilling by not doing the actual work anymore • Plus losing the mental model needed to do the work • The involuntary manager • Loss of control in combination with increased responsibility Lisanne Bainbridge, “Ironies of Automation”, Automatica, Vol. 19, No. 6, 1983
  26. We still need to learn how to use AI in

    software development in a sustainable way
  27. “AI is starting to kill us all, Colin Robinson style.

    If you’ll recall from What We Do In The Shadows (worth a watch, yo), Colin Robinson was an Energy Vampire. Being in the same room with him would drain people.” — Steve Yegge, “The AI Vampire” https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163
  28. “I was shipping more code than ever. My output was

    objectively higher. But by 3 PM most days, my brain felt like someone had microwaved it. Not tired in the normal ‘long day of coding’ way. A different kind of exhaustion. A fog that made it hard to make even simple decisions, like what to eat for dinner or whether a variable name was good enough.” https://www.alexcloudstar.com/blog/ai-brain-fry-developer-burnout-2026/
  29. “Based on a BCG study of 1,488 US workers across

    large companies, they found that AI brain fry can increase employee errors, decision overload, and, ultimately, intent to quit.” https://www.bcg.com/news/5march2026-when-using-ai-leads-brain-fry https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry
  30. “Meanwhile, employees who frequently use AI report 45% higher burnout

    rates compared to those who rarely use it [...]” https://www.forbes.com/sites/carolinecastrillon/2025/06/24/why-ai-fatigue-is-wearing-you-down-and-how-to-beat-it/
  31. “[Agentic software building] doles out dopamine and adrenaline shots like

    they’re on a fire sale. Many have likened it to a slot machine. You pull a lever with each prompt, and get random rewards and sometimes amazing ‘payouts.’ No wonder it’s addictive.” https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163
  32. More and more such studies and articles pop up, just

    a few months after agentic coding went mainstream
  33. The AI vampire • Brain is running under full load

    for a long period • AI agents take away the brain’s recovery times • Human brains are not equipped for long periods of high load • Implicit task shift • Increased multitasking • Task expansion • Blurred work-life boundaries • Addiction to AI usage as a reinforcing factor
  34. We still need to learn how to use AI in

    software development in a sustainable way
  35. “If you haven't spent at least $1,000 on tokens today

    per human engineer, your software factory has room for improvement” — StrongDM AI, “Software Factories And The Agentic Moment” https://factory.strongdm.ai
  36. The AI cost trap • AI inference is massively subsidized

    by providers • Estimated that 5x-10x higher inference costs are needed for frontier model providers to become profitable • Agentic AI can still easily be several thousand dollars per month for a single developer • What if frontier model providers stop subsidy? • Agentic AI usage may become unprofitable • Users may switch to open weight models for most tasks • Technology advancements and economies of scale may (partially) compensate increase in inference costs
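The slide's numbers can be turned into a rough back-of-the-envelope calculation. All figures are the slide's estimates, not measured data, and the $2,000/month starting point is an illustrative value within the slide's “several thousand dollars per month” range:

```python
# Back-of-the-envelope: what happens if the inference subsidy ends.
# 5x-10x is the slide's estimated price increase needed for frontier
# model providers to become profitable; $2,000/month is illustrative.
monthly_bill = 2_000
low_factor, high_factor = 5, 10

unsubsidized = (monthly_bill * low_factor, monthly_bill * high_factor)
print(f"Unsubsidized: ${unsubsidized[0]:,} - ${unsubsidized[1]:,} per developer/month")
print(f"Per year:     ${unsubsidized[0] * 12:,} - ${unsubsidized[1] * 12:,} per developer")
```

Even if technology advancements and economies of scale compensate part of the increase, numbers of this magnitude would make many teams rethink whether a frontier model is needed for every task.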
  37. Centralization and sovereignty • Driven by the only-the-best-is-good-enough mindset •

    Users only use frontier models because they are a bit better • Leads to a few players dominating AI • Those with enough financial backing will survive • A few players will control access, usage, and the truth • Everyone will depend on a few players • Truth is shaped by what people use for information retrieval • Dependency prevents sovereignty
  38. We need to ponder our AI usage patterns if we

    want to ensure our independence and sovereignty
  39. More questions to ponder, e.g. • Lethal Trifecta * •

    Model hacking and exfiltration of confidential data • Crap in, crap out • Input data quality controls the quality of the model’s answers • In many contexts more relevant than model performance • For your eyes only • Access to confidential data used to produce an answer * https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
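Simon Willison's “lethal trifecta” is the combination of three capabilities in one agent: access to private data, exposure to untrusted content, and the ability to communicate externally. A hypothetical sketch of a pre-deployment check that flags configurations where all three coincide (the class and flag names are mine, not from any real framework):

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Hypothetical agent capability flags (illustration only)."""
    reads_private_data: bool       # e.g. mail, internal docs, secrets
    ingests_untrusted_input: bool  # e.g. web pages, inbound email
    can_exfiltrate: bool           # e.g. outbound HTTP, sending mail

def lethal_trifecta(cfg: AgentConfig) -> bool:
    """True if all three legs of the lethal trifecta are present.
    Each capability alone may be acceptable; the combination lets an
    attacker smuggle instructions in and leak confidential data out."""
    return (cfg.reads_private_data
            and cfg.ingests_untrusted_input
            and cfg.can_exfiltrate)

mail_agent = AgentConfig(reads_private_data=True,
                         ingests_untrusted_input=True,
                         can_exfiltrate=True)
sandboxed = AgentConfig(reads_private_data=True,
                        ingests_untrusted_input=True,
                        can_exfiltrate=False)

print(lethal_trifecta(mail_agent))  # True: block or redesign
print(lethal_trifecta(sandboxed))   # False: one leg removed
```

The point of the check is the mitigation it suggests: removing any one leg (most often the exfiltration channel) defuses the combination.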
  40. We still need to learn what AI is and what

    it means for us Anthropomorphization of AI The incomplete abstraction A technology in its infancy The burnout factory The AI vampire The AI cost trap Centralization and sovereignty We still need to learn how to deal with the incomplete abstraction We need to figure out better ways to deal with AI until the technology matures We still need to learn how to use AI in software development in a sustainable way We still need to learn how to use AI in software development in a sustainable way We need to understand how different developments of AI inference costs will affect our AI usage patterns We need to ponder our AI usage patterns if we want to ensure our independence and sovereignty
  41. This is not an argument against AI AI is a

    powerful technology AI has a huge potential AI is here to stay
  42. Such pitfalls and resulting open questions are very normal in

    the early days of a new (potentially disruptive) technology
  43. However, the tech companies will not provide useful answers They

    only care about their valuations and profits
  44. Also, the free riders will not provide useful answers They

    only care about their reach and influence
  45. If we want good answers, we cannot expect others to

    provide them We need to provide the answers
  46. “I don’t think there’s a damn thing we can do

    to stop the train. But we can certainly control the culture, since the culture is us.” https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163
  47. AI is not software • Software and AI are different

    tools • They have different properties • They are suitable for different types of tasks • Some tasks can be solved only with one of the tools • AI may not be suitable for all types of SWDev tasks • The more precise and verifiable the result needs to be, the less suitable AI is for the task • May put a limit to AI-based code generation • Distinct career paths may evolve • Software engineer vs. AI engineer
  48. It was never about code • Computer is a general-purpose

    automaton • People have ideas about what they want to run on a computer • The best universal abstraction we have figured out so far to make computers execute our ideas is the 3GL • AI raises the question whether there may be better abstractions • AI raises the question whether it always needs to be a computer • Maybe this leads to novel ideas for bringing our ideas to life