Keynote talk at Jax London 2025
In 2025, many people in IT are anxious. They hear bold claims about exponential growth, about developers becoming obsolete, about machines taking over. But if we look closely, the story is less dramatic. AI hasn’t given us a sudden explosion of new products or breakthroughs. Models are only slightly better than before, and they still fail at basic reasoning. Productivity gains are there, but small.
That doesn’t mean it’s all hype. Like the loom, the car, or the internet, AI is a new technology — one that reshapes work, destroys some roles, and creates others. To adapt, we need to learn new skills.
Here, philosophy helps us see things clearly. Wittgenstein taught us that meaning comes from use; Putnam reminded us that meaning is shaped by our communities. LLMs, in this light, are just engines of text completion. They don’t understand; they only play with words. They can tell you that Java is a language, an island, or a coffee bean, but they never grasp what any of it means.
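To make that concrete, here is a deliberately crude sketch in the spirit of the argument, not of any real model: a toy “completion engine” that predicts the next word purely from bigram frequencies in a tiny corpus. All class and method names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// A deliberately toy "completion engine": it predicts the next word
// purely from bigram frequencies in its training text. There is no
// meaning anywhere in here, only counts.
public class ToyCompletion {

    // previous word -> (candidate next word -> how often it followed)
    private final Map<String, Map<String, Integer>> bigrams = new HashMap<>();

    void train(String corpus) {
        String[] words = corpus.toLowerCase().split("\\s+");
        for (int i = 0; i + 1 < words.length; i++) {
            bigrams.computeIfAbsent(words[i], k -> new HashMap<>())
                   .merge(words[i + 1], 1, Integer::sum);
        }
    }

    // Return the statistically most likely continuation, if any.
    // The "model" has no idea whether "java" names a language,
    // an island, or a coffee bean; it only knows what tends to follow.
    Optional<String> nextWord(String previous) {
        Map<String, Integer> candidates = bigrams.get(previous.toLowerCase());
        if (candidates == null) return Optional.empty();
        return candidates.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
    }

    public static void main(String[] args) {
        ToyCompletion model = new ToyCompletion();
        model.train("java is a language java is an island java is a language");

        String word = "java";
        StringBuilder sentence = new StringBuilder(word);
        for (int i = 0; i < 4; i++) {
            Optional<String> next = model.nextWord(word);
            if (next.isEmpty()) break;
            word = next.get();
            sentence.append(' ').append(word);
        }
        // Prints "java is a language java": fluent-looking, meaning-free.
        System.out.println(sentence);
    }
}
```

A real LLM replaces the frequency table with billions of learned parameters and far longer context, but on this view the operation is still completion, not comprehension.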
Still, these tools are remarkable. They are fast, full of knowledge, tireless, and endlessly polite. The challenge is not what they can’t do, but how we use them. Working with AI assistants isn’t rocket science, but it does require skill and discipline: giving precise instructions, making small and testable steps, respecting their context limits, and never forgetting they don’t truly learn. If we’re careless, we fall into traps — switching off our own thinking, chatting as if the models were human, or assuming they’ll remember like we do.
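One way to make “small and testable steps” concrete is to write the test yourself before asking the assistant for the implementation: the instruction becomes precise (“make this test pass”) and the output is immediately verifiable. A minimal sketch, assuming JUnit 5; the Slugifier behaviour and all names here are invented for illustration.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Test-first pairing with an assistant: the human pins the intended
// behaviour down in a small executable test, then asks the model for
// an implementation that makes it pass.
class SlugifierTest {

    // The kind of small, reviewable unit an assistant might produce.
    // Illustrative spec: trim, lowercase, drop punctuation,
    // collapse whitespace runs into single hyphens.
    static String slugify(String title) {
        return title.trim()
                    .toLowerCase()
                    .replaceAll("[^a-z0-9\\s-]", "")
                    .replaceAll("\\s+", "-");
    }

    // Written by the human first: a precise, checkable instruction.
    @Test
    void turnsTitlesIntoUrlSlugs() {
        assertEquals("jax-london-2025", slugify("JAX London 2025"));
        assertEquals("hello-world", slugify("  Hello,  World!  "));
    }
}
```

The slug logic itself is beside the point; what matters is that the human states the intent in executable form and keeps each step small enough to review and run.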
And what about code itself? Some imagine a future where code no longer matters, where we only write specifications and the machines regenerate everything. But history suggests otherwise. Past no-code dreams never displaced real software engineering. Code remains the ultimate source of truth: essential for consistency, security, iteration, and for the models themselves to work with.
Which brings us to the heart of the matter: software doesn’t come from LLMs’ words. It comes from the developer’s understanding. The machine can generate text, but only humans bring the meaning, the discipline, and the responsibility. You own it, you run it — and you must not let the LLM steal that from you.
🎯 Takeaways
AI coding is powerful but not magic — it’s a tool that requires skill, discipline, and human oversight.
LLMs cannot replace developers’ understanding of systems, context, and design.
Good code remains central: specifications alone are insufficient, non-determinism is risky, and details such as security matter.
Developers must own their work: “You own it, you run it — don’t let LLMs steal it from you.”