
From symbols to reasoning

Djimit
December 21, 2025

This strategic research report argues that AI progress is not linear: it moves through phase transitions triggered when algorithmic ideas collide with hardware constraints. You are now entering the biggest shift since the 2012 deep learning reboot: the move from Generative AI (2017–2023), focused on probabilistic content generation, to Reasoning and Agentic AI (2024–2030), where systems execute multi-step logic, self-correct, and use tools autonomously.

The report reframes AI history as an engineering story:
• Symbolic AI did not fail only on theory; it failed on scalability and economics. Expert systems hit the knowledge acquisition bottleneck, and LISP machines lost to commodity x86 price-performance. The modern implication is clear: specialized hardware often loses once general-purpose compute reaches massive scale.
• The deep learning reboot (2012) was a hardware unlock: GPUs made large-scale neural network training practical, validating “compute + data + backprop” over manual feature engineering.
• Transformers (2017) became dominant because self-attention reduces language modeling to highly parallel matrix operations, perfectly aligned with GPU architectures and memory bandwidth (see the sketch below).
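
To make that alignment concrete: a single attention head is little more than a few dense matrix multiplications plus a softmax, which is exactly the workload GPUs and high-bandwidth memory are built for. A minimal NumPy sketch, with illustrative shapes and names that are not from the report:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product attention over a whole sequence at once.

    x: (seq_len, d_model) token embeddings
    w_q, w_k, w_v: (d_model, d_head) projection matrices
    Every step is a dense matmul or a row-wise softmax, so all positions
    are processed in parallel -- no per-token recurrence as in an RNN.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # (seq_len, d_head) each
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # (seq_len, d_head)

# Toy example: 4 tokens, model width 8, head width 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)        # (4, 4)
```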

It then builds a canonical “Reading Stack” (2017–2025) as a practical blueprint for modern LLM engineering:
• Attention Is All You Need: parallelization removes the serial bottleneck of RNNs.
• BERT: pre-training creates the foundation model economy.
• T5: the text-to-text format unifies tasks and demonstrates the strength of scaling.
• Scaling laws and GPT-3: predictable scaling turns training into an engineering capital project, and prompting replaces widespread fine-tuning (see the sketch after this list).
• Instruction tuning: models become usable assistants by learning intent rather than completing patterns.
• Mechanistic interpretability: capabilities can emerge abruptly through specific circuits, such as induction heads.
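
The “predictable scaling” claim rests on empirical power laws: loss falls smoothly as parameters and data grow, so a training run can be budgeted before it starts. A rough sketch in the Chinchilla-style functional form, with placeholder constants chosen only to illustrate the shape of the curve (they are not taken from the report or fitted to any data):

```python
# Chinchilla-style loss surface: L(N, D) = E + A / N**alpha + B / D**beta,
# where N is parameter count and D is training tokens. All constants below
# are illustrative placeholders, not published fits.
E, A, B, alpha, beta = 1.7, 400.0, 410.0, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss declines predictably as model size and data are scaled together.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e} params, D={d:.0e} tokens -> predicted loss {predicted_loss(n, d):.3f}")
```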

From 2023 onward, the focus shifts from “bigger” to “smarter and modular”:
• Chain-of-thought and self-consistency introduce inference-time compute as a design lever: accuracy costs tokens (see the sketch after this list).
• RAG separates knowledge from parametric memory, reducing hallucination and enabling freshness; retrieval quality becomes a first-class concern.
• Toolformer formalizes the agentic interface: models learn when to call APIs.
• DeepSeek-R1 open-sources a credible recipe for System 2-style reasoning via reinforcement learning and demonstrates distillation into smaller models, pushing reasoning from cloud-only to edge and commodity deployments.
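
The inference-time-compute trade-off is easy to see in code: self-consistency samples several reasoning chains and majority-votes their final answers, buying accuracy with roughly k times the tokens of a single pass. A schematic sketch against a hypothetical generate() callable, with a stub so it runs without a model:

```python
import random
from collections import Counter

def self_consistency(generate, prompt: str, k: int = 5) -> str:
    """Sample k chains of thought and return the most common final answer.

    `generate` stands in for any sampling-based LLM call that returns
    (reasoning_text, final_answer); it is a placeholder, not a real API.
    Cost scales linearly with k.
    """
    answers = [generate(prompt, temperature=0.8)[1] for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]

# Stub generator: usually right, occasionally wrong, like a sampled model.
def fake_generate(prompt, temperature=0.8):
    return "reasoning...", random.choice(["42", "42", "42", "41"])

print(self_consistency(fake_generate, "What is 6 * 7?"))   # usually "42"
```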

The engineering reality of 2025 is dominated by inference economics. Reasoning models increase runtime tokens dramatically and shift bottlenecks toward VRAM and memory bandwidth. Bandwidth is king because decoding repeatedly loads the model weights for each generated token. Sovereign AI therefore requires more than buying GPUs: it demands interconnect, scheduling, observability, and cost control.
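
A back-of-the-envelope model shows why bandwidth dominates single-stream decoding: every generated token re-reads roughly all of the weights, so throughput is capped near memory bandwidth divided by model size. The numbers below are illustrative assumptions, not benchmarks:

```python
# Rough upper bound on batch-1 decode throughput, ignoring KV-cache traffic,
# kernel overheads, and batching effects. All figures are illustrative.
model_params = 70e9           # a 70B-parameter model
bytes_per_param = 2           # fp16/bf16 weights
bandwidth_bytes_s = 1.0e12    # ~1 TB/s of HBM-class memory bandwidth

bytes_per_token = model_params * bytes_per_param        # weights streamed per decode step
tokens_per_second = bandwidth_bytes_s / bytes_per_token
print(f"~{tokens_per_second:.1f} tokens/s per stream")  # ~7.1 tokens/s
```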

Finally, the report overlays the EU and Netherlands governance lens:
• The EU AI Act makes compliance a design constraint, not a legal afterthought, especially for GPAI and systemic-risk obligations.
• Sovereign AI becomes an architectural decision (open weights versus closed APIs), with hybrid sovereign architectures emerging for regulated sectors.
• RAG and agents expand the attack surface: prompt injection and tool abuse demand least privilege, ACL filtering before retrieval, and governance as code (see the sketch below).
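
“ACL filtering before retrieval” means the permission check is applied to the candidate documents before similarity ranking, so no amount of prompt injection can surface content the calling user is not entitled to read. A minimal sketch with invented document and user structures, not taken from the report:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    allowed_groups: set    # ACL attached at ingestion time
    embedding: list        # precomputed vector

def retrieve(query_embedding, docs, user_groups, top_k=3):
    """Filter by ACL first, then rank only the permitted documents."""
    permitted = [d for d in docs if d.allowed_groups & user_groups]
    scored = sorted(
        permitted,
        key=lambda d: sum(q * e for q, e in zip(query_embedding, d.embedding)),
        reverse=True,
    )
    return scored[:top_k]

# A caller in the "finance" group never sees "hr"-only content, no matter
# how the query (or an injected instruction) is phrased.
docs = [
    Doc("payroll-2025", {"hr"}, [0.9, 0.1]),
    Doc("q3-forecast", {"finance"}, [0.8, 0.2]),
]
print([d.doc_id for d in retrieve([1.0, 0.0], docs, {"finance"})])  # ['q3-forecast']
```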

The executive takeaway is sharp. Do not over-invest in building “your own GPT-5.” Treat base models as commodities. Win through evaluation stacks, proprietary data curation, secure RAG, measurable quality, and enforceable policy controls.
