AI agents don't reason about what they can't see. Microservices fragment context across repositories; modular monoliths keep the entire solution accessible — for humans and agents alike.
This talk introduces progressive disclosure as the unifying principle connecting UX design, cognitive science, and software architecture. The constraint changed from working memory to context window. The principle didn't.
Through Rails Whey — an open-source project with 28 git branches evolving the same Rails application from fat controllers to bounded contexts — we measure agent-friendliness across five dimensions: context window cost, discoverability,
isolation, predictability, and blast radius. The results reveal a clear sweet spot where naming conventions and modular orchestrations maximize AI agent effectiveness (24/25) using nothing but native Rails tools.
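To make the 24/25 figure concrete: it reads naturally as a sum over the five dimensions, each scored on a 0-to-5 scale. The per-dimension values below are purely illustrative, not the talk's actual measurements:

```ruby
# Hypothetical decomposition of a 24/25 agent-friendliness score.
# Dimension names come from the talk; the individual scores are
# assumed for illustration only.
scores = {
  context_window_cost: 5,
  discoverability:     5,
  isolation:           5,
  predictability:      5,
  blast_radius:        4,
}

total = scores.values.sum
max   = scores.size * 5
puts "#{total}/#{max}"  # => "24/25"
```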
We examine why AI is a capacity amplifier that makes fundamentals more valuable, not less; why the cost of producing code dropped while the cost of deciding rose; and why orthogonality, unified naming, resource discipline, and named orchestrations are the four highest-leverage changes you can make today.
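To give a flavor of the last of those changes: a named orchestration can be as small as a plain Ruby object whose class name states the business operation, keeping controllers thin and making the workflow discoverable by name alone. This is a hypothetical sketch, not code from Rails Whey; every class and method name below is assumed:

```ruby
# Hypothetical stand-in for a payment gateway, so the sketch runs
# without Rails or any external service.
class FakeGateway
  def charge(amount_cents)
    raise ArgumentError, "nothing to charge" unless amount_cents.positive?
    "txn-#{amount_cents}"
  end
end

# The named orchestration: one intention-revealing class, one entry
# point (#call), one explicit result object.
class PlaceOrder
  Result = Struct.new(:ok, :transaction_id, :error, keyword_init: true)

  def initialize(cart_total_cents:, gateway: FakeGateway.new)
    @cart_total_cents = cart_total_cents
    @gateway = gateway
  end

  def call
    txn = @gateway.charge(@cart_total_cents)
    Result.new(ok: true, transaction_id: txn)
  rescue ArgumentError => e
    Result.new(ok: false, error: e.message)
  end
end

result = PlaceOrder.new(cart_total_cents: 4999).call
puts result.transaction_id  # => "txn-4999"
```

An agent (or a new teammate) searching for "place order" lands directly on the class that owns the workflow, which is the discoverability property the talk's scoring rewards.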
Along the way: Shopify's 237 billion BFCM requests, Ruby's #1 token efficiency with Claude Code, Y Combinator calling convention-over-configuration "LLM catnip," and a modularity-vs-deployment quadrant that reframes the monolith debate
entirely.
Good architecture is progressive disclosure for any operator. If it's simple to find and understand, it's simple to maintain and evolve.
If you write Rails, you're already on the best platform for coding with AI agents. This talk shows you why — and how to make it even better.
Presented at Tropical on Rails 2026, São Paulo, Brazil.