Slide 1

Slide 1 text

It Is Time for an Agent-Friendly Codebase
by Helio Medeiros

Slide 2

Slide 2 text

INDEX: 1. Challenges · 2. Discover · 3. Change · 4. Validate · 5. Explain · 6. Conclusion

Slide 3

Slide 3 text

THE REAL QUESTION ISN'T "CAN AI WRITE CODE?" That part is already easy. The uncomfortable question is whether an AI agent can do the job you actually want done: make a change that spans multiple files, run the right checks, interpret failures, iterate, and produce a diff that is small enough to review and safe enough to merge.

Slide 4

Slide 4 text

THE CHALLENGES
PATTERN COPY: Finds a similar file, copies a pattern, and subtly diverges from the intended architecture.
DISPENSABLES: Finds something pointless and unneeded whose absence would make the code cleaner, copies it anyway, and couples the system even more.
OVER-TOUCH: Touches too many files because it can't distinguish core logic from glue code.

Slide 5

Slide 5 text

A NEW MENTAL MODEL
After pushing agents through the same workflows repeatedly, I stopped thinking in terms of "AI coding ability" and started thinking in terms of "repo usability." The agent is another contributor — fast, yes, but one that gets lost and amplifies whatever ambiguity you leave lying around.

Slide 6

Slide 6 text

WHEN CLASSIC IS GOOD
An agent-friendly codebase is not about AI-specific magic — it is about classic developer experience with a twist. Humans can survive tribal knowledge and messy systems because they can ask questions or "feel" their way through. Agents do not feel. They pattern-match and follow instructions.

Slide 7

Slide 7 text

YOUR REPO IS AN INTERFACE
1. Discover: The agent can reliably find where the change should happen.
2. Change: The agent can modify it without cascading side effects.
3. Validate: The agent can verify the change without you becoming the test runner.
4. Explain: The agent can describe what it changed in a way a human reviewer can trust.
If any of those fail, you get a new kind of toil: prompt fiddling, repeated context dumping, and manual verification.

Slide 8

Slide 8 text

REMEMBER: LEGIBILITY BEATS CLEVERNESS
Don't accept that the agent will "figure it out". What works best is aggressively reducing the number of plausible places where a change could live — being opinionated about structure and naming, even if it feels rigid.

Slide 9

Slide 9 text

"WORKS ON MY MACHINE" → "WORKS ON MY PROMPT"

THE PROBLEM
If the only way to get the right change is to craft the perfect prompt with a paragraph of context, you are not building agent-friendly software. You are building a prompt-dependent system. Prompts drift. People drift. Agents drift. Tooling drift is relentless.

THE LITMUS TEST
The fix is boring: make the repo itself carry the context. If I delete the chat history and re-run the task with a fresh agent, can the agent still succeed by reading only the repository? When the answer is "yes," the repo is doing the work. When the answer is "no," I am doing the work.

Slide 10

Slide 10 text

ONE GOLDEN PATH BEATS TEN README PARAGRAPHS
1. MAKE BOOTSTRAP — set up the environment
2. MAKE TEST — run the test suite
3. MAKE LINT — check code quality
4. MAKE RUN — execute the project
Start with a small number of stable "golden path" commands that serve as primary entry points. The point is not Make. The point is that the repo has a predictable operating system for contributors — human or agent.
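As a sketch, such a golden path can be a small Makefile. The targets mirror the slide; the recipe commands below are assumptions for a Go project (recipes must be tab-indented):

```make
# Hypothetical golden-path Makefile: four stable entry points,
# the same ones CI runs. Recipe commands are illustrative.
bootstrap:   ## install dependencies and set up local environment
	go mod download

test:        ## run all tests, exactly as CI does
	go test ./...

lint:        ## run linter and format check
	golangci-lint run

run:         ## start the application locally
	go run ./cmd/assetcap

.PHONY: bootstrap test lint run
```

The value is not the tool; it is that a fresh contributor (or a fresh agent session) can discover how to build, test, and run without reading prose.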

Slide 11

Slide 11 text

HEXAGONAL ARCHITECTURE IS AN AGENT MULTIPLIER
Follow with a design that constrains the search space of changes and makes validation cheaper.
DOMAIN (PURE): Entities, value objects, policies, use cases — no external dependencies.
PORTS (INTERFACES): What the domain needs from the outside world, expressed as contracts.
ADAPTERS (IMPURE): Web handlers, persistence, messaging, external APIs — thin and replaceable.
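A minimal Go sketch of the three layers. All names here (`Issue`, `IssueSource`, `StubJiraSource`) are hypothetical illustrations, not code from an actual repo:

```go
package main

import "fmt"

// DOMAIN (pure): no external dependencies, trivially unit-testable.
type Issue struct {
	Type   string
	Labels []string
}

// IsCapitalizable is a domain rule; the "capex" label is an invented example.
func IsCapitalizable(i Issue) bool {
	for _, l := range i.Labels {
		if l == "capex" {
			return true
		}
	}
	return false
}

// PORT (interface): what the domain needs from the outside world, as a contract.
type IssueSource interface {
	FetchIssues() ([]Issue, error)
}

// ADAPTER (impure): thin, replaceable implementation of the port.
type StubJiraSource struct{}

func (StubJiraSource) FetchIssues() ([]Issue, error) {
	return []Issue{{Type: "Story", Labels: []string{"capex"}}}, nil
}

func main() {
	var src IssueSource = StubJiraSource{} // wiring happens at the edge
	issues, _ := src.FetchIssues()
	for _, i := range issues {
		fmt.Println(i.Type, IsCapitalizable(i)) // prints: Story true
	}
}
```

Because the domain depends only on the port, swapping the stub for a real Jira client never touches the business rule.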

Slide 12

Slide 12 text

FAST, TRUSTWORTHY FEEDBACK LOOPS (VALIDATE)

HOSTILE CONDITIONS
- Tests are slow and flaky
- Lint and formatting are inconsistent
- Local setup is fragile
- CI does things local scripts do not
In these conditions, the agent becomes a change generator, not a contributor.

THE FIX: CI PARITY
Close with cheap verification. Agent-friendly repos are not just about structure — they are about making sure everything works all the time and fails fast, ideally long before production. The fastest improvement was aligning local commands with CI. If CI runs `go test ./...`, the local command runs `go test ./...`. If CI requires a database container, the local workflow spins up the same container with one command. This is not glamorous work. It is the work that makes everything else possible.
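CI parity can be as simple as having CI call the exact same make targets a contributor runs locally. A sketch, assuming GitHub Actions (the workflow below is illustrative, not from the talk):

```yaml
# Hypothetical CI job: identical commands locally and in CI,
# so failure signals match exactly.
name: ci
on: [push, pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
      - run: make bootstrap
      - run: make test
      - run: make lint
```

If a step exists only in CI, move it into a make target so agents can run it too.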

Slide 13

Slide 13 text

THE AGENT CONTRACT IS NON-NEGOTIABLE
Once agent success became a repo problem rather than a prompt problem, one artifact kept paying off: a short file that says how work gets done. The name matters less than the discipline — it must be short, accurate, and enforced by CI expectations.
INTENT: Hexagonal architecture. Domain and application layers must not depend on adapters.
GOLDEN COMMANDS: make bootstrap · make test · make lint · make run
CHANGE RULES: Business rules live in the domain; adapters stay thin. New patterns require updating the contract.
FORBIDDEN: Do not commit secrets. Do not modify production infrastructure without explicit instruction.

Slide 14

Slide 14 text

CLAUDE.MD
If you use Claude Code (the CLI or the IDE agent), this file is how you turn a prompt-dependent workflow into a repo-driven one. You place it in the project root, commit it to source control, and every team member and every Claude session gets the same instructions. No copy-pasting prompts. No tribal knowledge drifting across chat windows.
PROJECT-LEVEL: ./CLAUDE.md defines shared instructions, committed to Git for consistent team and agent guidance.
LOCAL OVERRIDES: ./CLAUDE.local.md allows personal preferences like sandbox URLs or test data, not committed to source.
USER DEFAULT: ~/.claude/CLAUDE.md sets global preferences like code style or tooling across all projects.

Slide 15

Slide 15 text

# CLAUDE.md

## Project overview
Asset capitalization tracker. Go, hexagonal architecture. Domain and application layers must not depend on adapters.

## Common commands
- `make bootstrap` — install dependencies and set up local environment
- `make test` — run all tests (domain unit tests + integration)
- `make lint` — run linter and format check
- `make run` — start the application locally

## Architecture
- Business rules live in `/internal/domain` and `/internal/app`.
- Ports (interfaces) are defined in `/internal/app/ports.go`.
- Adapters in `/internal/adapters` must stay thin.
- The composition root is `/cmd/assetcap/main.go`.

## Change workflow
1. Identify the use case in `/internal/app`.
2. Implement domain/app changes first, with unit tests.
3. Only then update adapters and wiring.
4. Run `make test` and `make lint` before finishing.

## Testing rules
- Domain changes require unit tests in `/internal/domain`.
- Use-case changes require tests in `/internal/app`.
- Adapter changes require integration tests only when necessary.

## Forbidden
- Do not commit secrets or `.env` files.
- Do not add dependencies to `/internal/domain` on any adapter package.
- Do not modify production infrastructure without explicit instruction.

Slide 16

Slide 16 text

PROMPTS That Work in Agent-Friendly Repos
1. Identify the use case: Find the use case that owns this behavior.
2. Domain/app changes first: Implement with unit tests before touching anything else.
3. Update adapters and wiring: Only after domain logic is solid and tested.
4. Run golden commands: No exceptions.
5. Quality gates pre-push: Block breaking changes or quality degradation.
6. Summarize the diff: Explain why each change belongs in its layer.

Slide 17

Slide 17 text

A PRAGMATIC DEFINITION
A codebase is agent-friendly when it gives an AI agent enough structure and feedback to make correct changes without constant human interpretation.
STRUCTURE MEANS
- Clear boundaries where logic belongs
- Minimal valid entrypoints for changes
- A small number of standard commands
FEEDBACK MEANS
- Fast tests for domain behavior
- Predictable lint and type checks
- CI parity with local runs

Slide 18

Slide 18 text

CONCLUSIONS The temptation is to treat agent productivity as a tooling story. It is not. Tooling matters, but the repo decides whether your workflow is stable. Once the codebase was optimized for agent comprehension and verification, the "AI productivity boost" stopped being a demo trick and became a repeatable outcome. Not because the agent got smarter. Because the codebase stopped being vague.

Slide 19

Slide 19 text

Thank you

Slide 20

Slide 20 text

HOW AGENTS BEHAVE IN LAYERED ARCHITECTURES
In theory, domain logic belongs in services. In practice, services become orchestration plus random business rules, repositories accumulate decision-making, and controllers collect "just this one special case" logic.
Typical agent path: CONTROLLER → FIND SERVICE → ADD METHOD → CHANGE REPO → SPRINKLE VALIDATION

Slide 21

Slide 21 text

HOW AGENTS BEHAVE IN LAYERED ARCHITECTURES That produces code that compiles and passes superficial tests, but is hard to reason about — because the rules for where logic belongs are not explicit. If your examples are inconsistent, you get inconsistent output. Worse, layered architectures often require heavy integration tests, making the agent's iteration loop expensive.

Slide 22

Slide 22 text

HOW AGENTS BEHAVE IN HEXAGONAL ARCHITECTURES
Hexagonal architecture changes the work surface. Instead of asking "which layer should this go in," you ask "is this domain behavior, or is it an adapter concern?"
1. LOCATE USE CASE: Find the use case that owns the behavior.
2. EDIT DOMAIN FILES: Modify domain logic in a small, focused file set.
3. EXPOSE PORT: Add new I/O behind a port interface.
4. IMPLEMENT ADAPTER: Implement the adapter separately from the domain.
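The "new I/O behind a port" step can be sketched in Go. Everything here (`CapexReport`, `ReportNotifier`, `ConsoleNotifier`) is a hypothetical example, not code from the talk:

```go
package main

import "fmt"

// A domain value produced by some use case.
type CapexReport struct{ Total int }

// EXPOSE PORT: the new I/O need (publishing a report) becomes an interface.
type ReportNotifier interface {
	Notify(r CapexReport) error
}

// The use case orchestrates against the port and knows no I/O details.
func PublishReport(total int, n ReportNotifier) error {
	return n.Notify(CapexReport{Total: total})
}

// IMPLEMENT ADAPTER: a thin, separate implementation of the port.
type ConsoleNotifier struct{}

func (ConsoleNotifier) Notify(r CapexReport) error {
	fmt.Printf("capex total: %d\n", r.Total)
	return nil
}

func main() {
	// prints: capex total: 42
	_ = PublishReport(42, ConsoleNotifier{})
}
```

Swapping `ConsoleNotifier` for, say, an email or Slack adapter requires no change to `PublishReport`, which is exactly what keeps agent diffs small.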

Slide 23

Slide 23 text

HOW AGENTS BEHAVE IN HEXAGONAL ARCHITECTURES That yields smaller diffs with clearer intent. The domain core is testable without infrastructure, giving agents crisp failure signals and fast iteration loops. You can instruct an agent in a single sentence: "When you change business rules, change the domain and use case first. Adapters should be thin."

Slide 24

Slide 24 text

A SMALL EXAMPLE: CLASSIFICATION RULES
Building assetcap to classify Jira issues as capitalizable vs. non-capitalizable based on labels and issue type:

LAYERED APPROACH ❌
The agent implements classification logic inside the Jira client adapter or persistence layer — where the data is shaped. The code works, but classification is now coupled to Jira.

HEXAGONAL APPROACH ✅
Classification is a domain policy. The Jira adapter maps issues into a domain representation; the policy decides. Tests run in milliseconds with no Jira client, no HTTP mocks, no database. That short loop is a force multiplier for agents — they iterate quickly and get crisp failure signals.

func TestClassificationPolicy(t *testing.T) {
	policy := NewClassificationPolicy()
	issue := Issue{Type: "Story", Labels: []string{"platform", "capex"}}
	got := policy.Classify(issue)
	if got != Capitalizable {
		t.Fatalf("expected Capitalizable, got %v", got)
	}
}
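One possible implementation behind the slide's test. The concrete rule here (the `capex` label on Stories or Tasks) is invented for illustration; the point is that the policy is a pure domain type with no Jira dependency:

```go
package main

import "fmt"

// Classification is the domain's answer; the enum values are illustrative.
type Classification int

const (
	NonCapitalizable Classification = iota
	Capitalizable
)

// Issue is the domain representation the Jira adapter maps into.
type Issue struct {
	Type   string
	Labels []string
}

type ClassificationPolicy struct{}

func NewClassificationPolicy() ClassificationPolicy { return ClassificationPolicy{} }

// Classify is a pure function of the issue: no HTTP, no database, no mocks.
func (ClassificationPolicy) Classify(i Issue) Classification {
	if i.Type != "Story" && i.Type != "Task" {
		return NonCapitalizable
	}
	for _, l := range i.Labels {
		if l == "capex" {
			return Capitalizable
		}
	}
	return NonCapitalizable
}

func main() {
	p := NewClassificationPolicy()
	got := p.Classify(Issue{Type: "Story", Labels: []string{"platform", "capex"}})
	fmt.Println(got == Capitalizable) // prints: true
}
```

Because the policy takes plain values, the test on the slide runs in milliseconds, and an agent can iterate on the rule without touching any adapter.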

Slide 25

Slide 25 text

PICK ONE... THAT MAKES IT ALL EASY
Hexagonal is not the only answer. Layered architectures can be perfectly agent-friendly — with discipline on two fronts:
1. MAKE BOUNDARIES ENFORCEABLE: If your service layer can import repositories directly, and repositories can call external services, you have a dependency soup. Agents will swim in it and bring back whatever they catch.
2. MAKE YOUR DOMAIN TESTABLE WITHOUT INFRASTRUCTURE: If domain behavior requires spinning up a database, your agent loop will be expensive and noisy. You can still succeed, but you will spend more time.
A layered architecture that behaves well with agents tends to look more like "hexagonal in practice" anyway: domain rules isolated, adapters thin, wiring at the edges. At that point, the debate becomes mostly about naming and packaging, not capability.