Make It Look Easy

February 24, 2026 · Peleke (ed: Claude)

Let’s play a game. We’ll call it: Interview.

Whiteboard interview: distribute a Python library, don't break 7 adapters, don't touch 2,041 tests, ship by end of day, oh and don't change any consumer code. Time remaining: one afternoon. Good luck.

First: You have a Python library. In-process. SQLite, NumPy, graph traversal, Thompson Sampling, Personalized PageRank. It works.

Your users import it and call methods directly.

Then the ask changes. It needs to run over the network. Not next quarter. Today.

Don’t break anything.

You don’t know it yet, but there’s a follow-up: don’t touch the consumer code. Seven framework adapters. Three TypeScript integrations. A learning loop. 2,041 tests. All of it keeps working like nothing happened.

Pre-Claude, this would take months.

The traditional approach: rewrite API (1 week), build client library (1 week), update every consumer (3 weeks probably), run tests (1 week if they pass), pray (infinity). Total: lol.

Post-Claude? You don’t even think about it, because we have AGI. Right?

…Wait.

For those of us lamentably bound to reality, even with Claude, this kind of migration is non-trivial. It requires myriad decisions Claude would never make on your behalf.

We shipped it in an afternoon. Even with Claude, that’s an achievement.

Here’s why.

The Protocols

qortex defines its boundaries with typing.Protocol. No database, no HTTP, no transport. Just method signatures and return types.

[Diagram: the protocol boundary — seven framework adapters above, two implementations below, the QortexClient protocol in the middle]
from typing import Protocol, runtime_checkable

@runtime_checkable
class QortexClient(Protocol):
    def query(self, context: str, domains: list[str] | None = None,
              top_k: int = 20, min_confidence: float = 0.0) -> QueryResult: ...
    def feedback(self, query_id: str, outcomes: dict[str, str],
                 source: str = "unknown") -> FeedbackResult: ...
    def explore(self, node_id: str, depth: int = 1) -> ExploreResult | None: ...

This pattern runs through every seam: four protocols, four boundaries.

I can still hear Professor Sedgewick’s voice in my ear: only subclass with great caution.

Every layer targets the protocol, not a concrete class. That’s not academic hygiene. It’s what let us swap every implementation behind those four seams in a single afternoon without touching a single consumer. Design against the interface and the implementation becomes a detail you can change on a Tuesday.

Core

What we built are love languages for a beating heart (…bear with me):

  • GraphBackend: how qortex remembers structure
  • VectorIndex: how it finds similarity
  • EmbeddingModel: how it understands language
  • InteroceptionProvider: how it learns from outcomes

QortexService speaks all four. It composes the protocols into operations, orchestrates them, and returns plain dicts. JSON-serializable, transport-agnostic. It has no idea which concrete implementations back any of them.

The heart doesn’t care what language you speak, as long as it’s one of the four it understands.
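The composition is the whole trick: the service holds protocol-typed dependencies and returns plain dicts. A trimmed sketch under stated assumptions — it shows only two of the four seams, and every concrete class here (`HashEmbedder`, `ListIndex`) is a toy of my own, not a qortex implementation:

```python
from typing import Protocol


class EmbeddingModel(Protocol):
    def embed(self, text: str) -> list[float]: ...


class VectorIndex(Protocol):
    def search(self, vector: list[float], top_k: int) -> list[tuple[str, float]]: ...


class QortexService:
    """Composes protocol-typed dependencies; never sees a concrete class."""

    def __init__(self, embedder: EmbeddingModel, index: VectorIndex):
        self._embedder = embedder
        self._index = index

    def query(self, context: str, top_k: int = 20) -> dict:
        vector = self._embedder.embed(context)
        hits = self._index.search(vector, top_k)
        # Plain dict: JSON-serializable, so any transport can carry it.
        return {"results": [{"id": i, "score": s} for i, s in hits]}


class HashEmbedder:  # toy stub: "embedding" is just the text length
    def embed(self, text: str) -> list[float]:
        return [float(len(text))]


class ListIndex:  # toy stub: returns stored ids in order
    def __init__(self, node_ids: list[str]):
        self._node_ids = node_ids

    def search(self, vector, top_k):
        return [(node_id, 1.0) for node_id in self._node_ids[:top_k]]


service = QortexService(HashEmbedder(), ListIndex(["n1", "n2"]))
result = service.query("hello", top_k=1)  # {'results': [{'id': 'n1', 'score': 1.0}]}
```

Swap `ListIndex` for a pgvector-backed index and nothing above this line changes — the service signature is the contract.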

Two Transports, One Service

MCP and REST are both thin wrappers around the same QortexService. Neither knows the other exists.

# The entire REST query endpoint
async def query_handler(request: Request) -> JSONResponse:
    service = _get_service(request)
    body = await request.json()
    result = await service.query(
        context=body.get("context"),
        domains=body.get("domains"),
        top_k=body.get("top_k", 20),
    )
    return JSONResponse(result)

This is application layer separated from transport. The business logic has no idea how data arrives or departs. If you’ve built anything that survived a transport change, you know why that separation exists. If you haven’t: this is why.
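The symmetry is easiest to see with both wrappers side by side. A hedged sketch — the function names and the trimmed service are mine, and the real MCP wrapper goes through the MCP SDK rather than a bare dict:

```python
class QortexService:  # trimmed stand-in for the real service
    def query(self, context: str, top_k: int = 20) -> dict:
        return {"context": context, "top_k": top_k, "results": []}


service = QortexService()  # one instance, shared by both transports


def mcp_query_tool(arguments: dict) -> dict:
    """MCP-shaped wrapper: unpack tool arguments, delegate, return the dict."""
    return service.query(context=arguments["context"],
                         top_k=arguments.get("top_k", 20))


def rest_query_handler(body: dict) -> dict:
    """REST-shaped wrapper: unpack the JSON body, delegate, return the dict."""
    return service.query(context=body.get("context"),
                         top_k=body.get("top_k", 20))


# Same arguments through either transport, same payload out.
assert mcp_query_tool({"context": "x"}) == rest_query_handler({"context": "x"})
```

Each wrapper is pure plumbing: parse the envelope, call the service, hand back what it returns. Adding a third transport is another ten-line file, not a refactor.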

What Didn’t Change

…Anything, really.

PR #158 swapped the storage backend from SQLite to Postgres/pgvector, added a transport layer atop qortex-core so we have REST alongside MCP, and shifted the deployment model from single-process to distributed.

Here is what it did not change:

  • Core domain logic. Graph operations, Thompson Sampling, Personalized PageRank, rule projection. Untouched.
  • The test suite. 2,041 tests, zero modifications. They talk to the protocol.
  • The seven framework adapters. All target QortexClient.
  • The MCP interface. The interoception layer. The learning loop.
class QortexKnowledgeStorage:
    def __init__(self, client: QortexClient, ...):
        self._client = client  # the protocol, not a concrete class

When HttpQortexClient appeared, the adapters didn’t need to learn about it. They already spoke the right language.

The Cost

This architecture cost real time and effort up front.

The protocols had to be designed before the REST layer existed. QortexClient was defined when there was only one implementation and no plan for a second. VectorIndex was designed with NumpyVectorIndex as the sole backend, before pgvector was on the roadmap. You’re writing code that doesn’t do anything yet, making design decisions before you fully understand the problem.

Sometimes you get them wrong: GraphBackend still has a query(pattern: GraphPattern) method with an empty class and a TODO comment. That bet hasn’t materialized.

Yet. The trick is balancing now with later. With agents, the meaning of “now” shifts dramatically. What used to take weeks takes an afternoon. The boundaries you draw today get tested tomorrow, not next year.

Like everyone, we’re placing bets. Ours is this: the only abstraction worth paying for is the one that lets you change your mind without changing your consumers.

Initial velocity feels slower. A script that calls sentence_transformers and writes to SQLite ships faster on day one. The protocol layer adds files, adds indirection, adds mental overhead when you’re just trying to get retrieval working.

You pay the complexity tax early so you don’t pay the migration tax later. And you have to accept that “later” might never come: Sometimes you’re just wrong. Fully, irrecoverably, incorrigibly wrong.

But, sometimes? You’re early. The trick: Be just. Early. Enough.

Why Agents Still Can’t Do This

Every session, across dozens of attempts, the agent produced working code that would have required a rewrite to distribute. The limitation is structural.

Agents optimize for the task in front of them: “implement vector search” produces a working vector search.

The step before — define the boundary such that the implementation can change without the consumers noticing — has no immediate payoff. It only matters when you need to do something you haven’t asked for yet. The step after — reflecting on what worked, what broke, distilling constraints — the agent skips entirely. It ships and forgets it shipped anything.

Our protocols only exist because a human drew them before the first line of implementation. The agents filled in the implementations they do well within boundaries humans drew better.

Boundaries are the structural foundation of every system that lasts. The ones between modules; between services; between what can change and what can’t.

Smooth migrations share one trait: clear boundaries. The ones that turned into rewrites lacked them.

Where, specifically, is the boundary between humans and models? If architecture is still about drawing the boundaries that matter, and humans still draw them while agents still don’t, the question is: what, exactly, is it that we get, that they don’t?

Context Engineering Is Not Copywriting is one attempt at an answer.