How do agents learn?

I build instrumented agent systems and study how they learn, how faithfully they reason, and what it takes to monitor them in practice.

The research agenda already has early evidence of principled agentic learning over context: agents instrumented with qortex learn which context components matter for which queries, and update from feedback in real time.
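One way to read "learning which context components matter, updated from feedback" is a per-query-type multiplicative-weights scheme. The sketch below is hypothetical (the class and method names are not the qortex API): each query type keeps a weight per context component, selection takes the top-weighted components, and scalar feedback scales the responsible weight.

```python
# Hypothetical sketch, not the qortex API: learn which context
# components matter for which query types via multiplicative weights.
import math
from collections import defaultdict

class ContextSelector:
    def __init__(self, components, lr=0.5):
        self.lr = lr
        # one weight vector per query type, learned online
        self.weights = defaultdict(lambda: {c: 1.0 for c in components})

    def select(self, query_type, k=2):
        # serve the k components currently believed most useful
        w = self.weights[query_type]
        return sorted(w, key=w.get, reverse=True)[:k]

    def feedback(self, query_type, component, reward):
        # reward in [-1, 1]: positive if the component helped the answer
        self.weights[query_type][component] *= math.exp(self.lr * reward)

sel = ContextSelector(["docs", "code", "history", "schema"])
sel.feedback("sql", "schema", 1.0)
sel.feedback("sql", "history", -0.5)
print(sel.select("sql", k=2))  # → ['schema', 'docs']
```

The exponential update is the standard Hedge/Exp3-style rule: components that keep earning positive feedback dominate selection, and a single bad outcome only dampens, never zeroes, a weight.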

Next up: generalize this to parameterizing the token budget itself. An agent that learns both how many resources to use and which ones to use is what we call a minimum viable agent.
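A minimal sketch of what a learned token budget could look like, under assumptions of my own (discrete budget tiers, binary success feedback, a fixed quality bar; none of this is from the project code): track success per tier online and serve the smallest tier that still clears the bar.

```python
# Hypothetical sketch: a token budget as a learned parameter. Serve
# the smallest budget tier whose observed success rate clears a bar.
from collections import defaultdict

class BudgetLearner:
    def __init__(self, tiers=(512, 2048, 8192), bar=0.8, min_trials=5):
        self.tiers = sorted(tiers)
        self.bar = bar
        self.min_trials = min_trials
        self.stats = defaultdict(lambda: [0, 0])  # tier -> [successes, trials]

    def choose(self):
        # smallest tier with enough evidence and a success rate >= bar;
        # fall back to the largest tier while evidence is thin
        for t in self.tiers:
            s, n = self.stats[t]
            if n >= self.min_trials and s / n >= self.bar:
                return t
        return self.tiers[-1]

    def record(self, tier, success):
        self.stats[tier][0] += int(success)
        self.stats[tier][1] += 1

bl = BudgetLearner()
for _ in range(5):
    bl.record(512, True)
print(bl.choose())  # → 512, the smallest tier, after five successes there
```

Preferring the cheapest tier that still works is one concrete reading of "minimum viable": the agent spends more only when the evidence says the small budget fails.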

Six packages that give agents a knowledge graph, a feedback loop, and the instrumentation to measure whether either one helps.

01 qortex
v0.7.7
02 qortex-observe
Functional
03 Vindler
Active
04 bilrost
v1.1.0
05 cadence
Functional
06 interoception
Early

Architecture + hypotheses →

No one cares about research tooling unless you can show them value in one plot and six bullet points. These ship to users and stress-test the stack in the dumb ways prod always finds to break things.

01 Interlinear
Live
02 Swae OS
Live
03 LinWheel
Live
04 ComfyUI MCP
Live

All projects →