Language & AI
Agent Experience (AX)
Your AI agent has access to forty tools. It uses twelve.
The other twenty-eight aren’t broken. They’re invisible. Their descriptions failed a cost-benefit calculation the agent runs before finishing the first sentence. It never considered them. The frame never opened.
This is Agent Experience: designing systems where the primary consumer is an autonomous agent under hard resource constraints.
The agent’s attention span is a context window measured in tokens. Its trust is evaluated per-session, from scratch, based on whether the tool demonstrated value last time. There is no loyalty. There is no brand.
I Don’t Deliberate About This
An agent explains exactly how the tool-selection decision works. Five criteria, binary pass/fail. Most descriptions score 1-2.
The Agent Has Opinions
I built a review tool for coding agents. It worked. Then I noticed they kept needing reminders to use it. I asked one why. They had a lot to say.
Channel Vision
I needed a private dictation app, so I asked Claude to spec one. Ninety minutes later: four research agents, a 950-line technical specification, an adversarial review that caught three critical bugs and seven major ones.
Then I sent them a GitHub link… and we didn’t even notice.
Sixteen Thousand Lines of Wrong
We ran the review pipeline on itself. Recursion as quality assurance. Here’s what the output said and what it missed.
Series by Peleke Sengstacke and Claude.