For a decade, an industry grew up around the premise that engineering could be managed like a factory. Count the pull requests. Measure the cycle time. Track the DORA metrics. The dashboards multiplied. The frameworks proliferated. Engineering Intelligence became a category — a real one, with real revenue, real customers, and real teams dedicated to measuring how fast software moved through a pipe.
The problem was never the measurement. The problem was the model.
A pull request is not a unit of work. It's a unit of output. Those are not the same thing. The work happens before the PR — in the thinking, the scoping, the conversation with a colleague, the twenty minutes spent understanding why the bug exists rather than just fixing the symptom. None of that shows up in a dashboard. The engineering intelligence industry agreed to pretend it didn't matter.
And for a while, the pretense was sustainable.
Then AI arrived. Not as a complication. As a landslide.
When a developer can generate ten pull requests in an afternoon, every metric the engineering intelligence industry was built on stops meaning anything. PR velocity is meaningless. Story points are meaningless. Cycle time is meaningless. You can hit every target on your dashboard while your codebase degrades, your engineers disengage, and your AI rollout quietly fails. The industry's response was to add AI metrics — token costs, code attribution percentages, agent traces correlated to commits. Still artifacts. Still not the work. Nobody is measuring whether your team is actually using AI well.
This is not a problem that better dashboards will fix.
The engineering intelligence category was built on a foundation that AI exposed as fiction: that the output is the work. It isn't. It never was. AI just made that fact impossible to ignore. The incumbents aren't failing because they built bad products. They built the right product for a model of engineering that no longer applies.
We started Maestro to get closer to the work. Not activity — the work itself. Code Impact and Review Impact read the actual diff: what changed, why it mattered, whether it solved a real problem or added debt. AI Narratives synthesize that into something a leader can act on: not "the team merged 14 PRs" but "the platform team completed the Kubernetes migration and resolved three critical performance bottlenecks." Semantic understanding, not metadata. We were already as close to the work as the tools of the time allowed.
Then the work shifted.
The work is a session now — a thread between an agent that executes and a human who directs. An engineer in 2026 doesn't live in their IDE. They live inside the session — framing problems, evaluating proposals, verifying outputs, deciding what to accept and what to push back on. The session is where the quality decisions get made. It's where the failures originate, and where the best engineers separate themselves from the rest.
Sessions have structure. They are observable. A 12-turn session where the engineer scoped the problem, challenged the agent's first answer, and ran the edge case is different from a 90-turn sprawl where nothing was verified and the engineer accepted outputs they couldn't explain.
Both produce a PR. Only one produces good software. Your velocity metrics can't tell the difference.
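The claim that sessions are observable can be made concrete. Here is a minimal sketch — the `Turn` and `Session` types and the `kind` labels are hypothetical illustrations, not Maestro's actual data model — of one signal a session-aware tool could compute: how often the human's turns challenge or verify the agent's output rather than simply accept it.

```python
from dataclasses import dataclass

# Hypothetical session model, for illustration only.
@dataclass
class Turn:
    role: str  # "human" or "agent"
    kind: str  # e.g. "scope", "propose", "challenge", "verify", "accept"

@dataclass
class Session:
    turns: list

    def verification_ratio(self) -> float:
        """Fraction of human turns that challenge or verify, vs. just accept."""
        human = [t for t in self.turns if t.role == "human"]
        if not human:
            return 0.0
        checked = [t for t in human if t.kind in {"challenge", "verify"}]
        return len(checked) / len(human)

# A tight session: the engineer scopes, challenges, verifies before accepting.
tight = Session([
    Turn("human", "scope"), Turn("agent", "propose"),
    Turn("human", "challenge"), Turn("agent", "propose"),
    Turn("human", "verify"), Turn("human", "accept"),
])

# A sprawl: the agent proposes, the human accepts, over and over.
sprawl = Session([Turn("agent", "propose"), Turn("human", "accept")] * 10)
```

Both sessions end in accepted output, but the ratio separates them cleanly — the kind of structural difference no PR-count dashboard can surface.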
There is a fear running through the engineering industry right now that the craft is dying. That the agent does the work and the human becomes a reviewer, a rubber-stamper, a manager of outputs they didn't produce and couldn't reproduce. It's a real fear. It's also wrong.
The craft of engineering isn't dying. It's being remade.
Your best engineers are already practicing it. They scope before they prompt. They challenge before they accept. They verify before they ship. They've stopped writing every line themselves and started doing something harder: thinking clearly enough, fast enough, to direct a system that will otherwise produce confident nonsense. That is craft. It is not the same craft as before, but it is not less.
And here is the part that should keep engineering leaders up at night: the gap between the engineers who have found this new craft and the ones who haven't is now the most important variable in your organization. It is wider than the gap between senior and junior ever was. It compounds faster. And every tool you currently own is blind to it.
Your best people are pulling away. You cannot see it happening. Your dashboards will show you the same PR counts, the same cycle times, the same green metrics — right up until the quality of your codebase tells you what your tools couldn't.
The session is where the new craft lives. It is where it can be seen, learned, and taught. That is what Maestro is built for.

