Humans Aren’t the Bottleneck — They’re the Load-Bearing Wall
Root Cause: Debugging the "humans are obsolete" narrative
There’s a recurring theme in AI discourse right now: coding agents are getting amazing at building things, but everything slows down because a single human can only keep so much context in their head. Multiple agents working on different parts of a project end up idle, waiting for the human to switch tabs, recall details from three conversations ago, and feed them the right information.
The conclusion: humans are becoming the choke point. And therefore — the next thing to be replaced.
I want to push back on this.
The Coordination Fallacy
Not every point of convergence is a bottleneck. Some are load-bearing walls.
Think about it in terms we already understand. A team lead or engineering manager is, by definition, the person everyone comes to with questions, the one coordinating across workstreams, the one holding context that spans multiple efforts. By the “bottleneck” logic, this person is slowing everyone down. The obvious solution? Remove them.
We’ve seen this movie before.
Google Tried This. It Failed in Months.
In 2002, Google’s founders decided engineers should be left to their own devices — managers were bureaucracy. They flattened the engineering organization and eliminated the engineering manager roles. It lasted a few months. Page and Brin found themselves buried under requests from across the organization, and engineers complained about the lack of support and guidance. Google not only reversed the decision, but later launched Project Oxygen — a multi-year research initiative that proved managers have a measurable positive impact on team performance.
The company that tried hardest to prove managers don’t matter ended up building one of the most rigorous frameworks for understanding why they do.
“In the Absence of Structure, You Get the Tyranny of Structurelessness”
Charity Majors has argued this from first principles: hierarchy isn’t something humans invented to dominate each other — it’s a property of self-organizing systems. It emerges because it reduces coordination costs and prevents information overload. A manager, in systems terms, is an abstraction layer — much like a well-designed module boundary in software.
Her thought experiment is telling: remove all the engineering managers from a medium-sized company. In the short term, probably not much changes. Most of what managers do isn’t day-to-day — it’s week-to-week, month-to-month. Hiring, training, retention, accountability. Without them, correction mechanisms weaken and informal power structures emerge — but with less clarity and less fairness than formal ones.
Now Apply This to AI Agents
The frustration people describe with multi-agent workflows is real. You’re managing multiple conversations in separate tabs. There’s no shared state, no way for one agent session to be aware of what another has established. The human is manually doing what should be infrastructure.
But here’s where the discourse takes a wrong turn: conflating a tooling problem with a human limitation.
What Can Actually Be Automated (And What Can’t)
Let’s be precise about this, because “coordination” isn’t one thing.
The mechanical layer — routing information between agents, maintaining shared state, detecting when two workstreams touch the same resource, flagging dependency conflicts — this is infrastructure work. It’s rule-based, high-volume, and currently done by humans switching tabs. This should absolutely be automated. It’s a genuine product opportunity, and anyone building multi-agent tooling should be solving this yesterday.
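To make the mechanical layer concrete, here's a toy sketch of the kind of rule-based coordination that shouldn't require a human switching tabs: agents register which resources they intend to touch, and overlaps get flagged the moment they occur. Everything here — the class, the resource names, the agent ids — is illustrative, not any real multi-agent framework's API.

```python
# Minimal sketch of mechanical coordination: track which agent sessions
# are touching which resources, and surface overlaps immediately.
from collections import defaultdict

class ResourceCoordinator:
    def __init__(self):
        self._claims = defaultdict(set)  # resource -> set of agent ids

    def claim(self, agent_id: str, resource: str) -> list[str]:
        """Register an agent's intent to modify a resource.

        Returns any other agents already touching it, so the conflict
        is surfaced now instead of discovered at integration.
        """
        conflicts = sorted(self._claims[resource] - {agent_id})
        self._claims[resource].add(agent_id)
        return conflicts

    def release(self, agent_id: str, resource: str) -> None:
        """Drop a claim once the agent is done with the resource."""
        self._claims[resource].discard(agent_id)

coord = ResourceCoordinator()
coord.claim("agent-a", "db/schema/users")               # no conflict yet
conflicts = coord.claim("agent-b", "db/schema/users")   # -> ["agent-a"]
```

Note what this sketch does and doesn't do: it detects the overlap, which is pure bookkeeping — deciding what to do about the overlap is the judgment layer, below.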
The judgment layer — an orchestrator agent can detect that Agent A changed a database schema that Agent B depends on. But deciding whether to roll back A’s change, update B’s assumptions, or rethink the whole approach requires understanding the why behind both workstreams: the business context, the tradeoffs between shipping fast and getting it right, what the customer actually needs. This is context-dependent in ways that go far beyond the codebase.
The accountability layer — who decides the product should go in direction X instead of Y? Who takes responsibility when the system of agents produces something that technically works but strategically misses the point? You can delegate execution, but you can’t delegate ownership without someone to delegate to. This is one of Majors’ key arguments about management as well: one of its essential functions is the ability to correct course and make calls that someone has to own.
The people calling humans “the bottleneck” are mostly frustrated by the mechanical layer — the tab-switching, the context re-loading, the manual information routing. And they’re right that it’s painful. But the leap from “this mechanical coordination is tedious” to “therefore remove humans from the loop” skips over the two layers where the actual hard work lives.
The Real Failure Mode Isn’t Slowness — It’s Silent Divergence
Here’s what I’ve observed in practice: the dangerous failure mode with multiple agents isn’t that they block each other. It’s that they silently invalidate each other. Agent A makes an architectural assumption. Agent B makes a different one. Neither knows about the other. Both produce working code. You end up with two internally consistent pieces that are fundamentally incompatible — and you don’t discover this until integration, when the cost of fixing it has multiplied.
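Part of this is tooling: the contradiction between Agent A's and Agent B's assumptions is only "silent" because nothing recorded them. A hypothetical sketch — all names and values invented for illustration — of agents declaring their assumptions into a shared registry, so a direct contradiction is flagged at declaration time rather than at integration:

```python
# Toy shared registry: agents declare the assumptions they build on,
# and a direct contradiction is reported the moment it appears.
class AssumptionRegistry:
    def __init__(self):
        self._assumptions = {}  # key -> (agent_id, value)

    def declare(self, agent_id, key, value):
        """Record an assumption; return a conflict description, or None."""
        if key in self._assumptions:
            other_agent, other_value = self._assumptions[key]
            if other_agent != agent_id and other_value != value:
                return (f"{agent_id} assumes {key}={value!r}, "
                        f"but {other_agent} assumes {key}={other_value!r}")
        self._assumptions[key] = (agent_id, value)
        return None

reg = AssumptionRegistry()
reg.declare("agent-a", "user_id_type", "uuid")           # recorded
conflict = reg.declare("agent-b", "user_id_type", "int") # flagged now
```

Even this only catches assumptions that agents think to declare, in terms precise enough to compare. The deeper divergences — incompatible architectural premises that never reduce to a shared key — are exactly what the human's cross-workstream mental model exists to catch.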
A human coordinator catches this not by being faster, but by holding a mental model of the system that spans all the workstreams. This is active, interpretive work. The human is the one who knows that the change Agent A is making will break the assumptions Agent B is working under. They’re the one who can say “stop, this whole approach is wrong” before three agents spend an hour building on a flawed premise.
This isn’t a bottleneck. This is where coherence comes from.
“Bottleneck” Is the Wrong Metaphor
A bottleneck implies something passive — a narrow pipe that restricts flow by existing. But what humans do in multi-agent workflows is active: interpreting, deciding, synthesizing, and routing. They’re maintaining the system’s coherence under pressure.
A better frame: the human is the loss function. They’re the thing that defines what “correct” means across the whole system, not just within any single agent’s context window. Without that function, you get agents that are individually productive and collectively incoherent.
Or if you prefer a less technical metaphor: the human is the conductor of an orchestra. The musicians are the ones making the music. The conductor doesn’t play an instrument. If you measure “notes played per minute,” the conductor looks like dead weight. But their job was never to play notes — it’s to ensure all the notes add up to music instead of noise.
The Actual Path Forward
To be fair, not everyone making the “bottleneck” argument believes humans should disappear. Many are arguing that coordination itself will be externalized into tooling or meta-agents. And they’re partially right — the mechanical layer of coordination absolutely should be automated.
What we actually need:
Shared context layers across agent sessions, so the human doesn’t have to manually re-establish what each agent knows. Dependency detection that surfaces conflicts before they compound. Better dashboards for multi-agent oversight — something that lets a human see the state of all workstreams at once instead of context-switching between tabs.
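To sketch what that oversight surface might look like — field names and statuses are assumptions for the example, not any real tool's schema — here is one possible shape: every workstream reduced to a single status board the human can scan at a glance, instead of a tab per conversation.

```python
# Hypothetical multi-agent status board: one snapshot of every
# workstream's state and open questions, rendered for a human.
from dataclasses import dataclass, field

@dataclass
class Workstream:
    agent_id: str
    task: str
    status: str = "running"  # e.g. running / blocked / done
    open_questions: list = field(default_factory=list)

def snapshot(workstreams):
    """Render all workstreams as one human-readable status board."""
    lines = []
    for ws in workstreams:
        line = f"[{ws.status:>7}] {ws.agent_id}: {ws.task}"
        if ws.open_questions:
            line += f"  (needs human: {'; '.join(ws.open_questions)})"
        lines.append(line)
    return "\n".join(lines)

board = snapshot([
    Workstream("agent-a", "migrate user schema", "blocked",
               ["roll back or update dependents?"]),
    Workstream("agent-b", "build export endpoint"),
])
```

The design choice worth noticing: the board doesn't make any decisions. It collects the questions that need judgment and routes them to the person who owns the answer.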
This is an infrastructure problem, and it’s solvable. But notice what all of these tools do: they don’t remove the human from the coordination role. They make the human better at it. They automate the mechanical substrate so the human can focus on the judgment and accountability layers — which is where their actual value lies.
The Unsexy Truth
There’s a reason the “humans are the bottleneck, let’s replace them” take gets engagement. It’s dramatic. It sounds like the future. It feeds the narrative that AI progress will simply route around every human limitation.
The boring reality is that coordination is genuinely hard, context management is genuinely valuable, and the person holding the big picture isn’t slowing things down — they’re the reason things cohere at all. Again and again, attempts to eliminate coordination roles — whether in human organizations or in multi-agent systems — end up rediscovering them under new names.
The right response to “the conductor can’t keep up with the orchestra” isn’t to fire the conductor. It’s to give them a better score — and maybe a few fewer pages to turn by hand.
Root cause identified. Two contributing factors: (1) inadequate tooling forces humans to do mechanical coordination work that should be infrastructure, and (2) the ever-reliable hype cycle turns a solvable engineering problem into a scary “humans are obsolete” narrative. Remediation: build better multi-agent tooling, and stop diagnosing things as replaceable before you’ve understood what they do. RC 👋


