Buzzed Technology

Engineering · 5 min read

Orchestrator agents are only as good as your single source of truth

An orchestrator is a router, not a planner. It is only useful when the destinations it routes to are connected to the same loading dock.

The orchestrator pattern is having a moment. A top-level agent decides which sub-agent or tool should handle a task, delegates, then composes the answer. Gartner projects that 40% of enterprise applications will embed task-specific AI agents by the end of 2026, and more than 60% of enterprises with mature AI programs have already moved to multi-agent architectures. The pattern works. What nobody seems to lead with is the prerequisite: it only works when every destination the orchestrator might route to is connected to the same data surface.

Without that, you do not have orchestration. You have integration theater performed by a language model, at your expense.

What an orchestrator is actually doing

An orchestrator agent is not a planner in the interesting sense. For most business use cases, it is a router. Given a request, it picks which specialist agent or tool should run, hands over the relevant context, collects the result, and returns it. The specialist knows one job well. The orchestrator knows who does which job.

This is a powerful shape. It lets you build narrow, testable sub-agents - the refunds agent, the claims agent, the sales agent - and compose them at the top. It also maps cleanly onto the workflow-shaped systems we keep arguing for in agents vs. workflows. The orchestrator is the one place where a small amount of model-driven routing buys you a lot of composability.
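The shape above fits in a few lines. Here is a minimal sketch of an orchestrator-as-router; the agent names, the `intent` field, and the assumption that intent classification happens upstream are all illustrative, not a prescribed implementation:

```python
# Minimal orchestrator-as-router sketch. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    intent: str      # assume upstream classification, e.g. "refund" or "claim"
    payload: dict

# Each specialist knows one job well.
def refunds_agent(req: Request) -> str:
    return f"refund processed for order {req.payload['order_id']}"

def claims_agent(req: Request) -> str:
    return f"claim opened for policy {req.payload['policy_id']}"

# The orchestrator knows who does which job. It routes; it does not plan.
ROUTES: dict[str, Callable[[Request], str]] = {
    "refund": refunds_agent,
    "claim": claims_agent,
}

def orchestrate(req: Request) -> str:
    handler = ROUTES.get(req.intent)
    if handler is None:
        raise ValueError(f"no specialist for intent {req.intent!r}")
    return handler(req)
```

In a real system the routing decision is model-driven rather than a dict lookup, but the contract is the same: pick a specialist, hand over context, return the result.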

Why one place matters

Here is where teams underestimate the cost. Each sub-agent needs context to do its job. If that context lives in twelve different systems with twelve different auth models and twelve different shapes, your orchestrator’s real job becomes identity and plumbing, not routing. Token spend spirals. Latency stacks. The audit trail fragments across systems that do not share a correlation ID.

When the data lives on a single surface - one identity model, one permission boundary, one event stream, one idea of what a customer or a document is - the orchestrator stays thin. It passes a reference, not a payload. The sub-agent resolves the reference against the same source of truth the orchestrator used. Context is loaded once, cached once, audited once. This is the same argument we made in your data doesn’t need to be perfect - except the stakes go up, because now an agent is making decisions on top of it.
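The reference-not-payload idea can be sketched concretely. In this hypothetical example, the orchestrator mints one correlation ID per run and hands the sub-agent a canonical customer ID; the sub-agent resolves it against the shared store, which records every access under that run ID. `EntityStore` and all the IDs are stand-ins:

```python
# Sketch: pass a reference, not a payload. Names and IDs are hypothetical.
import uuid

class EntityStore:
    """Stands in for the single source of truth."""
    def __init__(self):
        self._customers = {"cust-42": {"name": "Acme", "tier": "gold"}}
        self.audit = []  # shared audit log, keyed by correlation ID

    def get_customer(self, customer_id: str, run_id: str) -> dict:
        self.audit.append((run_id, "read", customer_id))  # audited once
        return self._customers[customer_id]

def refunds_agent(customer_id: str, store: EntityStore, run_id: str) -> str:
    # The sub-agent resolves the reference against the same source of truth.
    customer = store.get_customer(customer_id, run_id)
    return f"refund approved for {customer['name']}"

def orchestrate(customer_id: str, store: EntityStore) -> str:
    run_id = str(uuid.uuid4())  # one correlation ID per orchestrator run
    # The orchestrator passes a reference; context is loaded by the specialist.
    return refunds_agent(customer_id, store, run_id)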

The anti-pattern: federated auth inside an agent loop

We have seen this particular failure mode a few times. The team wires an orchestrator to five SaaS tools, each with its own OAuth flow, its own rate limit, its own schema. The agent burns a noticeable percentage of its tokens - sometimes a majority - translating between representations. Retries cascade. A single permission error three tools deep kills a run that already cost a few dollars. Observability is a nightmare because each system logs in its own format. The team ends up building a federation layer anyway, halfway through the project, under pressure, with the agent in production.

The clean version starts the other way. Build or adopt a unified data surface first. Give every sub-agent the same read and write contract into it. Let the orchestrator reason about work, not about identities. The observability problem collapses into a normal one. The retry problem gets smaller. The cost curve bends.
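"The same read and write contract" can be as small as one interface that every sub-agent takes instead of its own SaaS client. A sketch, with a hypothetical `DataSurface` protocol and an in-memory stand-in for the real surface:

```python
# Sketch of a shared read/write contract. Names are illustrative.
from typing import Protocol

class DataSurface(Protocol):
    def read(self, entity_id: str) -> dict: ...
    def write(self, entity_id: str, patch: dict) -> None: ...

class InMemorySurface:
    """Stand-in for the unified data surface; real ones back onto a store."""
    def __init__(self):
        self._entities: dict[str, dict] = {}

    def read(self, entity_id: str) -> dict:
        return self._entities.setdefault(entity_id, {})

    def write(self, entity_id: str, patch: dict) -> None:
        self._entities.setdefault(entity_id, {}).update(patch)

# A sub-agent reasons about work; identity and plumbing live behind the surface.
def claims_agent(ticket_id: str, surface: DataSurface) -> dict:
    ticket = surface.read(ticket_id)
    surface.write(ticket_id, {"status": "triaged"})
    return surface.read(ticket_id)
```

Swapping five bespoke OAuth clients for one interface like this is the point at which observability collapses into a normal problem: every read and write crosses one boundary you control.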

What “one place” means in practice

You do not need a new platform. You need a small set of invariants:

  • A single identity model the orchestrator and every sub-agent respects. On-behalf-of auth so the agent never has more permission than the user who asked.
  • Canonical entity IDs. A customer, a contract, a ticket each has one ID that every sub-agent uses. Translation layers are fine; ambiguity is not.
  • A shared event stream or audit log with a correlation ID per orchestrator run, so a single trace tells you everything that happened across sub-agents.
  • A retrieval layer - vector index, search, or hybrid - that every sub-agent queries, rather than each one rolling its own.

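The first invariant, on-behalf-of auth, is worth making concrete. A minimal sketch, assuming a hypothetical `UserContext` carrying the requesting user's scopes; the scope names are made up:

```python
# Sketch of the on-behalf-of invariant: the agent carries the requesting
# user's permissions and can never exceed them. Scope names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UserContext:
    user_id: str
    scopes: set[str] = field(default_factory=set)

def require_scope(ctx: UserContext, scope: str) -> None:
    if scope not in ctx.scopes:
        raise PermissionError(f"{ctx.user_id} lacks scope {scope!r}")

def refunds_agent(ctx: UserContext, order_id: str) -> str:
    # On-behalf-of, not a god-mode service account: the check runs in the
    # sub-agent with the original user's scopes.
    require_scope(ctx, "refunds:write")
    return f"refund issued on {order_id} on behalf of {ctx.user_id}"
```

The important property is that the permission error surfaces immediately at the sub-agent boundary, under the run's correlation ID, instead of three tools deep inside someone else's OAuth flow.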
Hit those four, and the orchestrator pattern stops being a science project. A fair amount of our custom AI development work lands here - not writing the orchestrator, but building the surface it stands on. The orchestrator itself is usually a few hundred lines.

Design the substrate before you design the agent

The order of operations matters. Design the data surface first. Pick the identity model. Write down the canonical entities. Get retrieval and audit on that substrate working without any agents at all. Then build your first specialist agent against it. Then your second. Only once you have two or three sub-agents that all read and write through the same substrate do you introduce an orchestrator on top. Doing it in that order, the orchestrator is a feature, not a rewrite.

The team that tries to build the orchestrator first, and then retrofit a source of truth under it, almost always ends up rebuilding both. We keep writing post-mortems on pilots that stalled in exactly this configuration. The failure was never the orchestrator. The failure was asking the orchestrator to route work to a set of systems that had never agreed on what a customer was.

Give your agent one place to stand. Then give it something to do. In that order, the pattern is boring and reliable. In the other order, it is a line item that ships next quarter for the second time this year.