Someone in your organisation has already connected an AI agent to a live data source. Probably several people have. They've pointed Claude at HubSpot, asked it to pull numbers from Google Analytics, had it read a Jira board or reconcile figures across two systems. The data never moved. Nobody waited for a pipeline. Nobody asked the data team.
They did this because it was easy. And people will always go where the friction is least.
This is happening everywhere, across every function, in organisations of every size. AI agents can now query data wherever it lives. CRMs, finance tools, project trackers, spreadsheets, communication platforms, document stores. All in a single interaction. The protocols for connecting an agent to a live system are maturing fast. What used to be an engineering project is now a configuration step.
The result is the fastest adoption of a new data access pattern the industry has ever seen. It is also completely ungoverned.
The governance gap
When most questions went through the warehouse, semantic governance came bundled in. "Revenue" meant one thing because there was one place it was calculated. "Active customer" had a single definition because there was a single model that defined it. The governance worked because it sat on top of the only query path most people used.
When agents query sources directly, that coupling breaks.
A marketing manager connects an AI to HubSpot and asks: "How many active customers do we have?" A finance director connects the same model to Xero and asks the same question. They get different numbers. Not because either source is wrong, but because "active customer" means something different in each system. HubSpot counts anyone with an open deal. Xero counts anyone invoiced in the last 90 days. Both are reasonable interpretations. Neither is the company's actual definition.
Multiply this across dozens of data sources and hundreds of people asking questions. Conflicting numbers in the same meeting. No audit trail for how an answer was derived. Entity resolution failures where the same customer appears as three different records. Business logic applied inconsistently depending on which tool the AI happened to query. And a growing, corrosive distrust of AI-generated insights. Not because the AI is wrong, but because nobody can agree on what "right" looks like.
The wrong side of the door
The natural response is to expose your existing semantic layer to agents. Make it available as a tool the agent can call. Job done.
Except the agent doesn't have to use it.
An AI agent with access to multiple tools will choose the path that answers the question fastest. If the governed semantic layer is one tool among ten, there is nothing stopping the agent from calling the CRM directly, or querying a spreadsheet, or hitting an API that returns the same data without the governance. The semantic layer is available. It is not unavoidable.
This is the fundamental architectural problem. A semantic layer sitting on the other end of a tool call is something the agent can use. A governance layer sitting in front of the agent is something it cannot bypass. Every request passes through it. Every tool call is mediated by it. The governance is not one option among many. It is the only path to the data.
The difference is not cosmetic. It is the difference between a security guard standing inside one of ten unlocked buildings and a security guard standing at the only gate.
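To make the "only gate" idea concrete, here is a minimal sketch of the pattern in Python. None of these names come from SEAM itself; `GovernanceGate`, the `tools` dict and the role-based policy are all illustrative assumptions. The point is structural: the agent holds a reference to the gate, never to the connectors, so every call is checked and logged with no second path to the data.

```python
# Illustrative sketch only: names and policy shape are hypothetical,
# not SEAM's actual API.

AUDIT_LOG = []

class GovernanceGate:
    """Mediates every tool call. The agent is given the gate, never the
    underlying connectors, so there is no ungoverned path to the data."""

    def __init__(self, tools, policy):
        self._tools = tools    # real connectors, hidden behind the gate
        self._policy = policy  # e.g. {"hubspot": {"sales"}, "xero": {"finance"}}

    def call(self, caller_role, tool_name, query):
        allowed = self._policy.get(tool_name, set())
        if caller_role not in allowed:
            AUDIT_LOG.append((caller_role, tool_name, query, "denied"))
            raise PermissionError(f"{caller_role} may not query {tool_name}")
        AUDIT_LOG.append((caller_role, tool_name, query, "allowed"))
        return self._tools[tool_name](query)
```

Contrast this with the "semantic layer as one tool among ten" setup: there, the governed path is a method the agent may call; here, the gate's `call` is the only method it has.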
Centralise the intelligence, not the data
The insight is simple: decouple the governance layer from any single data source and make it the agent's interface to all of them.
Every source becomes an endpoint. The warehouse, the CRM, the finance system, the project tracker, the communication tools, the document stores. The governance layer sits in front of the agent and mediates access to all of them. It defines what things mean, which source to trust for which question and how entities relate across systems.
You don't need to centralise the data. You centralise the intelligence.
This means you can govern data that lives in twenty different systems without moving it. The warehouse remains an important endpoint for heavy aggregation, historical analysis and workloads that need physical co-location. But it's one source among many, not the place where meaning is defined. Meaning is defined in the intelligence layer: defined once, defined centrally and applied everywhere.
Introducing SEAM
That's what we built.
SEAM (Semantic Engine for Agent Mediation) is an intelligence layer that sits in front of the AI agent, mediating every query across every connected source. It knows what your terms mean, which source to trust, how entities map across systems and which definition applies in which context.
The user asks a question in natural language. They get a governed answer. They never see the plumbing.
Take that "active customer" problem. With SEAM, the definition is codified once:
- When Sales asks, the agent knows to use the pipeline definition from HubSpot.
- When Finance asks, it uses the invoicing definition from Xero.
- When the CEO asks without specifying context, it defaults to the canonical company definition from the operations hub.
Every answer carries an audit trail: which source was queried, which definition was applied, which hierarchy rule resolved the conflict. An analyst can inspect it. A compliance officer can verify it. The governance is invisible to the end user but fully transparent to anyone who needs to see the reasoning.
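The resolution rule above can be sketched as a small lookup that returns both the answer's source and its audit trail. This is a simplified illustration under assumed names (`DEFINITIONS`, `resolve`, the `ops_hub` source key), not SEAM's actual data model.

```python
# Hypothetical, simplified registry: one metric, codified once, resolved
# differently by requester context. Names are illustrative, not SEAM's API.

DEFINITIONS = {
    "active_customer": {
        "sales":   ("hubspot", "contacts with an open deal"),
        "finance": ("xero", "customers invoiced in the last 90 days"),
        "default": ("ops_hub", "canonical company definition"),
    }
}

def resolve(metric, context=None):
    """Return (source, definition, audit) for a metric in a given context.
    The audit record captures which rule fired and which source was used."""
    rules = DEFINITIONS[metric]
    key = context if context in rules else "default"
    source, definition = rules[key]
    audit = {"metric": metric, "context": key,
             "source": source, "definition": definition}
    return source, definition, audit
```

A Sales query resolves to HubSpot, a Finance query to Xero, and anything without a recognised context falls through to the canonical default; in every case the audit record is produced alongside the answer rather than reconstructed after the fact.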
And this goes well beyond structured data. An LLM pointed at a Google Tag Manager (GTM) container without context will see a wall of tags, triggers and variables with no idea which ones matter, which are legacy and which are misconfigured. SEAM bakes in the domain context that makes unstructured and semi-structured sources useful: what each resource is, who owns it, how it relates to the rest of the data landscape and what the agent should know before it tries to interpret what it finds. Slack channels, meeting notes, GTM containers, document stores. The intelligence layer makes the agent literate in your organisation's data, not just connected to it.
This isn't a glossary bolted onto a chatbot. SEAM carries the full reasoning context an AI agent needs to give correct, consistent answers: metric definitions, source hierarchies, entity resolution logic, business context and temporal versioning. It governs every data source your agents can reach, on equal terms.
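Entity resolution logic, one of the capabilities listed above, can be illustrated with a tiny sketch: the same customer appearing under three different record IDs across three systems is mapped to one canonical entity before any counting happens. The map, IDs and function names here are invented for illustration.

```python
# Hypothetical entity-resolution sketch: per-system record ids map to one
# canonical entity, so the same customer is counted once, not three times.

ENTITY_MAP = {
    ("hubspot", "contact_881"): "cust_001",
    ("xero", "C-4410"): "cust_001",
    ("jira", "ACME"): "cust_001",
}

def canonical_id(system, record_id):
    """Resolve a (system, record_id) pair to a canonical entity, if known."""
    return ENTITY_MAP.get((system, record_id))

def distinct_customers(records):
    """Count distinct canonical entities across raw records from any system.
    records: iterable of (system, record_id) pairs."""
    return {canonical_id(s, r) for s, r in records} - {None}
```

Without the map, three records look like three customers; with it, the agent answers "one" regardless of which systems the raw records came from.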
Where we are
SEAM is live. We've been running it internally at Measurelab across 11 connected systems: BigQuery, Google Analytics, Google Tag Manager, HubSpot, Jira, Slack, Harvest, Fathom, Gmail, Google Calendar and our internal operations hub. It governs how our own AI agents query our own data. Every definition version-controlled. Every query audited. Every answer consistent.
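Version-controlled definitions mean temporal questions get temporally correct answers: a query about last year's figures should use the definition that was in force last year, not today's. A minimal sketch of that lookup, with invented dates and definition text:

```python
# Hypothetical sketch of temporal versioning: each definition carries an
# effective-from date, and historical queries resolve against the version
# that was in force at the time. Dates and wording are illustrative.

from datetime import date

VERSIONS = [
    (date(2023, 1, 1), "invoiced in the last 120 days"),
    (date(2024, 6, 1), "invoiced in the last 90 days"),
]

def definition_as_of(versions, when):
    """Return the definition text in force on a given date, or None if the
    date predates every version."""
    current = None
    for effective_from, text in sorted(versions):
        if effective_from <= when:
            current = text
    return current
```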
The technology is only half the problem
Building the intelligence layer is an engineering challenge. Deciding what goes into it is a human one.
Before a single definition is codified, someone has to sit with people across the business and make decisions that no software can make for them. What does "active customer" actually mean here? When two systems disagree on revenue, which one wins and why? How does a customer in the CRM map to a customer in the finance system? These are judgement calls. They require cross-functional conversation, not just configuration.
We've built a methodology for this alongside the technology. The SEAM Canvas is a structured workshop framework that takes a team from "we have fifteen data sources and no shared definitions" to a codified intelligence layer. It maps your sources, surfaces the definitional conflicts that already exist (whether or not anyone has noticed them yet) and produces the decisions that SEAM needs to enforce governance.
The hardest part of this work is not technical. It is getting the right people in the room and making the decisions they've been deferring for years. The Canvas is designed to make that process structured, finite and actionable.
Data democratisation is finally possible. So is data chaos.
For years, "data-driven decision making" has been the aspiration. AI has removed the barriers overnight. Any person in your organisation can now ask a question of their data and get an answer in seconds. Not through a dashboard someone built six months ago. Not through a request to the analytics team that takes two weeks. Directly. In natural language. From the systems they already use.
That is data democratisation. It is here. It is happening with or without your permission.
The question is whether it scales to brilliance or to chaos.
Without a governance layer, it scales to chaos. A hundred people asking questions of a dozen systems with no shared definitions. Every answer technically defensible. None of them consistent. Trust in data eroding faster than it ever built up.
With an intelligence layer, it scales to something organisations have never had before: genuine, consistent, governed intelligence, available now to every person at every level.
One of these two futures is coming for your organisation. The only variable is whether you act to determine which one.
If your organisation is adopting AI faster than your governance can keep up, SEAM was built for you. Register your interest to learn more and get early access, or book a conversation if you'd like to talk through what an intelligence layer looks like for your data.