The AI revolution already happened
AI's revolution is already here. We're too busy debating its future to harness the transformational power of today's models.

You spend Monday morning trying to reconcile two dashboards that should agree but don't. By Tuesday, you're explaining to a stakeholder why last month's numbers shifted, not because anything changed, but because someone updated a transformation you didn't know existed. Wednesday, a colleague asks you to pull data for an AI pilot, and you realise you can't confidently explain half the tables in your warehouse. Thursday, the person who built most of it hands in their notice.
That's the derelict data warehouse showing up in your calendar.
In 2025, I wrote about The Derelict Data Warehouse: the phenomenon of analytics infrastructure built with good intentions that slowly drifts into disrepair. Abandoned projects, duct-taped spreadsheets, over-engineered mazes, untamed data lakes. That post catalogued the failure modes and offered an audit checklist.
At the time, it felt like practical advice for organisations wrestling with their analytics foundations. I had no idea how urgent that advice would become.
When I wrote the original blog, a broken data warehouse was an inconvenience. Your reports were out of date. Your dashboards were slow. Your team wasted time wrestling with bad data.
In 2026, it is an existential risk. Three things have converged to make it so.
AI does not fix bad data. It amplifies it.
If your warehouse has inconsistent schemas, missing documentation, outdated transformations, unknown data lineage, or unclear ownership, then any AI system trained on that data will produce confidently wrong answers at scale. LLMs do not know when your data is stale. They do not pause to question inconsistent metrics. They will hallucinate insights from garbage with the same certainty they apply to gold.
Bad data + AI = automated incompetence.
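The cheapest defence is a gate in front of the AI, not behind it. Here is a minimal sketch of a staleness check that could run before an LLM is allowed to query a table; the table names, load timestamps, and seven-day threshold are all hypothetical, and in practice the metadata would come from your warehouse's information schema rather than a hard-coded dict.

```python
from datetime import datetime, timedelta

# Hypothetical load metadata for two tables that feed an AI assistant.
# In a real warehouse this would be read from the information schema.
TABLES = {
    "orders": {"last_loaded": datetime(2026, 1, 10)},
    "orders_daily": {"last_loaded": datetime(2025, 11, 2)},
}

def stale_tables(tables, as_of, max_age_days=7):
    """Return names of tables whose last load is older than max_age_days."""
    cutoff = as_of - timedelta(days=max_age_days)
    return [name for name, meta in tables.items()
            if meta["last_loaded"] < cutoff]

# 'orders_daily' has not loaded in over two months; an LLM queried against
# it would answer from stale data without ever flagging the problem.
flagged = stale_tables(TABLES, as_of=datetime(2026, 1, 12))
```

An LLM will answer from a two-month-old table with the same confidence as a fresh one; a gate like this surfaces the gap before the model does the amplifying.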
New AI capabilities emerge weekly. Businesses that can move fast (experiment, adopt, iterate) have a structural advantage.
But you cannot move fast if your data foundations are brittle. Every new AI initiative, every predictive model, every agent system requires access to clean, well-structured, trustworthy data. If that data is trapped in a derelict warehouse, scattered across abandoned projects, locked in someone's head, buried in undocumented tables, then every AI experiment becomes a six-week data archaeology project first.
The market expectation has shifted. A functioning data pipeline, from collection through transformation to reporting, is no longer seen as the deliverable. It is the prerequisite. Clients increasingly expect that infrastructure to be in place before the real work begins. If your team is still rebuilding similar pipelines from scratch each time, the cost is not just the hours spent. It is the strategic work that never gets started.
You cannot innovate at the speed of 2026 on the infrastructure of 2019.
Data warehouses were always living systems, but the rate of decay has accelerated. The global datasphere is growing at over 20% year on year, with data now spread across internal data centres, cloud repositories, edge locations, and third-party platforms. Business requirements shift weekly, not quarterly. Team members rotate faster. Tools and platforms evolve constantly. Compliance requirements tighten. There is simply more data, in more places, changing faster than any manual process can track.
Institutional knowledge that used to persist for years now decays in months. The person who built the warehouse has moved on. The documentation was never written. The assumptions that informed the design are lost. This is not a people problem. It is a structural reality of the pace of change in 2026.
The good news is that AI agents can maintain data warehouses: monitor pipelines, detect schema drift, update documentation, optimise queries. But only if the warehouse is in a maintainable state to begin with. An AI cannot rescue an abandoned project it does not know exists. It cannot document a duct-taped spreadsheet workflow it has no access to.
You have to clean the foundation first. Then AI can keep it clean.
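To make "detect schema drift" concrete, here is a minimal sketch of the comparison such an agent runs: documented schema versus live schema, reporting added, removed, and retyped columns. The table and column names are invented for illustration, and a real agent would pull both mappings from the warehouse's information schema.

```python
def schema_drift(expected, actual):
    """Compare a documented column->type mapping against the live schema.

    Returns the three drift cases an agent would alert on or
    auto-document: columns added, columns removed, columns retyped.
    """
    added = {c: t for c, t in actual.items() if c not in expected}
    removed = {c: t for c, t in expected.items() if c not in actual}
    retyped = {
        c: (expected[c], actual[c])
        for c in expected.keys() & actual.keys()
        if expected[c] != actual[c]
    }
    return {"added": added, "removed": removed, "retyped": retyped}

# Hypothetical example: a documented 'sessions' table vs today's live schema.
documented = {"session_id": "STRING", "started_at": "TIMESTAMP", "pages": "INT64"}
live = {"session_id": "STRING", "started_at": "DATETIME", "engaged": "BOOL"}

drift = schema_drift(documented, live)
```

The point is not the fifteen lines of Python; it is that the check only works if a documented schema exists to compare against. That is the "maintainable state" the agent depends on.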
If your team is spending more time maintaining existing infrastructure than building new insights, you have a derelict warehouse, even if it technically works. In 2026, standing still is falling behind.
Your competitors are deploying AI agents. They are automating their pipelines. They are building knowledge systems that do not decay. If your team is stuck maintaining infrastructure manually, you are not competing. You are treading water.
The upside: once the foundation is clean, everything else accelerates. Pipelines that used to take weeks can be automated. Knowledge that used to decay can self-heal. Teams that used to drown in maintenance can focus on strategic work.
But you have to fix the foundation first.
Book a warehouse audit →
We'll assess your current state, identify the highest-impact repairs, and give you a roadmap, whether you maintain it yourself or want us to help you build something that stays healthy.