We are living through a technological revolution and spending most of our energy arguing about whether it's really happening.
The discourse around AI is a cacophony: apocalypse warnings, dismissals, denial, and hype cycling through our feeds in a relentless churn. But while we debate what AI might become, we're ignoring what it already is.
What are the four main perspectives on AI?
In my observation, people tend to fall into four main camps:
- The apocalypse camp: The end is nigh, and the robots are coming for us.
- The "AI is naff" camp: It hallucinates, it's a parlour trick, and it's a bubble waiting to pop.
- The denial camp: It's a fad that won't impact my job or life significantly. I will just carry on as normal.
- The augmentation camp: This is a tool that extends human capability.
I'll admit, depending on the day (and perhaps how much sci-fi I've recently consumed), I tend to flit between the apocalypse camp and the augmentation camp myself.
The news cycle tends to orbit these camps with dizzying regularity, often driven, cynically speaking, by whatever generates the most engagement in any given week. I want to cut through this noise and challenge the extremes of the debate. But to do that, we first need to look inward, at a human trait that is both a gift and a curse.
Ever since we first picked up a bone club, our species has been defined by a symbiotic relationship with our tools. But there is a nuance here that we often miss: we don't just adapt to technology; we force technology to adapt to us.
If that bone club had required a PhD to operate, we would have remained a bright but stationary primate. We thrive because we build systems that hide their own difficulty.
The modern smartphone is the perfect example. It is a device of infinite complexity, yet we have engineered the friction out of it so effectively that a two-year-old can master it in seconds. This seamlessness is a triumph of design, but it creates a dangerous psychological side effect: because the complexity is hidden, buried under sleek glass and intuitive UIs, we mistake the tool's ease of use for simplicity of function. We stop respecting engineering and start expecting magic.
I saw this in myself when I bought my first smart home setup. Speaking a command, having it processed by a server farm halfway across the world, and instantly triggering a physical switch in my living room was nothing short of sorcery. Cut to three months later, and I found myself swearing like a sailor at the speaker because it failed to understand a command while I was chewing toast. I had stopped seeing the millions of lines of code and the miracle of connectivity. I just saw a light switch that didn't work.
And this is precisely the cycle now playing out with Large Language Models, but at a velocity we've never experienced before.
The narrative whiplash
When GPT-3 first appeared, it was remarkable. Only a year prior, I remember reading AI attempts at a Harry Potter chapter that were laughable; suddenly, the machine could output something coherent and passable. It could write code: not reliably, but the mere fact that this was possible at all was staggering.
Then came the inevitable narrative whiplash. The headlines shifted from "AI will replace all writers" to "AI is getting lazy" or "AI is hitting a wall." We moved from awe to criticism in record time. We are judging the technology based on its current, fleeting limitations rather than its fundamental power. We look at a tool that can summarise a 500-page document in 30 seconds and complain that the prose is a little "dry."
We are missing the forest for the trees because we are waiting for the next hit of dopamine, the next "magic trick" (like Sora's video generation or GPT-5). But this focus on the next leap creates a blind spot for the current reality.
What is the "Frozen AI Hypothesis"?
Here is the argument I really want to make: If all AI development stopped today, if we froze the models exactly where they are right now, the outcome would still be transformational.
We are currently so obsessed with the foundation models getting smarter that we are ignoring the utilisation gap. I would estimate we are tapping only a fraction of the current models' potential, largely because our internal data is often too messy or unstructured to be useful. We haven't even begun to fully integrate the "old" tech (like GPT-4) into our supply chains, our education systems, or our creative workflows. While high-stakes fields like clinical healthcare may rightly demand the higher reliability of future models, the administrative and logistical sides of these industries could still be revolutionised by what we have today.
The narrative shouldn't be about whether AI will achieve AGI next year (a concept that I believe has become more of a marketing term than anything actually definable), or whether it's a bubble. The narrative should be about the fact that we have a miraculous tool right now that we have barely unboxed.
If the "AI is naff" camp is right and the models never get better than they are today, we still have a tool that allows a junior coder to work like a senior, a writer to brainstorm at 10x speed, and a researcher to synthesise data in moments. That isn't a flash in the pan; that is a fundamental shift in how human beings process information.
So if the models aren't the bottleneck, what is?
What is the "Infrastructure of Intelligence" in AI?
This utilisation gap isn't about the models lacking IQ; it's about us lacking the infrastructure to harness them. The real innovation happening right now isn't just in making the "brain" bigger, but in building the nervous system around it.
We are seeing this with platforms that shift the focus from "what can the model say?" to "what can the model do with my data?" The magic happens when you stop treating the AI as a chatbot and start treating it as a reasoning engine grounded in company-specific truth. It is no coincidence that Google has repositioned BigQuery from a data warehouse to an 'autonomous data and AI platform'.
However, this infrastructure relies on more than just software; it relies on data maturity. You cannot simply dump a swamp of unstructured files into BigQuery and expect precision. This is where the unsexy but vital work of data semantics and metadata labelling comes into play. To get the most out of these models, we need to structure our data in a way that establishes trust. If the model doesn't understand the semantic difference between a 'lead' and a 'sale' because the metadata is missing, that isn't an AI failure; it is a data modelling failure.
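To make that tangible, here is a minimal sketch in Python of what a machine-readable semantic layer might look like. The field names, definitions, and the `build_grounded_prompt` helper are hypothetical, invented for illustration rather than taken from any particular platform, but the principle holds: the model is told explicitly what a 'lead' and a 'sale' mean before it is asked to reason about them.

```python
# A minimal, hypothetical sketch of a semantic layer: plain data structures
# that pin down what business terms mean before a model ever sees a question.

from dataclasses import dataclass


@dataclass
class FieldDefinition:
    name: str          # column or metric name as it appears in the warehouse
    meaning: str       # the agreed business definition
    caveats: str = ""  # known gotchas the model should be warned about


# Illustrative definitions; in practice these would live alongside the data,
# e.g. as table and column descriptions or a governed metrics layer.
SEMANTIC_MODEL = [
    FieldDefinition(
        name="lead",
        meaning="A prospect who has submitted a contact form but has not paid.",
        caveats="Deduplicated by email address; test submissions excluded.",
    ),
    FieldDefinition(
        name="sale",
        meaning="A completed order with a successful payment, net of refunds.",
        caveats="Recorded at payment settlement time, not order creation time.",
    ),
]


def build_grounded_prompt(question: str) -> str:
    """Prepend the agreed definitions so the model reasons over shared terms,
    not whatever it guesses 'lead' and 'sale' might mean."""
    glossary = "\n".join(
        f"- {f.name}: {f.meaning} ({f.caveats})" for f in SEMANTIC_MODEL
    )
    return f"Business definitions:\n{glossary}\n\nQuestion: {question}"


if __name__ == "__main__":
    print(build_grounded_prompt("How many leads converted to sales last month?"))
```

The exact mechanism matters far less than the discipline: the definitions exist, they are machine-readable, and they travel with every question the model is asked.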
Beyond the data itself, there are three massive architectural challenges we need to solve to unlock this utility. These have nothing to do with raw compute, but everything to do with how we build the systems wrapping the models.
Better interfaces. We are currently interacting with remarkably advanced intelligence through interfaces that are often little more than a text box. While chat has its place, this is a failure of imagination for many use cases. We need adaptive front-end technology that can render data, visualise trends, and allow for manipulation of outputs, not just reading text. Perhaps a new interface altogether is on the cards…
Long-term memory. We need to solve the 'goldfish' problem. A model resets every time you hit refresh. By building external memory architecture, systems that remember your past projects, your tone of voice, and your specific constraints, we make the model feel infinitely smarter without adding a single parameter. Think of it as giving the AI a notebook it can refer back to.
Note: things have moved forward on this point since I originally wrote it; memory is now a feature of all the frontier models, although it is certainly not a solved problem.
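As a rough illustration of the pattern, the `MemoryStore` below is a toy Python sketch of my own, not any vendor's feature: the model stays stateless, while the application persists notes to disk and pulls the most relevant ones back into each prompt. Assume a real system would swap the naive keyword matching for embedding-based retrieval.

```python
# A toy external memory layer: the model itself stays stateless, but the
# application remembers past notes and injects the relevant ones per request.

import json
from pathlib import Path


class MemoryStore:
    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        self.notes: list[str] = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, note: str) -> None:
        """Persist a fact: tone of voice, constraints, past decisions."""
        self.notes.append(note)
        self.path.write_text(json.dumps(self.notes, indent=2))

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Naive keyword-overlap ranking; a real system would use embeddings."""
        terms = set(query.lower().split())
        ranked = sorted(
            self.notes,
            key=lambda note: len(terms & set(note.lower().split())),
            reverse=True,
        )
        return ranked[:k]


if __name__ == "__main__":
    memory = MemoryStore()
    memory.remember("House style: British English, no exclamation marks.")
    memory.remember("Project Falcon deadline is the end of Q3.")

    query = "Draft a status update for Project Falcon."
    context = "\n".join(memory.recall(query))
    prompt = f"Relevant notes:\n{context}\n\nTask: {query}"
    print(prompt)  # this prompt would then be sent to whichever model you use
```

The design choice that matters is that the memory lives outside the model, so it survives refreshes, model swaps, and provider changes.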
Evolving knowledge. The 'Frozen' hypothesis works only if the system around the model isn't frozen. We need architectures that allow knowledge to update, not by retraining the model itself (which is expensive), but by updating the knowledge base the model retrieves from. The model stays the same, but what it knows evolves daily. It's the difference between a static encyclopaedia and a living library.
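In the same hedged spirit, here is a small sketch of that retrieval pattern. The `KnowledgeBase` class is illustrative only, but it shows the key move: keeping the system current means upserting documents into the store the model reads from, never retraining the model itself.

```python
# A toy retrieval layer: the model is frozen, but what it can cite evolves
# whenever documents in the knowledge base are added or replaced.

from datetime import date


class KnowledgeBase:
    def __init__(self):
        self.docs: dict[str, str] = {}

    def upsert(self, doc_id: str, text: str) -> None:
        """Add or replace a document; this is the retraining-free update path."""
        self.docs[doc_id] = text

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Crude keyword scoring; real systems would use BM25 or embeddings."""
        terms = set(query.lower().split())
        ranked = sorted(
            self.docs.values(),
            key=lambda text: len(terms & set(text.lower().split())),
            reverse=True,
        )
        return ranked[:k]


if __name__ == "__main__":
    kb = KnowledgeBase()
    kb.upsert("returns-policy", "Returns are accepted within 30 days of delivery.")

    # The policy changes later: update the store, not the model.
    kb.upsert(
        "returns-policy",
        f"As of {date.today()}, returns are accepted within 60 days of delivery.",
    )

    question = "How many days do customers have to return an order?"
    context = "\n".join(kb.retrieve(question))
    print(f"Context:\n{context}\n\nQuestion: {question}")
```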
We don't need a smarter model to check stock levels or summarise a legal contract; we need better application layers, memory systems, and interfaces that connect the current models to our reality.
The nature of the bubble
This leads me to the lingering fear that we are in a bubble waiting to pop. I tend to agree that there is a bubble, but I believe we are looking at the wrong historical analogy. People fear a 2008-style housing crash, where the underlying assets were toxic and value evaporated overnight.
I see this climate as far more akin to the Dot-com bubble of the early 2000s. Yes, valuations are currently insane. Yes, countless startups will likely vanish into the ether, and perhaps even some of the major players will face corrections. But we must remember that while the Dot-com bubble burst, the internet did not go away.
The underlying technology was not toxic; it was merely over-speculated. It didn't stop being useful. In fact, it was only after the hype died down and the bubble burst that the internet truly integrated into the fabric of our lives. If the AI industry pops and investment is lost, the intelligence we have bottled isn't going back on the shelf. The ability to synthesise information, code, and create is here to stay, just as the fibre-optic cables laid during the dot-com boom remained to power the internet age long after the Nasdaq crash.
Moving to the augmentation camp
Ultimately, the doom-mongering and the hype-cycling are distractions. We need to stop looking at AI as a replacement for humanity or a failed experiment. We need to treat it like the bone club, the smartphone, or the smart speaker: a tool that is currently flawed, yes, but undeniably powerful. The magic hasn't faded; our appreciation for it has.
The path forward is clear, regardless of which camp you currently reside in. We don't need AGI to revolutionise our workflows; we just need to use the current models effectively. The financial bubble might burst, but just like the internet after the dot-com crash, the technology is going nowhere. And the real work, the unglamorous work of data modelling, memory systems, and better interfaces, is what will turn a generic model into a specific solution.
We are living through a moment of technological magic. Let's not let our complacency, or our entitlement, blind us to the opportunity sitting right in front of us. If we stop waiting for the robot apocalypse or the next miraculous update and start building with what we have today, we might realise the revolution has already happened; we were just too busy complaining about the typos to notice.
FAQs
What is the "utilisation gap" in AI?
The utilisation gap refers to the massive difference between an AI model's potential and its actual application within a business. This is usually caused by fragmented, messy internal data rather than a lack of "IQ" in the AI model itself.
Is AI a bubble like the Dot-com era?
While valuations may be speculative, the technology mirrors the Dot-com bubble: even if the financial bubble pops, the underlying utility remains. The ability to synthesise data and code is a permanent shift that will not disappear if the market corrects.
How can businesses fix the "Goldfish Problem" in AI?
The "Goldfish Problem"—where AI resets and forgets context every session—is solved by building external memory architectures. These systems allow the AI to refer back to a "notebook" of your specific projects, tone of voice, and constraints.