The derelict data warehouse, revisited: why this problem just became existential
AI doesn't fix bad data; it scales it. In 2026, a derelict data warehouse isn't just a nuisance, it's an existential risk.
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="http://measurelab2.wpengine.com/wp-content/uploads/2014-07-19-15.22.35-1024x768.jpg" class="kg-image" alt="Went for a wander around Lewes itself over the weekend. The images aren't super-relevant, but they're quite nice. This one is down near the Priory." loading="lazy" width="1024" height="768"><figcaption>Went for a wander around Lewes itself over the weekend. The images aren't super-relevant, but they're quite nice. This one is down near the Priory.</figcaption></figure>

This week's been quite nice in several ways – my days have been a bit more varied, and I've spent a lot of my time attempting some code work: some pretty rudimentary JavaScript from one of Mark's templates in Google Tag Manager, and also some slightly more involved stuff, because I needed to retrieve and wrangle larger amounts of data than the Spreadsheets add-on or the interface itself would allow.

Fortunately, there's a package for everyone's favourite free, cross-platform statistical programming language that lets you retrieve the data with only a little fuss and bother (it took me all of Wednesday to get the damn thing working, but I got it sorted in the end). I haven't done anything with R since I used it for a piece of coursework at the beginning of the year (for which I got 88%, so I'm basically an expert). Most of the challenge was working out how to import the data – the code provided by the package authors was not completely clear – which is not where most of my R skills lie, having always been given nice, clean, well-formatted .csv files for ease of importing at uni.

Once I had the data, however, it was all good. Sadly, I didn't get to do any of the interesting analysis or visualisation work, as the client just wanted it to compare with some internal numbers – but now that I know how to get the data in, that sort of thing should be no problem when we do want to do something fun and interesting with a large data set. It's nice, as always, to be able to make use of some of the stuff I spent the last three years learning. I'm also interested in looking into the use of PHP and fast Fourier transforms (yes, I'm interested in looking at Fourier transforms – I didn't even say that when I was doing my degree) outlined by friend of Measurelab Jason Bailey over here, but that's something for another week.

On Tuesday and Thursday I had one 'Livanto' (my preferred Nespresso™ pod style so far), but today I not only ventured into the slightly odd territory of 'Ciocattino' and 'Vanillo' (Livanto flavoured with chocolate and vanilla, respectively) but also had two cups of coffee.

My habit is developing, and my mean heart rate, no doubt, is slowly increasing. Over and out.