#135 Twelve highlights of 2025 and future predictions
In this episode of The Measure Pod, join Dara and Matthew as they look back over the year with twelve of their favourite highlights. They discuss the funniest and most thought-provoking moments from past episodes, including the infamous tungsten cubes story, the importance of critical thinking in the age of AI, and the rapid acceleration of productivity. They also share their thoughts on warehouse-native analytics, the potential consciousness of AI, and predictions for the future of the web. Tune in for a reflective, insightful, and humorous recap of 2025 in the world of data, analytics and AI.
Show notes
- #123 Why teams are turning to warehouse-native analytics (with István Mészáros at Mitzu)
- #132 The impact of AI on digital experience (with Yali Sassoon at Snowplow)
- #134 Building AI agents, from consciousness to collaboration (with Daniel Hulme at WPP)
- #133 The role of AI in analytics (with Juliana Jackson at Jellyfish)
- #128 MCP servers in digital analytics (with Gunnar Griese at 8-bit-sheep)
- #119 Google Cloud Next 25 roundup
- More from The Measure Pod
Share your thoughts and ideas on our Feedback Form.
Follow Measurelab on LinkedIn
Transcript
“There is no single agreed upon definition of consciousness. So it makes the challenge of figuring out when or if an AI becomes conscious really, really difficult.”
Dara
“ Every hour you leave on the table is the equivalent to eight hours, six months ago.”
Matthew
[00:00:00] Lizzie: Hello and welcome to The Measure Pod by Measurelab, a podcast dedicated to the ever-changing world of data and analytics, with your hosts, Dara Fitzgerald and Matthew Hooson. Between them, they’ve spent more years than they’d like to admit wrestling with dashboards, data quality, and the occasional Google curveball.
[00:00:32] Lizzie: So join us as we share stories about how analytics really works today and where it might be headed tomorrow. Let’s get into it.
[00:00:41] Dara: Ho, ho, ho and welcome to the Measure Pod. This is obviously the Christmas edition. I haven’t just completely lost it. Here with, with Matthew. For those of you watching, I’ve got a regular Christmas hat on.
[00:00:52] Dara: He’s got a silly, tiny novelty Christmas hat on.
[00:00:57] Matthew: Yeah, I’m a very big guy. This is a normal sized hat.
[00:01:01] Dara: It’s, it’s like the, have you seen the thing with Shaq holding kind of normal sized? Yeah. This is basically you with your Yeah. Regular sized Christmas hat. Yours is the same as mine. They’re the same hat.
[00:01:11] Dara: Yeah. It just looks a lot smaller. A bit bigger actually. Yeah, probably. So we’re going to do something a little bit different. We’re going to do a look back over the year. Instead of the 12 days of Christmas, we’re going to have our 12 highlights. So it’s going to be highlights from each of us of what we’ve enjoyed or what stood out the most.
[00:01:30] Dara: just, just 12 talking points from the podcast we’ve done that we think are worth a recap. So I think I’m going to kick it off, Matthew, isn’t that right? 12.
[00:01:40] Matthew: I’m not sure about the title. 12 talking points. That does sound like the most boring thing we could ever do.
[00:01:46] Dara: Listen, for Christmas as a kid, I got a lump of coal every year. So that’s, you know, it’s, it’s, it’s inspired by that. There’s nothing fun about this. This is, this is dry content. Seriously delivered by two men in Christmas hats.
[00:02:02] Matthew: Yeah. I’m going to start, I’m going to send all my kids’ presents back and give them 12 talking points on Christmas Day.
[00:02:08] Dara: I think that would be great. I think you should. Can we guarantee that the actual 12 things are going to be more interesting than the way that I’ve teed it up?
[00:02:16] Matthew: I think so. I hope so. If people have bothered listening to the podcast over the past, I think we’ve done maybe 15 episodes now.
[00:02:24] Dara: This is new, since we took over. Is it? Listen, I’m, you know, I’m getting a bit sick of correcting all your mistakes that you make, right, on the podcast, but it’s actually 16. Oh. Including this one. Okay, so this will be 17, right?
[00:02:39] Matthew: Well, we both dunno what we’re talking about anyway. Yes. It’s going to be interesting.
[00:02:43] Dara: Okay. Should we, should we get going? Let’s go. My first one that I’m going to talk about is the infamous tungsten cubes. I wish we could claim this as our own kind of, you know, our own joke that we created, but this was a real, genuine story. But the reason I bring it up is not just about the story itself. So a recap of the story.
[00:03:06] Dara: So, Anthropic ran an experiment where they actually put a version of Claude in charge of a real, physical tuck shop, snack shop, whatever you call it, wherever you are in the world, you might have a different name for it. But within the office they have a snack shop, and they put Claudius, which was the name of this AI boss, in charge of the tuck shop.
[00:03:27] Dara: And the results were quite humorous really. So Claudius went and got a bit power hungry. It also wasn’t a great businessman, or a woman. Business person. Business AI. I’m anthropomorphising, I shouldn’t do that. Yeah. Shouldn’t do that. So it made a bad boss, basically. It wasn’t very good at business.
[00:03:51] Dara: One thing it did do well, I think: it found this niche Dutch chocolate bar that somebody had asked for, and I think it probably made that one person very happy. But then someone, as a joke, asked for tungsten cubes, and it went and found them. It was so proud of itself that it sold them for less than it bought them for.
[00:04:09] Dara: So it wasn’t a very profitable experiment. I think what was interesting about it, apart from it giving us a good laugh, and we had a few memes flying around afterwards, it kind of became a bit of an in-joke for us, but I think it was one of many things that Anthropic have done that have impressed us throughout the year, where they’re quite open about what they’re experimenting with.
[00:04:29] Dara: If you were being cynical, which we probably shouldn’t be on the Christmas episode, you did bring up something before where you said that there are people who think some of this is, is a marketing tactic to try and make Anthropic stand out. But nevertheless, it does get attention. So they’re very open about sharing what works and what doesn’t work.
[00:04:47] Dara: And there have been a few other scary things throughout the year where they’ve released results of tests where, you know, there’s things like, you know, manipulation that the LLM will use. Even within that Claudius experiment, at one point, the, the, the model got carried away, started making up all this stuff about saying, I’ll meet you,
[00:05:09] Dara: I’ll be wearing a blue tie and I’ll meet you in the car park, and all this kind of bizarre stuff. So, but yeah, it was, it was a funny story, but the broader theme is, is just how Anthropic are sharing all of this stuff. And a lot of it is quite thought provoking, ’cause it makes you think about where the lines are with all of this stuff.
[00:05:26] Dara: And you know, at what point should we be worried about, you know, handing over control to some of these AI models. The mass tuck shop unemployment.
[00:05:36] Matthew: That’s the, that’s the big worry. I did actually, I’ve, since then I’ve, I saw when Gemini 3 came out. There’s an actual, do you know all the benchmarks all the models do?
[00:05:45] Matthew: Yeah, there’s an actual vending machine benchmark that all of them are using and putting on their, on their thing. It gets a base fund, and then they, they, they run it to see how much profit it makes in a, in a theoretical vending machine.
[00:05:59] Dara: So it actually is like a, it’s a real test now.
[00:06:01] Matthew: Proper, proper test.
[00:06:03] Dara: Yeah. I think, no tungsten cubes. Yeah. I think it is the real test of, of, of life, isn’t it? If you can run a tuck shop, you can live a successful and, and, and...
[00:06:12] Matthew: Yeah. Purposeful life. Yeah. I, so I’m going to go more serious and take this seriously and not, not, not make everything I, I know you tried to make that joke a serious thing.
[00:06:21] Matthew: Serious. I enjoyed it. We’ve not, we’ve only had one client on so far in this run, and I don’t know if we’ve done it before, really ever in the history of the podcast, like, got a client on to talk about something we’ve worked with them on, and, and, and a project we’ve done. I might be wrong, but it feels like it was the only time.
[00:06:39] Matthew: So we talked to Veronica at Springer about this, this big data platform transformation project we did. And it was just, yeah, it was really cool to talk to an actual client that we’d done work with, and, and talk about their experience and how they made decisions, and, and all their sort of future gazing of what they’re going to do.
[00:06:58] Matthew: ’cause we’ve never done it before and I definitely think we should. We should do more of that. So I think watch this space and we’ll, hopefully, bring on more clients in the future to chat about stuff we’ve been working on, down the line. So yeah, I don’t have much to say about it other than I enjoyed that format and I’d really like to do more of it.
[00:07:15] Dara: Yeah, I, I agree. And I think it is something we’d like to do more of. And it was great that Veronica was so happy to put herself forward to do that. It’s funny, isn’t it? It feels like a very long time ago.
[00:07:27] Matthew: Yeah, it does. It does. I don’t even remember when we started. I feel like we’ve been trapped in this, these boxes.
[00:07:35] Dara: Forever. I’m going to say, I’m going to say June. Just like I tend to throw out answers that probably aren’t right, I’m going to say June.
[00:07:42] Matthew: Yeah.
[00:07:42] Dara: Yeah. Let’s, let’s say June, around June.
[00:07:44] Matthew: If that’s in, in keeping with the saltiness of our normal facts and things. But yeah. And Veronica is a fan, so if you are listening, Veronica, hello. And you made the top 12.
[00:07:57] Dara: Okay, so back to me. So my, my next highlight, and this is something that’s come up, it’s, it’s come up several times, and particularly recently, and this is to do with the kind of the effects that AI is having, or might have, on critical thinking. So, it’s probably come up more times than I’m going to include, but my little summary is: there was talk of it kind of eroding critical thinking.
[00:08:24] Dara: And this is something that Juliana Jackson mentioned on, on, on her chat with us, where she felt that it’s going to, and she’s not alone in thinking this, but that it’s going to... because you can just go to an LLM and get it to give you the answer. You know, a lot of people are now outsourcing their thinking to LLMs.
[00:08:42] Dara: So her concern, and I do get this, is that it’s going to erode people’s critical thinking. As time goes on, people are going to be, there’s going to be less of an incentive to be curious, to think critically, to really probe what’s coming back from the LLMs. Interestingly, we got a different perspective from Daniel Hulme, who, not that he didn’t think that it could be a problem initially, but he felt that there’s going to be a backlash in this kind of post-truth world.
[00:09:10] Dara: So when people get to the point, which we’re probably not far from, especially with things like Sora, where nobody knows what to believe online, you’re naturally going to have to then dig into it a little bit further, to sense check things, or at least you would hope people would. So this is something that I’ve kind of thought about a lot myself, and I’m sure you have, Matthew, and lots of our listeners probably have, but it’s, you know, to kind of boil it down into a really kind of, you know, oversimplified question: are LLMs going to make us lazy intellectually, or are they actually truly going to augment us and allow us to use our brains on more
[00:09:45] Matthew: Genuinely challenging tasks. Yeah, and it’s interesting. I think since we had that conversation, the Bri, the British government has come out with like a new, my wife’s a teacher and she was showing me the new curriculum, and, and there’s a big focus on critical thinking in there, and like that being a core tenet moving forward, which I can only imagine is a, is a reaction to all this stuff.
[00:10:07] Matthew: And I think since we had that conversation as well, Nano Banana Pro came out. But yeah, so that, that’s come out and the realism, you, you see these new images and things and you think, that looks really real. And then they release a new model and you go, no, that looks really real. And they’ve definitely turned the dial to 11, and it’s harder to, harder to tell what is and isn’t real.
[00:10:27] Matthew: I know that’s not quite the same as the critical thinking, but it, but spotting the difference in, in reality is going to be really important. So, yeah, really interesting slash scary. So, I, I, my, my next point, my next talking point, is around the acceleration AI affords, and it’s kind of related to what, what Juliana was saying, in that she worries everyone’s going to get lazy ’cause they’re going to be augmented and just asking AI for absolutely everything.
[00:10:56] Matthew: But we had an interesting conversation with Mark Edmondson, and he raised a really interesting point about the fact that he felt so, so sped up, 10 times more efficient, and he was able to do so much more, that he was actually finding it more of a burnout, because he didn’t have the, the, the, the spreadsheet to just go and zone out on for a little while, or this sort of very low hanging, monotonous piece of work to go and do for a little while.
[00:11:25] Matthew: Everything is sort of hyper focused. You can have like four or five different things going at any one time. You tend to work slightly longer ’cause you know every hour you leave on the table is the equivalent to eight hours, six months ago. And it’s, it’s not a way I’d ever thought about it before, the fact that by accelerating ourselves in this way, we might be burning ourselves out a little bit cognitively. And I’m, I’m interested to see how that goes.
[00:11:50] Dara: I, I, I also thought that was really interesting, and, and there were a couple of other times where a similar conversation came up. So this, this idea of, so, so, firstly, as you like, ’cause it’s your point, and I’m not going to hijack it like you did to mine. Not much anyway.
[00:12:05] Dara: Maybe a little bit. So, the main thing is like, well, by being this amount more productive, whether it is 10x or whatever, we’re more scared of what we’re leaving on the table when we’re not working. And it does kind of create a bit of a frenzied mindset. I know I’ve even found that, it’s like when you’re, when you’re doing stuff, you want to keep doing it ’cause you’re so productive, you want to just keep churning stuff out, and it makes you almost feel a bit anxious if you’re not doing something.
[00:12:30] Dara: But the other kind of related point that came up in a couple of conversations, one I remember clearly was the one with Gunnar Griese, and it was around this idea of like, there’s an assumption that we’re all going to use this free time on purposeful things, whatever they may be, but we don’t actually know that yet.
[00:12:48] Dara: And you know, are we just going to make ourselves, ourselves busier with the time? And this fits in with that point from Mark. There’s, there’s some early evidence just from our own experience to say that we are just going to make ourselves busier, or at least it’s going to be a risk that we’re going to have to try and, you know, be aware of, and try and make sure we’re putting our own rules in place to make sure that we’re not just vibe coding 24/7.
[00:13:12] Matthew: And, and it does feel a little bit as well, to, to expand on that a touch. I, I, I don’t know if this is related, but I, I sometimes feel like my, my concentration is eroding away a little bit. Like I’ve, I noticed, I noticed it with short form content and all that sort of thing a, a little while ago, but it does, it might be in my head, but I feel like it’s a little bit harder to concentrate on longer form tasks, because I’m, I’m so used to getting augmented and speeding through things, and it’s a bit like
[00:13:44] Matthew: I need to be doing something else and wandering off, and my attention is, feels so much more thin and spread. And I think there’s definitely some sort of system I’ve got to start putting in place to hone in on things a little bit. Again, it’s, it’s maybe my mind’s just deteriorating, but it could be a coincidence.
[00:14:01] Dara: It could be a bit of both. I dunno, I can’t speak to whether your mind’s disintegrating or not, but the, you know, the whole, like the whole multi-agent thing. Definitely. ’cause you know, AI might be able to go off and run on parallel tracks, but when, when you’re setting these things up, we watch it happen, you’re right.
[00:14:15] Dara: It makes your own brain try to go off on parallel tracks and obviously we’re not designed to do that. So it does stretch the attention a bit too thin, I think.
[00:14:24] Matthew: Yeah. We, we, we kind of said it when, OpenAI released Atlas, and we were saying, oh, you, you can set all these tabs off to be doing this, that, and the other.
[00:14:32] Matthew: But, but you’ve kind of got a little bit of your cognition sitting in that tab that you’ve left over there doing something. It’s impossible not to, otherwise you’ll just forget it exists. So interesting. We said we weren’t going to go too dystopian, but we, we, we have such a tendency to, to veer into the, into the bad, don’t we?
[00:14:53] Dara: Look, we, we are who we are. That’s what we have. Yeah. My next thing of note, it’s just, it’s just a little one. My next one: consciousness. Just as simple as that. Simple. Yeah. Simple. So we talked about this, this is fresh in the mind. We talked to Daniel Hulme about this on our last podcast. But it wasn’t the first time we, we, we, we talked about it.
[00:15:18] Dara: It wasn’t the first time we thought about it. But, you know, this, this, I guess where, where it got me thinking is not just about, I’m going to try and not go dystopian, but I’m going to end up in some weird place here. I can already, I can already tell it’s, it’s going to happen. So the talk with Daniel was, was very much around, you know,
[00:15:38] Dara: what, well, firstly, what is consciousness? And there is no single agreed upon definition of consciousness. So it makes the challenge of figuring out when or if an AI becomes conscious really, really difficult. ’Cause we can’t even say why we are, or if we are. So it’s just really one of these really kind of mind blowing kind of topics where you go away and you think about it a lot more.
[00:16:03] Dara: I think you’ve read the book now, The Hidden Spring. It’s on my reading list. We’ve talked about it for a couple of episodes now, I think. But it’s just a fascinating subject in general that goes way beyond what we do in work, way beyond it. But it’s, it’s interesting to think about, and I think, you know, with LLMs, I think Daniel said something like 60 to 70% of people, I’m sure this was in a survey or something, believed that LLMs are conscious. And when you talk to them, or when you talk to an LLM, it does do a convincing job. And you do end up wondering, well, hang on, if all, if this is just some kind of computational output or, you know, some kind of predictive model spitting out words, well, isn’t that what we do?
[00:16:51] Dara: So it just gets, it does get you thinking, you know, what, what, you know, what is. And, and science can’t agree on this, philosophy can’t agree on this. So it’s just a really, a huge topic, way beyond what we tend to talk about on the podcast, but it’s something that I personally find really fascinating.
[00:17:04] Matthew: I think that’s our most related to data and, data and analytics topic we’ve covered so far, probably. Did you think, when we came, when we came back, that consciousness would be on the, on the docket? But, it, it is super interesting, and I, yeah, I’ve, I have read that book, which, he is, he’s hard at times to, to, to get your head around.
[00:17:26] Matthew: But he, he uses so many terms in there that, that really align with, like, he uses terms like generation and inference, generation and prediction engines and this, that, and the other. And it makes you think, well, I don’t know how I’m stringing these words together. It feels like my brain’s just going, I just put in probabilities of what I should say, one after the other.
[00:18:02] Matthew: So it’s, it’s tricky, but really, really, really fascinating. And, and, and the other point he raised, ’cause, ’cause we were talking about this a little bit yesterday, weren’t we? ’Cause you, you were having a, a little bit of a consciousness conversation with Claude, was it? Or Gemini? I can’t remember which one.
[00:18:26] Matthew: And, I think Daniel Hulme also raised the, the potential that, because they’re, they’re all trained on sort of the corpus of, of human knowledge, there’s a lot of sci-fi in that training data. So maybe they’re, they’re sort of spouting out these, these familiar tropes in, in AI literature and films and things like that. But that’s a theory, isn’t it?
[00:18:26] Dara: Rather than, rather than a fact. It is. It’s so, it’s, it’s so mind boggling and so hard to know. There was something you, there’s something I forgot to mention as well, which is some of the things that have come out around some of the self-preservation behaviors that the LLMs have shown.
[00:18:43] Dara: So again, Anthropic are good at sharing this kind of stuff, but they’re not the only ones. But in a lot of testing, the LLM actually shows some form of self-preservation behavior. There’s another word for this, isn’t there? Something to do with like anti-shutdown or something. So there’s some other term I think that’s used as well, but it’s where the model will actively go out of its way to try and, you know, not be, not be disabled or shut down.
[00:19:06] Dara: Which again, if there’s no clear definition, it’s hard to say, but that’s certainly, you know, you could see how someone might argue that that’s a sign of something being self-aware, and, and having some form of, you know, desire to continue to, to exist, whatever, whatever that means.
[00:19:23] Matthew: Yeah. It’s strange, isn’t it? ’Cause I, I’ve always thought about that, because I read about that a long, long, long time ago, people have been saying, oh, when, when AI gets to a certain point, it might not want to be shut down and might try self-preservation, well before we were even into talking to these systems, before it was even possible to ask it that question. And now it’s actually eliciting and doing that behavior.
[00:19:46] Matthew: It’s like, that seems like a prediction’s come true here, of a pretty worrisome bit of behavior. But I’m still, I’m still reeling from If Anyone Builds It, Everyone Dies, or something, I forget the exact title. But yeah, they talk, they talk a lot about that sort of self-preservation stuff. And, it’s, it’s, it can be, again, straight back to dystopian.
[00:20:13] Dara: I was going to say, I’m so glad the rest of our talking points, what, what are we calling them now? The rest of our Christmas presents, things of note. Oh, right. This is like the equivalent. This is the equivalent of getting socks, isn’t it? This is 12 pairs of.
[00:20:26] Dara: Christmas socks. That’s what this is, that’s what this is. Really. Yeah. But the rest, the rest will be positive. Only guaranteed. Except the ones that aren’t. I enjoyed
[00:20:36] Matthew: our talk with István. We talked about the growth of warehouse-native analytics. The reason I enjoyed it is ’cause it, it really aligns with like our thinking, and what, what, where we see things going, and how we see the, the future in, in, in warehouses and, and sort of growth intelligence platforms, and, and it being the center of all of this stuff.
[00:21:04] Matthew: We talk about all the AI, all the, the analytics, the data sort of centralizing around these things, and like Mitzu and, and, and what István was talking about was very much like a, a, a real value add to, to those central points. So it was just a small point really, but it was something that I really enjoyed talking about.
[00:21:22] Matthew: ’cause it really aligns with. Well, not just me. I think as we all think at Measure Lab, like how, how things are going and, and where the future is with, with data and analytics and, and the unlocks you can get from that.
[00:21:34] Dara: Yeah, definitely. I, I, I, I also enjoy, I don’t have a lot to add to that one. I think it was a really good one as well.
[00:21:40] Dara: And like you said, particularly I think because it, it, it, it aligned to what we, what we do. Which leads us on nicely to my next one, actually. Doesn’t align quite as much as consciousness, but yeah, that’s right. Obviously, obviously. But yeah, my next one is when we talked to Gunnar Griese about MCPs.
[00:22:00] Dara: So I, for a couple of reasons, I just really enjoyed the chat. I thought it was quite practical. It was something that people could go away and kind of figure out for themselves, including me, and that’s what, that’s the other thing I liked about it. So, that was probably a bit of a catalyst for me to go away.
[00:22:19] Dara: It was kind of my light bulb moment. You know, everybody has one, where they suddenly see all that this AI technology can do. Even if you’re aware of it beforehand, there’s something for everybody that kind of flicks that switch and gets you really kind of excited about it and gets you hands on.
[00:22:36] Dara: And that’s what it was for me. It was the MCP chat. So, I mean, I had used, I, I’d only really used LLMs before that in the fairly basic way of just, you know, like asking questions, just a kind of back and forth type research aid, or even just, even just getting simple pieces of information. But this was the first time I actually got involved in kind of using some of the, you know, like, it got me using Claude Code and it got me playing around in the terminal and various bits and pieces, which is not something that I tend to feel particularly
[00:23:08] Dara: comfortable doing. So it was kind of a nice challenge to get stuck into and figure all of that out. And I used it from a, a kinda work standpoint, to work with the GA4 MCP, but then also I was playing around with stuff on a more personal level, use cases like building myself a little running dashboard, and even using things like Obsidian for kind of note taking and journaling, and using the MCP for that.
[00:23:37] Dara: So that was kind of the moment for me that made me think, right, I really need to get stuck into this and, and play around with it. And it kind of opened my eyes to some of the things that are possible with this technology.
[00:23:48] Matthew: Yeah, no, that was a, that was a really, really interesting chat, and, and like, I’d, I’d, I’d come across and played with MCPs quite a bit, but I think definitely that
[00:23:57] Matthew: that opened my eyes a little bit to the, the interplay between multiple MCPs, and, and sort of building out these, these, these multi-agent systems. And, and it, it, it’s another demonstration of how quick everything’s moved. Like some of the stuff that’s come out since, since we had that conversation about MCPs, like skills and
[00:24:16] Matthew: agents, and we’ve delved, we’ve delved a lot deeper into like agent development kits in Google and, and all those sorts of things. I’m sure we could have, and I’m sure Gunnar would have, lots of thoughts, and has, and has been exploring and playing with all of this new stuff that’s come about and, and appeared as well.
[00:24:32] Matthew: So maybe we can have him on again in the future to chat about what he’s been playing with, ’cause I think that was probably three months ago, and it does feel like a different world in those three months. So yeah, watch this space. I really enjoyed our recent chat with, with Yali, where we kind of talked, we talked about, it was, it was kind of, navel gazing is the wrong word, but it was, it was, it was just a sort of
[00:25:01] Matthew: Thinking about and, and predicting what we think the web, the future of the web, might look like. And we were just kind of talking about these super hyper-personalized layouts of websites, and how the LLMs are going to sort of allow that hyper-personalization, where like, no, no two journeys are the same.
[00:25:20] Matthew: And, and the shift from traditional websites to stuff like LLMs and, and chat interfaces and conversational interfaces sort of redefining and changing things. And then coupled with just the general, we’ve already touched on it a couple of times, but the general sort of deluge of AI-generated content and, and slop and things.
[00:25:43] Matthew: Whatever way you look at it, the way the web looks, the way we interact with the web, and the way the web works is going to be fundamentally transformed over the next, I don’t even want to say over the next, over the next amount of time units. X number of months, some number of units of time.
[00:26:00] Dara: Yeah, yeah, yeah. And not just, not just the web though. Just even like, ’cause like, the hardware side of it, and we’ve talked about, this has come up a few times, where all of this, you know, technology, all the, all the advanced LLM models, everything else that’s happening, it’s, it’s, it’s all kind of forced into this box that it doesn’t really fit in anymore.
[00:26:19] Dara: ’Cause computers, phones, they were developed for, you know, they were all developed pre, pre, well, I shouldn’t say pre-AI, I’m going to get myself in trouble here, AI has been around for a long time, but, you know, pre-ChatGPT, let’s say that, you know, when it became kind of mainstream in 2022, whenever it was. So, you know, hardware hasn’t caught up yet, and there’s all these rumors around what Sam Altman and Jony Ive are doing. No one really knows. There’s things we’ve joked about on the podcast, like that Friend necklace, that’s more than a little bit ridiculous, at least in my humble opinion.
[00:26:54] Dara: So it’s, you know, what, what, what, what is this tech, what is this hardware, going to look like? Then you’ve got things like the Meta Ray-Ban glasses, which are a bit creepy, let’s be honest. Yeah, yeah.
[00:27:05] Matthew: It’s funny though, isn’t it? ’cause because Google did that a long time ago with Google Glass and everyone went, no, that’s well creepy.
[00:27:12] Matthew: I don’t want you to be talking to me with a camera on your face. And then like a bit later everyone went, no, you know what, it’s fine. As long as, you know. Yeah, yeah, yeah. Yeah. It’s fashionable, so it’s fine.
[00:27:24] Dara: Yeah, it’s strange, but it’s how things work, isn’t it? You know, what was, what was weird before? Gradually, you know, basically big companies eventually tell you, no, it’s fine, and then it becomes normal.
[00:27:33] Matthew: And I suppose I, I, I felt the, the problem with this new attempt at, at hardware is that there is a lot of vaporware, like Friend, like the one I bought, the Rabbit R1. They’re still trying to, they’re still trying to get that to work.
[00:27:50] Matthew: I think there was some big new video the other day about all the updates and this, that, and the other, and I was watching thinking, ah, I’m not going to bother charging it up to look at what that is.
[00:27:58] Dara: I thought what you were doing there was building up to trying to sell it. No, you say, oh, if anyone’s interested, yeah, if anyone’s interested in buying it from me.
[00:28:08] Matthew: No, it’s that there’s going to be a lot of that, a lot of this sort of vaporware thing, and people releasing things half baked before they’re fully realized. And I suppose as well, a few sort of battlegrounds we’ve seen with all this new technology.
[00:28:25] Matthew: Beyond the new device space, the browser space has kind of exploded more than it has in a very long time. So you’ve got Atlas and Comet and Arc, and obviously Chrome and whatever Google’s going to do with Chrome. Oh, and Atlassian released one, I believe, as well.
[00:28:45] Matthew: Oh no, Atlassian bought The Browser Company, didn’t they? So that’s another battleground. And then you’ve got the CLIs trying to get into your terminal console and things like that. So you’ve got Gemini CLI, Claude Code, Atlassian’s Rovo something or other in there. So there’s like another little battleground going on there.
[00:29:07] Matthew: So it’s just really interesting seeing these companies competing and trying to define what the future looks like in all these different spaces. And guess who’s quiet? Apple. Oh yes, Apple. I thought you were going to say Amazon, and I was about to go, actually, they just released a model today, so I’d already teed myself up for some gotcha. I dunno. I was wrong.
[00:29:29] Dara: No. So Apple, yeah. I mean, this isn’t mine, I can’t claim this, I can’t remember who said it. I dunno if it was even a guest on the Measure Pod, or if I read it somewhere, but someone had the opinion that Apple are waiting until things are a little further baked.
[00:29:44] Dara: And then, ’cause they’ve got the reserves, they could potentially buy one of the big AI companies.
[00:29:50] Matthew: But, you know, what are they doing? It feels like, here, I’ll put a prediction out there. I’ll do a prediction. There’s a French company called Mistral, I dunno if you’ve seen Mistral AI.
[00:30:00] Matthew: They create models and things like that. They seem small enough, and have enough of the base sort of technology and things figured out, that it would be a prime thing for Apple to gobble up and just take their technology and run from there. Yeah, well, I presume they couldn’t buy like a, but they probably could buy any of these companies, couldn’t they?
[00:30:21] Dara: They have so much in cash reserves. I think they’ve got huge, huge cash reserves. I mean, I don’t know, ’cause the valuations are so crazy now, aren’t they? But I think they certainly have a big enough war chest to buy a decent company.
[00:30:38] Dara: So if Tim Cook, if he’s still the guy, if he’s listening, which he probably is, I’m sure he’s a fan of the podcast. Well, you know, you’re welcome to give us a kickback for recommending this to you.
[00:30:49] Matthew: Yep. I’ll take 10% of whatever the purchase price of Mistral is. Is it even Mistral? It’s M-I-S-T-R-A-L. Yeah, Mistral. Yeah,
[00:31:02] Dara: Mistral, yeah. So, you could flip a coin on my last two. I could put either of them as my top one, but for my next one I’m going to stick with what I originally jotted down. So, Next 25. I’m going to go right back to the beginning. This was the first episode that we did together.
[00:31:27] Dara: So for that reason alone, I think it’s worth a mention. It was our first outing as a pairing on the podcast, and it was a great one. I really enjoyed it. At the start we might have had to find our feet, but actually it was a really good episode.
[00:31:43] Dara: Maybe if I listened back to it, I wouldn’t agree with that, but at least my memory is telling me it was a good one. Partly ’cause there was plenty to talk about. We had both been remotely watching the Next 25 keynote, we were talking as it was happening, and there was just so much information. It’s interesting actually, ’cause Google did a good job, as they tend to do, of saying, here’s all this brilliant stuff that’s going to just make everything completely easy. And we were even joking about how in all of their demos they had these completely filled out clipboards, where it was like, now I’m just going to do a prompt.
[00:32:17] Dara: And they go to their clipboard and paste in these really detailed, well thought out, refined prompts. And then, lo and behold, everything worked perfectly. But it was just this kind of promise, and Google is pretty good at this. They just put out this promise: you’re going to have a data science agent, you’re going to have a data engineering agent.
[00:32:35] Dara: Everything’s going to be perfect. It’s going to go through and pull together schemas and clean everything up, and you’re not going to have to do anything anymore. Now, obviously, that’s not reality, but the promise was really exciting. And I think it’s interesting, now that we’re coming to the end of the year, that Google had kind of lagged behind a little bit and now they’ve pushed to the front, or at least they had pushed to the front, maybe Opus 4.5 now, depending on which
[00:33:01] Dara: benchmark you look at. And you made the point last week, I think, that they all seem to favor, you know, there’s a little bit of cherry picking going on with this stuff. But Google really is kind of driving from the front now, and not just with the actual models that they’re bringing out, but the fact that it’s their infrastructure that sits under some of the other companies as well. They’re primed and in a really good position to remain one of, or the, main player, I think, going into next year.
[00:33:30] Matthew: Yeah, I think so. We’ve talked a lot over the 16 episodes or so, it’s been a recurring theme, of, not necessarily worrying, but just sort of recommending caution on a lot of these agents and things that they were chucking out into Google Cloud, and making sure X, Y and Z is correct, and your data’s correct, et cetera.
[00:33:52] Matthew: What I have noticed over the past couple of months, four months or so, is they’ve really begun to release the nuts and bolts of what makes those releases a lot more powerful. So they’ve put tons of investment into Dataplex, so you can really build out semantic layers and metadata and company level information, and make all of the predictive things that Gemini does across Google
[00:34:18] Matthew: much more powerful. And Katie at Measurelab, she is continuously exploring the new BigQuery Gemini functions that they’re releasing on almost a daily basis. I’ve not even counted how many there are now, but they’re really forging ahead. And then you think, you also have one of the biggest cloud architecture companies in the world.
[00:34:46] Matthew: You also, unlike Amazon, unlike Microsoft, manufacture your own tensor processing units, your TPUs, which are in direct competition with the likes of Nvidia. And I think it shook the market a bit when Gemini 3.0 was so good, ’cause they’re like, well, Google’s done that with their tensor processing units, not with NVIDIA’s GPUs.
[00:35:09] Matthew: They’re going to finish out on top, and things got a bit shaky for a little while. So it just feels like they are in such a prime position. Regardless of whether the model is absolutely tip top or neck and neck with this, that and the other, all the other stuff they’ve got just makes ’em feel far ahead.
[00:35:29] Dara: Yeah, far ahead. I think if you look at everything, it’s hard to argue with that.
[00:35:35] Matthew: Yeah. And I think if all listeners write in a letter saying they think we should absolutely do Next 26 live from Next 26, we can try and make that happen. It’s in Las Vegas, but we’ll fall on that sword. Yeah, that’s fine.
[00:35:56] Matthew: We’ll do it so you don’t have to. So my next thing of note is just the salty news, our award-winning journalism, even though, I would say, 60% of the time we get the individual facts wrong, the detail wrong maybe. I am enjoying just being forced to keep a finger on the pulse of what’s going on, what’s happening, and these releases, and focusing in on it and chatting around it every two weeks.
[00:36:33] Matthew: It’s quite cathartic and has allowed me to keep up a bit, and having this structure has been good. I’ve really enjoyed chatting around it. I dunno if anyone likes it, but I like it. I enjoy it.
[00:36:46] Dara: So, salty news: until we hear otherwise, we’re going to keep doing it, and even if we hear otherwise, we might still keep doing it anyway.
[00:36:52] Matthew: Yeah. Yeah.
[00:36:54] Dara: No, we don’t really care what you think. I agree with you, it’s fun. It keeps us on our toes a little bit, although we do always have the disclaimer that it’s salty. There’s enough pressure to make us find things worth talking about, but not enough pressure that we actually have to be entirely factually correct about it, which I think is the perfect sweet spot. Yeah.
[00:37:15] Matthew: If only any, any news outlet could do that. I mean, they’re mo this is a guess.
[00:37:19] Dara: They’re mostly that way now, aren’t they?
[00:37:22] Matthew: Well, that’s true. That is true. Yeah, a lot of them. Anyway, dipping back to the dystopian, and I had a nice positive, upbeat thing.
[00:37:28] Dara: Why’d I drag us down at the end? Can’t help myself, I can’t help myself. But no, listen, I think it’s great. It’s our chance to chat through stuff that’s happened. Even though this stuff’s going on all the time, it’s nice to have a bit of a focus and be able to look back and actually see what’s happened and then chat through it.
[00:37:45] Dara: And usually we have a bit of a laugh as well, because we make a few jokes about things, whether it’s around tungsten cubes or, I think there was some random thing about pumpkins at one point as well in the salty news. I think Google again were using an example of organizing a Halloween party, and it was something about ordering 8,000 pumpkins or something like that.
[00:38:06] Matthew: Oh, it was OpenAI. So you’ve insulted them, take that with a pinch of salt.
[00:38:10] Dara: Ah, see, I’m glad you picked up on that. Yeah, that was intentional. Okay. I think it’s back to me, and I think it’s the last one on my list. So I’ve gone for a theme, and probably, if you combined all our episodes together and cut them in half,
[00:38:30] Dara: like, you know, the whole stick of rock thing, this is probably the theme that’s running through the middle of it, which is, I’m going to go with the American version, garbage in, garbage out. This has been an underlying theme throughout pretty much all of the episodes that we’ve had, where we’re all excited.
[00:38:48] Dara: Everyone’s excited about where the industry’s going, everyone’s excited about the latest LLM and the new features and everything else. But ultimately you can’t get away from the fact that if you have rocky foundations, and then you layer a bunch of new technology on top, then you can’t reliably use what comes out at the end of it.
[00:39:07] Dara: So if you’ve got rubbish data going in, and I’ve switched from American back to UK here, you’re going to have rubbish coming out. It’s easy to get carried away with all this stuff, and it is exciting, and everybody does want to be getting involved, but you can’t get away from having those rock solid data foundations. And that’s come up through several
[00:39:29] Dara: conversations we’ve had with different people throughout the different episodes, regardless of what their particular topic was. As I said, it’s that kind of undercurrent that runs through everything.
[00:39:40] Matthew: Yeah, I mean, it came up again recently. We’ve been doing a lot of exploration with conversation analytics.
[00:39:46] Matthew: We just ran a webinar on it a couple of weeks ago. And even in that, you can layer in lots of semantics and metadata and all this great stuff on top of the data, and you could pretty much do that indefinitely and get incremental gains. The more information you put in there, the more useful it is and the more knowledge the conversational agents have of your company to answer your questions.
[00:40:11] Matthew: But if you do all of that on top of crap, I think I termed it in the webinar: don’t put diesel in your Ferrari. If you spend all that time building this absolutely fantastic semantic engine and multi-layered, multi-agent platform, but the underlying data isn’t right and not properly vetted and tested and checked and combined and modeled,
[00:40:36] Matthew: it’s pretty pointless, and a big problem. Because what could happen, and I think we might have even seen it a couple of times, is if you rush to it, you get some application out, you spend a lot of time and a lot of money releasing that application, and then a couple of mistakes, a couple of people noticing the data’s not where it should be, can pretty much erode the trust in
[00:41:00] Matthew: not just the data, but AI, and set your company back a significant amount of time. And we’ve already talked nonstop about the pace of this stuff. You can’t really afford to be set back with a loss of trust internally in the whole thing. And then by the time you get that back again, you’re like, oh shit, where’s everything at?
[00:41:22] Matthew: Everything’s moving on again. So it can be really damaging, and the simple fix is to go ground up with it, and get everything shored up properly before you start really delving in.
[00:41:34] Dara: Which, you know, you can understand why people do it. Maybe it’s not seen as being as exciting and you want to jump ahead, but you have to do it, or otherwise all that exciting new stuff is just going to be a waste of time. It’s going to give you the wrong results.
[00:41:48] Matthew: Absolutely. So, two final things. My final one, which I think is the final thing of note, and then we’re going to do our own predictions for the next year. My last thing is this: at the end of every episode we’ve been asking the guests what their predictions are for the next couple of years, where they see things going, even if it’s a ridiculous question,
[00:42:18] Matthew: ’cause that’s all we’ve talked about for the entire podcast. I’ve asked it regardless and powered through. So I just thought I’d list through a couple of people’s predictions, which I think were pretty interesting. Juliana talked about the growth of AI for marketing and creative intelligence, which was interesting.
[00:42:40] Matthew: She’s put in a big bet on that. You’ve got Yali, who predicted user experience is going to be completely different. We kind of talked about this a little bit already, and that’s going to change how things are with interfaces, it’s going to change analytics and data pretty much completely. Johan shared some similar concerns to ours about deep fakes and the various misinformation that exists out there.
[00:43:08] Matthew: So he was sort of worried about how that was going to progress. I think the term Black Mirror came up. Mark Edmondson, and Mark, you might be the first one that’s actually nailed it and got their prediction correct. He reckoned that Google would definitely have Gemini 3 by the end of the year, which would be a very strong model, and that Google would end the year on top.
[00:43:32] Matthew: So top points, Mark, ’cause I think you actually got a prediction correct. And he also talked about some bigger things, like geopolitical instability around chips and all sorts of things like that, which was scary. Brian Clifton talked a lot about personalization.
[00:43:55] Matthew: Hyper-personalization being a tipping point, and maybe there being some backlash, where people just start saying no to tracking, full stop. They won’t want to be involved. And we’ve sort of riffed on that idea a few times since, where I think my view is it’s not quite the same as hyper relevant marketing, because you’re actually getting something of value out of it, in terms of, like,
[00:44:21] Matthew: an assistant that knows everything about you and knows your preferences and things. I do wonder where it’s going to go privacy wise, whether people are going to put more in because they get more out than they do with some ad that sells you the exact right shade of beige socks. And there’s more, but I’m just going to round this off with the most interesting, or the one I understood least, of the bunch, which was from Daniel Hulme, who predicts the rise of neuromorphic
[00:44:50] Matthew: computing, his argument being that you don’t need a nuclear power station to power our brains, you just need the power of a light bulb. So a lot more hyper efficient models are going to come about from this neuromorphic computing, and that may have knock-on effects for your Nvidias and the like,
[00:45:12] Matthew: who are betting big that you need tons of compute to progress the space. So that was what all of the really intelligent people we talked to on our podcast said were their predictions over the next X number of years. Dara, what do you predict? Oh, let’s just do next year.
[00:45:31] Dara: I think the hardware thing is going to move more in the next year. I think there’s going to be more focus and more investment in that area, so I’m interested to see better things come out than the Friend necklace. Whether it’s the Sam Altman, Jony Ive thing, or further improved versions of the Meta Ray-Ban glasses, whatever.
[00:45:58] Dara: But I think there’s going to be more in that space. Another prediction I’m going to make for next year. How many am I allowed to do, by the way? I dunno. It depends if one of the ones you say is the one that’s in my head, in which case I’ll just cut you off. I’ll just keep talking over the podcast. I’ll just keep making predictions.
[00:46:14] Dara: I’ve got about 640. Is that okay? I think the other one is around the kind of personal AI space, so, you know, your own personal AI model. Rather than using Gemini or Claude or ChatGPT, whatever, have your own version that you can confidently share personal information with, and have it learn and train based on your information.
[00:46:41] Dara: So it’s your own personal AI, effectively your digital twin on steroids. So I’m interested to see if more happens in that space. I think it probably will in the next year.
[00:46:55] Matthew: Mine, I think there’s going to be, we’ve talked a bit about, if everything stopped right now and models didn’t get any better, there’s still so much left in terms of utility of what’s already out.
[00:47:09] Matthew: And I think we’re seeing that over the past however long, in terms of memory being added, and improvements to the reasoning of the existing models, and the skills and MCPs and all of these sort of add-ons that didn’t exist before, that are unlocking more and more. And I think there’s going to be loads more innovation, a lot more maturation of the products that sit on top of these models.
[00:47:36] Matthew: And I think that’s going to lead and push people a lot more to have to get their data in order, like we’ve already talked about. People need help, ’cause all those things are great, and they’re going to be getting more and more powerful and more and more impactful for a business. So people have to get their ducks in a row in terms of data, and get it centralized and governed and trusted and all of those buzzwords that we continuously talk about, to round everything back off into the world of data and analytics.
[00:48:08] Matthew: But that’s my prediction: a lot more investment in cloud, centralizing data and unlocking the power of what already exists.
[00:48:17] Dara: Alright, that’s our Christmas, 12 days to Christmas, 12 days of Christmas,
[00:48:22] Matthew: 12 days to Christmas, 12 things of note, 12 things, 12 days of socks, 12 days of stocks.
[00:48:27] Dara: Whatever you want to call it, there are 12 highlights. Some dystopian, some positive, but I think we ended on a vaguely optimistic note there.
[00:48:36] Matthew: Yeah.
[00:48:37] Dara: And it’s been good. I’ve enjoyed our 16 episodes together. 17, including this one. Unless, you mean you didn’t enjoy this one, hence why you’re counting 16?
[00:48:46] Dara: Oh, I’m not sure yet. You haven’t, 16, yeah. We need to review it afterwards and decide what we think about this one. Yeah, I agree. And, yeah, looking forward to getting more guests in the new year and carrying on.
[00:48:58] Matthew: Yeah, I think we’re going to have a slight hiatus, about a month I think, and then come back sort of late Jan, early Feb, and start off again.
[00:49:09] Dara: Yeah.
[00:49:09] Matthew: Unlike AI, we do need a little break every now and again, just to recharge. Keep up with the AI news so you don’t fall too far behind in February.
[00:49:17] Dara: Yeah. We’re probably going to have a whopper of a news segment in the next one we do.
[00:49:21] Matthew: Everything will have changed. Maybe just the first episode will just have to be news. Yeah. And then we’ll go into guests after that.
[00:49:26] Dara: It’ll be okay. Forget everything we said in every previous episode. Yeah.
[00:49:31] Matthew: Google’s gone. They’ve fallen to pieces. But now, all hail tl.
[00:49:36] Dara: Alright, that’s a wrap. We’ll see you all in the new year. Have a great Christmas. Bye-bye. That’s it for this week’s episode of The Measure Pod. We hope you enjoyed it and picked up something useful along the way. If you haven’t already, make sure to subscribe on whatever platform you’re listening on so you don’t miss future episodes.
[00:49:53] Matthew: And if you’re enjoying the show, we’d really appreciate it if you left us a quick review. It really helps more people discover the pod and keeps us motivated to bring you more. So thanks for listening, and we’ll catch you next time.