#122 Digital transformation in the age of AI (with Antony Mayfield at Brilliant Noise)
In this episode of The Measure Pod, Dara and Matt are joined by Antony Mayfield to talk about what 15+ years of digital transformation work actually looks like and how it’s changed. From the rise of influencer marketing to the sudden urgency of AI, Antony shares what he’s seeing on the ground with big brands and why adapting to new tech is no longer optional. Expect a wide-ranging chat on systems, shifts, and why ChatGPT isn’t just a tool, it’s a turning point.
Show notes
- More from The Measure Pod
- Jony Ive’s ambitious OpenAI device
- New Anthropic Claude 4 models (Opus 4 and Sonnet 4)
- Google Ads Data Manager API
- Google Tag Gateway
Share your thoughts and ideas on our Feedback Form.
Follow Measurelab on LinkedIn
Transcript
Some people will just stick with the familiar, even if it’s painful and tedious, because that’s what they know.
Antony
The real first tech revolution is when somebody started talking and that spread through the world.
Antony
[00:00:00] Dara: Hello and welcome back to The Measure Pod. I’m Dara, joined as always by my still relatively new, but now established, co-host Matthew. Hello, Matthew. How are you?
[00:00:25] Matt: I’m very well, thank you. It’s nice to be established.
[00:00:30] Dara: Yeah, I need to come up with new words. What’s the word for a co-host that isn’t new but is also kind of new?
[00:00:38] Dara: I need to buy a dictionary, really. Yeah, just co-host. Oh, I didn’t think about that, that’s nice and simple. Okay, so we’ve got some news that we’re going to go through, and then we’re going to switch over to a really interesting conversation that we’ve had with a guest this week.
[00:00:59] Dara: So to the news first. This one is, I don’t know, not particularly related to what we do as such, but I just thought it was kind of interesting: OpenAI buying Jony Ive’s company, which is called io. How did he do it? Firstly, I have so many questions about this. He managed to get the name io for a company, and that’s probably why it was worth 6.5 billion.
[00:01:27] Dara: That’s what I’d want them to pay just for the domain. Like, what is it? Is it io.io, or is it just... anyway, that’s homework for our listeners. They can figure that out.
[00:01:36] Matt: It’s definitely interesting just how little is there, isn’t it? This company doesn’t really have anything, does it, beyond rumors of some sort of AI product or AI device.
[00:01:48] Dara: Well, maybe this is a rumor as well, but apparently there’s a prototype that Sam Altman has actually taken home and tried out. But that could be a lie.
[00:01:58] Matt: Exactly, that could be as true as our news: there’s a Measurelab prototype that I’ve taken home that’s going to completely change the world. Is that true or not? I don’t know. And is Measurelab now worth 6 billion? Let’s hope so. I think it just shows how much money is in this. I mean, it’s got Jony Ive connected to it, I suppose.
[00:02:16] Dara: Jony Ive and the name io; you put those two things together. Yeah. And the domain.
[00:02:22] Matt: 6.5 billion is actually a bit of an undervaluation, I’d say.
[00:02:25] Dara: Yeah, probably. But it is interesting. I do think they could be quite powerful. I don’t know if you watched it, but there’s a video. It’s a little bit, yeah, bro-y. They’re sitting in this coffee shop having a coffee and chatting about how they got to know each other and so on.
[00:02:39] Dara: Waxing lyrical about San Francisco. It’s obviously for marketing purposes, but it is interesting. Their premise, the reasoning behind this, which does make a lot of sense, is that we’re using decades-old technology.
[00:02:57] Dara: I mean, our actual laptops and phones aren’t decades old, but the technology is, and it’s not really fit for purpose in terms of what AI is really capable of. So we’ve got this vastly expanding new technology, and we’re trying to interface with it through these weird little boxes that we carry around in our hands. The idea is that they’re coming up with something that’s actually fit for purpose. But what that is, there’s a lot of speculation. Nobody knows for sure.
[00:03:25] Matt: I mean, we’ve had a couple of shots at this, haven’t we? What was it, the Humane AI Pin? That did so well that it’s gone; I believe they’re now bankrupt. And then the Rabbit R1, which I was stupid enough to buy.
[00:03:41] Dara: Yeah. How did that work out?
[00:03:42] Matt: I think it was two days before I realized it was crap. So it needs to be truly different, because the argument with both of those things was: could they not just have been an app on a phone?
[00:03:55] Matt: Don’t get me wrong, I’ll probably buy it, whatever it is. It’s Jony Ive, I’ll have it. Just let me know when it’s out and I’ll put my hand in my pocket.
[00:04:03] Dara: One thing, and I wouldn’t read too much into this, and I don’t even know if it was in the official press release or just in an article about it, but there was a quote saying, you know, we want to see this world where, if someone’s got a subscription to ChatGPT, they get sent one of these things.
[00:04:22] Dara: Or apparently it could be a family of things. So it could be like the Google Nest devices, where you might need one in each room, and one that follows you around like a drone or something. But that suggested to me that maybe you’ll just get this, that they won’t make the money off selling you the device itself, but they’ll make it in other ways: through subscriptions, and maybe harvesting your data and selling it to the highest bidder.
[00:04:44] Matt: We’ve talked before about maybe doing a conspiracy-theories-in-data episode. There’s always been that conspiracy theory that eventually, like you say, you’ll just get given your smartphone, because what they’re actually interested in is all the data and, of course, the consuming you do on it, and the better the platform you have to consume on, the better for them.
[00:05:05] Dara: So yeah, maybe, and we’ll all go, wow, this is amazing, I can’t believe they’re just giving us this really shiny piece of technology out of the kindness of their hearts.
[00:05:13] Matt: All I had to do was have Elon Musk inject a needle into my eye, and now I can buy underpants on the go.
[00:05:21] Dara: Life goals complete. But yeah, we’ll see. It is an interesting headline, and they make quite a dynamic duo. I mean, they’re basically the you and I of AI and hardware design really, aren’t they?
[00:05:35] Matt: I did get that impression.
[00:05:37] Dara: They’re like a poor man’s Matthew and Dara, really. But we’ll see what they can come up with, and if they’re listening and they need some advice or consultancy, they know where to find us. Let’s watch this space. I think they’re going to call it an AI companion, or companions.
[00:05:52] Dara: Yeah, we’ll see. Could be a little AI friend that follows us around. We’ll see what that actually turns into. Onto slightly more real, tangible news, but still on the AI front: Anthropic have released the latest versions of the Claude models. They’ve got what’s been called their powerhouse, which is Opus 4, and then the smart all-rounder, which is Sonnet 4.
[00:06:25] Dara: And they’re doing well. It’s funny with these, isn’t it? You’ll know more about this than me, but with every model that gets released, they pick the benchmarking site or standard that they do well in. So every time Google comes out, it’s like, oh, the new Gemini Pro is at the top of this leaderboard. But then the new Claude models are apparently top of SWE-bench Verified, whatever that is.
[00:06:52] Matt: Yeah. I think the Claude release was a few days after the Gemini 2.5 preview, or whatever it was, that Google released, and both of them were claiming to be the best coding model in the world.
[00:07:09] Matt: They’re just one after the other. Especially now: you’ve had Google have a kind of drop-the-mic moment with Veo 3, which is the other big thing that came out, and which has still got me in a continuous state of wow and panic.
[00:07:28] Dara: With the audio as well. I mean, yeah, the audio.
[00:07:32] Matt: Yeah. It’s getting close to, how the hell can you tell what’s real and what isn’t?
[00:07:36] Dara: Maybe that’s what this is. Maybe, for people watching this podcast, we’ve been created in Veo 3. Although I think it creates better outputs than this. Do we have a prompt? I don’t know. We’re the low-res version, maybe.
[00:07:53] Matt: Yeah. But then Google drops the mic, and instantly after that it’s, oh, sorry, Anthropic going, well, we’ve got Claude, we’re actually the best, here’s Opus. So then you’ve got to imagine there’s going to be some OpenAI thing around the corner.
[00:08:07] Matt: I think I saw one of their technical officers starting to talk about ChatGPT-5, starting to try and spread those rumors and get the mill turning. So it’s a race to the apocalypse. Just got to enjoy it. Inject whatever Daddy Musk tells us to inject.
[00:08:25] Dara: And just buy underpants and just carry on with life.
[00:08:28] Matt: Yeah. Yeah.
[00:08:29] Dara: Have you played around with the new Claude versions?
[00:08:33] Matt: Yeah, I have. I do tend to flip from pillar to post with them, so whichever the new one is, I’ll use it. One of the guys in our team has been using the refreshed version of Claude Code, which has got some nice value-adds in it, working a bit more closely with IDEs and stuff, and he’s finding that really useful in some of the tools and things we’re building here at Measurelab. So we’ve played a bit. I tend to flip back and forth between, sorry, Gemini and Claude.
[00:09:08] Dara: Yeah. From what I read, what these models are doing particularly well at is, it’s in my head and I’m going to say real programming, I shouldn’t say that, but hardcore programming tasks.
[00:09:24] Dara: So I think that’s what it’s ranking highly for. And I think there is some evidence beyond just what they’re claiming, because I read that GitHub is going to adopt it for their Copilot. I guess it would be Opus rather than Sonnet that they’d use, but that’s a bit of a testimonial there.
[00:09:45] Matt: Yeah, definitely. I mean, I still find all of them potentially struggling a little bit with data applications. I think we might have said this in the conversation we’re going to have today: I’m not sure there is as much data engineering and analysis, SQL-type information out on the World Wide Web, where all of these things have been trained, compared to the web development training data that they’ve probably been trained on.
[00:10:14] Matt: So potentially they’re a little bit stronger on the web dev and standard software engineering stuff, compared to what I suppose you could call the data engineering and analysis niche, which is potentially less public. Yeah, I think so.
[00:10:27] Dara: That makes sense. Yeah. Then onto a couple more. It feels like this news is all...
[00:10:33] Matt: Anything not AI related.
[00:10:35] Dara: Yeah, yeah. No, I was going to say, I think there’s almost an unintentional hierarchy to this news, or maybe an inverse hierarchy. We started with the kind of least relevant to us, and we’re gradually moving towards the more granular stuff. So, on a non-AI note...
[00:10:53] Dara: Even this stuff, you can’t read a release without the mention of AI, even if it doesn’t really have anything specifically to do with AI. Why I’m saying that is because Google, on the ads side and the tracking side, have released some new features and tools lately that came from Google Marketing Live, which took place recently.
[00:11:18] Dara: But even in the release notes about that, they’re saying, you know, we’re releasing all this AI-powered stuff, and it’s like, well, hang on, is this actually AI powered, or are you just putting the word AI in there because it’s such a buzzword? That said, a lot of the more ad-specific stuff you can do in Google Ads is getting a lot of AI-powered features.
[00:11:37] Dara: In terms of both creating your campaigns and optimizing them and all the rest of it, there is a lot of AI going into it. But there were a couple of things we wanted to mention which aren’t really AI related. You’ve got the Google Ads Data Manager API. The Data Manager itself is not news;
[00:11:58] Dara: I think that’s been out for about a year. But the API is new, and it’s currently in closed beta, so I don’t know what the timescale is for it being available to everybody. Data Manager is a platform that lets you import, manage and activate your first-party data with Google Ads.
[00:12:19] Dara: And the API is obviously going to allow you to do that at a bigger scale. So it’s a bit of a watch-this-space on that one. You can express your interest; there’s a form for the closed beta. But if it’s anything like some of these forms... there are a lot of forms.
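To give a feel for what importing and activating first-party data through an API like this might look like, here is a minimal sketch in Python. The Data Manager API is still in closed beta, so the base URL, endpoint path and payload fields below are illustrative assumptions, not documented calls.

```python
import hashlib
import requests

# Hypothetical sketch: the Data Manager API is in closed beta, so the base
# URL, endpoint and payload shape here are assumptions for illustration only.
API_BASE = "https://datamanager.googleapis.com/v1"  # assumed base URL
ACCESS_TOKEN = "ya29.your-oauth-token"              # from your OAuth 2.0 flow


def sha256_email(email: str) -> str:
    """Customer identifiers are normally uploaded hashed, never in plain text."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()


def upload_audience(audience_id: str, emails: list[str]) -> None:
    """Send a batch of hashed emails to an (assumed) audience ingest endpoint."""
    payload = {"members": [{"hashedEmail": sha256_email(e)} for e in emails]}
    resp = requests.post(
        f"{API_BASE}/audiences/{audience_id}:ingest",  # assumed endpoint
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()


upload_audience("1234567890", ["customer@example.com"])
```

The appeal of an API over a manual CSV upload is presumably that this kind of job can run on a schedule straight from your data warehouse, which is where the "bigger scale" comes in.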
[00:12:37] Dara: How long have we been waiting for, what is it, Agentspace?
[00:12:44] Matt: Agentspace, yeah. I think I put my name down for the Meridian model way back when. That was a while ago. I think you can get access to that now, but Google Forms is doing overtime; there seem to be about 500 of them that you can sign up to for betas and testing. No, it’s definitely interesting. It could be really powerful.
[00:13:09] Dara: Yeah. And I guess, as security and privacy become, well, this isn’t new, but as they continue to be more and more of an issue, I think anything you can do to improve what you’re doing with first-party data is a no-brainer.
[00:13:27] Dara: Then, in a similar vein, gateway for some reason is becoming quite a popular term, because there’s also the Google Tag Gateway. Am I calling it the right thing? It is the Google Tag Gateway, isn’t it? It was previously called first-party mode, which probably makes more sense. I actually prefer the old name. It’s as if someone on the team said, we’ve got a really great name, let’s change it to something that doesn’t make a lot of sense, like Google Tag Gateway.
[00:13:56] Dara: But Google is smarter than me, so they probably see some reason to make that change. Tell me if I’m saying this wrong, but basically it lets you serve the tag script from your own domain, and you can do this either with or without server-side GTM.
[00:14:13] Dara: So it doesn’t matter; you’re basically pointing it at your own domain, as opposed to the Google domain, so you can use it with your own setup.
[00:14:27] Matt: Yeah, you can host it on your own servers or on Google-managed servers. I’ve not delved into it a massive amount.
[00:14:38] Matt: This is all relatively new, and newly announced, but it’s similar to things you’ve been able to do with tools like Stape, where you’ve been able to host these things in a first-party context and things like that.
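To make the first-party idea concrete, the underlying pattern looks roughly like the sketch below: the page loads the tag from your own domain, and a small reverse proxy forwards those requests on to Google. This is a generic illustration of first-party script hosting, not the actual Tag Gateway implementation, which Google and your CDN handle for you; the domain and path are made up.

```python
# Generic illustration of first-party tag hosting, not the real Tag Gateway:
# the browser loads <script src="https://example.com/metrics/gtag/js?id=G-XXXX">
# and this proxy forwards the request upstream, so the browser only ever
# talks to your own domain. Requires: pip install flask requests
import requests
from flask import Flask, Response, request

app = Flask(__name__)
UPSTREAM = "https://www.googletagmanager.com"  # where requests get forwarded


@app.route("/metrics/<path:subpath>")
def proxy(subpath: str) -> Response:
    upstream = requests.get(
        f"{UPSTREAM}/{subpath}",
        params=request.args.to_dict(flat=False),  # pass the query string through
        timeout=10,
    )
    # Relay Google's response back to the browser from your own domain.
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "text/plain"),
    )


if __name__ == "__main__":
    app.run(port=8080)
```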
[00:14:50] Dara: So yeah, again, there’s a first-party theme to these Google updates. They’re obviously putting a lot of attention into improving their users’ ability to collect and use first-party data.
[00:15:06] Dara: I think these two things, the Tag Gateway and the Data Manager API, are both steps in that direction. The Data Manager API maybe we’ll talk more about when we get access to it, but for now it’s just a case of saying these things are out, or coming out soon. So watch this space.
[00:15:28] Matt: And I guess we can put the release notes that you’re talking about, Dara, or the announcements, in the show notes, so people can go and have a read through themselves.
[00:15:39] Dara: Yeah, exactly. I couldn’t actually find too much information on the, sorry, on the Data Manager API. There are the Google developer docs around it, but I couldn’t really find a huge amount, which I guess is not a surprise given
[00:15:59] Dara: it’s quite new. More will probably come out, but I’ll include links to any release notes and developer documents for people to take a further look at. There is a reason why our first two news items were AI related, which, I mean, they often will be anyway, because we’re just swimming in AI news.
[00:16:20] Dara: But this episode is AI themed because we have a really interesting conversation with a guest, Antony Mayfield from Brilliant Noise, who’s a really interesting and clever guy. We have quite a wide-ranging conversation around AI, including a few predictions about the future, and thoughts about how companies are using AI at the moment and what it means for people.
[00:16:45] Matt: One running theme is probably how you can try and get more AI adoption in a company. So I think it’s really interesting, plus all the other stuff we talk about. I really enjoyed it. Alright, enjoy the conversation.
[00:17:00] Dara: Okay, a very warm welcome to Antony Mayfield. Antony, thank you for joining us on The Measure Pod, and welcome.
[00:17:08] Antony: Delighted to be here. Thanks very much for having me on, both of you.
[00:17:08] Dara: So we’ll let you do a proper job of introducing yourself, rather than me doing a really bad job of it. Go back as far as you like, and go into as much detail as you like, but give us and our listeners a bit of background leading up to what you’re doing with Brilliant Noise today.
[00:17:28] Antony: Brilliant Noise has been around for about 15 years, and we’ve done a lot of different types of work around social media and influencers, generally in the marketing sphere. But the common thread throughout it all has been this idea of digital transformation: of adapting organizations, and that means teams, and that means individuals, and for individuals that means minds.
[00:17:54] Antony: So adapting all of those things, all of those systems really, to come up with new frames and new ways of looking at the world, because of the tools and the systems that have been becoming available to us. And really, in the last two or three years, that has become all about artificial intelligence. That’s become the whole of digital transformation, I’d say, at the moment.
[00:18:22] Antony: And the reason for that is that, especially since ChatGPT came into the world, it has dialed up the urgency of digital transformation. People have been trying to change organizations to take advantage of digital technology for a long time. During the pandemic, and then some of the subsequent economic shocks of the last few years,
[00:18:45] Antony: that may have dialed down slightly for some organizations, but it’s right at the top of the agenda again now, because it’s moving faster. AI is another wave, another of the many waves of the digital revolution perhaps, and, I think many people say this, it’s not an extraordinary thing to say:
[00:19:06] Antony: many people say it’s the biggest wave of all, and it certainly feels like that to me. It’s both speeding things up, speeding up the urgency. And then also, famously, I think many companies that engage in digital transformation, trying to change how they work to take advantage of data, digital tools, the internet, whatever,
[00:19:28] Antony: very often fail. Transformations generally fail, though hopefully they’ve made some progress before they fail. With artificial intelligence, some of the things that made things slow down or fail before have now been swept away. Essentially because artificial intelligence, generative AI as we experience it in daily tools like ChatGPT and Gemini and Claude,
[00:19:56] Antony: is a very, very different technology. It’s a technology which, for the first time, accelerates how we think. All the other technologies we’ve had, I would argue, and I think many others do argue, from the abacus through to Google, have been mind extensions.
[00:20:18] Antony: They’re things that help us extend out how we think. We’ve got cognitive limitations, things we do well, things we can’t do, and we use things like pen and paper and databases to extend our minds. Generative AI is the first technology that actually speeds up how we think: it thinks with us. That’s one way of looking at it. Another way of looking at it is that it’s conversational computing.
[00:20:45] Antony: I think that’s a really good way of describing AI, given the phrase artificial intelligence has been around for a long time with all sorts of different understandings attached to it. The big change since ChatGPT came out is that you can talk to computers, and you can make things, change things and ask questions without having to understand code, without having to be able to do engineering.
[00:21:13] Antony: Those things are still great, they still help, but suddenly a lot more people can get their hands on this incredible amount of processing power and this incredible amount of data that’s available to us, and that’s going to change how we think.
[00:21:27] Dara: It’s interesting, because you talk about things like, even going back to the abacus and the printing press and all these things, in a way AI is just part of the broader technological revolution. But at the same time you seem to be saying, and I think we would agree, that it’s a very significant wave, due to the fact that, like you said, for the first time it’s actually,
[00:21:52] Dara: it’s making this much, much more accessible to people who don’t need to be technical at all, and it’s also actually speeding up the way we think. So it is significant. But would you say it is still part of this kind of broader technological revolution that we’ve been going through since, you know, the dawn of the computer?
[00:22:13] Antony: Well, it depends how you want to frame it, where you want to cut it off. You could say it’s part of the same technological revolution that’s been happening since we sowed crops in the ground. There’s a lovely book by a systems thinker and complexity scientist called George Rajewski.
[00:22:35] Antony: He was at Aiken University, and he wrote a book called The Future is Digital. It’s a very short, very concise book, and he talks about technological coevolution. It basically says that culture and society and technologies evolve twisted around each other, like DNA. A technology is invented, and the effect
[00:23:01] Antony: is that it makes things better, obviously, or creates more of something. But the side effect is that it creates complexity, and in order to deal with that complexity, you need to invent more technology. So you keep stepping up and up and up, and those waves of invention seem to have gotten closer and closer together. Take the first one: as we say, you plant crops in the ground, great.
[00:23:29] Antony: Now we don’t have to be nomads, and we’ve got more food than we know what to do with, but there are side effects. The side effects are that we suddenly have surplus, we can have specialization, and that means we need to start organizing ourselves. So we need technologies like writing to do accounts.
[00:23:45] Antony: We need a technology called cities, because it’s just not convenient to have everyone spread out so much. So then we put everyone in cities, and we need to invent things like, well, literally streets and bin collections, and then we need finance systems and we need communication. And then, skipping to the other end of it,
[00:24:05] Antony: what’s happening at the moment is part of the digital revolution, which Rajewski would say started in about 1990, with the web. The web was the beginning of that information revolution. The web created access to information, made access to information almost free; the marginal cost of accessing and reproducing information
[00:24:28] Antony: is zero. That meant that everybody could publish lots of things, but then we couldn’t find things, so we had to invent search. And then it becomes too complicated and gets broken. And we’ve got all these digital tools that we’ve been using. I mean, I’m 25, 30 years into my career, and PowerPoint and Word and things like that were just new on the scene when I started working. Thirty years on,
[00:24:55] Antony: what are laughingly now called productivity tools have created all sorts of chaos and terrible problems with filing and finding anything. We have back-to-back meetings and 500 emails in an inbox. So we’ve created complexity from this thing that was supposed to help us. And now there’s AI, and that will help us manage that complexity, but it will also create more complexity, and we’ll have to invent something else.
[00:25:23] Antony: So yes, it is part of that digital revolution. Rajewski says we’re about halfway through that revolution, and AI is the thing that’s pushing us through the next half.
[00:25:34] Matt: Do you think it’s going to really put a test on how well we can adapt to technology? To go back to your analogy: humans are really good at just embracing new technology, integrating it into their daily lives and running with it, and they become complacent with it extremely quickly. Flushing the toilet is not a miraculous thing to me.
[00:25:56] Matt: But, like you say, when you’re designing cities and trying to get rid of all this stuff, it’s absolutely crucial. And the pace with which technology is changing feels truly exponential at this moment in time. Do you think there’s going to be a reckoning, where we just cannot keep up with, or invent new technologies to deal with, the things that are rapidly changing?
[00:26:21] Antony: There are so many different ways to answer that, and the answer is that it’s very complex. Some humans will be complacent, some will be incredibly vigilant. Some societies will be vigilant. Some companies will embrace it, some won’t deal with it at all.
[00:26:42] Antony: And none of that’s new. You can see that, just like with the web revolution. I mean, we have lived through two or three, many, technological revolutions in our lifetime, and that’s really weird. We got the web and thought that was the best thing ever.
[00:27:03] Antony: And then the mobile web, and then social media. And the reckoning is interesting; I don’t know if there’ll be a reckoning. It always feels faster. It’s felt impossible since the 16th century. In the 16th century, if you had 300 or 400 books, A, you were rich, and B, you started writing a lot in your journal.
[00:27:23] Antony: And there are journals of people in England with libraries of three or four hundred books saying that they have information overload, that there’s too much now for them to read; they’re worrying about that. And then, you know, it’s almost a trope really, but Aristotle was worried about people learning to write.
[00:27:44] Antony: So we’re always worried about having too much. What we know so far is that we have always adapted, and that’s not just because we as units change; it’s because our culture changes and we come up with new ways of behaving that accommodate those new technologies, but also our brains kind of upgrade as well.
[00:28:06] Antony: I mean, we’ve basically got the same brains, more or less, as we had when language developed; it’s not that long ago, you know, still the same structure. When language emerged amongst humans, it hijacked the bits of the brain that were to do with trigonometry and geometry, because we’d developed brains that were really good at throwing objects to hit other moving objects, because that’s how our ancestors were hunting.
[00:28:37] Antony: And from that emerged language. In a way, that was a technology. And immediately, language gave our brains, which use maps to make sense of the world, we make maps of meaning, sometimes geographical, a new power: because we had language, we could now make maps in our head of things that didn’t exist, like maths or politics or religion.
[00:29:04] Antony: We could do those things because we could invent labels. So the real first tech revolution is when somebody started talking, and that spread through the world. That was a massive viral hit; humanity lapped it up over tens of thousands of years.
[00:29:22] Matt: So shall we all just sit here and talk to each other, in millions of podcasts, today?
[00:29:28] Antony: We’d love it. We’d love it, love it, love it. Once we had language, we could think in abstract terms. So it wasn’t that suddenly we had evolved; our technology had evolved, and the way that we used our hardware had evolved. And then it evolved again when we had writing, and again when we had movable type.
[00:29:46] Antony: When we had writing, suddenly we could read the thoughts of others, and that started us thinking in new ways. So did printing, and so did the internet. So yeah, we probably will adapt. But it’s interesting, that phrase, a reckoning. Will there be a reckoning?
[00:30:03] Antony: I think that’s probably quite a Christian idea, because that’s our intellectual heritage in Western Europe. We always think that there’s a beginning and an end to things, and we always think that there’s a judgment day somehow.
[00:30:17] Matt: I have a flair for the dramatic.
[00:30:18] Antony: Yeah, me too, me too.
[00:30:20] Dara: Just thinking about the reckoning, I wonder as well, and I don’t know if this is any different to how it’s ever been, but as the technology gets more and more complex, are the masses moving further and further away from understanding it? So maybe it won’t even be a question of becoming complacent; maybe there’ll be a point in time where people don’t even need to know any of the technical reasons why this thing works.
[00:30:45] Dara: They maybe won’t even know that they’re interfacing with something. They’ll just be living their life, and things will just happen.
[00:30:53] Antony: That seems like a credible scenario, doesn’t it? Because we don’t understand very much of the incredible technology systems that we live inside of.
[00:31:05] Antony: I mean, I understand quite a lot about generative AI, but there’s a point where that understanding stops. Also, our brains are really good at glossing that over. We talk about generative AI hallucinating; basically, when it hallucinates, what it’s doing is covering over the cracks of things that it hasn’t quite pulled together.
[00:31:25] Antony: It’s called confabulation, and that’s what we do as well. It’s only when you really think hard about something and try to explain it to someone that you begin to understand what you don’t know, because your brain doesn’t want you to worry about that. I mean, you’d go mad if you had to try and understand everything.
[00:31:43] Dara: Confabulation is pretty much all I do. That’s a good word, it’s a brilliant word. Bringing it back a little bit, because you mentioned generative AI again: that first question I asked you, about whether this was just another wave in the broader technological or information revolution,
[00:32:03] Dara: part of the reason why I asked is because when Brilliant Noise pivoted, if that’s the right way to put it, or I guess maybe doubled down, on AI, and now you’re purely working with companies helping them to adopt AI, from the outside that might have looked, and maybe to you on the inside it felt, like a very radical change. But based on what you’ve said, you could actually say that it’s more just an extension of what you were previously doing, or an evolution of it.
[00:32:35] Dara: So I’m curious to know: did it feel like a very radical change, or did it just feel like the logical next step?
[00:32:42] Antony: It did feel like a radical change. So through that year up until ChatGPT came out, in November ’22, was it? Through that year you’d seen things like Midjourney, and we’d seen things like GPT-2, and people making, well, you know, role-playing
[00:33:04] Antony: dungeons out of it, and people playing with writing. Jasper had come out, and that was kind of an interesting tool. So all through that time we were like, this is going to be huge. And the reason we thought that was not just because it’s cool tech, because at the time everyone was talking about Web3 and crypto, which in their own ways are cool technologies, but they didn’t feel as transformative as this.
[00:33:29] Antony: And partly that was because of social media, because we’d been through that cycle with social media. In fact, at the previous company where a lot of the founders of Brilliant Noise worked, called iCrossing, social media developed while we were there. And the reason that we knew, and that we could follow through on that
[00:33:50] Antony: tech wave, was that you could see something fundamental changing: the means of distribution and the means of creation of content had changed completely. So that’s amazing. And you could see the same with AI. It’s like, no, this isn’t just going to speed up copywriting; this changes everything, because it completely changes power and it changes economic production.
[00:34:15] Antony: So it’s going to be huge. There was that feeling of it burgeoning. Now, in terms of it being a radical departure for Brilliant Noise, I guess the architecture of thinking about it was all there, because we were thinking about digital change and working with brands on how they reorganize for it.
[00:34:32] Antony: But in early 2023, early ’24, after the launch of ChatGPT, we had an economic shock, as in our business had to be restructured and we had to shrink. That meant that there was also an opportunity, which was, well, we have to decide whether we continue or not.
[00:35:00] Antony: And if we do, then what are we doing? And I think it was just, well, we go all in on this. We work with the clients we’re working with to help them, but then we explore what it means to live and work with AI, and we become a living lab, really. That’s how we’ve thought of ourselves since then.
[00:35:21] Antony: We see what happens. And I guess there were two or three of us in the team who were committed experimenters, and everyone else was interested. Then over that year there was just a lot of experimentation and sharing, and new methods and ways of working emerged. And since then, everybody has become quite in-depth users and practitioners of AI. They’re not all AI consultants.
[00:35:47] Antony: We still run some marketing campaigns, so we have a creative director, we have data analysts, we’ve got a client director. We’ve got people in these roles still, but everybody has become the AI-first version of that role. It did feel radical. It felt really radical.
[00:36:09] Antony: And I think we had to have serious conversations with ourselves about being committed to this, even at that stage. So yeah, it was radical then, and I think it still is when we’re talking to clients and we’re helping them start to embed working in a kind of AI-first way, or thinking about how AI embeds into the way that they’re working.
[00:36:32] Antony: It’s very radical for them, because it feels like a big departure. When you start working with AI a lot, and you’ve probably experienced this yourself, you start with something like ChatGPT and you go, oh, that’s cool. Then you make it work a little bit better, and then you discover that you can make custom GPTs, and you’re like, lovely.
[00:36:55] Antony: Hell, I can create my own versions of this. And then you realize that you can make custom GPTs talk to each other. Oh my goodness. And then, once you’ve gone past that, I’ve made really useful tools, you start to see: actually, I need to think about this whole process differently.
[00:37:12] Antony: I need to start thinking about what data I’m bringing into this. Now I understand a bit about what makes it work better, and what makes it work better is context, basically. If I give it data, if I tell it to think in a certain way, if I introduce certain ideas to it, it produces better results.
[00:37:30] Antony: Okay, so that means you step back and start thinking about all of the things that go into that, and very quickly you start to challenge everything within the organization. As an example, people start with a very common place to start, and it’s a good place to start anywhere; the important thing with AI literacy is to start practicing, deliberate practice.
[00:37:51] Antony: Very commonly it’s: how do I manage a meeting? How do I manage my emails? Those are very common tasks, because they’re two of the biggest complexities that our most recent tech revolutions landed us with. And once you’ve organized a meeting, you start by going, okay, I’ll use AI to pull my agenda together.
[00:38:10] Antony: Or, actually, I’m far better off using AI to prepare pre-reads for people. Imagine that, pre-reads. Remember those, when we didn’t have back-to-back meetings? We used to prepare for meetings. And the other thing that we did with those meetings, because they were big investments of time, is we’d make notes from them, and we’d take those and look at those and do things,
[00:38:29] Antony: not just go on to the next meeting. So you start making the meeting better, and then you go, hang on, why are we having this meeting at all? And you go, okay, well, the meeting’s about decisions, obviously. Okay, well, these people don’t need to be there; they can just watch the video. And then you start to realize, actually, while we’re writing the notes from this transcript of the meeting, this transcript’s really good.
[00:38:52] Antony: There are some really good ideas in there. That transcript’s not just the notes; that transcript’s data about us having a conversation about something. We can use that. That needs to go into the data that informs the actions that come out of this. You see what I mean?
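That "context in, better results out" point is easy to see in code. Below is a minimal sketch using the OpenAI Python SDK: the same question asked bare, then asked again with role framing and a meeting transcript attached. The model name and the transcript file are illustrative assumptions, not anything from the conversation.

```python
# Minimal sketch of "what makes it work better is context": the same question
# asked with and without data and framing. Model name and file are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "What actions should we take after this meeting?"

# 1. Bare prompt: the model has nothing concrete to ground its answer in.
bare = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; swap for whatever you use
    messages=[{"role": "user", "content": question}],
)

# 2. Same question, with role framing and the meeting transcript as context.
transcript = open("meeting_transcript.txt").read()  # hypothetical file
grounded = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a project manager. Extract decisions, owners and deadlines."},
        {"role": "user",
         "content": f"Meeting transcript:\n{transcript}\n\n{question}"},
    ],
)

print(bare.choices[0].message.content)      # generic advice
print(grounded.choices[0].message.content)  # actions grounded in the transcript
```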
[00:39:09] Matt: It spreads out. So do you think the best thing a company can do to begin to adopt this stuff is to align that machinery and get their data in order? That’s been a theme for us to a certain extent over the past couple of weeks, where, at Next ’25 recently, Google’s chucking AI features at BigQuery, for example, and there are about 20 new AI agents and this, that and the other.
[00:39:36] Matt: And what struck us is that it all looked really cool, but if you haven’t done your homework and the groundwork to get your data into good shape, there are risks involved with just aimlessly following one of these agents down a path. We were talking a little bit more about your day-to-day there, but do you think that getting data in order, and getting the internal machinery in order, is a good place to start getting ready for these tools?
[00:40:05] Antony: I mean, it’s fantastic if you can be doing that, but I think the place to start is how people think about it, and practicing it every day. That’s the place to start. And the reason I say that is this. Transformations fail for many, many reasons, but for the sake of a pithy headline, let’s say the reason a transformation fails is that very bright people understand the problems of the organization, understand how it could change, and write a plan about what people should do. Then do a word count, as it were, a spiritual word count, of how many times "should" comes into that plan.
[00:40:44] Antony: And then nobody bloody does it, because just being right is not the way that you ever get any change or get anything done. The way you get change is by changing habits and behaviors and the rest of it. Now, if you go to an organization and say, we need to clean the data and it will allow us to run masses of agents, then A, nobody understands it, and they don’t care, and they’ve still got their day jobs.
[00:41:08] Antony: B, they’re scared of AI, because there’s all this narrative about how it’s going to make us all slaves to the machines, or end the world. So why would that work? Whereas if you can get everybody using AI, and they get to the point, through working upwards, of understanding a meeting not as a meeting, which is a bundle of stuff that we don’t really question, but understanding it as
[00:41:37] Antony: something that is there to solve a problem, with some data attached to it, a process attached to it, and an outcome coming out of it, and they start to think like that, then they start to go: actually, I need to organize the data better. And then, instead of talking about data cleaning, for instance, or data cleansing, which sounds very technical and weird...
[00:41:59] Antony: It turns out, and I’m not a data specialist, but when I was talking to our data analyst, who was saying we were cleaning data for a client to help them put together an AI system, she walked me through what they were doing, and it’s like, oh, right, you’re fixing their spreadsheets.
[00:42:14] Antony: The spreadsheets are all designed like Word documents, because no one’s ever been trained in these bits of software that all these organizations run on. So they don’t have proper headings, they’re not organized into columns; they’re written to look like a document. A person can read it, but a machine looks at it and doesn’t understand it.
[00:42:32] Antony: But if someone’s been through the process of wanting their data in good order so that they can get something done, then when you start talking about organizing data properly and all the rest of it, that will happen faster. So I’m arguing for a sort of bottom-up approach there, but both are true.
[00:42:51] Antony: You think you need to be fixing the data, but there’s also some complexity to that. It’s not just a case of: okay, we need to organize all the data we’ve got and have a beautiful data architecture, and now it should work. If you hear the word "should" anywhere along the line, the alarm should go off, because yeah, it should work now, but it doesn’t.
[00:43:14] Antony: Why is that? Because of those pesky humans. And you’re like, well, sorry guys, but the humans are going to be involved for a long, long time. You need to work out how a human brain, which is an amazing piece of hardware, works with this amazing piece of software you’ve got called AI,
[00:43:34] Antony: and work out a system for that, not just an architecture that would work if only us humans would do as we’re told. I’m sorry, I think I’ve gone off on an answering machine rant there, which was the last thing anyone was expecting.
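As a concrete, hypothetical illustration of "fixing their spreadsheets": turning a sheet that was styled like a document into tidy, machine-readable data. This is roughly what that cleaning step might look like in pandas; the file name, layout and column names are all invented for the example.

```python
# Hypothetical example of "fixing their spreadsheets": a sheet with decorative
# title rows, blank spacer rows/columns and text-formatted numbers, turned
# into tidy data a machine can actually use. File and columns are invented.
import pandas as pd

# Skip the decorative title rows so the real header row becomes the columns.
df = pd.read_excel("client_budget.xlsx", skiprows=3)

# Drop rows and columns that are blank padding used purely for visual layout.
df = df.dropna(how="all").dropna(axis=1, how="all")

# Normalize headers: "Campaign Name " -> "campaign_name".
df.columns = (
    df.columns.str.strip().str.lower().str.replace(r"\s+", "_", regex=True)
)

# Coerce values that were typed as text, e.g. "£1,200" -> 1200.0.
df["spend"] = (
    df["spend"].astype(str).str.replace(r"[£,]", "", regex=True).astype(float)
)

df.to_csv("client_budget_clean.csv", index=False)
```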
[00:43:46] Dara: No, no, it’s interesting. I don’t think anybody would argue if you said to them: there’s this technology that will make your meetings more efficient, maybe even help you decide if you need those meetings in the first place, and it’ll take notes and all the rest of it.
[00:44:01] Dara: I think 10 out of 10 people would say, yes, sign me up for that. But in reality there’s that weird phenomenon where people say, I’m too busy to make myself less busy. So that stuff is kind of ground level, I completely get it, and I think you can go into a company and show people these tools in action and maybe go through some use cases.
[00:44:23] Dara: But how do you encourage it? And is it just down to the individuals? Will it work in some companies and not in others because of the personalities, or the people? Because it is weird: you’d think everybody would bite your hand off, but in reality some people will just stick with the familiar, even if it’s painful and tedious, because that’s what they know.
[00:44:45] Dara: Even your example of people using spreadsheets wrong: if they took the time, they’d realize it’s actually much more efficient, but they’re just used to doing it the way they’ve always done it.
[00:44:55] Antony: Well, if they took the time, or if the company had bothered to train them, or if anyone had bothered to train them ever, and not just say, are you all right with spreadsheets?
[00:45:04] Antony: But it’s amazing how tool-centric we are, that we think we’ll buy a bunch of tools and everything will be fine, and what happens is people muddle by and there are workarounds. So yes, I think some companies and some people will always be laggards, because we have a spectrum of human behaviors and attitudes, and rightfully so, because that means we’re an interesting, diverse species that doesn’t just roll over at the next splashy technology.
[00:45:34] Antony: We actually have to persuade some people to use it, and good. So in terms of getting people to use it, you find that you definitely can’t just tell them. You have to show and then tell, but showing’s not enough. You have to get them hands-on: they have to use it, and they have to use it through deliberate practice.
[00:45:52] Antony: There’s a lot of evidence around how adults learn, and unfortunately many of us learned how to do training and write training materials before this research was done. We need repeated practice. We need spaced repetition over time. We need to be able to discuss the ideas with other people, and we need to be able to
[00:46:15] Antony: use or try out the skills, and sometimes fail slightly at them. That’s what they call a pedagogy, or an andragogy for adults; that’s how you learn. And that means you have to come up with a use case, as it were, for people that’s persuasive enough for them to spend time every day working with it. And then, as they do that, they develop a level of literacy.
[00:46:40] Antony: There are some practical skills, like prompting, but they also develop a feel for it. They have a felt sense, as learning designers call it, of: oh yeah, I get it. This is how it works. And that’s really hard; it’s really hard to explain that to people.
[00:46:59] Antony: Nobody wants it explained to them anyway. They want: I want some training. I want a session on Wednesday where I turn up for an hour and come out an expert on the thing that I previously knew nothing about. My wife runs this garden design course.
[00:47:15] Antony: And it’s a similar attitude for adults coming in there: I’m ready to be told all about garden design. No, you have to do it. You have to work. Adults forget that learning, despite the fact that they’ve had children in the house doing GCSEs and they’ve seen the pain of learning,
[00:47:35] Antony: they forget that learning is work. It’s hard work. And the reason people resist is perfectly natural; it’s what our brains are designed to do. Our brains consume a huge amount of energy, and they’re designed not to expend energy unless they have to, because that’s a survival technique. So in a world where you’re bombarded with information and opportunities to learn new things, the brain rightfully goes, yeah, it’s all a load of rubbish, internet, until
[00:48:01] Antony: it senses that there’s something valuable there. So what we do to get people started, we call them vertigo moments; I actually borrowed that from Casey Newton, the tech journalist, a couple of years ago. He said he had his AI vertigo moment: not just an aha moment, but an oh my goodness, that works,
[00:48:22] Antony: that shouldn’t work, moment. And you need people to have a couple of those, because until you’re about 12, your brain is absolutely flooded with a bunch of hormones that mean you remember everything. I don’t know if they’re hormones, actually, I’m not a biologist, but a load of chemicals that mean you remember everything. After 12, that switch is off.
[00:48:46] Antony: It doesn’t mean you can’t learn anything, but it means that system has to be activated. There’s a lovely phrase neuroscientists use: surprise is a superpower. If you’re surprised by something, the brain goes, I think there’s something here that I could use. I’ll have some of that, please.
[00:49:05] Antony: And it floods the brain with hormones, which are lovely; we love them. I mean, you’ve been in a room full of people when they’re actually learning. Everyone leans in. Everyone’s like, oh, tell me more. You can feel it in the room. So to get change in an organization, you’ve got to shock or surprise people into a positive state.
[00:49:26] Antony: And then, how do you change? Going back to some other points in your question, that’s down to leadership. A lot of it is down to leadership, because even if you have a bunch of bright people who are surprised and desperate to learn, unless they are given the space to learn, and unless it is a psychologically safe place to learn, where they’re not afraid of screwing up or breaking the AI or ruining things, they won’t learn.
[00:49:52] Antony: So leaders need to provide clarity around what their plan is. They need to be clear about what they don’t know, and be able to say: look, we don’t know how this is going to affect the organization, but these are the ways that we’re going to find out. We’re going to find out together, and we’re going to run some experiments in safe places and try some stuff out.
[00:50:14] Antony: And don’t worry, this stuff’s safe; go and try this out over here. And then you’ll get people who are learning. If failure’s not an option, and we’re not at home to Mr. Screw-Up, then no one’s going to learn anything or change at all.
[00:50:32] Matt: Not to take it away from an organization’s responsibility, but to look at an individual’s responsibility for a second:
[00:50:40] Matt: there’s a lot of noise, and a lot of listeners are now probably saying, how the hell can I possibly know where to focus my attention, where to look, what to concentrate on, or what thread to pull at, because of the sheer amount of news and updates and things that are coming out?
[00:50:57] Matt: Have you got any practical approaches to cutting through some of that noise, or just finding an area of focus? Because, like you say, everyone needs to augment themselves, to be looking at this stuff and essentially adopting it into practice. Work can help point you at that in some ways, but there’s a certain level of responsibility on the individual, perhaps.
[00:51:21] Anthony: Yeah. I mean, yes, I think there is, you know, it does operate at all those levels of team, individual organization. So as an individual, deeply in your best interest to understand generative ai, not just as it’s a skill and you want to have it as part of your each, because whatever you do, if you’re doing it with your brain, it’s going to help you do it faster and better.
[00:51:45] Anthony: And there’s, you know, it’s just, it’s just hard to lose, I think, in that scenario. But, so how you, how you start and focus, that’s a different challenge to cutting out the noise, which kind of goes back a bit to social media, but the how you, how you start, you can bound them, you can put some boundaries on, on how much you have to do on what you need to do.
[00:52:06] Anthony: I mean, Ethan Mollick, who is a Wharton business school professor, writes very eloquently about practical uses of AI. He wrote a book called Co-Intelligence, which is excellent, recommended to anyone. Very practical, very feet on the ground. He says it takes about 10 hours of deliberate practice to get a sense of what’s possible with a large language model.
[00:52:34] Anthony: So with ChatGPT, say, set yourself that task. Give it 10 days, an hour a day, and find something to do with it. And it doesn’t matter what it is. It could be planning a holiday, it could be organizing your email. It’s low stakes. You don’t expect anything from it. Just play. See if you can make it work better.
[00:52:52] Anthony: See what’s interesting. Everyone loves falling down a Google rabbit hole of research. Go and research something, knock yourself out. It’s fun. But if you do it for about 10 hours, you’ll begin to get a sense of how it works. Once you’ve got a sense of how it works, the deliberate practice stops being an issue, because what will happen is you’ll start going, oh yeah, I could use it for that.
[00:53:18] Anthony: Oh, I could use it there. One of the analogies that we really like, and look, a lot of people use ones like this, is that it’s like learning a language, or learning an instrument. It takes deliberate practice. Say you learn French for 10 hours, or Spanish. I like learning Spanish.
[00:53:40] Anthony: But if you go to Spain with 10 hours of Spanish, congratulations, you will be able to order drinks and say hello. And people will find you more polite than they did, and you might have a slightly better experience. You might even spot some things on the menu you wouldn’t have known about otherwise.
[00:53:56] Anthony: If you keep going for a few months, you’ll be able to have a basic conversation with people in the cafe, and you might find somewhere more interesting to go. Beyond that, you can start reading the local newspaper, and then you find out about the things that are going on in town that you wouldn’t have known about.
[00:54:12] Anthony: You don’t expect to know everything straight away, but you will get payoffs. At each stage of learning it gets easier and easier, because it’s so much fun and it helps you to do what you want to do. So that’s how to get started with it.
[00:54:32] Anthony: The other thing to say, in terms of managing the noise, is: turn it off. This goes back to social media literacy and how we use social media. I used to say with Twitter, and probably would still say about X if I wandered on there occasionally, that if you don’t like it, well, you can’t really have an opinion about what Twitter is like, or what LinkedIn is like.
[00:54:59] Anthony: You have to realize it’s your feed. So if it’s too noisy, mute some stuff, or don’t go there as much. That’s the problem, not that there’s too much noise there. There is just so much noise in the world. So get some noise-cancelling headphones, decide what you’re going to listen to.
[00:55:16] Anthony: That’s the way to cut down noise. Mm-hmm. And one thing that can actually help with the noise, something we often open workshops or keynotes with, is the old William Goldman quote from his book Adventures in the Screen Trade. He wrote about Hollywood in the eighties and how everyone was pitching films with absolute certainty.
[00:55:43] Anthony: You know, it’s Dangerous Liaisons meets Star Wars, it’s going to be a hit. But nobody knows, and that was the thing you had to remember in Hollywood: nobody knows anything. Everyone’s pitching, but nobody knows anything. And when you’re looking at a feed on LinkedIn or Reddit or wherever you are, and people are saying stuff, just say to yourself, none of them know anything.
[00:56:06] Anthony: Because none of us do, when it comes to generative AI. We do not know what the next few years look like. At the beginning of this year, in January, the people with the most money in AI, the venture capitalists and the big tech firms, knew exactly how AI was going to work.
[00:56:24] Anthony: And they said, right, we’re going to build a $9 trillion super cluster of things, and we’re going to do this, and it’s all taken care of now. And then three weeks later, everyone realized that on Christmas Day, DeepSeek had launched in China with a completely different way to do it. All the narratives, all the predictions for 2025 about how large language models worked, were off the table.
[00:56:47] Anthony: So anybody who says anything about AI has to say it with some humility. And by the way, listeners, everything I’ve said comes with the same two caveats: this seems to be the case for now, and this seems to be how it works for now, until we find better evidence.
[00:57:11] Anthony: This is what seems to work. Even the large language model owners, and I think Anthropic, who make Claude, have the most information and have put the most investment into understanding how the model actually works at the moment. We know the principles, but, and I’m not sure where they are now, I think they’re at about 5% understanding of how their own model works.
[00:57:34] Anthony: So even they don’t know. And Geoffrey Hinton got the Nobel Prize, didn’t he? An amazing computer scientist in AI. He said 10 years ago that we wouldn’t have any radiologists anymore, but radiologists are now using AI more than anyone else. He didn’t know, because all he’s good at is being a genius about how to make things.
[00:57:55] Anthony: He doesn’t actually understand how we use it. You know, the people who invented IP, internet protocol. Tim Berners-Lee is not a billionaire. He literally invented the thing, and did not invent Amazon or Google or all of the ways that value was created later on.
[00:58:16] Anthony: I mean, I wish he had; he’s a lovely man. So nobody knows anything. That’s the way to deal with noise. And the more confident someone is, and the more confident I am, the more we’re probably just talking.
[00:58:28] Dara: So as a company, then, that’s quite a shift in your mindset. You basically have to accept that you have to invest in this, and you have to allow the people within your organization to increase their AI literacy.
[00:58:45] Dara: So you’re going to invest time and money, and potentially build solutions, either in-house or getting people to build them for you, and five minutes later those solutions could be redundant and you have to build something else. And I’m not even sure this is a question, because this is just the way it is, but it is quite different.
[00:59:04] Dara: It’s not like, oh, we’ll build a website, and maybe it won’t be the best website ever, but we’ll improve it in five years’ time, it’ll do the job for now. This is a bit different. You might build something thinking it’s going to change your business, and then one of the AI companies will just roll something out the week after you’ve launched it.
[00:59:21] Anthony: I think this is why AI literacy, actually understanding how to use it, is the best investment any of us can make at the moment, at any of those levels. Because when you develop your understanding of what’s happening, what’s possible, what seems to be possible, and where things don’t work.
[00:59:42] Anthony: Then as the new waves of innovation come through, you know what to do with them. I’ve seen this from Kevin Weil. The Chief Product Officer at OpenAI was on a podcast a little while ago, and it was a really good session, actually, a really good Q&A.
[01:00:01] Anthony: He said they and the million developers using their API are bumping up against the limits of what the current models can do. And when they do, they’re like, great. Not because it doesn’t work, but because it will probably be solved in a few months’ time. So that’s a good place to be.
[01:00:21] Anthony: I remember in 2023, was it, essentially building a custom chatbot, and we had to get an engineer to help us. We spent about two or three thousand pounds building this thing that used a client’s learning materials and answered questions about them. And then three or four months later, the GPT Store came out and we made 20 of those in a day for nothing.
[01:00:50] Anthony: But we made them on the day it came out, because we knew exactly what that feature was for. And amazingly, many, many people still haven’t even opened the GPT feature, and they’re just using it like Google.
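For listeners who want to try something like the chatbot described above, here is a minimal sketch of the idea in Python. To be clear, this is not Brilliant Noise’s actual build: the file name, model name, and prompt wording are illustrative assumptions, and it naively pastes the whole document into the prompt rather than doing proper retrieval.

```python
# Minimal sketch of an "answer questions from a client's learning materials"
# chatbot, in the spirit of the 2023 build Anthony mentions. Hypothetical
# throughout: "materials.txt" and the model name are stand-ins.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Naive approach: put the whole document in the system prompt. A production
# version would chunk the text and retrieve relevant passages instead.
with open("materials.txt", encoding="utf-8") as f:
    materials = f.read()

def ask(question: str) -> str:
    """Answer a question using only the loaded learning materials."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer questions using only the learning materials below. "
                    "If the answer isn't in them, say so.\n\n" + materials
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What are the key points of module one?"))
```

A custom GPT in the GPT Store does essentially the same thing with no code at all, which is why the hand-built version became redundant almost overnight.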
[01:01:02] Dara: Can I ask something maybe slightly tangential, but it does follow on from that a little bit. So as a business, you’ve got to understand enough, and you do have to accept that the technology is going to change almost as quickly as you learn it. As quickly as you learn one bit, it’s going to change and you’re going to have to adapt to the next bit.
[01:01:20] Dara: So, a question we often get asked when we’re working in the marketing analytics space is, can you put an ROI on analytics? And it’s a tricky question to answer, and various people have different answers. Some people think you can, some people think you can’t.
[01:01:35] Dara: What’s your view on AI? Because that’s the first question a business owner is going to ask, isn’t it: can we prove an ROI on this? If we invest in AI, how do we know it’s delivering? How do we measure the success of it?
[01:01:48] Anthony: Such an interesting question. I mean, it’s an obvious question, but it’s so interesting when you try to unwrap it, and actually that question can mean very different things in different settings.
[01:02:01] Anthony: Sometimes I’ve found, where a leadership team or a leader has gained even a basic level of AI literacy, they gain an instinct that, oh my God, this is going to be huge, we need to get in there and we need to start moving fast. And what’s the ROI going to be on this? Can we measure it? And what they’re saying is honest and straightforward: let’s measure this as we go.
[01:02:24] Anthony: Let’s make sure that we’re measuring it. Great. Then there’s another way of asking what the ROI is going to be, and that one means: I don’t want to do this, so you tell me exactly how it’s going to work. And, well, I don’t know if I’ve mentioned how uncertain all of this is.
[01:02:42] Anthony: There’s a leap of faith. You can wait until there are case studies and definite ROIs. Another way that I’ve countered that question, and I hope not too aggressively, is like, okay, so we’re going to work on your sales team’s efficiency, on how quickly they get through sales.
[01:03:04] Anthony: So we could measure the uplift in sales. Sure. But you want to know about the efficiency, about how much more quickly they’re producing sales documents. Okay, great. And you want to know their productivity levels, the return on investment. Well, how do you measure their productivity levels at the moment? No one does.
[01:03:21] Anthony: Yeah, so meetings. This meeting software, what’s the ROI on that? Well, how do you measure the effectiveness of your meetings? Oh, you don’t, do you? Nobody measures anything to do with that, you just go to meetings like the rest of it. So there are those kinds of aspects to it. The other thing I’ve found is that the ROI conversation can be bound up in, depending on people’s fields, a model of ROI that can be quite reductive.
[01:03:46] Anthony: So for instance, in marketing and performance marketing: well, what’s the ROI on this? Is it going to give me a sales uplift? Is it going to improve awareness? Whatever. But if you take a different tack, and ChatGPT is really good at this actually, you stop thinking of it as a marketer and go, okay.
[01:04:05] Anthony: Tell me how a Sequoia investor would put together the return on investment case on this. Tell me how McKinsey would put the investment case together if it was wanting to invest, say, 10 million pounds into this. And you will see simulated versions of those different ways of thinking about ROI, and it turns out it will be things like share price uplift, it’ll be the effect on staff turnover.
[01:04:33] Anthony: It’ll be complex. Suddenly, what ROI is can be the things in the business cases that people use to invest millions of pounds. So you can actually broaden out your ROIs. I remember with one company I was looking at, before we started working with them, I think in the end what we looked at was quite straightforward productivity measures, just building business cases around the innovations they came up with.
[01:04:59] Anthony: But I was looking at their reporting and trying to see how they were measuring things at the moment, and one thing that came up was their Scope 3 environmental impact. Is it Scope 3? I think it’s Scope 3. A lot of companies that have signed up to this will measure what their carbon impact is.
[01:05:23] Anthony: And one of the ways that they reduce that, especially if they’re not actually manufacturing things, will be not having as many meetings, not going to the office, cutting down on business travel, things like that. But that’s a carbon business case. That’s a carbon ROI: I’ve killed 10 meetings this week, because I’ve realized that with AI what we can do is actually make those decisions, inform those people, and have a one-hour meeting where people are having a nice chat and catching up and bonding, instead of a seven-hour meeting where everybody’s walking through PowerPoints that have nothing to do with reality.
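The reframing trick Anthony describes here is easy to try yourself. Below is a hedged sketch, assuming the same OpenAI Python SDK setup as the earlier snippet; the prompt wording, figures, and model name are illustrative, not a quote from his workshops.

```python
# Sketch of asking a model to build the ROI case the way different kinds of
# investors would, per the reframing discussed above. All specifics here
# (firm names, figures, metrics) are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

prompt = (
    "First as a Sequoia-style venture investor, then as a McKinsey-style "
    "consultant, build the return-on-investment case for putting 10 million "
    "pounds into AI literacy across a 500-person marketing services firm. "
    "For each perspective, list the metrics you would track, such as share "
    "price uplift, staff turnover, and time saved per sales document."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```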
[01:06:04] Dara: So one area that we haven’t really gone into much so far, Anthony, is ethics. Say you’ve got a company, they’ve got enthusiastic people, they’re improving their literacy. I know this is a very big, broad question, but how should a company, or even an individual, approach the ethical side of how they’re using AI?
[01:06:25] Dara: And as you’re building that literacy, how do you, in parallel, build your awareness of things like bias, or any privacy concerns around using these tools? Are there any tips you would give people, or any advice, around making sure they’re used ethically?
[01:06:44] Anthony: I mean, one thing we definitely do: when we run a kind of onboarding workshop series, you know, two or three workshops in a row to get a team started, we make ethics, or certainly values, part of day one. So we start that day with prompting.
[01:07:06] Anthony: And the reason we start with prompting is that it’s a good way to shock people, by showing them what a difference a different approach can make to something. And that leads them into being interested in the rest of it. But we usually end that first day by taking the company’s set of values, if it has one that is meaningful to them, or a set of principles or something, and we get people to talk about what those might mean in the context of AI.
[01:07:34] Anthony: And the reason I say that is that that’s the place where, at the end of the day, a lot of the anxieties and questions and uncertainties from an ethical point of view come into play. People will talk there about, is this going to replace our jobs? What’s this going to do to us? What about the environmental impact of using these things?
[01:07:57] Anthony: Are they taking our data? All of those sorts of things. So you need to get all of those out on the table and deal with them, as part of putting something in at an organizational policy level. You need a policy around these things.
[01:08:14] Anthony: If you have a set of values, then you’re going to want to ask what they mean in the context of AI, and what questions we need to ask. And that’s a good place to start. On a slight tangent from that, or a slightly cool tangent from that: one of our clients that we work with is deeply committed to their values, and their culture is all over the walls.
[01:08:42] Anthony: They’ve written books about it and everyone gets them; they insisted we read it when we started working with them. And so we turned that book into a chatbot and had it talk to people in the meeting, so that the chatbot was talking through that frame, as if everything that was said in that book was gospel truth.
[01:09:04] Anthony: It was fun. But I’d read a thing in the Harvard Business Review about dilemmas, and about the real test of a value. You know, ambition, that’s great. Kindness, okay. But what if I come across a situation where there are two options, and maybe both are equally backed up by data and argument, but they’re different?
[01:09:29] Anthony: That’s life. You come across dilemmas and you have to make a choice. Then which way does the value guide you? That’s what a good value or a good principle is for, and that’s a good guide for ethics. So I was reading that and I thought, well, let’s put in some dilemmas for it.
[01:09:47] Anthony: And this company happens to be in the marketing services space, so I picked some recent crises in the marketing space, like Havas and Just Stop Oil, or a celebrity you’re working with turning out to be very right wing. What do you do? And then I said, okay, imagine you are on the board of this company.
[01:10:07] Anthony: What happens when the staff start testing it? Oh, well, we’d have to lose that client, because we’re people first. Okay, right, fine, we’ve lost that client. And now we need to lay off a team because we don’t have as much revenue. Oh, right, okay. Well, what we might need to do then is consider other things, and start qualifying it in that way.
[01:10:26] Anthony: And I thought it was very interesting, and probably something people don’t do, to actually imagine having your values as an AI on your board. It would be really awkward, because I’ll tell you one thing from my limited experiments: everybody’s values are a lot more woke than the way those companies actually behave.
[01:10:51] Anthony: You’ve noticed that, you know, you’d think they were all charities if you just read the values prose. I think ethics are really important, and we’re going to bring them in. I also think there are really interesting things you can do with AI to bring ethics and values to life, and see what you actually mean by them.
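For anyone curious to repeat the “values on the board” experiment, here is a rough sketch under the same assumptions as the earlier snippets (OpenAI Python SDK, OPENAI_API_KEY set). The values file, dilemma wording, and model name are hypothetical stand-ins, not the client’s materials.

```python
# Sketch of the "values as a board member" experiment: give the model a
# company's published values and a dilemma, and ask it to rule as if the
# values were gospel. "company_values.txt" is a hypothetical file.
from openai import OpenAI

client = OpenAI()

with open("company_values.txt", encoding="utf-8") as f:
    values = f.read()

dilemma = (
    "A long-standing client is at the centre of an environmental scandal. "
    "Dropping them fits our values but means laying off the account team. "
    "As a board member, what do you recommend, and which value decides it?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {
            "role": "system",
            "content": (
                "You sit on this company's board. Treat the following values "
                "as binding and follow their consequences honestly, including "
                "the uncomfortable ones:\n\n" + values
            ),
        },
        {"role": "user", "content": dilemma},
    ],
)
print(response.choices[0].message.content)
```

In Anthony’s telling, the interesting part is exactly that discomfort: the model follows the stated values further than the company itself usually would.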
[01:11:05] Dara: Okay. Thank you again, Anthony. I think that’s a really good place to stop. I think we probably could all keep going for several more hours, but we’ve probably all got places to be, so I think we can leave it there. But maybe we’ll leave the door open for a return visit one day, to see how much of what we’ve talked about holds up.
[01:11:24] Dara: We can put you to the test. You said earlier, you know, don’t trust anyone who claims to know what they’re talking about, so we can see in six or twelve months’ time where things actually are. But for now, let’s leave it there. And thank you again for taking the time to talk to us.
[01:11:38] Dara: That’s it for this week’s episode of The Measure Pod. We hope you enjoyed it and picked up something useful along the way. If you haven’t already, make sure to subscribe on whatever platform you’re listening on so you don’t miss future episodes.
[01:11:50] Matt: And if you’re enjoying the show, we’d really appreciate it if you left us a quick review. It really helps more people discover the pod and keeps us motivated to bring you more. So thanks for listening and we’ll catch you next time.