#129 The impact of AI on digital analytics and user privacy (with Brian Clifton)
In this episode of The Measure Pod, Dara and Matthew sit down with Brian Clifton, the former Head of Web Analytics at Google (EMEA), author, and founder of Verified-Data.com to explore the intersection of AI, privacy, and analytics. From the evolving role of Google Analytics and data democratisation to the challenges of user consent and data protection, they unpack how privacy-first thinking is shaping the future of measurement. Along the way, they dive into issues like AI in user interfaces, copyright and data ownership, and the risks automation poses to professional journalism. The conversation also covers creativity in the age of AI, from automation and collaboration to the impact of hyper-targeting versus contextual advertising.
Show notes
- Verified-Data.com
- Brian Clifton
- More from The Measure Pod
Share your thoughts and ideas on our Feedback Form.
Follow Measurelab on LinkedIn
Transcript
“If we didn’t have the regulation that we have now, particularly GDPR, which is the gold standard globally, I’m sure those settings just wouldn’t exist.”
Brian
“The key that a lot of people miss with these opportunities that we have with AI is just the ability to think now, to have the space and time to think, which means be creative.”
Brian
[00:00:00] Dara: Hello and welcome back to The Measure Pod. I’m Dara. I’m joined by Matthew. Jeremiah Huson, I assume that’s your middle name. I’ve never asked you, but I’m guessing it is, isn’t it?
[00:00:26] Matthew: I’m not going to tell you what it is. I’ll tell all my secrets to AI, not to the likes of you.
[00:00:30] Dara: Yeah. You only give away really deep personal information to American corporations.
[00:00:35] Matthew: Yeah.
[00:00:35] Dara: Claude has so much data on me, it could destroy me overnight. But me too, and I’ve been realising this through this podcast, by talking to guests about how, you know, we’re just perfectly happy to give it away. I’ve spent the last five years declining cookies, and now I’m just spilling all of my deepest, darkest secrets into little text boxes.
[00:00:59] Matthew: We did go into that a little bit today on this episode, about the privacy side of things. We
[00:01:05] Dara: do, we do. We’re good at spoilers and you pointed out last week we teed up a spoiler and then didn’t even do it. I said I was going to talk about something I was doing and then it never happened.
[00:01:15] Dara: So I feel it’s worth calling that out now in case anyone else is listening, saying, I thought he was going to talk about what he’s been doing. Are
[00:01:22] Matthew: you going to
[00:01:23] Dara: No. No, it was around that. So our episode last time was with Gunnar Grey, talking about the Google Analytics MCP. I was going to say I was inspired by that conversation, and I was playing around with not just the GA MCP, but a couple of other ones as well.
[00:01:39] Dara: So I was going to talk about that, but I just didn’t. And now it’s forever down in lore as that thing that you never talked about. I never talked about it. Yeah. Well, I’ve said it, I’ve said it now. I’ve been tinkering. The GA one is definitely interesting, I need to explore it a bit more.
[00:01:54] Dara: But other ones as well. Like, I’ve been using the Obsidian note-taking app, so I’ve been playing around with an MCP for that, and also, these are kind of personal ones, but that running app that I use as well, they’ve got an MCP, so I’ve been using that to build myself little running dashboards. Yeah, just to procrastinate basically, and not run.
[00:02:10] Dara: So I just build the app instead of actually doing the running. Well, there’s 30
[00:02:14] Matthew: degrees outside, sorry, 27 degrees outside in Spain. Yes.
[00:02:18] Dara: Do you want to run around and how many degrees is it where you are?
[00:02:21] Matthew: Seven. Between seven and 10. I’m wearing a fleece. That tells you all you need to know.
[00:02:28] Matthew: Conversations have begun about when is the right time to put the heating on within Measurelab’s virtual corridors. So that’s where we’re at.
[00:02:36] Dara: Yeah, I can’t relate, sorry. Let’s move on. Okay, Salty News. Let’s do it. What’s happened in the last week since we last covered in great detail all of the wonderful changes that are happening in the AI space?
[00:02:49] Matthew: You mentioned, I haven’t read this, but you mentioned it. You seem to be wanting me to tear this up, so I’m going to: the MIT study. So you’re assuming I’ve read it? Well, no. Nobody’s assuming that, it’s Salty News. It’s assumed that we skim it.
[00:03:05] Dara: And in that vein, I have not read the actual report, but I have read lots of articles about it. ‘Cause that’s how you consume information, isn’t it? You don’t read the actual source, you just read the stuff around it. So meta. To read the actual thing? Why would I waste my time with that? And to be honest, I don’t even read these sources. I just feed them into something like NotebookLM and just get it to give me a very simple, you know,
[00:03:27] Matthew: yeah.
[00:03:27] Dara: Digest podcast. Yeah, podcast. And then I listen to that podcast, and I ask it to make it really long as well. Oh, there’s a bit of news: you can now get NotebookLM to debate in the podcast. This wasn’t officially part of the Salty News, but I just thought of it. ‘Cause it’s a bit boring when they tend to agree with each other, don’t they?
[00:03:49] Dara: These two, yeah, robot voices. So there’s new modes now. One of them is debate. I haven’t tried it, but I want to see if I can get ’em to actually have a full-on blazing row, and then have one of them maybe storm off the podcast, and then nobody else around the
[00:04:04] Matthew: world could never get ’em to do a podcast again because they fall out.
[00:04:07] Dara: It’s broken. I want to break NotebookLM. But yeah, I’ll probably get this wrong now, there’s four modes. I think one was whatever the original one is, then one’s something like critique, but anyway, one of them is debate, which is probably going to be the fun one to try out.
[00:04:22] Matthew: Right. Google does love just secretly releasing things. That kind of feature on Claude, or, what’s the other one? OpenAI. That’s boring for OpenAI. Oh yeah, they dropped to the bottom of my thoughts there. With them it’s all fanfare, but Google seems to be releasing so many little bits across such a wide space that they just don’t bother. It’s like, someone will find it. You did. Honestly, it’s probably very good.
[00:04:45] Dara: I did. I did. Yeah. So many themes with this podcast, so many little side plots and subplots, but you’ve now just killed OpenAI as well. We’ve killed Adobe. We killed, what did we kill? SaaS products or something a while back, ETL products, they’re gone, and now Open
[00:05:00] Dara: AI’s gone. So you know, AI’s gone. It’s just philanthropic and Google now. Nobody else. Yeah, it used to be another player in the AI market list. Can’t, can’t remember who they are now. Anyway, MIT almost lost the train of thought completely there. There’s an MIT report, which is doing the rounds and has terrified everybody.
[00:05:17] Dara: And it says that 95% of gen AI pilots fail. This is within enterprises, but when you go beyond the clickbaity title, as some people have done, I haven’t, but I’ve read their thoughts, it’s a very suggestive, very clickbaity title. I think there are some interesting metrics, but it’s only really looking at kind of profit and loss metrics.
[00:05:41] Dara: It’s doing it over a very specific timeframe. It’s not taking into account anything around operational efficiencies, or any other valid metrics that these pilots might actually be achieving. So the thoughts out there on the web seem to be that it’s taken quite a narrow view and the methodology’s a little bit iffy, but as always happens with these things, it’s had a huge effect.
[00:06:07] Dara: Especially ’cause it’s come from MIT, and there’s been all sorts of follow-up things around, oh, you know, the bubble’s going to burst, and all this kind of stuff.
[00:06:15] Matthew: But it’s cyclical, isn’t it? Like, at the minute, going on LinkedIn and a few other places, everyone’s flipped back to the belittling of AI, and it’s a flash in the pan and this, that and the other, and then it goes back the other way and there’ll be a lot of doomsaying and
[00:06:31] Matthew: the end of the world is coming, and it just goes back and forth and back and forth. Yeah, whatever gets the most clicks that particular week, to be cynical. I think that’s pretty much it. That’s what it’s all about. So anyway, did it say whether these AI endeavours of enterprises took into account just the efficiency gains of a user using it in their day to day, without it being some distinct project within the enterprise?
[00:06:57] Dara: I don’t know. I think again, there was quite a narrow definition of what was even included and what they meant by a pilot. But operational efficiencies weren’t included, they weren’t looked at. It was literally: is it driving revenue? That was the only thing that mattered.
[00:07:11] Dara: It’s like, if it’s not driving it, and I think there was even some wording around, it wasn’t even just incremental revenue, it was like rapid revenue generation or something like that. So it’s basically expecting it to be some kind of silver bullet, and it didn’t look at any of the other metrics that it might have been improving.
[00:07:26] Dara: So for our listeners who can’t see, Matthew’s just given a little virtual thumbs up there. You must really agree, I don’t know how, but you must really agree with what I’m saying. So yeah, I think the bottom line is, just like our Salty News, this is a very salty report, but it’s certainly getting some attention.
[00:07:42] Dara: So if that was their real objective, then I think they’ve achieved it.
[00:07:47] Matthew: Well, a not-so-tenuous link into what we could talk about a little bit with Claude is that they released these, what do they call ’em now, the Economic Index? Economic Index, yeah. And essentially it looks across industries, it looks across countries, it looks across various different metrics to try and figure out what impact Claude, in this case, is having on specific geographies and specific verticals.
[00:08:15] Matthew: Obviously they know that because they know where people are querying from and the types of questions they’re asking, so they can figure out what kind of job this person has, what aspects of that job they’re attempting to automate or augment, and what stuff they aren’t touching. And the idea is for them to get an idea of how these systems permeate, and where and when this might have far-reaching economic impacts, I suppose, if automation takes off in particular sectors.
[00:08:43] Matthew: I will describe this for the listeners, but I am going to share my screen, which is radical on a podcast, but we will describe what we’re seeing a little bit. So I’m just looking at the latest one here, and just chucking out a couple of thoughts. So this is the top 30 countries by share of Claude usage.
[00:09:01] Matthew: And as you can imagine, the United States takes up about 21% of the entire world usage of Claude, the UK being a piddly 3.2% of global usage. But then they adjust it for population size, so for people of working age within a country, how much are they using it? And actually Israel is out in front by a long stretch, at the top of usage per working-age capita.
[00:09:32] Dara: Interestingly, Singapore’s quite high as well,
[00:09:34] Matthew: aren’t they? They called that out, I think, and Australia, yeah. I think the United States is still up there. And the United Kingdom are down in, I don’t know, I can’t even count that high, I think it’s over 10. What about Ireland? Yeah. So, yeah, they are actually, huh?
[00:09:52] Matthew: They’re at the top of the board, an AI usage index of 2.26. Proud of that, whatever that means. I think one is supposed to be the expected amount of usage for any place based on its geography, and that’s how much more than expected there is. So yeah, I just thought it was interesting that even though the US is obviously dominating in terms of usage, that’s because of the size of the population, and places like Israel, Singapore, Australia, New Zealand and South Korea are all coming above the United States in what they’re using, and Israel seems to be way out in front in terms of its use.
[00:10:30] Dara: Yeah, and there was something, sorry if I’m stealing your thunder and you’re going to get to this, but there was something that I saw called out around the income correlation: AI usage is higher in countries that have higher income per capita, which is one of those things that maybe is and isn’t surprising at the same
[00:10:48] Dara: Time. Yeah, it’s, it’s income and
[00:10:49] Matthew: income and, they plot the AI usage index by country. So yeah, the more GDP per working-age person in US dollars, the more likely they are to be using it, the higher the index of its use. I guess, like you say, it’s obvious if you think about it, but it was interesting to see; it’s a pretty tight correlation
[00:11:09] Dara: of usage to GDP per capita, which is something, and I think you’ll hear this come up in a couple of different contexts, but I think this is a point around some of the inequality around not just AI, but I guess tech in general.
[00:11:23] Dara: But you’d think, you know, anybody with an internet connection can use this. But it’s not that straightforward, obviously. And I think people who have access to better education, and who potentially come from higher income backgrounds, are more likely to take further advantage of this technology.
[00:11:40] Dara: Even though you might, and you really shouldn’t, but you might think that it could be a leveler, because to some extent this is free. Obviously it’s not entirely free, but it is available for everybody to use in theory. In reality it doesn’t quite work out that way.
[00:11:54] Matthew: Yeah. Your hope is that it’s going to be a leveler, but I think we might even talk a little bit about this on this podcast or one coming up, the trend may well be that the powerful just get more powerful
[00:12:04] Matthew: with this technology, ’cause they just have access to it, which is worrying. Yeah. And the other thing, so they’re going to do this for a global audience as well, but for right now they’ve released a very specific US usage report, which was quite interesting. So they break it down by state, and which states are using more of the technology versus others.
[00:12:22] Matthew: As you can imagine, the west coast and east coast are the highest users of it, and the middle states aren’t using it as much. What I did find interesting: they break another bit down by top industries, like what are each of these states using it for when they are using it?
[00:12:43] Matthew: Obviously, as a lot of people would guess, computer and mathematics is the top in pretty much every state. The top state for its use is actually DC, and only 7% of their usage is computer and mathematical. Another 5% is arts, design, entertainment and sports, which just doesn’t appear anywhere else on the map.
[00:13:02] Matthew: And community and social services, which also doesn’t appear anywhere else on the map. Yeah, that one state’s got a different use of it than anywhere else. Just interesting.
[00:13:11] Dara: Can you dig down into that? ’Cause one thing I found with this is, not knowing, what would you call it, even the kind of demographics of the different states of
[00:13:21] Dara: America, reading through it, I took less from it, because I didn’t know enough about what I would expect on a state level within the States. But if you want to, could you dig a level down?
[00:13:34] Matthew: I dunno, you can look a little bit at the overall category and group. This just shows you that Washington, well, Washington DC’s top. I mean, apologies to any American that’s screaming at us, but the only thing I know about it is that it’s the capital,
[00:13:51] Matthew: and I suppose a lot of politics and that sort of thing goes on there. And I guess I expected, for example, that the west and east coasts would be heavier in terms of their usage, just ’cause of, you know, Silicon Valley, and California’s over on the west, isn’t it? And you’d expect a higher concentration of usage from computing and things like that, I guess.
[00:14:11] Matthew: Like universities as well, ’cause education seems to be a larger proportion in certain states; in California and Washington, education comes out really high as a use case. So anyway, we’re kind of guessing at things here really. But one final point on this: they also broke it down by augmentation versus automation.
[00:14:32] Matthew: Mm. So augmentation they define as, you know, you’re kind of using it as a copilot, you’re working through the thing together until you get to an output. Automation they define as you’re happy just to hand tasks off to AI and just say, you do it. And they do mention somewhere in the report, we’ll put this whole thing in the show notes.
[00:14:51] Matthew: It mentions somewhere in the report that automation is rising. It’s almost like the trust in this stuff; people are getting more and more used to it, or more and more trusting in it, or they’re getting more and more skilled at using it. But whatever it is, they’re handing more and more stuff over, just saying, you do it
[00:15:08] Matthew: and I’ll take the output. Yeah. Apparently Utah loves automation. Well, they are known for it.
[00:15:14] Dara: I mean, I don’t know that much about the States, but I do know Utah loves a bit of automation. I thought that was interesting as well. And I think one way to look at it, and I think this might have been in the report itself, is kind of framing it like, oh, the tools are getting better, so people are happier.
[00:15:30] Dara: Yeah. But maybe the cynic in me says, well, actually, is that the case? Or are people just getting lazier and they don’t want to check as much? So they’re just handing tasks over now, thinking, oh well, it’s probably good enough. I think there might be two different ways to look at that.
[00:15:47] Matthew: It’s probably a mix of all of it, isn’t it?
[00:15:49] Dara: Probably a mix of all the factors, yeah, most likely. ’Cause it obviously is getting better, but at the same time, the more familiar people get with it, it’s like with anything, isn’t it? At the beginning you’re very cautious with things, and then you start to get a little bit complacent over time.
[00:16:06] Dara: It’s like my phone. When I get a new phone, I don’t drop it, and then as soon as I drop it once, I drop it pretty much daily after that point, ’cause you think, well, you know, it’ll be fine. But the other interesting thing I thought about automation versus augmentation is that the countries with lower AI adoption per capita tend to be using it more for automated tasks, whereas it’s the other way around with high-adoption countries, which seem to prefer augmentation.
[00:16:40] Dara: So what exactly does that mean? I don’t know.
[00:16:43] Matthew: I dunno. I can guess at a hypothesis: you know, you’ve got maybe more work going offshore and they’re using it just to deliver tasks, to make that work. Whereas in some of the higher per-capita countries, it’s individuals using it within companies to augment themselves.
[00:17:02] Matthew: Yeah. But that is, yeah, a complete guess of a hypothesis. It’s just the first thing that popped into my head. Yeah.
[00:17:07] Dara: I think it would make sense. And it’s going to be interesting when the global version comes out to see some of these trends. Do the same trends exist across different countries? What are the differences, and how does it differ over time as well?
[00:17:21] Matthew: Yeah. A very serious, strait-laced Salty News this week. Not very salty at all, really. No. We actually had data to back up all
[00:17:32] Dara: those sayings. Real hard data visualizations. Yeah. And it’s not really news either. This is like a live show, it’s a live demo. Wow. People are really getting a treat this week.
[00:17:40] Matthew: Yeah.
[00:17:41] Dara: With your hosts reacting to it in real time. And now it’s done. Okay. So coming up next, we’ve got our guest this week, who is Brian Clifton, who’s going to be somebody that a lot of our listeners will know. Anyone who’s been working in web analytics will undoubtedly know the name.
[00:17:56] Dara: Brian Clifton. He’s a published author, he was actually part of the Google Analytics EMEA team in the early days, and a really interesting guy. We had a great chat with him. A lot of the same kind of themes come up that we’ve been talking about lately, obviously AI, but with privacy being the perspective we had the conversation through, given Brian is a key privacy advocate. But it’s quite a wide-ranging conversation, and
[00:18:23] Dara: I feel like we challenged him a little bit. I think I apologise at one point for dragging him in one direction and then dragging him in another. But he handled it well, and I think we had some quite thought-provoking conversations.
[00:18:35] Matthew: Yeah, really good chat. I think people who have enjoyed the recent episodes talking about AI and how it pertains to what we do will really get a lot out of it. So
[00:18:46] Dara: Enjoy it. A very big welcome to this week’s guest. We’re joined by Brian Clifton. So Brian, firstly, big thank you to you for agreeing to join us and welcome to the Measure Pod.
[00:19:01] Brian: Pleasure. I’m looking forward to it.
[00:19:02] Dara: So we always kick off, Brian, when we’ve got a guest. We let them do the job of introducing themselves.
[00:19:06] Dara: I have a feeling a lot of our listeners will know who you are, but if you could give us, as much as you like, a bit of a whistle-stop tour of your background up to what you’re doing today.
[00:19:20] Brian: That’s always the hardest part, isn’t it? This is the tough question, talking about yourself. Some people love it, I absolutely hate it. Anyway, yeah, I’ve been around a long time. I crazily set up my first internet business in 1997, which was one year before Google was even launched. So it was a silly thing to do, really. I just ended up explaining what the internet was to people.
[00:19:43] Brian: But anyway, I’ve been around a long time, and now I’m going around giving presentations, and the title is typically, you know, 25 years of web analytics, what I’ve learned, sort of thing. It makes me feel old when you stick that number in front of it. But, a lot of experience. I’ve learned a lot.
[00:19:57] Brian: I guess I’m known for the four books that I’ve written around Google Analytics, in the early days when people actually read books. I think the first one was 2008. And that’s a big thing, I think, for any practitioner, to be out there in print and to be recognized for that.
[00:20:17] Brian: I also worked at Google. I was the very first Head of Web Analytics for EMEA, so for Europe, Middle East and Africa. I joined in 2005. I had a very strong association with a company called Urchin, Urchin Software, which is the company that was bought by Google and became Google Analytics, in fact.
[00:20:38] Brian: It’s a grand title, Head of Web Analytics, but it was just me at the time, so I was tasked with building a team. But I guess my only real achievement at Google, I mean, it was a fantastic time, like a hands-on MBA; it was chaotic, fun, frustrating. You know, we grew as a company.
[00:21:01] Brian: I think we doubled every year. So when I joined, there were about 3,000 people; the following year, 6,000; the following year, 12,000; then 20,000 or something. And I dunno what they are now, 150,000 maybe. But it was a fun, chaotic time. I think the one thing that I’m proud of there, apart from of course building a team, is, if you’ve heard of the Google Analytics Individual Qualification, the GAIQ, that was me, that was my baby.
[00:21:28] Brian: I built that from conception, and I trained a few hundred Googlers to get their knowledge up to speed in the early days. And then it was taken public and used externally, and it became, I think it’s pretty fair to say, the qualification to have if you wanted to show off your skills with Google Analytics. It’s changed into something different now.
[00:21:49] Brian: I think for 15 years it was the dominant one. So I’m very proud of that. I mean, those are my big achievements in my career.
[00:21:55] Dara: I have a contentious question then, straight off the bat. So, the GAIQ. I had a theory for years that it would intentionally give you the odd wrong answer, even if you were convinced you were right.
[00:22:06] Dara: Maybe this is just me trying to justify not getting the full hundred percent, but I used to argue with some of the answers and think, I don’t think that’s quite right. So is there anything to that wild theory?
[00:22:17] Brian: Certainly not when I was there, no, there was nothing like that. It was hard enough just getting it launched globally.
[00:22:21] Brian: It was one of those very rare events. I think it was the first time that someone from Europe developed a product that was picked up by Mountain View. Google is very US-centric, and at that time it was very California-centric. Nothing was developed outside of Mountain View at all, to my knowledge.
[00:22:39] Brian: And it was the first one, so it was like, yeah, hey for Europe. So we were just struggling to get it out the door, make it relevant, make it interesting, get something out of it. There was nothing like that then. Whether there is now, I don’t know. It wouldn’t surprise me if there’s the odd ringer question in there to pick up any people that are cheating on it, or bots, rather than doing it themselves.
[00:22:59] Brian: But I never got a hundred percent, and I wrote most of the questions in the early days, so I used to catch myself out.
[00:23:04] Dara: Oh, that makes me feel better. In fact, that’s even better. I’m happy with that answer. Sticking with GA then, we’ll probably meander a bit; maybe a good bit of this will focus around GA, some of it won’t.
[00:23:17] Dara: I’m going to start with a pretty big, broad question. Given you were so close to it at the time, and having seen it go from Urchin becoming Google Analytics, through to Universal and now more recently GA4, what’s your big-picture take on where Google Analytics as a product is now, and where it sits in the broader analytics landscape?
[00:23:43] Brian: Well, if you go back to those early days, when we launched, I joined just before the launch to sort of do that in Europe, so I remember it pretty well. In those early days, the first task that we had was to figure out, okay, how many users are there of a commercial-strength web analytics tool?
[00:24:03] Brian: And we came up with a number of around 30,000. So globally there were about 30,000 accounts that we were kind of competing against. That was the total size of the industry. And we had to figure out, well, okay, what difference is this tool going to make?
[00:24:18] Brian: And as you know, launching it for free was a massive change at that time, ’cause everyone was having to pay, and not just paying to have a product; they were typically paying per report. You know, if you wanted to customise a report, you might have to pay for an extra custom variable to be added.
[00:24:35] Brian: So that was the state of the industry. And within a week of launching, I remember this, we got a hundred thousand signups. Wow. So instantly we tripled the size of the industry, which was fantastic, and a little bit scary, ’cause we weren’t expecting that. We were hoping maybe after one year we’d have that kind of volume, but it came within one week.
[00:24:55] Brian: So we actually had to close the service for new signups. The hundred thousand were still running, but we couldn’t take any more; new signups were closed for a year. That was while we got, strange to say this, more hardware in place, more processing power, more memory. I mean, literally tens of thousands of machines were required that we hadn’t accounted for.
[00:25:13] Brian: Now, when you look at where adoption is, it’s difficult to count accounts, because people have multiple accounts and test accounts and blog accounts, and some accounts are dead and no longer around, but it’s about 30 million. So you’ve gone from an industry size of 30,000 to around about 30 million now, a thousand-fold increase.
[00:25:31] Brian: I mean, it’s a fantastic journey, and the big phrase that was always talked about right from the beginning was data democratisation. The fact that everybody has access to the data, everyone can do their own analysis, or come up with their own theories and plans of how to improve.
[00:25:48] Brian: You know, in those days we talked about improving the world. We talked about improving not just businesses and marketing, but just making things better for everybody. That was Google’s mantra: make better decisions with data. So the big part of that was the freemium model. And that was a double-edged sword, because by making it free,
[00:26:06] Brian: yeah, you of course drive massive adoption. You also have to have a good product, and I think it is a very good product, but it’s also difficult for the competition, because how do you compete against an 800-kilogram gorilla in the room that can do what it wants, because it has a very profitable ad network behind it?
[00:26:22] Brian: So I think the big change has been this massive adoption. Some people only know Google Analytics as their analytics tool, but now everyone’s using data, expecting to use data, and expecting to have access to it. So I think that’s the big win, from Google’s, or Google Analytics’, or even a practitioner like myself’s point of view: going from a niche industry, a niche set of skills, to one where every marketer and every analyst is expected to have some pretty advanced skills now.
[00:26:51] Brian: So yeah, that’s been a fantastic journey in my sort of professional career.
[00:26:55] Matthew: Do you see that changing? ’Cause, say with the emergence of GA4, you got so many people who weren’t happy with that change from Universal Analytics to Google Analytics 4, shouting from the hilltops that they’re going to move to this, that, or the other.
[00:27:08] Matthew: But it still strikes me that that freemium model and that 800 pound gorilla, as you put it, still exist. And I don’t know how many people are actually really leaving, how much attrition Google is really seeing with the move to GA4, or if things have changed.
[00:27:22] Brian: Yeah, it’s more like 8,000 and it’s kilograms, not pounds. So, yes, I’m an Englishman, but I live in Sweden now. I’ve been here for 15 years, so I’m a Swedish citizen and everything, but still English, but kilograms is my thing. Yeah, I’ll say a couple of things. One, a big part of that transition to being freemium is that tools like web analytics, sorry, Google Analytics and others, moved away from being very IT focused, you know, reports for IT people to just measure load, server load if you like, to being much more about marketing.
[00:27:55] Brian: ’Cause marketing, you know, is the purpose of having a website at the end of the day for the vast majority of businesses. So it moved out of IT into this kind of more generalist marketing sphere, because Universal was just such a good user interface. It was very intuitive. I think that’s why people liked it.
[00:28:12] Brian: It was very hard to sell training services in those days. I mean, that’s what I did when I left Google. I became a consultant in a small business, and training was one of the ideas we had. You know, let’s get people to use this product, you know, in a smart way. But people just said, well, no, we don’t need training.
[00:28:27] Brian: We can figure it out ourselves. It’s so easy to understand and use. So you had this massive generalization, I think, of the product and sort of good understanding of it. But I think what’s happened with GA4, and I think why people have been and still are frustrated, is the fact that it’s become a very technical tool.
[00:28:44] Brian: Again, it’s kind of moved away from these canned reports, predefined reports if you like, which were very, very useful, very well thought through, and where you could do some customization, and it’s become very flexible. Yeah, a tool which can be used in many more different ways than Universal ever could.
[00:29:06] Brian: But with that flexibility has come more complexity. So it’s become more focused, more technical again. It’s quite funny, we’ve gone this kind of full circle. Whether that’s a good thing or a bad thing, I don’t know. I miss Universal, I genuinely do. I think that there is a real hole in the market for a really good marketing analytics tool, where you have good analytics skills but you’re not a data scientist.
[00:29:29] Brian: We just need to make quick decisions about, you know, ROI and material ad spend and things like that. That’s quite difficult to do in GA4. You really have to go deep into it. I feel you can’t just look at GA4 on a kind of once a week basis. You have to be in there almost daily to just kind of remember where everything is, and you have to customize it to get the answers you want.
[00:29:48] Brian: So a double-edged sword, you know. Great to have a more advanced product. I think Google looked at Adobe and their Customer Journey Analytics and thought, wow, that’s powerful stuff, and just thought, well, we have to go after that from a big enterprise point of view, and kind of left behind a lot of people not wanting that.
[00:30:03] Brian: And that’s where I think the opportunity is for these competitor tools.
[00:30:07] Dara: How big of an impact, so we were talking there about whether the change in the technology is driving people away from it or not. What about Google’s, well, both their approach to privacy and also maybe the perception people have of them in terms of privacy?
[00:30:22] Dara: So, with the changing legislation, do you think more companies are starting to think about moving away from a user privacy perspective, rather than necessarily because of the differences in the skill sets required to use the platform?
[00:30:35] Brian: Yeah, I think it’s a double whammy. I think you had the UI fiasco, and to be honest, it wasn’t launched very well.
[00:30:41] Brian: With Google claiming GA4 was, yeah, just as good if not better than Universal, when actually there were so, so many features missing. I think they’re just about at parity in that respect. So I think it was part of that. And the privacy issue, I mean, privacy is, as you just said, much more part of the mainstream conversation now.
[00:31:00] Brian: You know, marketers talking about privacy, which was unheard of, you know, five years ago, that they’re worried about it. And I think what’s driving that, it’s not just the legislation, ’cause we’ve always, for 20, 30 years, maybe even longer, had privacy legislation that was still applicable to the digital world.
[00:31:15] Brian: But when GDPR came in and made it a more sort of, yeah, Europe-wide, more solid framework, not a lot changed. People said, oh great, there’s a legal framework. But when the fines came, you know, when the data protection authorities started issuing fines, which started, I think, about 2022, that’s when people took notice, because now there was money on the line.
[00:31:34] Brian: Of course, a couple of online pharmacies last year or two years ago were fined about one or two million euros for bad practices with data. That gets people’s attention. No one wants to pay a fine, and no one wants to be responsible for that data breach, ’cause that’s your job on the line basically. It’s not good on your CV, as well as the bad PR for the company, which is, you know, incalculable really.
[00:31:54] Brian: Who’s going to, who’s going to buy from an online pharmacy that’s had a data breach? Oh, you’re not going there, are you? I think privacy is definitely there now. And I think the challenge you asked about before is, are people really moving away from Google Analytics? There’s a lot of talk, there has been a lot of talk, of people wanting to move.
[00:32:11] Brian: What they figured out is that change is really hard. It’s hard to re-instrument your website for another tool. And it’s hard to retrain everybody for another tool. And of course, then there’s the lack of a free model, typically you have to pay. So there’s a lot of barriers, I think, to stop that.
[00:32:29] Brian: As well as, you know, this massive shift that we’ve had in the past 20 years from IT to this data democratization. Who owns web analytics these days? There are so many stakeholders. You have IT, you have marketing, you have HR, which in a large organization is a big stakeholder. You know, the CEO, the board level.
[00:32:47] Brian: They want access to the data. You know, there are so many people, sales teams, support teams, and no one’s in overall charge. And so there are cracks, and when you want to have change, if you don’t have a leader responsible for the entire product or ecosystem of data collection, it’s very hard to drive that change.
[00:33:06] Dara: I was trying my best to hold off till a bit later to introduce the buzzword AI, but I might, I might have to just do it now.
[00:33:13] Dara: So I was going to ask you this later, Brian, but I think maybe it’s a good point of conversation to ask it now. So you’re talking about the kind of resistance, which is, which is natural when a company’s been using a product for a very long time, it’s not easy to just change that overnight for a variety of reasons.
[00:33:26] Dara: Some technical, some cultural, maybe some down to, you know, ownership. With things like, you know, the advances in AI technology and things like MCP servers, do you think, if the technical barrier to switching to another platform is reduced, could that be a tipping point to make people actually shift away? And not even necessarily shift away from Google Analytics, but potentially just pick and choose different tools for different jobs, and it could just become a lot more modular, with people just using AI to bring in whatever technology is right for the given task at hand?
[00:34:00] Brian: Yeah, I think you’re right. I think AI has a very large role to play. Well, I was talking earlier about my love of Universal. I still think it’s a great product, a great user interface, so much, you know, information to get across, and yet pretty straightforward to navigate around.
[00:34:18] Brian: So user interfaces are really tricky to build. I mean, I know that ’cause I’ve built my own tool to do auditing of data collection, for example. And, you know, you want to highlight stuff, but then once people have found it, they don’t want it highlighted so much, because they get it. Now you want to highlight something else, and you can’t do that dynamically.
[00:34:37] Brian: You have to build a one-size-fits-all interface. But if you remove the interface, if you have some standard reports and say, okay, let’s just use a prompt, you know, so we don’t have the visual stuff and the menu navigation, let’s just use a prompt and ask your questions. I mean, Google started going down this route, I think, about a year ago or so; you could start asking good questions.
[00:34:54] Brian: I was at an event, I think earlier this year in Amsterdam, a Google event, where they openly said, you know, that the GA4 interface is not really going to change, ’cause people still complain about it, that it’s not as good as Universal, or it’s not intuitive, or it’s clunky. It was made very clear that the GA4 interface is not really going to change much, but what they’re going to push a lot more
[00:35:15] Brian: is AI prompting. So you build your own interface, essentially, look at your own charts. I think that’s going to be massive, because then you don’t have to hold a whole body of knowledge just in order to find stuff and do stuff in analytics tools. You know, they’re so complicated now, and you have to invest so much time in them.
[00:35:32] Brian: You know, if you were to move away from that, you would be able to switch to a different tool a lot more easily. And even, you know, Piwik PRO, which is a much smaller player than Google, they have a very dated interface. I mean, I think that’s part of what’s holding them back. Great product, but dated interface. And, you know, I’ve talked to them, I’m involved with Piwik PRO at some level, and, you know, I’ve talked to them about, you know, let’s get this interface sorted out.
[00:35:56] Brian: But it’s a huge cost for them, so many dependencies on the backend of the tool. It’s not just re-skinning. But they’ve said, you know, we’re thinking about doing this with AI, using a prompting interface, basic tables that give you the data, the raw data, and if you want to see something graphed or something like that,
[00:36:11] Brian: you just ask in a prompt. Those tools are good enough now.
[00:36:13] Matthew: It feels like the path of least resistance, doesn’t it? Like even with search. You’d think that it was such a well established thing: going to Google, finding a topic, flicking through a few sites, pulling out the information you needed, gathering that together and making a decision or, or whatever.
[00:36:28] Matthew: But that was so quickly interrupted by OpenAI’s search features and the deep research that’s begun to happen. And we were saying last podcast with Gunnar that maybe, in our minds, it’s difficult for us to imagine moving away from these interfaces because potentially we’re too far into our careers and too much of a curmudgeon to think about just opening up a text interface and interacting with that instead.
[00:36:51] Matthew: And like you say, maybe Google will just tag that on the end, and slowly nobody complains about the crusty GA4 interface as much.
[00:36:58] Brian: I think it’s a challenge for us older generation of analysts, should we say, to move away, but I think you’d be surprised at how straightforward it is. It took me, for example, starting to use something like ChatGPT or Claude, you know, within a week
[00:37:11] Brian: I was like, oh, how did I ever live without this? Not just because it was fast and quick and summarized, you know, hours of research in perhaps a few minutes. Not just that, but just how quickly you get into writing prompts, you know, good questions basically. I mean, you know, a year ago, all of my professional life, when you did a search, it was keyword searches, you know, oh, what keyword should I type?
[00:37:32] Brian: Do I use a plus sign or a minus sign to make sure it’s definitely included or not included, and add quotes, and all of that nonsense we’ve had to go through for 20 years. Now you just write a question and you get a pretty good answer, not always perfect. You’d be surprised just how, yeah, quick and easy
[00:37:48] Brian: that is, ’cause it’s natural. What we’ve done for 20 years is, you know, unnatural. Now it’s just going back to natural language. It’s just, how did we not do this from the beginning? That’s kind of always amazed me.
[00:37:58] Matthew: There, potentially, I suppose, lies a problem in how easy it is. And we’ve talked about this a couple of times over the past couple of months, pretty much triggered from when Google did Next ’25 and they released all these data engineering and data science and conversational analytics tools.
[00:38:15] Matthew: And Dara and I were sort of commenting like, you’ve really got to make sure that the data at that point, because it’s so democratized and so in everyone’s hands and so easy to just query, is right. And it feels like, maybe to your point before about it becoming more technical, you have to make sure that all of that underlying stuff is right before everyone is just spouting garbage off garbage data. Garbage in, garbage out.
[00:38:39] Brian: I dunno when that phrase was invented. I think it’s like the 1970s or something. Still, well, more applicable now, because we have so much. And let’s face it, most data is either noise or junk.
[00:38:51] Dara: I mean, just thinking about privacy and what you were just saying there about people, you know, now you can just ask a question.
[00:38:57] Dara: You can be really descriptive, prescriptive, you can give loads of qualifying context when you’re prompting. So by doing that, and I’m sure we’ve all done this, you’re actually sharing quite a bit about you, potentially, that’s identifiable. And it’s weird, because I probably have been sharing less on websites over the past, say, five, six years, whatever, due to, you know, all of the different privacy scandals and everything else.
[00:39:22] Dara: And yet now I’ll go onto Claude or ChatGPT and I’ll happily just say, I’m in this place and I’m looking to do this thing, and here’s some more information about me. So there’s kind of a, I guess where I’m going with this is, there’s a natural incentive for people; they’re getting value by actually sharing information, but there’s obviously a cost to that that maybe people aren’t aware of.
[00:39:42] Dara: So do you think, with the kind of growing trend of people using LLMs to research and find information, do you see this as a step backwards potentially in terms of user privacy? Or can it be managed and balanced, where you’re getting a good trade-off between what you’re giving and what you’re getting?
[00:40:00] Brian: Well, I think the good thing with regulation that’s, you know, made this very strict, even with the AI Act, but before that just in general with websites, is there has to be some kind of consent. So the nice thing with the AI tools, well, they have that, but I think with the AI tools, I mean, I’m a big fan of ChatGPT, and I’m getting into Claude
[00:40:20] Brian: more recently, which is a kind of ethical version of ChatGPT. There are very easy settings to just turn off the fact that they will use your data to train their models, but it is opt out, so it’s on by default. It’s very easy to find the opt out. So I always, you know, make sure that, you know, my team, if they’re using ChatGPT, they have that off.
[00:40:40] Brian: And it’s the same if you’re using Google. I dunno about the free account, but the enterprise version, you know, with Google Docs and all of those things, that is off by default as well. I think if we didn’t have the regulation that we have now, particularly GDPR, which is the gold standard globally, if we didn’t have that, I’m sure those settings just wouldn’t exist.
[00:40:58] Brian: And it would be like, well, if you’re using our product, you have to share your data. That is kind of the Meta approach to data privacy. Thankfully we have those regulations, and so there are some guardrails there. But yeah, you have to be aware of what you’re doing, you know, with data. And, you know, the reason why I know this is because
[00:41:14] Brian: We use it in the company. We use a lot of AI tools to help us, and we obviously work with client data, and we have to tell the clients, look, you know, if we ask a question about a particular table of data, to make it simple, none of this data is actually used by ChatGPT.
[00:41:29] Dara: Maybe this is a bit of a hypothetical, ’cause I think this isn’t possible right now, but let’s say people are using their own AI models on their website or on their app. That’s the…
[00:41:38] Brian: More people do their own custom stuff. Yeah.
[00:41:40] Dara: Exactly. So, say, even if there’s some element of consent, so let’s say I go on an app or website and I give some element of consent to have some of my data collected. The power of these AI models is what they can infer, potentially, from a relatively small amount of data.
[00:41:58] Dara: It seems like consent has, well, has obviously become more complex, because it was always an issue even with websites and apps and cookies. Making sure the user is giving informed consent is tricky, because the user has to be informed, and most users aren’t. So to me, that’s only going to become even more complicated when, you know, you get your little pop-up box that now just says, we may use some of this data for AI modeling, and you think, yeah, fine, it’s just, you know, generic information.
[00:42:25] Dara: But then they’re actually using that potentially to infer more information about me. So, sorry, rambly question, but I guess the question is, do you think AI is going to make that issue of consent a lot more complicated, where users will actually be less informed rather than more informed?
[00:42:42] Brian: Yeah, it could be. I mean, what are you consenting to, like you said? And AI modeling, what does that mean? You know, it could mean so many things to so many different people. I think the good thing is all of this is built on trust, and as soon as a tool loses trust, and Google knows this more than anyone, as soon as you lose trust, you know, your product can disappear rapidly, because there are alternatives.
[00:43:01] Brian: Now, I mean, you know, 10 years ago it was Google or Google. You know, now there are so many different tools out there, whether it’s search engines or whether it’s AI, or analytics even. There’s been a resurgence. We had the death of web analytics competitors 20-odd years ago; now they’re coming back, because there are niche areas.
[00:43:18] Brian: Google’s is a one-size-fits-all product, for example; everyone pretty much gets the same product, whether you are Procter & Gamble or some mom and pop store. So I think this proliferation of tools means there’s competition, and the one that gets this wrong is just going to disappear. So there is some motivation there, apart from regulation. But you’re right.
[00:43:35] Brian: You know, when it comes to using these tools, who owns the data? Who’s responsible for it? Governance, especially if it’s being moved around and different third parties process it. Now whose privacy are we talking about? You know, if I have a subject access request, so saying, look, you know, I want to know what data you have on me and I want the right to edit it or delete it, because it might be wrong.
[00:43:55] Brian: Or it might be something that happened 30 years ago, which I don’t think is relevant, something like that. Who’s going to process that for you? I mean, you know, good luck getting that when it’s in this giant, you know, pile of data that Google or these other providers have. And also, just as an aside to privacy, who has the copyright?
[00:44:13] Brian: You know, if I write content in a book and I print that or I put it online, I did that for a purpose, for individual users to read a book. I didn’t grant access for someone to scrape all of that content and monetize it for their own purposes. So who owns that copyright? So there are all sorts of, you know, privacy as well as legal, intellectual property questions that need to be figured out.
[00:44:33] Brian: I think the big ones, the ones we talked about, the OpenAIs and the Claudes, I think they are, I’m not saying they’re on top of it, but they’re certainly thinking about it and trying to do the right thing. That’s what I hope, anyway. But the smaller players, which are just building their own little tools, perhaps to connect to an analytics account, to offer some kind of insights into analytics, what are they doing with that data?
[00:44:54] Brian: Really, you know, they’re just a small 10-man company based in some outpost of Eastern Europe. Clever engineers, I’m sure. But, you know, do they really take privacy as seriously as we do? Do they understand the regulations as well as we do? Are they, you know, thinking of collecting data now and then using it for different purposes tomorrow?
[00:45:12] Brian: You know, all of those things. That’s a real challenge, those kind of offshoot companies, rather than the main players, who know if a main player slips up, there’s going to be another. You know, if ChatGPT messes up, there’s some massive data breach, then Claude, which was basically set up by some of the original founders of OpenAI who left,
[00:45:33] Brian: you know, they will pick up, and everyone will simply move to that tool. So at least we have that competition, which is also a good balance.
[00:45:40] Matthew: I do wonder, though, especially with websites, and I think, like we already said about search, a lot of that information being surfaced directly in a chat interface rather than a user then going off to a site and driving traffic to it.
[00:45:52] Matthew: And I suppose the content creators are the people hurting, the big people who earn money off content on their sites. I do wonder, one theory I’ve got is, all this personal information that you chat to a ChatGPT about, or a Claude, that they could start sending packets of data to websites about you, to try and give something back to these websites and give them information about who’s consuming their content.
[00:46:15] Matthew: That’s my cynical crystal ball view, that they may have the richest kind of data that you could want from these OpenAIs, from these foundation models, sent back to sites. You’d hope that doesn’t happen and there’s enough privacy stuff in place already to circumnavigate that, but it feels like an avenue.
[00:46:31] Brian: Oh my, my big worry is, you know, professional journalists. You know, the big, you know, proper journalists we’re talking about The Guardian, the BBC, the Herald, the New York Times, you know, proper journalists doing investigative work, holding democracy to account, or being in war zones, trying to tell the truth, you know, if all that’s just going to be scraped.
[00:46:48] Brian: Just given away for free, or summarized without any in-depth analysis. You know, there’s a worry there as to how it, yeah, gives a false narrative of the story. But also, you know, these companies have to make money. You know, The Guardian, or whatever newspaper you read, has to survive. You know, journalists have to live as well.
[00:47:03] Brian: So that’s always the worry, isn’t it? I mean, the traditional model of advertising is so destroyed with digital over the last 20-odd years that they’ve struggled to survive, and I think AI is possibly making that worse. So I dunno what guidelines or guardrails are there for them, but I do worry about professional journalism.
[00:47:19] Brian: Yeah.
[00:47:19] Matthew: Yeah. It seems like a lot of the frontier models, like OpenAI, are just going hat in hand to try and make posthumous agreements. Like, yeah, we’ve already nicked all of your data and trained everything on it, but could we make an agreement now that we could have done that if we wished?
[00:47:34] Matthew: I don’t know how lucrative they are.
[00:47:36] Brian: I mean, maybe some good will come of it if they do that, but yeah, it’s yet to be proved, isn’t it? I mean, that’s the problem. You know, with all this data, bad things can happen, or people just over-summarize it and you get a one-word answer to a very complex problem.
[00:47:48] Brian: And I think there’s enough complexity in the world that we don’t want it, you know, shrunk down to one word. We need the nuances. We need to understand the differences between, I dunno, different cultures, for example. We can’t just all unify around one word. You know, there are subtle differences that can be, you know, highly offensive or whatever to people.
[00:48:06] Brian: And that’s important. You know, those things are what make us human. That’s important, on the kind of risks to industry and professions.
[00:48:15] Dara: Just switching back to our own industry. So, Brian, you’ve been consulting, and you’ve been teaching people through the books that you’ve written how to implement analytics solutions, how to interpret the data, how to use the data, for years.
[00:48:30] Dara: As you mentioned at the beginning, I think I read that you said somewhere that, you know, AI tools can help speed up the process for some analytics tasks by 10 times, and that allows the analyst or the analytics consultant to focus on deeper work. And this is something that came up on our last episode as well, actually, with Gunnar Griese.
[00:48:48] Dara: He was saying the same thing, and I asked quite a cynical question. I said, look, in reality, will some people at least just still find busy work to do even when they free up that time? Like, is this an illusion, this idea that we’re going to free up this time to do something more meaningful and valuable? Or do you really see that that’s going to happen, that we’ll just basically get rid of the mundane, we’ll outsource the mundane to the AI, and we’ll all do brilliant, creative, strategic work?
[00:49:15] Brian: Yeah. I mean, I think, yeah, so will the analysts, you know, be replaced, or will they do, yeah, mundane work? I think what AI will show up is, you know, who are your strong staff, who are the people who know how to not just use the tools but actually do something useful with them, versus, you know, the slackers if you like, who just kind of, you know, fill the gaps with something else.
[00:49:35] Brian: More meetings, maybe, something like that. So I think it will show, yeah, who your good staff are, which is, yeah, not a bad thing, I don’t think. In terms of the analyst, I don’t think our role is going to disappear. You know, if you’re a good analyst, you know, there’s only so many hours in the day, and finding good analysts, good data people who think creatively.
[00:49:56] Brian: ’cause it is a creative science. I mean, you need to know a lot of technicalities about the web and about data science and models and things like that. But ultimately. The point of analysis is to be creative, to change something, to change a website, to change the user experience, to build a better product or, or have a better company or whatever.
[00:50:14] Brian: And having more time to think creatively, ’cause, you know, I’ve spent a lot of time down in the weeds looking at log files, things like that, and it just takes away all of the time. Which is very hard to quantify and very hard to monetize, but time to be creative and come up with an idea that can change something, that can turn a company around, or make it much more profitable, or grow the ROI by one percentage point.
[00:50:37] Brian: So I think having a tool, a suite of tools basically, which is automation, that’s where we are at the moment with AI. It automates all the mundane stuff. So not automating digging roads, but automating thought processes and manipulating data. It frees up a lot more time. So, great.
[00:50:54] Brian: Okay, I’m 10 times or a hundred times more efficient. I can handle more clients. My boss is very happy about that. I can do more work, but it also just frees up more time. I’m now doing projects this past year that have been on my shelf for three or four years. Yeah, I’ve just never had time to get to them.
[00:51:10] Brian: ’Cause I know it’s a commitment, that I have to sit down and spend days, maybe weeks, on research stuff, you know, things I’m interested in, writing a book. You know, there’s no chance I can do that just now; there just isn’t the time. But with these tools making me much more efficient, I’m now doing projects, not just working faster, but doing projects that I’ve wanted to do and never had a chance to do.
[00:51:31] Brian: And now I’m doing those, I’m thinking about new projects which I never had the capacity to even think about. So I think that creativity is, well, a lot of people talk about the speed of change, and that we can automate stuff and move faster and make more money faster, if you like.
[00:51:47] Brian: But I think the key that a lot of people miss with these opportunities that we have with AI is just the ability to think now, to have the space and time to think, which means be creative. And it’s being creative that solves problems at the end of the day.
[00:52:04] Dara: I completely agree. And I’ve got a follow-up question, and I have to apologize, ’cause I feel like I’m pulling you in one direction and then pulling you in another direction. Oh, I love it. But I’m going to, yeah, I’m going to ask it anyway. So I completely agree with you, and it just got me thinking. So you’re this creative analyst, and you’re really good at, you know, getting into the data, understanding it, and communicating a message.
[00:52:24] Dara: So forgetting about AI for a second and bringing it back to the privacy angle. Where’s that trade-off, where you’re this hungry, creative analyst, and you’re desperate to understand the data and tell a story, and the industry is maybe pushing back, maybe some would argue not pushing back enough, but, you know, you’ve got this kind of seesaw between
[00:52:43] Dara: needing more data to be more creative and to come up with better ideas, versus the kind of pushback to have less data on users, or at least maybe less personally identifiable data. How do you kind of walk that tightrope? You’re a really good, ambitious analyst and you want to make sure you’re doing the best job possible, but you’re also kind of taking a privacy-first approach. Is there an answer? Is there a way to walk that tightrope?
[00:53:08] Brian: Yeah, and I think you just alluded to it in your question when you said, you know, less personal data. I mean everything, you know, this past 10 years has become hyper-personalized. I know in 2012, 13, 14, it was becoming very personal, certainly when I was, you know.
[00:53:25] Brian: Working very heavily with Google products. And I remember talking to a product manager, Nick, you know who you are, you know, and I said, I, I just think this is the wrong way to go. And he was like, oh, well we’ve always collected personal data, we just haven’t surfaced it before. And it just got me thinking, I thought, I don’t like this at all.
[00:53:39] Brian: And that, that was my beginning of, of moving from, I mean, I’m still a practitioner, a data practitioner, but moving much more into privacy, which is almost, you know, my entire role these days. It got me thinking, and, and really from about, I’d say about 2015, everything has become hyper-targeted, hyper-personalized.
[00:53:54] Brian: And it’s a myth that you need all of that hyper-targeted data. And the idea that, you know, the internet would collapse if we didn’t have personalized ads or remarketing, it’s just nonsense. We had it, you know, Google and Meta were making billions way before, you know, it all became super targeted. So I think there’s so much you can do with data that’s at the aggregate level.
[00:54:15] Dara: Yeah. You know, data is not evil.
[00:54:17] Brian: You know, we have to have data. If you’re going to build schools, have the right number of teachers, the right number of hospitals and doctors and houses, and all the things that we need to do in society, you have to have data. But the idea that you need to know, you know, my entire search history, you know, what my preferences are when I go to the pharmacy, what my preferences are in clothes or whatever, you know, the personal stuff.
[00:54:40] Brian: The idea that you need to know that in order to make money, I just think is absolute BS. I really think it’s a myth that’s been propagated by people that make a ton of money out of this stuff. You know, online advertising is, what, a half-a-trillion-dollar industry now, of which Google probably makes about half, you know, hundreds of billions of dollars every year from hyper-targeted ads, which, you know, we just don’t need.
[00:55:05] Brian: There’s a great phrase, which ended up, I think, in my first book, in fact. It goes something like: advanced web metrics is about doing the basics really well and applying it in a clever way. And when I heard that, I thought, wow, I wish I’d thought of that. It was like, you know, what a great phrase.
[00:55:22] Brian: ’cause that epitomizes everything that I was trying to do in my books and everything I’ve always tried to do. It’s, you know, doing the basics really well and applying it in a clever way. I was actually smart enough to marry the person that said that, so it’s Sarah Clifton. So, you know, shout out. But it’s something, I mean, it’s a very old phrase now.
[00:55:40] Brian: I think it was 2008, my first book anyway, so it’s around about then. I still think it’s applicable today, just as much as it ever was then. Yes, there’s some, you know, personalized data. If you’ve got a transaction, you’ve got some personalized data; if you’ve got a subscription form or newsletter sign-up, of course you’ve got personal data that you’re going to handle.
[00:56:00] Brian: But everything else, I’m not so sure. I just, we could do a very, very good job as analysts without knowing exactly. Even if it’s a fingerprint. I mean, people don’t know it’s Brian Clifton, they don’t know my name, if I accept these third-party cookies, but it’s a digital fingerprint. You know, they know it’s me, and they stitch together any of these fingerprints that they find online.
[00:56:20] Brian: They stitch it all together and then you triangulate the data and yeah, of course it’s probably an Englishman living in Sweden. So, you know, that’s scary, that type of thing. And once people realize that’s what’s going on, they don’t want anything of it. And so they bail out and don’t hit consent on a, on a form or a cookie popup banner.
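The fingerprint stitching Brian describes can be sketched in a few lines. This is a hypothetical illustration (the attribute names and hashing scheme are invented for the example, not any tracker’s actual implementation): hashing a handful of observable browser attributes yields a stable identifier that follows a user across sites without ever containing a name.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash a set of observable browser/device attributes into a stable ID.
    No name, email, or cookie is involved, yet the ID is persistent."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# The same person observed on two different sites yields the same
# fingerprint, so the two browsing histories can be stitched together.
visitor = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15)",
    "timezone": "Europe/Stockholm",
    "language": "en-GB",
    "screen": "2560x1440",
    "fonts_hash": "a1b2c3",
}
site_a = fingerprint(visitor)
site_b = fingerprint(dict(visitor))  # same attributes seen elsewhere
print(site_a == site_b)  # True: histories can be joined without a name
```

Triangulating those attributes is how you get to "probably an Englishman living in Sweden" without anyone ever typing in a name.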
[00:56:38] Brian: So it’s a real challenge. But if we could just get people to think creatively, ’cause running marketing doesn’t require hyper-targeting. I mean, ironically, 20 years ago, Google was the king of contextual advertising, called AdSense. I mean, they were the king of that. And they just kind of let that go, or they morphed it into something that’s, I keep using the word hyper, but it’s hyper-personalized.
[00:57:01] Brian: So it was possible 20 years ago. I’m sure we can do a much better job now. It’s probably never going to be as good at delivering ads as knowing exactly who you are and all of your preferences, how could it be? Look, I’d love to do the research to show, ’cause I think it’s intuitively true, that the difference between a hyper-targeted ad and a contextual ad is so small. And the trade-off in privacy benefits you: having stronger privacy for users means they’re much more likely to give you their data.
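Contextual advertising of the kind Brian is recalling needs no user data at all, only the page content. A minimal sketch, with an invented ad inventory and naive keyword matching rather than anything Google actually shipped:

```python
# Hypothetical ad inventory: each ad is tagged with topic keywords.
ADS = {
    "running_shoes": {"marathon", "training", "shoes", "running"},
    "analytics_tool": {"data", "analytics", "privacy", "measurement"},
    "coffee_beans": {"coffee", "espresso", "roast", "beans"},
}

def pick_contextual_ad(page_text: str) -> str:
    """Choose the ad whose keywords best overlap the page's words.
    The ranking signal is the page, not the person."""
    words = set(page_text.lower().split())
    scores = {ad: len(keywords & words) for ad, keywords in ADS.items()}
    return max(scores, key=scores.get)

page = "privacy first measurement and web analytics for marketers"
print(pick_contextual_ad(page))  # analytics_tool
```

The point of the sketch is that nothing about the user is consulted: swap in a real page and the same function works with no cookie, no consent banner, and no profile.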
[00:57:33] Dara: Yeah.
[00:57:33] Brian: When you ask for it. ’cause they trust you. So, sorry, it’s a bit of a ramble. Long answer for you.
[00:57:39] Matthew: I do wonder, just to seamlessly dovetail this back into AI, that it is going to go more towards ultra-hyper-personalization, because you’ve got people like, say, Sam Altman, CEO of OpenAI, who has been on record a number of times now saying that he envisages these AIs knowing as much as humanly possible about you.
[00:58:00] Matthew: Like, and hopefully that’s a personal choice, but, pumping more and more data into it. And I suppose the difference there is the user ends up getting more out of that product. With hyper-personalized advertisement, what you’re getting out of it is just a slightly more relevant ad. But when we start talking about advertising in these LLM platforms, which it feels like might be an inevitability, Google’s just released it for, say, the AI Overviews that they’re releasing.
[00:58:27] Matthew: If you couple that with the tons of data that people are voluntarily chucking into AI models to get more utility out of them, and then layering in advertising, that’s quite a scary thought, yeah. Having more data than they’ve ever had, potentially. And if that’s the new user interface, like we said earlier, I’m going on a rant now.
[00:58:46] Dara: I’m, I’m going to delete all my chats with Claude now, you’ve just terrified me.
[00:58:51] Brian: Well, I think very relevant points, but there’s two things. So Claude came out of OpenAI. I don’t know the politics of it, but people left and they wanted to set up, you know, ethical AI, whether it uses different models or what, what else is different?
[00:59:06] Brian: I don’t know. But they wanted to do an ethical version of what OpenAI was doing. So there’s some politics there. And they used something called Constitutional AI, which is based on, I think I’m right in saying, a kind of human values framework, how, how it comes to its decisions.
[00:59:24] Brian: Whereas the, the ChatGPT one is based on human preferences, isn’t it? So it’s very susceptible to, I think, for example, people clicking on things that make them laugh, like funny cats, or things that make them outraged, like racism. So, you know, you get a lot of clicks for the wrong reasons, the classic social media problem that we have.
[00:59:44] Brian: So I think, you know, so we have an alternative. That’s one thing. And I think people like Sam Altman, if they go down that route, they’re just going to push people away to alternatives. You know, when you talk to users, and, you know, anyone that listens to this conversation, you very quickly come to the question, why am I giving data to these people, these tools? What’s in it?
[01:00:05] Brian: Well, the first question is, what’s in it for me? I don’t think there’s much. So a lot of people just opt out, and they make the effort to find the buttons that say no. Of course, we have to understand whether they are actually honoring that opt-out. But at least people are doing that. And if people don’t honor that or respect that, then they’re going to get into trouble at some point.
[01:00:25] Brian: But if you look at opt-out rates, there’s some very good data presented this year, which was survey data from the US and survey data from the EU. The EU body was Eurostat, which is the statistical body of the European Union. So it’s like top panelists, top data scientists, and they made the data available so you can pull it down and query it yourself.
[01:00:44] Brian: And I compared the people using ad-blocking software in America versus people that are changing things on their computer browsers, I’m guessing, to protect their privacy. Not exactly a direct one-to-one comparison, but that’s the best comparison I could do. But the opt-out rate averaged at 50% in the EU, five zero, which was surprisingly high. I thought it would be, yeah, in the twenties or thirties, but 50% was the average.
[01:01:11] Brian: Some were much higher. Finland was very high from memory. And oddly, I think Germany was relatively low, I mean, still close to 50%, but relatively low. But wow, 50% of people are doing something on their computer to protect their privacy. And then in the US it was a smaller sample set. In the US it was 33%, 33 or 34, something like that.
[01:01:30] Brian: And although it was ad-blocking software, and obviously that’s to block ads, when they asked, okay, why are you using this software, the number two answer after blocking ads was to protect privacy. So you have the US at 33% and the EU at 50%. Okay, you would expect the US to lag behind, ’cause the US doesn’t quite take privacy as seriously as we do, but they’re still high.
[01:01:51] Brian: My point is, that’s still a lot of people, and there’s only one way that line is going, it’s going up and to the right. So people like Sam Altman, if that’s what they’re saying, they want to, you know, target the hell out of everybody, anybody that shares data, then I think you’re going to see those opt-out rates go up.
[01:02:08] Brian: You know, what happens when it reaches a hundred percent, a hundred percent say no? What happens then? Does all this collapse? Or are we all using data from 2010 because everyone opted out? Do we never move forward? Or do the tools, the more ethical tools like Claude, do they suddenly become the dominant ones?
[01:02:25] Brian: So there’s, there’s a lot at stake here, not just about building smart algorithms, but really making sure that you have trust, at the end of the day, from users. I don’t think we’re there with the trust. I think that’s very much on a cliff edge at the moment and, and looks to me like it’s tipping one way, still to be decided.
[01:02:44] Matthew: I think that’s what I was really getting at, and I agree with what you’ve said. I imagine the utility of AI and, I guess the best way to look at it is like it’s an assistant. The more your assistant knows about you, the more powerful and better it is. So I guess we haven’t had that acid test right now. Are people going to start just wanting more and more utility out of these assistants as they get more and more useful, and just naturally start to pump that data in?
[01:03:08] Matthew: But I hope, like you say, there are other players, and I think we literally said last week that we’re becoming a little bit of Claude fanboys ourselves, because of those things you’ve mentioned, that they’re really transparent with their research and things that go wrong, and with funny results sometimes, tungsten cubes and shops, et cetera.
[01:03:28] Matthew: They just feel like the people who are trying to be as responsible with this stuff as possible, I just hope Meta doesn’t win. They’re the ones I just don’t want to win out on the race.
[01:03:38] Dara: I don’t think anyone wants ’em to win, do they?
[01:03:40] Matthew: No. I hope No. Maybe Mark Zuckerberg.
[01:03:41] Brian: Well, I think if you think of all the history of the web, I think, you know, Meta’s been a big, I don’t know, North Star in terms of, you know, because like I said, Google were massive into contextual advertising, it was dominant.
[01:03:55] Brian: They had a different way when I was working, and I think they saw what Facebook was doing around about 2008 and thought, wow, this is the future. We have to do this. Otherwise, Facebook would overtake us. And maybe they would’ve done May, maybe they wouldn’t, but that was a definite fork in the road for Google.
[01:04:11] Brian: I think they, they weren’t doing that, and then they decided to go and do that and take it, you know, exponential. So, yeah. Not, not Meta.
[01:04:19] Dara: Not meta. Yes. Not meta. No, not meta. Not meta. Sorry, Mark, if you’re listening to this, yeah, I think we’ve been down on Zuck before on this podcast actually, so he probably doesn’t listen.
[01:04:28] Matthew: Yeah, it’s an easy target. Yeah.
[01:04:29] Dara: I just have one more question for you, which is a bit of a philosophical one. So my partner’s a musician. I’ve been talking to her lately about what she thinks about, you know, when does creativity stop being true creativity if you’re using AI? Is it a, is it a sliding scale?
[01:04:46] Dara: So I’m curious to know what you think, because we can verify your book was definitely written by you, because it was pre-AI. But you mentioned earlier that maybe that’s a project on your shelf, to write another book. So what are your thoughts? How far would you take the use of AI?
[01:05:04] Dara: Because it seems to me that it would be sensible to use it to a certain extent at least. But do you feel there is a line where, if it crosses it, it’s not truly your book anymore and it’s an AI-written book?
[01:05:16] Brian: Yeah, I mean, for me, I use AI a great deal to generate content, but the way I use it is I get all my thoughts down, kind of like a brain dump.
[01:05:24] Brian: So if you’re writing a book, for example, it would take me about four weeks to write a chapter. May, maybe a bit longer given the fact that I have been thinking about a book probably several years in advance. So you start off, the first week is a brain dump basically, and then the second week is organizing that brain dump into some kind of structure so that it actually makes sense to yourself.
[01:05:47] Brian: And then the third week, you rewrite it, because you’re not writing for yourself, you’re writing for an audience. You really have to put yourself into their shoes: okay, what does my audience know about this subject? What do they need to know, and how can I help? Then the fourth week it goes to an editor and, you know, if you’re with a professional publisher, the editor beautifies it and takes it to a better level, better-structured sentences, better language, and then it comes back to me and I do some edits.
[01:06:12] Brian: So we’re into five weeks at least. It takes a long time. What you can do, and what I do, I’m not writing books at the moment, but what I do is I shorten those weeks, week two to week four. So I still do the brain dump. So these are my thoughts, this is what I know, and this is what I, Brian, understand about what I’m writing.
[01:06:31] Brian: I don’t need to worry about whether a reader of my book understands it or my content. I just want to make sure it’s correct, scientific, rigorous, and those sorts of things. And then I run it through, you know, Claude or ChatGPT, and it purifies it for me, and I read it back and think, oh yeah, that’s what I’m trying to say.
[01:06:49] Brian: And oh yeah, that’s good English and much better structure. And you know, if you tell it what audience you want the content to be written for, it will turn it around and it will use structure and language based on, let’s say, a marketer or a privacy professional or something like that. So suddenly those weeks, whatever it was, 2, 3, 4, are suddenly now compacted down to a couple of hours.
[01:07:12] Brian: Actually, it’s a couple of minutes, but I have to obviously check what is written, so that it’s not made mistakes. So suddenly you go from the week-one brain dump to, in the second week, I mean, you could possibly even skip an editor. You could possibly go to the final draft in week two. So the six-week process is now suddenly a two-week process.
[01:07:33] Brian: I haven’t written a book since 2015 because of the sheer effort and time it takes, the best part of a year writing a book, 10 chapters, 12 chapters. Now we’re talking about, okay, you could do a book in a couple of months. You know, I could take a couple of months off and just focus on writing a book. That’s like, oh wow, okay.
[01:07:50] Brian: Now I’m interested. I might write another book. So it’s obviously a fine line how far you take that. But the reason hopefully that I’ve been successful as an author is that the first week of brain dump is important because those are. My experiences, my knowledge, that’s what’s going to help someone. And everything else is just really beautifying it, making it easy to read and easy to absorb and getting the audience right.
[01:08:14] Brian: So it’s, I think it’s massive for creative people. Yeah, I’m only talking about writing, but it could be, I don’t know, painting or drawing. It can really, you know, supercharge you in terms of what you can do. And like we said at the beginning, do projects that have been sitting on the shelf for so long because you haven’t got the time.
[01:08:31] Brian: So that’s how I use it. It’s the original content, it’s my thoughts, it’s my points, my experience, my knowledge, but it’s written in a much better way. You know, I would agonize, when I do a LinkedIn post, I used to agonize for hours to make sure I’ve got the tone right, that I’m not too critical. I want to be constructive, you know, especially if I am criticizing someone’s privacy approach.
[01:08:51] Brian: You still want to be constructive, you want to give them the opportunity to fix it. I’d spend hours, you know, making sure, and I’d go to my wife, ’cause she’s a marketer: what do you think of this? And, ah, in the end, she just said, why are you bothering? Now I’m writing a lot more posts than I’ve done for years, because it’s much easier, but it’s still my original input.
[01:09:11] Dara: Yeah, that’s what’s important, isn’t it? And, and I get it. I just hope your editor’s not listening ’cause he or she’s going to know now they’re out of a job.
[01:09:17] Brian: Yeah. Dick, sorry, sorry Mike.
[01:09:20] Dara: You’ve done a great job, Dick, but unfortunately AI has replaced you.
[01:09:23] Matthew: He’ll just use AI as well, he will, himself.
[01:09:25] Brian: He’ll, he’ll move on to bigger and better projects. I mean, you know.
[01:09:28] Matthew: Yeah. I ask this question every time, and it’s a bit of a funny question in the context of these conversations, ’cause we’ve kind of been talking about the future quite a lot. But essentially it’s a crystal ball: over the next sort of two or three years, where do you see things going?
[01:09:42] Matthew: Where do you see things heading? What’s, what’s the big changes coming our way?
[01:09:45] Brian: Yeah, I mean, I think hyper-personalization, I think that’s at a tipping point. We talked about that, and yeah, at some point users are just going to say no. Users, not legislators; they’ve already said it, you know, there’s a framework for this.
[01:09:59] Brian: If you have consent, that’s pretty much what you need, informed and transparent and all of the other qualifiers. So the legal framework is there, but users are still giving away data, often unknowingly. I think that’s going to grow. I think there is a tipping point. The problem is, I’ve said that since 2008, when I left Google. I was just walking out the door, I think, and I was at an event, and Jim Sterne, who’s, you know, a great analyst that’s been around a lot longer than me even, I’m sure he won’t mind me saying that, asked me the question, what’s coming next?
[01:10:29] Brian: This is 2008. And I said, privacy, privacy is the next big thing. And of course, yeah, everyone laughed at me. They said, oh, that’s the end of your career, Brian. I’ve been saying this a long time, predicting it a long time, but I really do think that, you know, all of the momentum of Snowden, of Cambridge Analytica, you know, the GDPR of course, and Max Schrems, a big advocate, and just mainstream journalism, you know, focusing on this, and the harms that it can do, and the harms that we all know.
[01:10:53] Brian: If you have kids, we all know the harm that is happening on social media. So I just think privacy is so important. At some point it’s going to tip over the edge, and if the Sams of this world are not listening and not looking in the same direction, I think that’s going to affect us, obviously.
[01:11:11] Brian: It’s going to affect us enormously, so I think that’s a big thing. I think also, just to say about AI and trying to predict the future, you know, I think we’re at a very, like any industrial revolution, you know, a major revolutionary change in our times. You know, we’re at the very beginning, the very, very beginning.
[01:11:28] Brian: It’s early days. So someone said, you know, electricity was not invented just to replace candles. Yet you think about what we do with electricity these days, it’s not to make a better candle. But that’s how it started off, you know, in the very early days. And, you know, computers didn’t take off and become successful because they were faster at typing than typewriters.
[01:11:49] Brian: So there’s so much that we don’t know and don’t understand about where we’re going, but I think the future’s bright. I think, absolutely, from an AI perspective. And we’re only just kind of figuring out how to use the data, and there’s still, apart from the privacy issues, how do you address it when it goes wrong?
[01:12:06] Brian: We haven’t really seen that. But how do you address mistakes? How do you tell ChatGPT, no, I know better than you, and I know this is wrong, how do I correct it? You know, like a Wikipedia editor would do, they would make a judgment and say, look, I’m an expert in this field, I know this is factually incorrect.
[01:12:23] Brian: Here are the correct facts. How do you fix that in these AI models? There isn’t a mechanism to do that. And there isn’t a mechanism to say, you know, the data about me personally is wrong, how do I correct that? Or how do I even know what you have on me, what you’ve scraped from my 30 years being online?
[01:12:40] Matthew: That’s the trouble, isn’t it? I don’t think the model creators know what is inside the models either. I don’t think they could tell you.
[01:12:47] Brian: But isn’t it worrying when you say that people don’t know what’s in their own models? They built these tools. Doesn’t it sound like, I don’t know, the banking crisis from 2008? The banks didn’t know what they were packaging and selling.
[01:12:58] Brian: You know, it does scare me that, you know, it is like, really, you don’t know what you have in your own product.
[01:13:04] Dara: That’s a really interesting and terrifying analogy. Yeah.
[01:13:09] Brian: So anyway, there’s, I don’t want to be too negative, but there are lots of things to think about and we should be thinking about. But I think, you know, building better candles and making typewriters faster.
[01:13:18] Brian: Great, let’s do that. Let’s be more creative because we have the space and time to. There’s lots of stuff, you know, that I’m just hoping is just around the corner, you know, so we never have a coronavirus again, or, you know, we find a cure for cancer, and all of the things that we should be using AI for instead of trying to build a better ad.
[01:13:36] Matthew: That’s my fault. I was thinking in my head, this is one of the more optimistic crystal ball rounds we’ve had, and then I instantly brought it into a negative space. So I’m glad you were positive. Let’s, let’s, let’s
[01:13:48] Dara: call it balance. Brian, I’d just like to thank you again, appreciate your time, and thank you for agreeing to come and chat to us.
[01:13:54] Dara: It’s been a really good conversation, and left room for more, I think. So if you’re open to it, maybe we’ll see how these predictions pan out in a future episode. Oh, yeah. Scary. But yeah, for now, thanks again. Thanks guys, I appreciate it. I enjoyed it. That’s it for this week’s episode of The Measure Pod.
[01:14:09] Dara: We hope you enjoyed it and picked up something useful along the way. If you haven’t already, make sure to subscribe on whatever platform you’re listening on so you don’t miss future episodes.
[01:14:19] Matthew: And if you’re enjoying the show, we’d really appreciate it if you left us a quick review. It really helps more people discover the pod and keeps us motivated to bring back more. So thanks for listening and we’ll catch you next time.