#133 The role of AI in analytics (with Juliana Jackson at Jellyfish)
In this episode of The Measure Pod, Dara and Matthew sit down with Juliana Jackson, returning for her second appearance. Juliana shares her journey since 2010, emphasising her identity as a product person and how her role has developed. With a knack for blending technical data insights with commercial strategy, she currently holds the position of Cloud Director at Jellyfish. Tune in to discover how her varied experiences shape her approach to product management and the evolving landscape of AI.
Show notes
- Juliana Jackson & Jellyfish
- BrowserOS
- NotebookLM upgrades
- Company knowledge in ChatGPT
- More from The Measure Pod
Share your thoughts and ideas on our Feedback Form.
Follow Measurelab on LinkedIn
Transcript
“I’m a product person at heart. I’m always going to be a product person.”
Juliana
“I think we need to take like 200 steps back and really understand what artificial intelligence is, what machine learning is, and what generative AI is.”
Juliana
[00:00:00] Lizzie: Hello and welcome to the Measure Pod, the podcast by Measurelab dedicated to the ever-changing world of data and analytics, with your hosts, Dara Fitzgerald and Matthew Hooson. Between them, they’ve spent more years than they’d like to admit wrestling with dashboards, data quality, and the occasional Google curveball.
[00:00:32] Lizzie: So join us as we share stories about how analytics really works today and where it might be headed tomorrow. Let’s get into it.
[00:00:41] Dara: Hello, welcome back to the Measure Pod. I’m Dara. I’m joined as always by Matthew. How are you doing this week, Matthew?
[00:00:46] Matthew: I’m okay. I think. I think I’m okay. How are you?
[00:00:49] Dara: Yeah, I’m good. I’m all good. No complaints. We’re a boring pair.
[00:00:53] Matthew: I say I’m good. I am. I’m just about to finish a book.
[00:00:57] Dara: No, sorry, I’ve, I’ve asked the obligatory question. You’ve given the socially acceptable answer of I’m fine. Let’s move on.
[00:01:03] Matthew: Yeah. This book is relevant, I promise.
[00:01:05] Dara: Okay. Oh, fine. Okay. In that case, we’ll make an exception.
[00:01:07] Matthew: I’m reading a book with a cheery title called If Anyone Builds It, Everyone Dies.
[00:01:13] Dara: It’s not really called that, is it?
[00:01:14] Matthew: It is, yeah.
[00:01:15] Dara: Brilliant. How have I not heard of it? I need a copy.
[00:01:17] Matthew: Yeah. It’s by Eliezer Yudkowsky and Nate Soares.
[00:01:22] Dara: Who is our guest today? It’s not, they’re not, but we would not, yeah. Let’s aim for that.
[00:01:27] Matthew: No. Yeah. It’s a new level of existential dread, not about jobs, and not about how, how people are going to keep working. It’s more about the survival of the entirety of humanity, which is Oh, so nothing serious then?
[00:01:39] Dara: No, not as serious as the life of an analyst or engineer.
[00:01:43] Matthew: No, no, no. That is, that’s real high stakes stuff. Yeah.
[00:01:46] Dara: Okay. Just so yeah. The mere survival of humanity. Trivial.
[00:01:49] Matthew: Yeah. So I have, I, I’m okay, but I have the rumblings of the apocalypse. In the back of my mind. But apart from that,
[00:01:56] Dara: I mean, isn’t that just normal life?
[00:01:57] Matthew: Yeah, pretty much.
[00:01:59] Dara: Okay. Well, let’s ground ourselves with some industry news. Some nice, I think we’ve got, you know, nothing charming, nothing, nothing slop related this week, although it tends to find a way in and nothing particularly salty. I think we’re going for good, straightforward vanilla, unsalted vanilla news this week.
[00:02:16] Matthew: Salt is where we make it.
[00:02:17] Dara: I’ve just scared off all the listeners already, two sentences in, I’ve just said, yeah, no, nothing interesting this week. No, it’s not that there’s nothing interesting, but there’s nothing crazy or terrifying this week. But there is some news. So we could start off with maybe something a little light, around the NotebookLM updates maybe.
[00:02:35] Dara: So for anyone who’s using NotebookLM, and actually for anyone who isn’t, it’s great and you really should. It’s one of the many things Google doesn’t really shout about. It’s almost like they just build these things and think, well, you know, if people are interested, they’ll find them, rather than marketing them particularly aggressively.
[00:02:51] Dara: But we at the company, and I think some of us personally, use NotebookLM. You’d probably describe it better than me, but the way I would describe it is it hallucinates a lot less. It’s basically only going to use the source material you give it, which can be YouTube videos, it can be Google Docs, it can be websites, and it will analyze those sources.
[00:03:13] Dara: But it will give you factual replies based on that information only, rather than trying to please you and making stuff up.
[00:03:21] Matthew: Yeah, yeah. Then it’s like. It has various different kinds of outputs that are really cool. So it’s obvious you’ve got your standard chat where you can just ask it questions and it’ll respond to you.
[00:03:32] Matthew: But like Dara says, it’ll be very specific to what’s in there. But then you’ve got stuff like a podcast, which I think we’ve mentioned before, like you can create a little podcast. It can be an interactive podcast where you can interrupt the podcast hosts and ask questions and things.
[00:03:45] Matthew: And it’s also got stuff like this mind map, where you upload all this information and then it automatically makes this sort of hierarchical, expandable mind map of the subject. It’s got really cool stuff in there. And yeah, like you say, they’ve barely mentioned it. It seems so powerful and yet strangely just sort of an aside for Google.
[00:04:02] Dara: Totally. You just made me think as well, we should create a meta version of the bot. We should feed the podcast into it and get it to create a podcast. Talking about the podcast.
[00:04:11] Matthew: Yes. I didn’t really start an episode.
[00:04:13] Dara: Yeah. Just for our own fun really. But yes. Anyway, sorry. So they have released some updates, which basically are all around making it more powerful.
[00:04:21] Dara: So increasing the processing capabilities, increasing the conversation context window, enabling history. ’Cause I don’t think it had history before, or at least it’s automatically enabled now. So it’ll save all your history. You can drop out of a conversation and pick it back up later, which I guess is something you couldn’t do before. I’d never actually needed to do that before.
[00:04:42] Matthew: No, I don’t think you could, really. I think it was within that sort of session, almost, it could maybe remember what you’d been talking about. But then after that, when you came back in, I think you were kind of just starting again.
[00:04:54] Matthew: But I suppose having it remember that context of the stuff you’d done, just saved as another resource, is presumably now a really cool addition to it. And then it looks like you can now personalize it as well, so kind of give it a job, a role almost.
[00:05:14] Matthew: Oh, you froze there. Sorry, I’m getting the red warning, so it’s a serious one. Yeah. So it looks like you can also now personalize the chat to say, like, treat me like I’m a PhD candidate. These are examples off their website. Or, as a marketing strategist, et cetera, et cetera. So you can give it loads of context, get it to act in a particular way, and then use that as you see fit.
[00:05:35] Matthew: But yeah, really it’s a really cool, powerful thing that if anyone hasn’t already checked out, they definitely should.
[00:05:40] Dara: Yeah, I was a little confused about this, maybe it’s just the way it was worded, and I haven’t played around with it yet, but this bit about goals. They talked about goals, but then they gave examples, some of which were more about, what do you call it, like tone or persona.
[00:05:53] Dara: And it was like, you know, treat me like a PhD student, but that’s not really a goal, is it? So I don’t know, I was slightly confused by that bit as to whether that’s what they’re calling a goal, or whether that’s a slightly different thing, that you can now give it a persona, and then the goal would be something separate saying, my goal is to finish my thesis, or whatever it may be.
[00:06:14] Matthew: Yeah. I think it’s just what they’ve called it. The exact wording that they use underneath that is: you can customize chat to adapt to a specific goal, voice, or role. So I think they’re calling it goals, but it’s got a couple of different things underneath there that it’ll do, yeah.
[00:06:31] Matthew: These companies like to give everything a spiffy title, don’t they? Yeah. You know, goals, skills, et cetera.
[00:06:40] Dara: One thing I don’t know, and I didn’t see that they upgraded it, is the limit on the number of data sources. I know there is a limit, but I haven’t hit it personally yet.
[00:06:48] Dara: I have the number 200 in my head, but I might have completely made that up. Is it something like 200?
[00:06:54] Matthew: 200 what?
[00:06:56] Dara: 200 things?
[00:06:57] Matthew: Yeah.
[00:06:58] Dara: 200. No, the data sources that you can connect to it. ’Cause I know it is limited in some way. I don’t know how big or small that limit is. Let’s see if I’m just making that up.
[00:07:09] Matthew: I’ve not hit it either.
[00:07:10] Dara: I’m going to say
[00:07:10] Matthew: 200. I’ll slyly Google it on the side.
[00:07:12] Dara: Yeah.
[00:07:12] Matthew: 50 sources each? I dunno. I don’t know. Or maybe 200 megabytes. I don’t know.
[00:07:18] Dara: Yeah. Something
[00:07:19] Matthew: I’m, I’m not going to sit here and go read this.
[00:07:21] Dara: There’s a, there’s a limit.
[00:07:22] Matthew: There’s some limit. There is a limit. There’s a number.
[00:07:24] Dara: Yeah. I mean, the reason I mentioned it
[00:07:26] Matthew: We made it salty.
[00:07:27] Dara: We did, we always do. There’s always a bit of salt somewhere to be found. Yeah. 200, that sounds like a good number. The reason it’s relevant, obviously, is if you’re using it for a one-off, as I’ve done sometimes, where I’ll just put something like a big document into it so that I can ask questions about that document.
[00:07:46] Dara: It’s a one-off situation. Or maybe you might come back to it, but it’s only ever going to be that one document. Or you might put a book in, you know, the book that you’re reading. If you had that in PDF form, you could chuck that in there and then probe it and ask it questions. But in a more kind of business context, you might wanna have lots of documents in there.
[00:08:02] Dara: You might wanna have all of your company documents in there, or you might wanna have, I dunno, every proposal you’ve ever written or whatever. And I guess that’s where that limit would become a problem, if it exists. But to be able to use it, you know, to really truly use it as this kind of database of all of your information that you can then query and get a straight answer back from a non-sycophantic LLM.
[00:08:28] Matthew: One of the things it doesn’t really do currently is you can’t link it to a Google Drive folder. If you could just point it at a folder and everything in there got indexed, that would be great, but at the minute you have to point to specific docs. So it gets difficult to have it as a living, ever-growing memory, but maybe that’s where they’re going with Gemini Enterprise and things like that, with company knowledge bases.
[00:08:51] Dara: That’s probably where they’ll come together. And actually that’s a good segue, Mr. Segue, you just segued into another piece of news there. Talking of Gemini Enterprise and company knowledge, another piece of news is around OpenAI releasing their quote unquote company knowledge. I don’t think you can really trademark the words company knowledge, can you?
[00:09:12] Matthew: Oh, well, people try and trademark anything, so you can, you can, there we go.
[00:09:17] Dara: Yeah. So this is, again, very similar to what Google is doing. I don’t know, tell me if I’m getting this wrong, but is this the artist formerly known as Agentspace, that then became part of Gemini Enterprise, and now it’s within Gemini Enterprise that you can have this cross-company knowledge?
[00:09:36] Matthew: So I don’t know.
[00:09:38] Dara: I think something like that.
[00:09:39] Matthew: Yeah. I think Agentspace is now, yeah, Gemini Enterprise, and then that knowledge base sits within it. So we’ve been playing with Gemini Enterprise a little bit, and you can add various sources that it can access, like Jira and things like that.
[00:09:54] Matthew: Drive, obviously. And then ChatGPT seems to be doing similar things. Interestingly, it had a couple of apps that I was like, ooh, that looks quite cool. So obviously it has all the Google apps, but it has Slack as standard, which is nice.
[00:10:06] Matthew: Yeah. Jira and things and GitHub, which I’m sure Google does have, but I didn’t see Slack in Google’s version. So yeah, it can pull in all these different sources when you’re talking to it and mm-hmm yeah.
[00:10:17] Dara: I mean, it’s really powerful.
[00:10:18] Matthew: It does seem like, yeah, I think we’ve said recently it feels like that’s kind of the holy grail at the minute: being able, as a business, to just chat to an LLM and it knows everything you know and everything that the company knows, and just be able to get it to perform tasks. And couple that with, say, Claude Skills,
[00:10:36] Matthew: thinking that it’s got particular templates and things, and it gets really powerful. Just unfortunately they’re all doing it separately, with different versions and names for all the stuff.
[00:10:47] Dara: And there’s definitely an element of, you know, keeping up with the Joneses, because Claude released their memory as well, I think the same day.
[00:10:55] Matthew: Good segue.
[00:10:56] Dara: Yeah. See, I learned from the best. I think they released it on the same day as OpenAI released the company knowledge. Or maybe they didn’t release it then, but I think it was one of these features that was available to some paid users before and then it rolled out to everybody.
[00:11:10] Dara: So, you know, now Claude has a memory as well. So it’s like they are just doing their best to keep up with each other. And I think we might have promised, and by we, I mean you promised, to do a scorecard. We were going to look at which ones were performing best for different types of use cases. So, just calling you out on that.
[00:11:28] Matthew: Yeah, I did that. I’ve released it and then taken it down since.
[00:11:31] Dara: Yeah, it was too good. We just decided it was too popular. Yeah, yeah. But yeah, so there very much is this case of, like, one of them will release a feature and very soon afterwards the others follow with an equivalent version of it. Yeah. Whether it’s Google with Gemini or Anthropic with Claude or whatever. It’s an arms race, isn’t it really?
[00:11:51] Matthew: Yeah, literally an arms race. It does feel like it’s Google’s turn, but I don’t think they’re quite as regular with the releases as the others at the moment.
[00:12:00] Matthew: And, to add a bit more salt in there, I saw, I think somewhere, that Google lost something like a hundred billion off its market cap when OpenAI released the Atlas browser. Really? Wow. It’s hard to know if that’s a lot anymore.
[00:12:15] Dara: Yeah, just numbers.
[00:12:18] Matthew: Yeah. To multi-trillion dollar companies, as they all are now. But yeah, you’ve got to feel like they’ll be spurred into action at some point with something, maybe Gemini 3, which I predicted would have already come out, which it hasn’t, but there you go.
[00:12:29] Dara: Well, no, listen, I’ll defend you. I’ll come to your defense here on this one because it wasn’t just your prediction.
[00:12:34] Dara: I think the whole industry was expecting it. And the reason I know that is because I was checking up on your prediction, and when I checked to see if it was out on the day you said it would be, there were a whole lot of news articles saying it was expected. So I don’t think your prediction was, you know, based on nothing.
[00:12:49] Dara: But it is weird, ’cause usually the industry knows when these things are going to happen. So are they holding off ’cause it’s going to be something big, or are they just not ready? I guess we’ll find out soon enough.
[00:12:58] Matthew: Yeah. With the safety stuff, you don’t know, do you?
[00:13:00] Dara: No. No. And we have said before, whether this is right or not, we suspect at least they might have to hold themselves to slightly higher standards in some respects, just because of the nature of their business and issues they’ve had in the past and whatever.
[00:13:16] Dara: Yeah. But yeah, what’s going to be in 3.0 we don’t know yet. But speaking of predictions, Mark Edmondson, who was a guest recently, did say that he thinks Google is going to finish the year in the lead. So he certainly thinks, whether it’s Gemini 3.0 or something after that, that it’s going to be significant. So let’s see if he’s right.
[00:13:35] Matthew: Yeah. I mean, that has been floating around, that people have got early access to it via playgrounds, things like that. It looks powerful.
[00:13:42] Dara: Mm.
[00:13:43] Matthew: Whatever it is. Whether it’s Gemini 3 or whether it’s called something else, Gemini 2.8.6.
[00:13:50] Dara: Yeah.
[00:13:50] Matthew: Or whatever. You never know, but yeah. Interesting. Yeah. I guess the final thing, really, is that OpenAI has, I think, now completed their restructuring. Mm-hmm. So probably everyone knows that originally OpenAI was founded as a not-for-profit. It was kind of like a research lab, originally founded by Elon Musk and Sam Altman and Ilya Sutskever, whose name I’ve almost certainly butchered.
[00:14:19] Matthew: And then obviously they released ChatGPT, it went gangbusters. And since then they’ve kind of been struggling with this identity of being a not-for-profit whilst also making giant deals and looking to spend massive amounts of money. So they’ve restructured in such a way that they now have a for-profit arm that is ultimately controlled by the not-for-profit arm.
[00:14:41] Matthew: It still exists, it’s some strange structure, but it looks like it may wanna go public and go for a multi-trillion valuation when they do, which would be pretty wild considering what their actual revenue is at the moment. But there you go. Yeah.
[00:14:59] Dara: Don’t even know what to say. Like you said earlier, do these numbers even mean anything anymore?
[00:15:03] Dara: Yeah, I get the feeling it is just restructuring for that, really, isn’t it? Yeah. It’s just paving the way for that IPO. I thought it was interesting that Sam Altman still has no shares.
[00:15:16] Matthew: Right.
[00:15:16] Dara: And I think I read a line, a quote from him saying, I have enough money already. Which is refreshing from somebody like that, because that doesn’t tend to be the blueprint, does it? You know, our pal Zuck, I don’t think he would give up too many of his shares, would he?
[00:15:30] Matthew: Friend of the podcast. No friend of the podcast, Zuckerberg. Hi, Mark.
[00:15:34] Dara: Yeah. Hi, Mark. How are you?
[00:15:35] Matthew: Yeah, it definitely seems like, well, it’s a bubble, right? Yeah. It’s a bubble. It’s quite a hot topic at the minute, whether it is or isn’t a bubble. I think it’s definitely a bubble, but it’s what kind of bubble it is. And I don’t like saying this, but I kind of agree with Jeff Bezos on his take on what it is. It’s like, yes, it is a bubble, but it’s an industry bubble.
[00:16:00] Dara: Hmm.
[00:16:00] Matthew: Similar to the dot-com and the biotech booms in the nineties, where there was massive overinvestment and a lot of shareholders lost money. But ultimately, what came out of it when it all settled down and popped was value and really powerful technology, in terms of the modern internet and a lot of technologies and medicines that saved lives. Compared to the 2008 crisis, the banking crisis.
[00:16:21] Matthew: That was a bubble that just hurt everyone and was a disaster in that way. I think the value here is undeniable, will still exist after any bubble bursts, and will still be transformative technology. It’s probably going to come out in the wash at some point and the investments are going to be found out, but hopefully just rich people lose some money.
[00:16:39] Dara: Yeah. Let’s hope so. Yeah. Listen, I think you’re right, for what it’s worth. Okay, so our guest on the show today is Juliana Jackson, who has been on the Measure Pod before, but not with Matthew and myself. She was on with Dan and Bhav previously. So we were very happy to get her on and have what I thought was a really interesting conversation.
[00:16:58] Dara: We got her thoughts on a lot of these themes that we’ve been discussing lately around AI and its role in analytics, but also its kind of broader role in society. And we covered a lot of interesting things around, you know, how companies are using it, and maybe misusing it, and what it means for people’s jobs, including data analysts and scientists, and even what it’s doing to things like critical thinking, and what Juliana’s thoughts were about that.
[00:17:24] Dara: So I thought it was quite a wide ranging and quite deep conversation, which, which I really enjoyed.
[00:17:29] Matthew: Yeah, slightly different takes and perspectives to some other guests we’ve had recently. So it’s nice to get a broad view of perspectives on all this stuff. Really, really interesting.
[00:17:39] Dara: Alright, enjoy the chat. Joining us on the Measure Pod today, we have Juliana Jackson, who is actually joining the Measure Pod for the second time. Although the last time you were on, it was Dan and Bhav that were hosting. So I think I can speak for Matthew and say that we’re both really excited to have you back on, and we’re looking forward to having a conversation with you.
[00:18:02] Dara: Some of our listeners will know who you are already, either from your podcast with CMO or from some of your own content. Just for the benefit of any listeners who maybe are less familiar with you, I’m going to ask you that horrible question now and get you to introduce yourself. You can be as brief or go into as much detail as you like, it’s up to you.
[00:18:21] Juliana: I’m glad to be back. Very, very nice of you guys to invite me a second time. It means I did something good the first time, yeah. I’m Juliana Jackson. I have been doing stuff on the internet since 2010. And that’s it. That’s all I got.
[00:18:40] Dara: Well, listen, I’ve immediately got a follow-up question. It’s interesting you say you do stuff on the internet, ’cause one of the things I was going to ask you is how you would actually describe it. If someone said to you, in one sentence, and this is such a hard question, but what do you do? Because you’ve got such a varied background, haven’t you?
[00:18:55] Dara: You’ve kind of gone through various different evolutions. Yeah. Everything. So what would you say to somebody now if they said, what do you do currently? What’s your focus professionally at the moment? Could you do it in a sentence?
[00:19:09] Juliana: No, no. I don’t know. I mean, I’m a product person. I like to say this. I’m a product person, that’s what I am for sure, a hundred percent. One that happens to like data, that is technical enough to be dangerous, but also very commercially minded. At the moment I work at Jellyfish. I’m a Cloud Director. It’s a big switch from what I used to do before. I’ve actually been thinking about this a lot lately.
[00:19:34] Juliana: Like, damn, this is a big difference to what I used to do before. I’m quite interested in performance media right now, which is also, wow, that’s a big change. Cloud for sure, AI for marketing, measurement, and creativity. So that’s kind of where I am. As I said, I’m a product person at heart. I’m always going to be a product person, so I like to think about things as, I don’t know, solution engineering, different solutions to help with different problems.
[00:20:02] Juliana: I quite like AI. I mean, that’s kind of what I’m known for, having opinions on AI. I also have actual work that I’ve done that is known in the public, and hopefully, fingers crossed, I’m going to win the Love Award. But yeah, I don’t know, I’m just a very curious person. I like data, I like products, and I’m just genuinely figuring it out one day at a time. I suck at speaking about myself, man. I just, yeah.
[00:20:31] Dara: No, that was a really good answer. I got more out of you there than your own intro, so we’re getting there.
[00:20:36] Juliana: Questions I didn’t approve.
[00:20:40] Dara: Yeah, we do, we do. This is why we don’t run questions past people. We wanna put them in the hot seat. But we’ll be nice.
[00:20:46] Dara: Don’t worry. We’re nice people. We’re not going to be mean. That’s cool. I’ll be honest, your opinions are partly why we got you on, and I mean that in a good way. Your content is always really considered. You always seem to have a point of view, even if it’s going against the grain.
[00:21:03] Dara: And again, partly because of your kind of diverse background. You’ve gone through so many iterations, and even where you’re working now, you’re kind of at the intersection. You know, you said it’s very focused now on performance marketing, which is new to you. You’ve always had that kind of technical curiosity.
[00:21:17] Dara: So I’m going to make a bit of a confession here to our listeners. We kind of teed this up when you and I spoke, and we said, look, how about we do something a little bit clickbait and say, you know, is AI going to make data scientists and analysts redundant? And we thought that could be a bit of a starting point, but actually it’s probably going to be a lot broader than that.
[00:21:36] Dara: But if we maybe start vaguely there, rather than just asking you that question directly: what’s your take on, and maybe this is going to be a bit difficult again, I’m going to ask you to summarize probably a lot of different thoughts that you have, but how would you sum up the current state of play with AI in terms of its usefulness, I guess, for businesses?
[00:22:00] Dara: And that could be either, you know, on the consultancy agency side or even on the brand side. Where do you think it’s at? Is it ready to be used properly, or is it still just a bit of a pipe dream and we’re a long way off being able to actually use it practically?
[00:22:16] Juliana: Okay. Expect a very long answer. So what is AI really, you know? Because most people will say AI and think about ChatGPT or Gemini or another LLM. AI has been around since the 1940s, and everybody that works in performance has been using programmatic advertising and A/B testing and, I don’t know, multi-armed bandits. And that’s also AI. That’s machine learning.
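The multi-armed bandit Juliana mentions is one of those quietly ubiquitous ML techniques in advertising, and it predates any LLM. As a minimal sketch, assuming made-up click-through rates and a plain epsilon-greedy strategy (the numbers and names here are purely illustrative, not from the episode):

```python
import random

def epsilon_greedy(true_ctrs, steps=10000, epsilon=0.1, seed=42):
    """Simulate an epsilon-greedy bandit choosing between ad variants.

    true_ctrs: hidden click-through rate of each variant (illustrative).
    Returns the estimated CTR per variant and how often each was shown.
    """
    rng = random.Random(seed)
    n = len(true_ctrs)
    shows = [0] * n   # times each variant was served
    clicks = [0] * n  # clicks observed per variant

    for _ in range(steps):
        if rng.random() < epsilon:
            # Explore: serve a random variant.
            arm = rng.randrange(n)
        else:
            # Exploit: serve the variant with the best observed CTR so far.
            arm = max(range(n),
                      key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
        shows[arm] += 1
        if rng.random() < true_ctrs[arm]:  # did the (simulated) user click?
            clicks[arm] += 1

    estimates = [clicks[i] / shows[i] if shows[i] else 0.0 for i in range(n)]
    return estimates, shows

estimates, shows = epsilon_greedy([0.02, 0.05, 0.11])
```

The point is that the allocation itself is learned: the loop gradually shifts traffic toward whichever variant performs best, which is exactly the kind of machine learning that has powered programmatic advertising for years.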
[00:22:40] Juliana: So AI is a whole encompassing umbrella that covers machine learning and deep learning, and generative AI is just a small part of deep learning. So I think the biggest issue that we have as an industry is we really don’t know what we’re talking about, but it’s provocative, right? Mm-hmm. It gets people going, just like that.
[00:22:59] Juliana: And I think because of the lack of understanding of what we are seeing when we’re seeing AI, we’re just making very stupid decisions, or we’re, you know, just trying to chase things. Like, look at vibe coding right now. Everybody a few months ago was showing, oh my God, look at Lovable, look at their valuation.
[00:23:19] Juliana: Oh my God, look at Cursor. And as you can see, every media publication in the last week or two has reported that people don’t use vibe coding anymore because it’s stupid. No shit. It kind of, like, depends, you know. AI has a lot of applications, I think, in the context of digital marketing analytics.
[00:23:37] Juliana: Sure, AI has its place and has been having its place for years. Mm-hmm. I think it’s about making the distinction between machine learning and generative AI. I think all of us here in this recording have used ML to a degree. Every type of advanced analysis that we do, be it, I don’t know, propensity modeling, propensity scoring, prefetch, all of this analysis that we do using SQL or whatever the fuck we’re using, is still a use of artificial intelligence, but it’s a branch of it.
[00:24:10] Juliana: So, sure, in the digital analytics community, we have been completely disrupted because now we think that we can use a large language model to do analysis for us. I’m really terrified of people using MCPs. I have very strong opinions, which I’m trying not to give right now, on MCPs and using LLMs to do data analysis.
[00:24:33] Juliana: Just because you have Gemini in BigQuery doesn’t mean you should use it to analyze your tables. So, yeah, to wrap it up, I think we need to take like 200 steps back and really understand what artificial intelligence is, what machine learning is, and what generative AI is, and to not conflate large language models, which are built for textual analysis, with tools for tabular analysis, because those models are not built to do tabular analysis.
[00:24:58] Juliana: So, yeah, it’s hard to answer your question in short, and everything has to do with the poor understanding and education of the public about what artificial intelligence is. I don’t know, does that answer you? Does that show you how angry I am?
[00:25:15] Matthew: Yeah. I want to scrape into the anger a bit. What is it about? We’ve been talking about MCPs a lot recently. We’ve had a couple of people on who have been singing their praises. So I’m interested in why you recommend not using them, or in what situations you might recommend using them. Or is it a case of the right hands? Is it people who don’t know enough being given tools that could be dangerous? What’s the anger?
[00:25:40] Juliana: So, every time I hear about MCPs, I don’t think there’s anything wrong with MCP the technology. I think it’s very useful to use it to combine, you know, your dataset and your backend and, you know, power different AI models. But it also depends on the AI model that you use.
[00:25:58] Juliana: So I wanna give this example, which is a culture example, because I’m a person of culture. If you’re listening to this, I want you to go on Google and search for the Kelly Rowland Excel meme. So if you open that meme, I’m waiting for you, you’ll see that she’s trying to text Nelly using Excel. Now, can you write a text in Excel?
[00:26:21] Juliana: Sure. Will that message ever hit Nelly? Probably not. Does that make Excel a bad tool? No. Excel is great. Excel is the best CDP of all time. But when I think about MCPs, sure, you can put numbers in a sheet and pass it to an LLM. Will that LLM be capable of proper data analysis? No, because it’s a textual analysis model.
[00:26:48] Juliana: It’s a transformer that takes text and makes predictions based on the tokens, on what is the most likely thing that would be the answer. LLMs, it’s in the name: language models, not number models. So sure, you can use an MCP to connect to XGBoost or another type of model that does tabular analysis. But if you use an MCP to connect your shitty dataset to Claude or to GPT, and then you’re asking, what are my best customers and clusters, will you get answers?
[00:27:27] Juliana: Sure. But how are you measuring for accuracy? How are you doing precision and recall? What is the confusion matrix? How do you know that the data that that model spewed out is correct? Even if you use, like, I pay for Gemini Enterprise because I like their AI Studio. So every time I do a Google Sheet with some numbers, I always get prompted by Gemini:
[00:27:49] Juliana: let me analyze this data for you. Which is cute, but even if you test it in a Google Sheet, it’s wrong 80% of the time, because it’s not capable of looking at tabular data. I’m actually going to publish this blog article I wrote months ago, where I looked at all the science papers from the first six months of 2025 that were testing LLMs on tabular analysis versus XGBoost and other methods.
[00:28:18] Juliana: And every time, they fail, because it’s not enough. You can use LLMs for different things, like rehydrating datasets. For example, let’s say we’re agencies, right? So we’re working with a company that does social listening. So let’s say you decide to export Brandwatch, which is a social listening tool, into BigQuery.
[00:28:38] Juliana: Well, of course these tools will have a very shitty API. So you realize that you’re missing a lot of data in your table. What can you use an LLM for? Well, luckily, LLMs are trained on a lot of data. So you can use the LLM to rehydrate some of that data, meaning you can get some more context into what somebody said on Facebook or on Twitter using deep research.
[00:29:02] Juliana: You can use an LLM to rehydrate the data that you have. But again, this is textual data from social media. It’s not numbers. So what I’m trying to say is that language models are made for analyzing language. It doesn’t matter how much you invest in technology: connecting a language model to your data is still going to produce shit results, because it’s not made for that type of data.
[00:29:27] Juliana: It’s good for text analysis, it’s good for reviews, customer support chat, social media comments, metadata, for sure. Everything that is text is good for a language model. If you need images, you should use image models. If you have numbers, you want classic machine learning. And that’s it. Was that a good answer, Matthew? I’m sorry.
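For listeners who want to make the validation questions above concrete, here is a minimal sketch of the checks Juliana lists (confusion matrix, precision, recall) applied to a hand-verified sample of model output. The labels and data are invented for illustration:

```python
# A sketch of the validation Juliana is asking for: before trusting labels a
# model spewed out, compare them against a hand-verified sample and compute
# the confusion matrix, precision and recall yourself.

def confusion_counts(y_true, y_pred, positive="churn"):
    # Returns (tp, fp, fn, tn) for one positive class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

def precision_recall(y_true, y_pred, positive="churn"):
    tp, fp, fn, _ = confusion_counts(y_true, y_pred, positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hand-labelled truth vs. what the model claimed, row by row (invented data).
truth = ["churn", "churn", "stay", "stay", "churn", "stay"]
model = ["churn", "stay",  "stay", "churn", "churn", "stay"]
p, r = precision_recall(truth, model)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```

If you cannot produce numbers like these for a model's output, you have no way of knowing whether what it spewed out is correct.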
[00:29:49] Dara: Matthew’s nodding.
[00:29:51] Juliana: I’m, I’m, I’m just
[00:29:53] Dara: I think it was a good answer, and Matthew was nodding as well, for anyone who’s listening rather than watching. It sounds so obvious when you say it, that the clue’s in the name, large language models, but people are just making this mistake. And gosh, I guess my question is going to be, who’s accountable for this?
[00:30:11] Dara: But you’re going to use it if you think it spits out something that looks sensible. So people who don’t know are just going to think, they’re going to ask the question, can you, can you tell me what these numbers mean? Can you do this for me? Can you do that for me? And it’s going to spit out an answer, because it wants to, I’m anthropomorphizing it now, but I’m going to say it wants to please you.
[00:30:31] Dara: But it kind of does, it wants to give you an answer, it doesn’t want to say ‘I don’t know’. So I’m not going to say who’s to blame for the fact that that just exists, but who’s accountable within a business to make sure that it’s being used in the right way? To stop people from just thinking, oh, have you tried ChatGPT? It can analyze all of our data for us and tell us what we should do.
[00:30:54] Juliana: Obviously it should be the data science team and also the analytics team. Like we all should know better by now. Like, this is our job, this is what we do, we all should know better. It’s our job to educate. I don’t think it’s anything wrong to try to use an LLM to analyze data or to show a graph and say, what do you think about this?
[00:31:14] Juliana: I think there’s caveats to this. For instance, a good use of an LLM: let’s say you connected your Search Console with BigQuery through a service account, and now you have access to your search queries. Then you decide to use Vertex AI, and you can use ML.GENERATE_TEXT, and you can prompt inside the SQL query.
[00:31:36] Juliana: In the CONCAT function, you can prompt the model to do some analysis. What the model does is basically just transform your prompt into a SQL query, right? It’s not necessarily doing the analysis for you like you would have with a conversational tool; basically you’re using Gemini to write SQL for you. And to make sure that data is correct,
[00:31:58] Juliana: you need to know SQL, at least the bare minimum. I’m not a SQL expert. I know bare minimum SQL, bare minimum Python. I know enough just to be dangerous.
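As a rough illustration of the pattern Juliana describes, here is a sketch that assembles a BigQuery ML.GENERATE_TEXT query in Python. The project, dataset, table and model names are hypothetical placeholders, and the exact options your remote model accepts may differ:

```python
# A sketch of the pattern described above: prompting Gemini from inside a
# BigQuery SQL query via ML.GENERATE_TEXT. All identifiers here are
# hypothetical placeholders, not a real setup.

def build_generate_text_query(project: str, dataset: str, table: str, model: str) -> str:
    # CONCAT stitches a fixed instruction onto each row's search query,
    # so the remote model receives one prompt per Search Console row.
    return f"""
SELECT *
FROM ML.GENERATE_TEXT(
  MODEL `{project}.{dataset}.{model}`,
  (
    SELECT CONCAT(
      'Classify the search intent of this query as ',
      'informational, navigational or transactional: ',
      query
    ) AS prompt
    FROM `{project}.{dataset}.{table}`
  ),
  STRUCT(0.2 AS temperature, 64 AS max_output_tokens)
)""".strip()

sql = build_generate_text_query(
    "my-project", "analytics", "search_console_queries", "gemini_remote_model"
)
print(sql)
```

As she says, the point is that you still need enough SQL to tell when a query like this returns garbage back to you.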
[00:32:09] Dara: Yeah, yeah.
[00:32:10] Juliana: I’m not an expert in this, but I know enough to know that a query is written wrong or like a query is going to return crap back to me. So I do use Gemini and Claude a lot for code, but I also have the basic knowledge.
[00:32:25] Juliana: So a parenthesis I can make here is with vibe coding. If you’re a developer and you have developer experience and you know how to code, if you use vibe coding, it will be easy for you to spot what’s wrong. But if you don’t have any prior experience in coding, or analysis in the digital analytics case, and you’re using these models to do analysis, you will not know how to QA.
[00:32:48] Juliana: You will not know what went wrong and where, and you’ll basically not know how to solve your problem. So I think it’s important for either client side or agency to have this person in the data science team that tells people, Hey, hey, hey, maybe just maybe this is not correct and we should take some measures.
[00:33:07] Juliana: So this doesn’t mean that as an industry we shouldn’t experiment and test these models. I’m in love with Google Cloud and Vertex AI Model Garden, but it does say Model Garden. So it means there’s different models that are made for a specific job. So if you want to do analysis, choose the right model inside Model Garden that will help you with analysis.
[00:33:27] Juliana: If you want to do review mining and search query mining, sure, use Gemini and ML.GENERATE_TEXT, and that’s it. It will do more than enough. But if you want to do propensity modeling, then just step back and focus on how you do it from a machine learning perspective, and use methods that are proven.
[00:33:47] Juliana: I think how can we not get excited? You know, our jobs are quite difficult and it takes so long to get anything done, so of course we’re excited as an industry about, I don’t know, making our lives easier. There’s ways, like, like there’s ways that you can do data transformation and data cleaning using ai.
[00:34:07] Juliana: But should we really just trust that? Like, should we just jump on it? Should we replace Dataform with Canvas? Like, there’s some things there to consider. So I don’t know why as an industry we got into this situation. I do like to blame the narratives that come from Silicon Valley, and all these valuations, and all this propaganda that’s happening on social, which is kind of getting people to become,
[00:34:35] Juliana: I don’t know, unaware. I wrote about this quite a lot: I really think that LLMs are killing critical thinking because of how they glaze you.
[00:34:45] Juliana: So it’s like you’re saying the most random shit. Oh my God, you should write a book about it. Just, just, just write a book about it. Go out there. So I know this might sound like a conspiracy theory type of shit, but it’s okay. I’ll take responsibility for it. I think this is the beginning of the degradation of critical thinking.
[00:35:06] Juliana: That’s my perception. Using these models and having the impression you’re right all the time. As humans, we feel the need to be validated. Like, I’m a validation addict, look at my LinkedIn, I was like, oh, look at me. Just because, you know, I wanna feel validated, and it’s real, and I’m okay with being open about it.
[00:35:26] Juliana: Some people will be open about it or not. I’m very open, I’m very validated. But at the same time, I do have critical thinking and I don’t just believe things. But some people, maybe we don’t know their background. Maybe they’re sad, they’re depressed, and they use these models and anthropomorphize the relationships that they have with them, because they need that validation.
[00:35:45] Juliana: It makes them feel good. So the problem is quite psychological. I don’t wanna be emotional, but yeah, it’s kind of what it is. I think it’s a psychological thing that is happening to all of us. And because we get that validation, we tend to forget that things, you know, are not necessarily as easy, as simple as, you know, social media or industry outlets are trying to portray them. I don’t know.
[00:36:12] Dara: I think some something else that I, I read in, in one of your articles where, you know, on the, on the subject of, you know, it potentially killing off critical thinking, and I know you also said about, you know, convenience being prioritized, I guess over understanding. And that seems to be, you know, something that’s being, I mean, that was already, you know, when the internet came along that became, that, that was kind of the first step, wasn’t it?
[00:36:33] Dara: It’s like you could kind of, you don’t really need to know certain information anymore. You can just, you can just find it whenever you need it. You don’t need to store it in your own brain anymore. But LLMs are just exacerbating that problem, aren’t they? Yeah. So what, what, what do you think? And I guess maybe go a bit broader.
[00:36:49] Dara: This isn’t just a company or professional thing anymore, but what can people do to combat that, do you think? Where, you know, you, you are obviously not against using LLMs for the, for the right reasons and neither are we, Matt or I, but how do you try and make sure that it’s not eroding your critical thinking while still being able to take advantage of what it can offer in terms of the, you know, efficiencies or whatever.
[00:37:13] Juliana: Don’t anthropomorphize them. That’s it. It’s a tool, it’s a piece of technology, and we should treat it like that. It’s just very hard to do, because, do you remember that leak when they exposed all the ChatGPT chats online? If you look at the conversations that people have online, people use them as a therapist or, I don’t know, as sex partners.
[00:37:37] Juliana: Like, I’ve seen some wild shit. I’ve read some things which you should not do with ChatGPT. Like, that’s the line, don’t go there. But people use them for romance. There are Reddit threads where people are married to or dating these things. This shit is crazy right now. What you can do is to remember it’s a piece of technology, and at moments to close your screen and touch
[00:38:01] Juliana: grass. We need to touch more grass in general. But I think the struggle here is that we are millennials, right? We’re analog. We grew up in a different type of environment where things were different. But look at our kids that are in a digital-first world. My son uses Claude right now to do his homework or do research, and I didn’t even talk to him about it.
[00:38:25] Juliana: He learned that on his own, because his friends are doing it. So they are basically onboarded into this lifestyle where you use a model as a sparring partner. But even then, it’s kind of like making the difference between a piece of technology and real life, which sometimes can be hard, because you don’t know anybody’s story.
[00:38:45] Juliana: And I really feel for people that are lonely and this is all they have. So I don’t judge people that use these models in a personal way. I mean, I do judge the ones that have sex with them, that’s for sure, that’s fucked up. But the people that use them because they feel the need to talk to somebody, you cannot judge that.
[00:39:06] Juliana: Like people are lonely and it’s what it is. But if we go out of the human part of it and back to our industry, I think in analytics, as I said, we should know better. We are in charge of educating our clients and educating the community. And we shouldn’t probably be the ones that are, you know, riding the hype.
[00:39:25] Juliana: We should be the ones that say, okay, this is great, but let’s test this. Let’s take a step back and see exactly what it does and doesn’t do. And it’s quite hard, you know. Even client side or agency side, you want to sell this stuff. You wanna be the first, you wanna develop, you wanna help your clients.
[00:39:41] Juliana: And your clients want AI, and you want AI, and everybody fucking wants AI. But you have to take, I always give this example, the Marie Kondo approach: maybe if it doesn’t spark joy, you shouldn’t really do it. And a lot of times the answer for a lot of the stuff people want to use AI for is an Excel sheet.
[00:40:01] Juliana: And if the answer is an Excel sheet, maybe you don’t need that level of sophistication, maybe use something else. It’s hard. I mean, I don’t judge anybody at this point. I just think we’re dealing with a lot of misinformation. I think we’re dealing with a lot of people that have an interest in selling their tools or technology or services, that will have, I don’t know, more share of voice on social or in different, you know, outlets. And those narratives are going to, you know, go down into different circles, and it’s quite hard to protect yourself.
[00:40:37] Juliana: But I think having critical thinking, not trusting, you know, verifying things, verifying information, it’s important. We should be very alert. That’s all.
[00:40:46] Matthew: Critical thinking, full stop. It is worrying to see it eroding, because it feels like it’s more important than ever. Like, I’m of the opinion, I’m seeing the utility of these things, as someone who, you know, knows SQL, knows web development and databases and stuff.
[00:41:02] Matthew: I like it, I’m using it, and it is unbelievable. But like you say, there’s a lot of context there. There’s a lot of technical understanding, and there’s a lot of knowing how to utilize domain knowledge, that probably makes it more powerful, plus also adds a layer of protection. I mean, me and Dara have been making this point over a few months now, when I think about the Google Cloud
[00:41:24] Matthew: data engineering agent that was touted at Next ’25. That was just going off and doing things. It was just like, well, I’ll do this and I’ll join this and I’ll query this. But what if you don’t know what’s in your data, and you’ve just chucked some stuff in there, and you point that at a raw GA4 table that’s been there for two years, and you get a Cloud bill
[00:41:42] Matthew: That is absolutely wild. So yeah, I think I agree. I half agree with you in terms of like, it’s dangerous, but I have seen the utility. I think you are saying you have seen the utility.
[00:41:52] Juliana: It’s just, I mean, I’m known for AI work.
[00:41:54] Matthew: Yeah, yeah.
[00:41:55] Juliana: That’s my reputation. So I am using AI. I don’t want people listening to think I’m a boomer that hates on AI.
[00:42:02] Juliana: I don’t. I just hate that people are being, you know, misinformed. I hate the fact that people are hyping shit that doesn’t make sense. And I hate the fact that this is affecting people that are just coming into the industry as analysts. Like, think about it: this is your first day as a digital analyst, and instead of learning how to
[00:42:22] Juliana: use a Google Sheet and clean up your dataset in Excel or Google Sheets, you’re jumping straight into prompting a model to do analysis or data cleaning. And you don’t know the basic way of cleaning a row. You don’t know the difference between columns and rows. You don’t understand data validation. You don’t understand all of these things.
[00:42:39] Juliana: So I’m just afraid for the people that are joining the industry, more than I am for us that have been here for some time. That’s the clarification that I want to make. I’ve been using AI models since 2022, before it was even cool. I have videos reviewing ChatGPT 3.5. I’ve been doing this for a long time.
[00:43:03] Juliana: I learned from my colleague from Kir. As I said, I was not the person coding, but I was the person creating the product. And I guess, because I’ve been using it and I’ve seen the downside of it, I am more critical than most people, and I am not as easily convinced as most people, because I have the experience of doing it.
[00:43:25] Juliana: And I will be happy to talk a bit if you guys want to talk about what we did for Starbucks. Because for Starbucks, we deliberately chose not to use a large language model. We used a specialized language model that we had on hugging face. So what you need to understand, for everybody listening, a language model is basically a transformer model.
[00:43:45] Juliana: Transformers appeared in 2017, with this Google paper called Attention Is All You Need, and in 2018 that gave birth to the BERT model from Google, which is like a funny one, birth to BERT. So this BERT is basically the basis of the large language model infrastructure; it’s a transformer.
[00:44:06] Juliana: So what me and Simi did is we decided to use a BERT model instead of an LLM to do analysis of app reviews for Starbucks across four countries. And we deployed this model with a mix of Vertex AI Model Garden as well as BigQuery into GCP. And then we analyzed the reviews across these markets. But we built a glossary, we built a vocabulary, we built vector embeddings that we fed our model.
[00:44:33] Juliana: So the model is only trained on Starbucks language, and, you know, their different product names and so on. This model did not have much previous training data. It had some pre-training data, but not as large as an LLM’s.
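As a toy illustration of the “specialized over general” idea, here is a sketch that scores reviews against a small hand-built domain vocabulary, a crude stand-in for the glossary and embeddings described above. The vocabulary and reviews are invented examples, not the actual Starbucks setup:

```python
# Toy illustration: classify app reviews against a small hand-built domain
# vocabulary rather than a general-purpose LLM. The topics, vocabulary and
# reviews are invented for illustration.

import math

DOMAIN_VOCAB = {
    "drinks":  {"latte", "espresso", "frappuccino", "matcha"},
    "service": {"barista", "queue", "wait", "staff"},
    "app":     {"crash", "crashing", "login", "rewards"},
}

def topic_scores(review: str) -> dict:
    # Bag-of-words overlap with each topic's vocabulary, normalised by
    # review length so longer reviews don't dominate.
    words = set(review.lower().split())
    return {
        topic: len(words & vocab) / math.sqrt(len(words) or 1)
        for topic, vocab in DOMAIN_VOCAB.items()
    }

def main_topic(review: str) -> str:
    # Pick whichever topic the review overlaps with most.
    scores = topic_scores(review)
    return max(scores, key=scores.get)

print(main_topic("the app keeps crashing on login"))  # prints "app"
print(main_topic("best matcha latte in town"))        # prints "drinks"
```

A real specialized model would use learned embeddings instead of word overlap, but the design choice is the same: the model only knows your domain's language, so there is no general-purpose endpoint for data to leak through.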
[00:44:48] Matthew: Mm.
[00:44:48] Juliana: So the idea is we deliberately chose to use a specialized model because it’s safe, it’s privacy-first.
[00:44:53] Juliana: It’s privacy by design. The client owns it end-to-end. It’s deployed in the security of your cloud. There’s no endpoint where data can, you know, leak out of it. So the reason why I’m really proud of this work, and this is the work that we are a finalist for at the Lovie Awards, in best use of AI and ML in Europe,
[00:45:15] Juliana: that’s the award that we’re up for. I’m proud because this is not an LLM use case. This is classic machine learning, but it’s still artificial intelligence, with responsibility. I’ve also tested a lot of LLMs in different analyses. I really like ML.GENERATE_TEXT, which does use Gemini, and you can do a lot of cool stuff with AI in BigQuery.
[00:45:35] Juliana: And I’m very keen on Google Cloud. Like, I love Google Cloud, I love Vertex AI Model Garden. So me being so conservatively excited about it doesn’t mean I’m not using these tools. It’s just that my experience taught me how things go wrong, and I don’t want people to trust that things will just happen because, you know, it’s painted as easy on social media.
[00:45:58] Dara: Mm-hmm.
[00:45:59] Juliana: It’s not magic, it’s just math. It’s stats. I think it’s very important to know that the data science role and the digital analyst role are more important than ever, because we are the ones that can come with the guardrails to protect our clients and their clients.
[00:46:13] Dara: Are you finding though, that, I completely agree with that, but are you finding that people are bypassing or trying to bypass you more and more?
[00:46:23] Dara: And I don’t just mean you personally, but, you know, any data analyst, any data scientist. Do you think people are just thinking, well, I know Juliana says, you know, I shouldn’t trust this, but my favorite LLM told me I can trust it, so I’m just going to go with that.
[00:46:36] Juliana: I mean, this is already going to happen, and I think it’s fine. I think we should all experiment, right? Like, think about The Matrix: there is no spoon. We are in a spot where there is no spoon. We can experiment so much. But you can experiment on your own synthetic datasets, in your own BigQuery project, in your own environment, and make sure you take all the safety, you know, precautions.
[00:47:00] Juliana: But when it comes to companies and the data of companies, you have to be very careful, because there’s GDPR, there’s the EU AI Act, which I covered extensively in my newsletter for everybody that wants to read it. There are some things that we need to respect, which is our clients’ clients’ data. We cannot just play experimentation games in this type of environment.
[00:47:20] Juliana: We need to create a safe sandbox for these experiments and test there. I’ll give you an example, ’cause I’m very open about this stuff. I applied for the Google Trends API, still waiting like everybody else. I want to mix that data with YouTube data and Search Console data to do organic discovery measurement.
[00:47:42] Juliana: That’s my plan. That’s the new product I’m working on right now. But I’m using my own data first. I’m going to use my own GCP project, my own Search Console, my own YouTube Analytics API, and the Google Trends data, because I wanna test this in my environment first to see all the problems that can occur. Yeah.
[00:47:58] Juliana: Before I even go to a client to say, hey, this is the new way to measure organic discovery. So what I’m saying is, it’s good to experiment, but you have to respect the companies that you serve, and you have to respect the company that you work for, and be careful to protect it. I think the risks are so, so high, and we shouldn’t minimize these risks.
[00:48:18] Juliana: We should respect and understand them very well, and be responsible for how we use these technologies, so we don’t create issues for our clients. That’s all. I’m just very careful with that. Privacy is so important to me.
[00:48:29] Matthew: It’s interesting, isn’t it? ’Cause some clients aren’t great at protecting their own data, the people who are adopting the models. And I remember now, and it’s interesting, that Starbucks example you gave. I was at a conference a few years ago and they described the sort of post-training of LLMs, you know, where you can fine-tune it, as a giant opaque black box full of stuff: you get a handful of your stuff, chuck it into the soup and close the lid, and it’s like, yeah, it might be a little bit better at doing what you’re doing, but you don’t know what you’ve just chucked it into.
[00:49:03] Matthew: You don’t know what all the various cross contaminations that are going on in there. And what you’re getting out of it is probably not very robust. Whereas something like the approach you took where you just kind of went from scratch and trained it from the bottom up, I imagine first of all is a lot better from a PII privacy perspective, but probably gives better results for a very specific need.
[00:49:22] Juliana: And it’s easier to pipeline, because how are you going to pipeline a prompt? Tell me how you’re going to pipeline a dashboard and a prompt and a workflow, because data changes. So if your prompt works today, will it work next week? Will it work during peak? Will it work during seasonal sales? Will it work during summertime?
[00:49:40] Juliana: When you are using a specialized language model, even if the data changes, the model keeps learning on its own, because you set it up for success. So the pipeline that I built with Crossy never crashed once in three years.
[00:49:55] Matthew: Yeah.
[00:49:56] Juliana: I was laughing about that. The poor model had no days off, just like us, for the last few years.
[00:50:07] Dara: Remember, it’s not a person, Juliana, it was, it’s not a person.
[00:50:10] Juliana: Exactly. Yeah.
[00:50:11] Dara: It’s not, it’s not.
[00:50:11] Juliana: But you see, you anthropomorphize stuff, because this is what we do as humans. But to your point, Matthew, you’re a hundred percent correct. I think, again, the message that I’m sending to people is not, oh, do not experiment, do not play, do not touch this.
[00:50:28] Juliana: No, use it, but use your brain and be careful with what you do. And I love the fact that I work in an agency. I’m an agency person now, I used to be client side, but I’m an agency person, and I love the fact that I get the chance to support and educate as well. And, you know, most of my projects are AI, because it’s in the cloud, like cloud data science.
[00:50:52] Juliana: That’s what it is, my projects are AI, but I’m always careful. I’m always verifying, and I always try to educate the client. If something doesn’t make sense, I’m going to tell them honestly, like, we shouldn’t do this because of this. And I don’t think we should push things just to make people temporarily happy,
[00:51:11] Juliana: because then long term it’s going to affect them. And I think, again, it’s us as agencies, ’cause you guys are an agency too. It’s more important than ever for us to be the trusted advisor to our clients and to our customers, and to teach them, you know, and support them in their learning, because they wouldn’t know what we know.
[00:51:28] Juliana: We don’t know what they know, and they don’t know what we know. So having these candid, open conversations with our customers is so important, because we are the trusted advisor.
[00:51:40] Dara: On the subject of training or teaching, and not clients as such, but going back to something you mentioned earlier about people joining the industry.
[00:51:48] Dara: What kind of fundamentally human skills would you get newer, junior people in a data analyst or scientist role to focus on, to make sure that they’re still relevant? It sounds a bit, again, I’m getting a bit clickbait-y here, but what advice would you give to make sure that junior data scientists and analysts are making use of the technology that’s available, but continuing to add value on top of that with their own critical thinking or reasoning or narrative skills, whatever it may be?
[00:52:19] Juliana: Just start from the basics. Learn basic Google Sheets and Excel and formulas. I teach a data storytelling and visualization course at the university here in Milano, and my final exam for them was a CSV of raw data. ’Cause that’s what they need to figure out in life.
[00:52:37] Juliana: Yeah. That’s what they will need to do. And they had it in Google Sheets. They had to clean it, they had to normalize it, then they had to put it into Looker Studio and create a dashboard. And then they need to make a deck and then they need to story tell it and present it in front of me. But also I told them to write a document, a Word document, where they explain every step of the way.
[00:52:57] Juliana: So what does this mean? Okay, we have this data, how are we cleaning it? What are we noticing? And on purpose, I sneaked some mistakes in there, some that make sense, some that don’t. For instance, a funny mistake that a lot of people make: I had a specific product category that had low revenue and low AOV, but it had a high customer satisfaction score.
[00:53:19] Juliana: So when you see low revenue and low AOV, our instant reaction is to cancel that product. But if it has the highest customer satisfaction score, you have to ask yourself, are we promoting it enough? What’s going on? So it’s about knowing how to work with raw data in a simple dataset and asking the right questions.
[00:53:36] Juliana: That’s kind of where it starts. And having the curiosity to ask questions. So, again, I always say this, I am not an expert in SQL or Python, but the reason why I learned a bit of SQL, a bit of Python, a bit of JavaScript, is because it taught me critical thinking. Because when I speak to you guys, you can understand me, right?
[00:53:57] Juliana: But when you speak with a machine, you need to be fucking specific about what you ask in that code. ’Cause if you’re not specific, and if you don’t write that code in the way that it should be written, you’re not going to get the results that you want. So learning code, learning the basics, learning math is going to give you that critical thinking, to know how to express yourself and, like, document work or, you know, build a solution.
[00:54:20] Juliana: For me, that’s what’s most important. So my advice for people entering the industry: learn maths, learn Google Sheets and Excel, learn data transformation and validation, learn the basic stuff, and then learn how to ask questions, and better questions. And that’s it.
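The planted-mistake exercise Juliana describes, low revenue and low AOV paired with high customer satisfaction, can be sketched as a simple pass over raw rows. The product names, numbers and thresholds below are invented for illustration:

```python
# A sketch of the exam exercise described above: scan raw product rows and
# flag the counter-intuitive case of low revenue but high customer
# satisfaction. All data and cutoffs are invented.

products = [
    {"name": "cold brew", "revenue": 120_000, "aov": 14.0, "csat": 3.9},
    {"name": "oat latte", "revenue": 15_000,  "aov": 6.5,  "csat": 4.8},
    {"name": "muffin",    "revenue": 40_000,  "aov": 5.0,  "csat": 3.1},
]

def flag_hidden_gems(rows, revenue_cutoff=20_000, csat_cutoff=4.5):
    # Low revenue alone says "cancel the product"; pairing it with a high
    # satisfaction score says "ask whether we're promoting it enough".
    return [
        r["name"] for r in rows
        if r["revenue"] < revenue_cutoff and r["csat"] >= csat_cutoff
    ]

print(flag_hidden_gems(products))  # prints ['oat latte']
```

The point of the exercise is not the code but the question it encodes: the instant reaction to low revenue is wrong until you have checked what the satisfaction score is telling you.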
[00:54:36] Matthew: I’ve been thinking about it a lot. I’ve got two kids, one of which keeps interrupting the podcast. I think about it often because right now we’re where we are, right? There’s still holes in things, there’s still problems, there’s still domain knowledge and the need to, the need to critically think about what you’re doing and, and the output.
[00:54:57] Matthew: I think probably all of us here would admit that we don’t know what’s coming down the line. We don’t know what’s next. In two years’ time, this conversation may seem completely ridiculous. I don’t know. With that in my mind, I’m like, well, what is it? What do people need to know?
[00:55:11] Matthew: What do they need to learn? What’s going to future-proof ’em? And to my mind, there’s three things that keep coming into my head: critical thinking, curiosity, and problem solving. With those three things, you can leverage the tools, whatever technology comes about, in the right way, and point them in the right direction and think through things.
[00:55:29] Matthew: And it may be that, I don’t know, in two years’ time a lot of individual jobs have melded into one job, and the person is using technology and doing multiple different things. But you need those specifics, and I worry about that erosion of critical thinking. I mean, Dara and I were talking last week about Sora and the video generation stuff that’s coming out there, and how it’s almost impossible to tell with some of those what’s real and what isn’t. It’s so important to be critical.
[00:55:58] Matthew: To question, is this real? Did Michael Jackson really steal that fried chicken off that person in that shop? And not just take things at face value. So yeah, that was a little rant, but I’ve been thinking very similar things from a kids’ perspective, not just an entry-into-work perspective.
[00:56:15] Juliana: No, and I agree. I have a 14-year-old son and a five-year-old son. With the 14-year-old, it’s quite interesting because I think it’s so hard to be safe on the internet these days as a teenager. So what I’m trying to do with him is to build his confidence and self-trust, because the more confidence you have as a person, the less likely it is that you’ll be immediately affected by what you consume.
[00:56:41] Juliana: ’cause he consumes TikTok. I didn’t let him get on TikTok or on social media until he was 13. I’ve been quite, you know, conservative with what I let him do, but you cannot stop him from doing it ’cause his friends are doing it. But I feel confident that the shit that he consumes right now is not going to affect him as a person.
[00:57:01] Juliana: Because I work to build that trust and confidence. So if you take this parenting thing to our lives right now, the more you learn and educate yourself about the basics, the less you’re going to be affected by what’s going on in the industry. So I will share a personal story that many people maybe don’t know about me, which is fine.
[00:57:23] Juliana: When I joined Monks, I had no analytics experience. I come from product, I come from product analytics. I didn’t know how to set up a floodlight back then, I hated floodlights. So I went into that role. I applied, nobody was hiring me. I applied from July 2021, or 2020, I don’t remember, until September. I knew some people; I was less relevant than I am right now.
[00:57:49] Juliana: I got rejected by every job. By a lot of people in this industry, now it’s quite funny to me. But I got rejected from every job because I was not a GTM expert, because I was not a GA expert back then. It was right at the death of Universal Analytics when I moved into the industry, so I only knew GA4, not UA, which in hindsight, Jesus Christ.
[00:58:11] Juliana: and the only reason I got hired at Monks is because Doug showed me curiosity. And he said, okay, you might not be an analytics expert, but you know, business and you know, product and you know, commercial, you are curious enough to learn. Mm-hmm. So I went into the job and I, the first six months, it was me working as an analytics person and I completely sucked at it.
[00:58:34] Juliana: I was good at, you know, creating solutions and documentation. God, I sucked at the technical implementation. So I had the opportunity then to move into this experimentation and data science role where I could find who I am as a person. So what I’m saying is that all the skills that I’ve built from a commercial and product perspective, I carry through, because there’s things that hold true.
[00:58:58] Juliana: So if you’re coming into this industry, you don’t need to necessarily be an expert. And I’ll always say this, you don’t need to be a tool expert. You need to be a thinking expert to have that critical thinking and know how to move. I think it’s more important than ever to have a commercial mindset to understand how to position different things because you can learn.
[00:59:17] Juliana: Did I think in 2022 that I was going to be where I am today and work with all this technology? No, I had no idea. I was just a product person that knew product analytics and SQL, very minimal, and the stuff that I used to do at CXL. But I joined this industry and I’ve built so much, and I learn every day. So even if it looks, maybe on social media, like I have my shit together, no, I don’t.
[00:59:39] Juliana: Like I’m learning every day and I’m also scared. I’m also scared about what’s happening. I don’t know what to expect. But at the same time, I know that I have the basic skill and foundations of an analyst that will help me navigate whatever the fuck I have to navigate next. So yeah,
[00:59:55] Dara: I really hope some listeners are at that stage, the early stage in their career, and listen to this episode, ’cause I think that’s really insightful and good advice. I think you’re right. You know, there’s no difference with this podcast or any other content that goes out there: it’s obviously been edited and polished, and it doesn’t mean the people doing it know everything. Everything’s changing so quickly now, everybody’s figuring it out. So yeah, quite refreshing to hear that, I think.
[01:00:19] Juliana: I’ve always been very, you know, open about my background and how I came and a lot of people think, oh, she’s an expert. No I’m not. I’m just figuring it out just like all of you. And I have no problem. I’m not trying to portray this image of self that doesn’t exist.
[01:00:36] Juliana: And I just say as long as you’re curious, as Matthew said, and you have this problem solving mindset, there’s nothing you cannot learn or do. And I’m probably the living example of that, that I jumped from product to data science in three years just because I was curious and I wanted to learn. And I think as long as you want to learn, you’ll do it.
[01:00:55] Juliana: But the commercial mindset is what saved me. So I also encourage people, from an industry perspective, to take a product marketing course, to take a business course and learn the basics of how a company operates, how a company makes money. I think the further we are as analysts from revenue and how a company makes money, mm-hmm,
[01:01:15] the easier it is for us to be, you know, removed from the picture. So learn revenue models, learn business models. That’s the stuff that I knew before, and I had, you know, 13, 14 years of experience. So that really helped me in the analytics context, because sure, I don’t know how to do a floodlight maybe, but I know how to create a project and sell a project.
[01:01:37] Juliana: I know how to understand and sell something to a business and build a relationship with the client, which I think really, really helped me. So don’t get discouraged about, yeah, all the tools and technologies, and learn basic code. You need to learn basic SQL and Python. That’s it. Oh, I got so emotional. I wasn’t expecting that.
[01:01:57] Matthew: So final question then. I said earlier that all three of us probably agree that we have no idea what might be coming down the line in a few years’ time. So to that end, what’s coming down the line in the next two years? Where do you think things are going? It can be about AI or it can just be generally, but have you got any predictions of what might be coming our way over the next couple of years?
[01:02:16] Juliana: I think AI for marketing is going to be the next big thing. I think creative intelligence is going to be the next big thing. I advise everybody in analytics to get excited about creative because one thing that we haven’t thought about is like, okay, all these models are creating videos and images and you can do creative and asset development at scale.
[01:02:38] Juliana: Great, but how do you know what’s the right asset for the right channel? How do you know which asset is producing ROI versus ones that are churning? This is a huge opportunity for the analytics community to build that creative intelligence. Bone. AI for marketing is going to be, in my opinion, the next big thing.
[01:02:55] Juliana: I’m super bullish on Google Cloud and what they’re doing. It’s incredible what Google Cloud is doing right now. So even myself, my plan for 2026 is to go more into the creative part of the business and creative analytics. It’s a very low barrier to enter because there’s nobody doing this yet. So I’m always happy to share what I’ve learned back to people.
[01:03:19] Juliana: ’cause I don’t feel competition or anything. I don’t care that much about that. So I would invest into creative intelligence. That is where analytics is missing and it’s much needed by creative people.
[01:03:29] Matthew: Yes.
[01:03:30] Juliana: Love, love, love, love SGTM. And for measurement, it’s kind of a great opportunity for us to use different data sources than the ones we get from platforms. So the Google Trends APIs of the world, the YouTube Analytics API, Google Search Console, social listening: adding all these types of data sources on top of our first-party data to get a better understanding of user journeys is really, really exciting.
[01:04:00] Juliana: So very big, on Google Cloud, AI for marketing. That’s kind of where I think things are going for us, in our little digital analytic space for the world. Fuck knows, man. I don’t know. I don’t know.
[01:04:16] Matthew: Let’s not think about it.
[01:04:17] Juliana: Yeah, I don’t know. I just hope I’m still alive. I hope I lose like 50 pounds ’cause I still haven’t gotten rid of the COVID weight. And that’s it. That’s it. That’s my plan.
[01:04:29] Dara: Alright, Juliana, thank you. Thank you for joining us again, and, well, joining Matthew and I for the first time. But yeah, thanks for coming on the Measure Pod again. I really enjoyed the conversation, and I think you gave some really good advice to people who might be listening as well, which is a huge bonus.
[01:04:44] Dara: We’ll undoubtedly, if you’re open to it, get you on again. I’d love to continue. I think I had about 15 questions we didn’t get time to ask. So if you’re open to it, we’ll get you back on. But for now, thank you again for joining us.
[01:04:56] Juliana: Thank you so much for having me and thanks to everybody that put up with me for the last 40, 50 minutes.
[01:05:03] Dara: That’s it for this week’s episode of the Measure Pod. We hope you enjoyed it and picked up something useful along the way. If you haven’t already, make sure to subscribe on whatever platform you’re listening on so you don’t miss future episodes.
[01:05:14] Matthew: And if you’re enjoying the show, we’d really appreciate it if you left us a quick review. It really helps more people discover the pod and keeps us motivated to bring you more. So thanks for listening, and we’ll catch you next time.
Further reading
Webinar: Meet Dataform, the smart solution to fragile SQL setups