
#139 The role of AI and semantic layers in BI (with Colin Zima at Omni)

Will Hayes · 27 March 2026

In this episode of the Measure Pod, Dara and Matthew welcome Colin Zima, CEO and co-founder of Omni. Colin shares his journey in the analytics space, detailing his extensive experience with data tools, including eight years at Looker, where he held various roles in product, customer success, and support. He reflects on his time as one of Looker's first customers and discusses his vision for Omni, a platform aimed at revolutionising analytics by integrating AI and combining the best features of existing tools.


Show notes

More from The Measure Pod

Share your thoughts and ideas on our Feedback Form.

Follow Measurelab on LinkedIn


Transcript (AI-generated)
"The fascinating thing about AI now is Structured data can sort of, it works in the same way, but AI has enhanced it." Colin
"If your data is a complete mess, you're not going to get a good experience out of any tool." Colin


[00:00:00] Lizzie: Hello and welcome to the Measure Pod by Measurelab, a podcast dedicated to the ever-changing world of data and analytics, with your hosts Dara Fitzgerald and Matthew Husson. Between them, they've spent more years than they'd like to admit wrestling with dashboards, data quality, and the occasional Google curveball.

[00:00:32] Lizzie: So join us as we share stories about how analytics really works today and where it might be headed tomorrow. Let's get into it.

[00:00:41] Dara: Okay. A very warm and excited welcome to today's guest on the Measure Pod, who is Colin Zima from Omni. So Colin, firstly, hello. Welcome to the Measure Pod, and thank you for agreeing to come on and chat to us today.

[00:00:55] Colin: Of course. Thanks for having me. 

[00:00:57] Dara: So we always get our guests to do the tough job of introducing themselves.

[00:01:02] Dara: So it's over to you, really. You can go into as much or as little detail as you want, but if you just wanna give our listeners a little bit of background leading up to what you're working on today.

[00:01:11] Colin: Sure. So I guess my current position is I'm the CEO and one of the founders at Omni, and we're just rebuilding analytics from the ground up.

[00:01:19] Colin: A little bit of all of your favorite tools in one, a lot of AI, but really trying to build a platform that can kind of do everything. I guess my background that's relevant: I spent eight years at Looker. Led different orgs there over time: product, customer success, support. And then before that, I was actually one of the first Looker customers, and a data guy for 15 years.

[00:01:40] Colin: So a user of data tools for a long time and builder of data tools for a while at this point also. 

[00:01:47] Dara: Yeah, before we get too much into the current day, I was actually really intrigued. So you were a customer of Looker. Do you want to just give us a little bit around that?

[00:01:57] Dara: Like, how was that? And how was the jump from being a customer of Looker to actually moving over to Looker?

[00:02:05] Colin: Yeah, so I mean, I guess the backstory there was that I'd actually started a company with one of my co-founders here, and it didn't end up going anywhere. We ended up selling it to a company called Hotel Tonight.

[00:02:15] Colin: And we had a shared sort of venture partner in the first round, got an intro through them, and I think we were maybe the fourth Looker customer. My background then on data was essentially writing SQL and using Excel, like not being a really heavy tool user. But I just really liked working with the team, and sort of the value prop of not doing the same work over and over again really resonated.

[00:02:41] Colin: It was a very different tool back then. It was all in the command line for building the data model, but cloud data warehouses were sort of coming to be. Redshift had launched sort of right after it, and it was really obvious there was a wave happening that Looker was really well connected to. And I liked working at Hotel Tonight, but loved the Looker product, and sort of said, hey, can I join you and do anything?

[00:03:05] Colin: And I joined without an org or role. I was the chief analytics officer. I think they were just like, this is the customer who understands us, we'll find a place to go use them. And I started with support and customer success, then actually took over the product, and sort of floated around doing a little bit of everything at Looker.

[00:03:21] Dara: That's the best time to join a company, isn't it? When you don't even have a job role, it's just, yes, we'll find a place for you, you can figure something out. That's often the most exciting time, isn't it?

[00:03:31] Colin: It worked out great. Yeah, like I just loved the product and felt like I could go talk to a lot of customers and evangelize, and got to sort of do a lot of that externally.

[00:03:41] Colin: And then I think there's something really special when a product person or a product team really intimately understands the user and the product. So like when I led product at Looker, and at Omni, we've hired a lot of people outta the sales engineering org, and, you know, folks that have been a customer at some point in time. It's just such a superpower to both

[00:04:05] Colin: sort of, at a logical level, understand what you're trying to do, like be able to decompose a problem like any product manager would, but then also have that personal experience of what is good, of these weird friction points. Just sort of really intuitively understanding how people use the product.

[00:04:24] Matthew: That seems like a pretty good segue into giving us a bit of an overview of what Omni is. Pretending the audience has no idea, if you had to do your sort of elevator pitch as to what Omni is, what would that be?

[00:04:39] Colin: I'm gonna assume I'm in the elevator with a data person. 

[00:04:41] Matthew: But yeah, and it's a big elevator. It's a pretty tall building. 

[00:04:45] Colin: The simple pitch is a BI tool that does everything. That was the original pitch for Omni. The interesting thing is that "everything" has sort of shifted even over the last four years. So the original vision actually goes back to, as I sort of mentioned, I came from Excel, worked at Looker a lot.

[00:05:01] Colin: There's a lot of good things about semantic layers and data models and centralized data teams and governance. You know, I believe in all of those things. There's also a lot of times where those things get in the way of being productive. Like if you just have a question, sometimes you just wanna go write a piece of SQL.

[00:05:14] Colin: Sometimes you wanna download the object and just start exploring it. And so the original vision was like, why can't one tool do both of these things? Why can't I write SQL and use Excel and also have the semantic layer? The interesting thing is ChatGPT launched, I don't know, like six months after we started the company, and,

[00:05:30] Colin: candidly, I was pretty dismissive of AI for data. I was sort of like, you know, that's for people that don't understand. There's so much nuance here, it'll never get it. And the interesting thing has been that AI has turned into maybe the primary way that most of our customers have started using the product, and that sort of interest

[00:05:50] Colin: folds back into what we were already doing, which is that semantic layers are actually really helpful to make AI work properly. Like, on the one hand, you can hook up Claude to a database and get it to write SQL in two minutes, and it will do 80% interesting stuff. Well, it'll do a hundred percent interesting stuff, but it'll do 80% of it right.

[00:06:09] Colin: And when you bring a semantic layer in, and you actually have real structure to things and real guardrails, and you can provide sort of documented instructions for how things work, then you suddenly unlock, you know, the last 20% of precision and control. And so in some ways we got a little bit lucky, because the whole product was built around, you know, writing SQL and doing Excel and creating semantic layers underneath it silently, and sort of this balance.

[00:06:35] Colin: But it's been amazing to see the kind of tools that can now get attached to BI products. Natural language used to be the hard part and the BI part was the easy side, and now we're back where it's trivial to plug in a semantic, or, sorry, not a semantic layer, a natural language layer.

[00:06:53] Colin: And the hard part is back to the sort of BI fundamentals, like how you build a semantic layer, and coordination of users, and all of the sort of bread and butter of BI.
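To make the "80% versus the last 20%" point concrete, here is a minimal sketch of the guardrail idea: the model only picks from declared dimensions and measures, and the SQL is compiled deterministically from the semantic layer. Every name here is hypothetical; it illustrates the pattern Colin describes, not Omni's implementation.

```python
# Hypothetical sketch: an LLM chooses semantic-layer objects by name, and
# governed SQL is compiled from declared definitions instead of free-typed.

ORDERS_TOPIC = {
    "table": "analytics.orders",
    "dimensions": {
        "region": "region",
        "order_month": "date_trunc('month', created_at)",
    },
    "measures": {"revenue": "sum(amount)", "orders": "count(*)"},
}

def compile_query(topic: dict, dims: list[str], measures: list[str]) -> str:
    """Reject anything outside the semantic layer, then build the SQL."""
    for d in dims:
        if d not in topic["dimensions"]:
            raise ValueError(f"unknown dimension: {d}")
    for m in measures:
        if m not in topic["measures"]:
            raise ValueError(f"unknown measure: {m}")
    select = [f"{topic['dimensions'][d]} AS {d}" for d in dims]
    select += [f"{topic['measures'][m]} AS {m}" for m in measures]
    sql = f"SELECT {', '.join(select)} FROM {topic['table']}"
    if dims:
        sql += " GROUP BY " + ", ".join(str(i + 1) for i in range(len(dims)))
    return sql

# The model asks for "revenue by region"; invented fields fail loudly.
print(compile_query(ORDERS_TOPIC, ["region"], ["revenue"]))
# SELECT region AS region, sum(amount) AS revenue FROM analytics.orders GROUP BY 1
```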

[00:07:02] Dara: Just to go back to the top of that, Colin, where you said it's a BI tool that does everything. I appreciate what you're saying, that everything is changing, and I think everyone's experiencing that, but

[00:07:14] Dara: I get that and agree with it, but just playing devil's advocate, is there a risk then, and I assume there is, and how do you balance this? How do you avoid becoming a tool that's trying to be all things to all people? How do you strike that balance where each aspect is good enough, rather than diluted?

[00:07:31] Colin: It's true. Like I think we've tried to do everything slowly, so there's still parts of everything that we don't do yet. Like, for example, we don't do Python or data science. And we even didn't do Excel for a while. It really started with SQL and semantics. I think our vision has been that there's gotta be a strong foundational layer, and that foundational layer is this ability to really quickly build semantic layers.

[00:07:53] Colin: Like whether you're writing SQL or writing AI or doing Excel, we understand the pieces that exist underneath it. We can store them, we can version them, we can coordinate them really well. I think then it's sort of been this game of slowly expanding the surface area, where the pieces that are there are good, or great even, but we can sort of take on more.

[00:08:14] Colin: So a great example of this is we didn't originally wanna put an actual spreadsheet in the product. We wanted the calculation layer to be Excel. So rather than writing DAX or Tableau functions, or what Looker calls, I think, custom fields (those are all proprietary languages), we said we're gonna copy Excel.

[00:08:31] Colin: Exactly. So we use an A1/B1 syntax. You write SUM, you know, parentheses, A colon A, or whatever, but they still work inside the rigid sort of structure of a BI data table. So it fills all the way down the column, you've got no control, it's attached to the table. And that was sort of like, great, we're doing Excel for data.
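A rough illustration of what an Excel-style calc attached to a result table could look like, assuming a tiny SUM(A:A)-only parser; this is a toy sketch in pandas, not Omni's actual calculation engine.

```python
# Toy sketch: evaluate an Excel-style formula against a query result so the
# calc "fills all the way down the column" and stays attached to the table.
import pandas as pd

result = pd.DataFrame({"A": [120, 90, 210], "B": ["north", "south", "west"]})

def eval_formula(df: pd.DataFrame, formula: str) -> pd.Series:
    """Handle only SUM(col:col); a real engine would parse the full grammar."""
    if formula.startswith("SUM(") and formula.endswith(")"):
        col = formula[4:-1].split(":")[0]
        total = df[col].sum()
        # Repeat the value down the column: the calc belongs to the table,
        # not to a free-form grid.
        return pd.Series([total] * len(df))
    raise NotImplementedError(formula)

result["total_A"] = eval_formula(result, "SUM(A:A)")
print(result)
```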

[00:08:52] Colin: Then what ended up happening was we had a lot of customers that were showing up, and it was like, hey, this is great, but I'm trying to build a financial statement, and I can't do it in the BI tool. I can only do this thing in Excel. And so I actually wrote a bunch of posts that were like, you know, we're never gonna do actual Excel.

[00:09:08] Colin: You know, we're not gonna do that. But I think what I saw at Looker especially is that when you sort of draw a wall around what the tool does and say, you know, we're a semantic layer, this rigid thing, or, you know, we're SQL, people are gonna try to find ways to work around the framework that you have.

[00:09:29] Colin: So yeah, like Looker had a really strong semantic layer. We would get requests all the time to write SQL. We would have people just dump SQL in the data model. That doesn't suddenly make that SQL governed. You just created a bad workflow for writing SQL in your tool. And so my point of view is, rather than fight that, we embrace it a little bit.

[00:09:47] Colin: And you do have this cohesion problem now, where you could have a spreadsheet where you could just make up numbers, and you could have a, you know, a semantic layer that is really rigid. But my argument would be that the even more watered down version is forcing your user outside of the tool, or into your sort of rigid framework, for doing that same problem.

[00:10:08] Colin: To tell you a little story: I vividly remember, at a Looker sort of customer conference, I had a customer that was asking me about sort of visualizing SQL and SQL tables, and I was like, great, why? Tell me more about this. And the answer was that they were trying to transpose a data table and they couldn't do it in the UI.

[00:10:28] Colin: So they were grabbing the SQL, they figured out how to transpose it, and they were dropping it into the SQL Runner. And really all they needed to do was pivot the table on its side. And what you realize is that no matter how much you sort of think that you're drawing reasonable boundaries, if your users need to do things, your tool needs to help them do those things.

[00:10:49] Colin: And they're either gonna do it the happy path in the tool that you make, or they're gonna go, you know, take the cow path into Excel or some even more horrific framework. And so instead our philosophy has been, let's just try to do it. And I don't know if we're doing it perfectly, but it's kind of unbelievable, when you start getting all these release valves available, the types of things that you can actually shove into the tool.

[00:11:14] Dara: I like that. I think you're right. You know, if people are gonna find a way to do it, you might as well at least try and support them and make it a bit easier. I was gonna ask you a kind of related question then. I think I read, and tell me if this is wrong, that one of the founding ideas of Omni was that you wanted it to have Looker-like governance, but be much more flexible.

[00:11:35] Dara: So similar question to the last one: how do you toe that line? Because obviously that's a difficult thing. They can be in complete conflict, those two things,

[00:11:45] Colin: and they definitely fight each other. So my philosophy there, and again, I think this actually relates to the previous one, is we just wanna give admins lots of tools.

[00:11:53] Colin: So the kind of weird thing is we have some customers that just use this exactly like Looker. Like, they model the semantic layer in git, outside of the tool, in some sort of external IDE, and only use modeled topics (what we call topics; they're like Explores in Looker). There's no SQL getting written. We have some customers that are 70% SQL and, you know, 30% existing objects.

[00:12:16] Colin: And so on one side we've got these people that are 99.999% modeled, and then people that are just using us almost like Mode. And my belief is that even different users inside the same organization could have different workflows in the product. Like, I probably think that most end users should go through modeled data.

[00:12:36] Colin: I'm not sure that, you know, your sales rep or your support lead or a person on the finance team should just be writing random SQL in the warehouse. I think there's enough context that that's probably a bad idea. Some organizations feel differently. But if I'm on the data team, the idea of not being able to write SQL in my tool or do completely open things is kind of absurd.

[00:12:59] Colin: The kind of humorous story I have here is, when I was at Hotel Tonight, I made every single employee of Hotel Tonight an admin of our instance, 'cause that was the only way I could handle SQL queries. And sometimes people would walk up to me with a question that was not modeled. There was a table that showed up yesterday, and they'd be like, hey, what's going on here?

[00:13:16] Colin: And I would write them two lines of SQL, or whatever, 10 lines of SQL, in two minutes. And that solves the problem for a little while. And so again, my belief is that you need controls for these things. We even see this around AI. Some of our customers are afraid of natural language because of, you know, the lack of precision, or just the lack of understanding of the probabilistic nature of it.

[00:13:38] Colin: Then again, we have some customers that are probably 90% AI right now, and I don't think that a tool, again, can dictate that. I think you're gonna want knobs, and those knobs will shift over time for your organization even, but you need to give the sort of owners control over those things.

[00:13:55] Matthew: On the subject of AI, are you finding that it is accelerating the need to make these little changes, and to change your mind on things?

[00:14:03] Matthew: You mentioned a couple of times, like, originally I was not gonna, I put posts out there, I'm not putting Excel in. And I think there's a story I heard you tell on another podcast about your attitude to pie charts and things like that, and those are more low-hanging-fruit things, but obviously the pace of change right now is pretty crazy.

[00:14:20] Matthew: So I'm guessing you can't be precious about any decision, can you? You've gotta keep moving and changing.

[00:14:27] Colin: I think that's right. The tools that are available to us now are so crazy compared to what we had six months ago or 12 months ago. Like, not just the ability to parse language and how good the natural language is at sort of a written level.

[00:14:42] Colin: It's better at writing queries. It's better at picking things up. But also just the tools that are available now: these natural language products can start ingesting spreadsheets and actually interpreting the results and structuring them. We did a demo that we showed off a month ago where we built Microsoft Paint, essentially, on top of the dashboard.

[00:15:01] Colin: And you can just start drawing pictures, and it can now read the picture, understand what you were drawing, and then go do a thing. These are the types of primitives that you would've had to make, like, a two-year bet on your vision processing to go do. Sort of like, there's a foundation model release and now you've got vision models that work great and sort of do things. And so I think you do need to be more pliable on the roadmap,

[00:15:31] Colin: and sort of what people are able to do. Again, the idea of AI building a dashboard for you 12 months ago was impossible. The models just weren't at that quality level. Now, one out of every 10 customers comes to us and expects to be able to build a dashboard start to finish with AI, and it's just like, if you are not prepared for the speed of that change, there's another company out there that is going to go do those things.

[00:15:59] Colin: And actually, we have our first sort of half-day user conference on Thursday, and my first slide is just a picture of how fast tools got to a hundred million users. Instagram was like 18 months or something like that. ChatGPT was two months to a hundred million users. What is happening is just that users have become accustomed to a new way of doing things so quickly that if the tool set is not changing with that, you're just gonna get put behind, because

[00:16:28] Colin: if nothing else, the foundation models will do it, but otherwise someone's gonna pick up that foundation model and go do it for you. So it's just, it's crazy what's happening.

[00:16:37] Matthew: And the fact that, and maybe we've talked about this a couple of times on the podcast now, but if you are baking in features and things that do use LLMs,

[00:16:47] Matthew: essentially every new foundation model that comes out, you've automatically got an improvement to the products that you've released. It just gets better because the underlying models got better, and it's like, great. Everyone's kind of expecting that now, a bit more magic every time.

[00:17:01] Colin: It is, and it's visible. Quality does improve over time. I think the hard part, sort of, for the builder is you have to then reflect on what value you are providing. Because, again, ThoughtSpot built a whole company on natural language processing 15 years ago, and I would argue that their core IP was sort of just naturally beaten by ChatGPT, you know, two years ago.

[00:17:27] Colin: And that's really weird, because then you need to think about, okay, what is our foundational sort of secret sauce? And for us, again, it's semantics. It's all these different surface areas. It's being able to translate AI into UI. But

[00:17:45] Colin: someone could just as easily decide, hey, I don't wanna ever build dashboards in the UI anymore. I wanna vibe code dashboards, and your UI has to get thrown out. Now, I don't know if we're quite ready for things like that, but I think that we already have some customers that would be happy to get 2,000 lines of HTML for their dashboard, if it looked a certain way. And I think that's the sort of hardest part to figure out.

[00:18:06] Colin: It actually goes back to the sort of semantic-layer structure versus the freedom of SQL, but now in a UI context. Like, do you actually want dashboards that are structured, or do you just wanna sling HTML and get stuff that you don't really understand, but that also works? And as you might expect, I think the answer is both. I really do think you probably want both of those things in the future.

[00:18:31] Dara: Just on that: you said there that the kind of magic, or the secret sauce, is the foundational focus on the semantic layer. So, customers that come to you: how good is their warehousing, their data collection, their transformation layer? How solid do their foundations need to be to plug into Omni?

[00:18:52] Dara: Is that a kind of blocker for you? Do you have people coming to you where you say, look, you're not gonna get the best out of us because you don't have your house in order? 

[00:18:59] Colin: I think if your data's a complete mess, you're not gonna get a good experience out of any tool. So I think in some sense, like that's not different anywhere.

[00:19:07] Colin: I think the thing for us is that, first, and this is like beating a dead horse here, but AI can start building these models. We've had people just hook up their production app's backend to the semantic layer and just be like, go write a data model for us. Again, a little bit risky; I'm not sure if it's right or not.

[00:19:25] Colin: But it definitely will build your data model really quickly. So I think that you need this sort of normal level of data readiness, where you know what's in there and what it does. But again, the way that people are building these things, I think, is changing also. Like at Looker, it was, you know, handcrafting; it felt like artisanal data modeling.

[00:19:46] Colin: And I still think some people think about things that way too. I don't like it. There are things that you probably should pay attention to. I do think, at the same time, you don't need to be thinking about all of those things if you're willing to cut corners and sort of go quickly and use AI tools and get directionally correct.

[00:20:04] Colin: So I think it depends on the risk appetite. Like when I'm producing board decks and financials, I probably want stable financials that I understand that are not moving very much. If I'm doing product analytics again on that app where the backend is hooked up to the repo, I might not even look at the data model at all.

[00:20:20] Colin: Like does it really matter if I had, you know, 10 million page views or 10,000,005 page views? Probably not. So I think that even the backends of these approaches are changing really significantly. The other small thing I would add is that the world now is very different even than it was 10 years ago with Looker.

[00:20:40] Colin: Like, dbt was not there, and so a lot more of the transformation was materialized in the warehouse, and maybe it was a little bit more important that you got in front of those things. I think now, with dbt and other similar types of tools, because you can sort of do transformation in place, you really can bootstrap governance over time, and you can create quality over time, and the BI tool can take advantage of reading and understanding what is happening in dbt and, again, sort of avoid having to build and rebuild models all the time, and instead just absorb those things.

[00:21:18] Colin: It's hard to figure out how all of these things layer, 'cause also Snowflake and Databricks are trying to build semantic layers now, again, for AI. But I think that the best BI tool probably is absorbing as much information as it can about your organization from everything that's happening below, whether it's clean or dirty.
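As a small aside on "absorbing what's happening below": dbt writes out a manifest.json artifact describing every model, so a BI layer could in principle read names and descriptions straight out of it rather than re-modeling. A simplified sketch (real manifests carry far more structure than this):

```python
# Read model names and descriptions from dbt's manifest.json artifact.
# The default path is target/manifest.json inside a dbt project.
import json

with open("target/manifest.json") as f:
    manifest = json.load(f)

for unique_id, node in manifest.get("nodes", {}).items():
    if node.get("resource_type") == "model":
        print(node["name"], "-", node.get("description") or "(no description)")
```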

[00:21:37] Dara: That leads nicely to another question I was gonna ask you. Presumably, if those different layers are being managed by different teams, they're gonna be in conflict at times as well? So the semantic layer, the metric layer, the transformation layer: there could be duplication, there could be conflict, tension.

[00:21:54] Dara: Is that, I guess that's just back to, that's maybe more of a people problem, or an internal politics problem.

[00:22:00] Colin: To me, that's, life is messy. I think it's funny, 'cause I look back four years, really, when dbt and a bunch of headless semantic layers were sort of pushing this idea of, well, create a universal semantic layer and it'll be perfect.

[00:22:14] Colin: And I hope what we learned from that is that it's a noble goal, but the reality is organizations are messy, and you will have semantics that are in your report. You'll have semantics in your BI tool. You'll have semantics in your semantic layer. You'll have semantics in your warehouse. You'll have semantics in your data pipelines.

[00:22:28] Colin: Data is, I think, naturally messy, and you could probably produce one really clean table that is sort of universal and has one source of truth. I think the reality is just that organizations are a lot more nuanced than that, because if you look at an organization that really needs sort of a religious semantic layer, one that is right all the time and is the only source of truth, it means that you're either deleting all semantics that exist above it, or you're just not serving those use cases.

[00:22:58] Colin: And back to my sort of original point: if we have a report and I need a two-week moving average instead of a one-week moving average, I'm gonna find a way to go make a two-week moving average instead of a one-week moving average. It doesn't matter if it's in the semantic layer or not.

[00:23:15] Colin: And so I think that there's a pragmatism that needs to be embraced, which is that this stuff will exist everywhere. And the hard part is actually sort of the management of these layers, how they talk to each other, how they interact, versus just saying, you know, no semantics in my BI, or no semantics in my warehouse.

[00:23:33] Colin: I think you can aim for goals like that. I think you need to sort of pragmatically realize that's not actually true. 

[00:23:40] Matthew: And, I'm like a broken record, aren't I, back to AI, but I'm guessing that its ability to look over all these different sources and different semantic layers and kind of make judgment calls...

[00:23:54] Matthew: It's not like it's got no information. It can put things together and be intelligent about the decisions it makes when it's going to go and write something. It's probably better to have something somewhere than nothing at all.

[00:24:08] Colin: I think that's right. And the other sort of weird thing for me, as sort of a data person, is, again, I feel like we talked about unstructured data over and over again for the last 20 years. You know, like, oh, there's so much information in unstructured data. And it was mostly a lie before; there were a lot of bits of data, but there was very little sort of organizational, high-quality data in those unstructured objects.

[00:24:32] Colin: The fascinating thing about AI now is that unstructured data can sort of work in the same way, but AI has enhanced it. You now actually can use unstructured data in sort of two very interesting ways. So the first one is that you can actually just push all of those documents into the context. Like us, we hook up to Google Drive and Slack.

[00:24:54] Colin: And when you ask a data question, it can actually go read those threads and those documents. It's probably not gonna do this, but it can be like, oh, you're in sales, you think about these things, or, you just changed this metric, or something like that. I think that's a little bit of fantasy, 'cause you probably don't wanna do that at query time every single time, but there's a lot of information there.

[00:25:14] Colin: The second one is that at query time we can actually parse unstructured stuff much better. So the example I give there is when I go look up an opportunity record, like a sales record for us. In a data context, you are usually pulling numbers: you know, the account value, when it got opened, the expected close date, who the rep is. Things that are, you know, 15 characters long.

[00:25:37] Colin: What's amazing with AI now, though, is instead of pulling five columns with, you know, a tiny amount of data, you could go pull back a hundred columns, some of which are gigantic text blobs, or, you know, a kickoff document that you had with that customer, and the AI can summarize that into a two-paragraph summary of who that customer is.

[00:25:59] Colin: And so now you've got this really interesting mix of classic data analytics, which is sort of numbers, you know, structured data, and you can start asking questions about completely unstructured stuff, or what I call semi-structured: it's got an org key, but then it's a big text blob to the side of it. And I think that data becomes exponentially more valuable with AI, because you don't need to go read those three pages of data.

[00:26:26] Colin: You can actually just summarize it into two sentences if you want to. And so you've sort of turned all of these documents now into dynamic lookups in some way, which is incredible. That has, I think, been the single biggest transformation that we've seen from using AI in the data context: you can now work with, you know, BDR notes or call transcripts or blobby Salesforce records, because of summarization.
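The query-time pattern Colin describes could look roughly like this: pull structured columns alongside a big text blob, then collapse the blob with a model call. The table is invented, and summarize() is a stand-in for whichever LLM API you use.

```python
# Sketch: structured fields plus a text blob, summarized at query time.
import sqlite3

def summarize(text: str, instruction: str) -> str:
    # Placeholder for a real LLM call (OpenAI, Anthropic, etc.).
    return text[:100] + "..."

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE opportunities (account TEXT, value REAL, kickoff_notes TEXT)")
conn.execute(
    "INSERT INTO opportunities VALUES (?, ?, ?)",
    ("Acme Health", 48000.0,
     "Kickoff covered HIPAA requirements, a warehouse migration planned for "
     "mid-year, and a rollout to 300 analysts across three regional teams."),
)

for account, value, notes in conn.execute(
    "SELECT account, value, kickoff_notes FROM opportunities"
):
    brief = summarize(notes, "Who is this customer, in two sentences?")
    print(account, value, brief)
```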

[00:26:58] Matthew: What's your thinking in terms of the storage and retrieval, and bringing that unstructured layer into context? Because we've been doing a lot of experimentation ourselves, like markdown files and RAG, vector stores, sort of trying to find what works. And there's pros and cons to each, where you get loads of blindness with RAG, and you'll get overstuffed context. So how are you finding it? And I guess it's probably horses for courses, right?

[00:27:28] Colin: It is. I don't think that we really know right now. I think the first thing is, it's really clear that you can go stuff more of the stuff in the database now. So it's a little bit silly, but we'll put a one-hour call transcript into a single cell inside Snowflake, and you can go look it up and have it go query lines out of it.

[00:27:47] Colin: That's probably not the best way to go, but it is sort of amazing that you can go do things like that. So I think that's one side of it. I think the other one is that you've got these tech concepts like MCP, where rather than doing everything in a data-warehouse-centric way, we're actually going backwards a little bit, to almost like APIs into different systems. A little bit more MuleSoft and a little bit less Fivetran.

[00:28:13] Colin: And the interesting thing there is, assuming that those SaaS services have MCP connectors, you can actually just go query them dynamically. And instead of querying them in SQL, now you're sort of querying them with, I don't know, non-deterministic AI, but you can actually go grab very unstructured data.

[00:28:35] Colin: So I think Slack is a great example of something that we don't go put in the data warehouse, but you can go hit the Slack MCP, and it can go find the last 10 threads where you talked about a competitor or a feature request. And there, the data pipelining is really just an MCP connector into some sort of more universal AI.

[00:28:55] Colin: So I think the hard part to figure out is: will BI become a small piece of a larger MCP setup that every customer has, where you've got your MCP client connected to 15 different services and BI is a part of it? We have some customers that are doing that. We have some where we are the MCP, so we're the universal AI and the MCPs come into us, and the customer gets an ease-of-use thing, and maybe a little bit more permissions and controls, but maybe it's more limited.

[00:29:26] Colin: And again, broken record, I think that we'll end up seeing both. I do think savvy people will go build universal AIs that are connected into all of these, where we're just a component of it. And for people that maybe don't have the capability to do that, or really think about it more as an analytics problem, we will be their MCP, and the primary use case will be analytics, but these other services will tuck in.
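For readers unfamiliar with MCP: the Slack example above amounts to a tool call over a JSON-RPC-style transport instead of a SQL query against warehoused data. A hedged sketch; the server URL, tool name, and arguments are all hypothetical:

```python
# Sketch of calling a tool on an MCP-style server over HTTP.
# Real MCP clients also perform an initialization handshake first.
import json
import urllib.request

def call_mcp_tool(server_url: str, tool: str, arguments: dict) -> dict:
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }).encode()
    req = urllib.request.Request(
        server_url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. "the last 10 threads where we talked about a competitor":
# call_mcp_tool("http://localhost:3001/mcp", "search_messages",
#               {"query": "competitor", "limit": 10})
```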

[00:29:47] Matthew: And do you see that ultimately moving towards, say, we've been banging on for X number of years: get your data centralized, get it governed, get it all into this one place owned by the first party. With the lack of friction to be able to just start to talk to these disparate services via MCP, or a CLI, or whatever it may be,

[00:30:10] Matthew: and, you know, just have a semantic layer in something like Omni that can sit in front of all of it. Does that happen? Do we start to get away from the central data warehouse argument?

[00:30:20] Colin: Maybe. It's interesting, 'cause I haven't really seen anyone sort of talk about that directly, but I think possibly. Again, the ease of use is so high that if Claude or OpenAI became sort of your browser at work, in some sense, and I think this is, you know, why all the SaaS companies are getting hit or whatever,

[00:30:40] Colin: I do think the idea that you go hook up your 20 services in there and that's your home base could make sense, and then data movement becomes less important. I think there's a lot of advantages to going to centralized things. Like, I think you can create more control, and bumpers around things, and guide it better. But it's really hard to compete with how easy things are if you've just gotta go off into four services.

[00:31:07] Dara: So it'll be really interesting to see. A lot of what we're talking about now, Colin, is where we're starting to move into where we think things are going, especially with the overarching AI theme. So something you said earlier kind of stuck with me. You said that Omni was founded, I think you said, two months before ChatGPT came out, something like that.

[00:31:29] Dara: And you quite honestly said that it was kind of fortuitous. You could easily tell that story looking backwards, saying, yes, we knew what was coming and, you know, we focused on semantic layers. But you were quite honest and said, you know, it's a little bit fortuitous, and AI really benefits from this semantic layer.

[00:31:45] Dara: Do you think there's a closing window on that? Do you think that AI is gonna get good enough that companies like Omni will be squeezed out?

[00:31:56] Colin: I think it's hard to figure out, 'cause, you know, people talk about AGI, and in an AGI world, you know, the machines have taken over and no one gets to exist.

[00:32:08] Colin: So I think it's sort of irrelevant to think about what happens with AGI, because the whole world changes. I think the thing that I've seen is that there's a lot of questions that require enough nuance that a robot even doing it is gonna have a hard time, because it requires back and forth and conversation and control.

[00:32:30] Colin: And to me it's sort of not about whether Claude can just go do that on its own. Like Claude will be able to write queries in your warehouse really, really quickly. So again, I think it gets to 80%. It's going to get somewhere short of a hundred percent no matter what. And so I think the question is, can it get to 99% and you don't care about the errors on the side, or does it really top out at 90%?

[00:32:53] Colin: And the parts that I struggle with are when you see everyone try to bootstrap to hooking up Claude to the warehouse. It's always, oh, we'll use the query logs, the query logs have all the information. Well, that's true until your business changes in some minor way and everything that you were doing before becomes invalid.

[00:33:11] Colin: And anyone that's worked in data at a company for more than, I don't know, a year has probably seen core metrics change in ways that are very significant, where the change is declarative, it's not sort of responsive. I remember at Hotel Tonight we accidentally renamed every single one of our event metrics, and if you wanted to connect together a time series, you had to go to a mapping table and go look at it.
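The mapping-table fix is worth seeing in miniature: a declarative old-name-to-canonical-name table is what stitches the time series back together after a rename. All table and column names below are invented for illustration.

```python
# Sketch: reconnect a renamed event metric into one continuous series.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (day TEXT, event_name TEXT, n INTEGER);
CREATE TABLE event_name_map (old_name TEXT, canonical_name TEXT);
INSERT INTO events VALUES
  ('2024-01-01', 'booking_complete', 120),       -- before the rename
  ('2024-01-02', 'reservation_confirmed', 135);  -- after the rename
INSERT INTO event_name_map VALUES
  ('booking_complete', 'reservation_confirmed');
""")

rows = conn.execute("""
SELECT e.day,
       COALESCE(m.canonical_name, e.event_name) AS event_name,
       SUM(e.n) AS n
FROM events e
LEFT JOIN event_name_map m ON e.event_name = m.old_name
GROUP BY 1, 2
ORDER BY 1
""").fetchall()
print(rows)  # one continuous series under the canonical name
```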

[00:33:32] Colin: And again, there's a world where AI goes and figures this out. I have a very hard time articulating how it could without very, very deep cross-functional understanding of how all of these things work, because it's often not a superficial change. It's often something that is hidden somewhere in a pipeline. And again,

[00:33:56] Colin: if it has perfect information and it's connected perfectly to all of these systems, there may be a world where it can figure things out. I think that probably a declarative layer that maps those things is a faster, and a more correct, way of actually going and doing all of those things.

[00:34:13] Colin: So I have trouble seeing how either structured data transformation, where you're physically transforming the data, or a semantic layer that is virtually transforming the data, doesn't exist in the middle of all of these systems for them to behave effectively. I do think, however, that that stuff might start being 99% done by AI and 1% done by humans, and I think that might be

[00:34:41] Colin: the sort of middle ground that appears here, which is: architecture still matters, you still need semantics, but I'm not sure that I need to go hand-modify changes if there's an error that appears in the data model. In the same way that when a bug is filed, our engineers go shoot it off to Claude Code and are like, what happened here?

[00:34:58] Colin: Go fix it. I think data teams will be able to say, hey, this number was wrong, go look at why and go fix it. I still think that you're gonna want to declaratively state what was wrong and go adjust things in a data model. And so I do think there's a place for all of that for us, but we've gotta go make those systems good enough.

[00:35:19] Colin: And, you know, maybe the interface is very, very different even two years from now. I had a customer last week ask, you know, is the way people are modeling data changing? And I would say it's not yet. I'd be shocked if it's not drastically different in even 12 months, in terms of lines of code written by humans versus robots, presumably.

[00:35:39] Dara: And I'm guessing the answer to this is gonna be yes, but I assume your own roadmap at Omni, and your own kind of pace of releases, has increased in line with that, for two reasons really: one, probably 'cause you have to, and two, because the technology's getting better and allowing you to be quicker.

[00:35:57] Colin: Very literally, we tried to measure it, and by essentially every measurable metric, so, commits, lines of code (we actually demo weekly and we put out videos), they're all up by about 70% in the last six months, and that is highly attributable to Claude Code and sort of automated workflows. And again, it's not perfect at everything.

[00:36:18] Colin: We've had Claude go build some really interesting stuff that has horrible performance characteristics, where we couldn't go ship it. And so you do still need to review code. Like, I don't think it's going to build a BI app properly, or think about what your users are doing, or architecture,

[00:36:33] Colin: well. But it can accelerate new feature development, you know, bug fixes. It can accelerate almost every aspect of what we're doing in terms of writing code. And again, I think that's the part that will come into data heavily: we just don't need to go touch everything the same way. And just from a personal standpoint,

[00:36:53] Colin: I know how to use our tool well, and I can go click on things. But I find myself starting most sessions with AI now. And maybe I do go get a coffee, 'cause it takes two minutes to run and I could have clicked around for one minute and actually gotten to the answer. But it changes, again, the workflow that you can have as a user,

[00:37:12] Colin: 'cause you can go let it run for two minutes and go do something and come back and it effectively has taken you two seconds of attention. And I think that will become commonplace across everything that we're doing at work. I like to sort of jokingly say all this stuff just makes us structurally lazier in a good way.

[00:37:31] Matthew: Yeah. It's interesting, the sort of errors that we attribute to AI and to LLMs, because obviously there's human error as well, but we've not necessarily been as focused on that, or measured it as much. And it's almost like there's gonna be a tipping point where we accept: wow, okay, it gets me 95%

[00:37:52] Matthew: right now, and 5% of the time wrong, that's pretty damn good. It'd be better than most individuals would do. And sure, you can make mistakes at a much faster pace, obviously, but still.

[00:38:02] Colin: It can, and maybe in ways that humans don't understand. I mean, I think self-driving is a great proxy here. Like, by every metric they're a hundred times safer.

[00:38:10] Colin: It seems like the bar needs to be about a hundred to a thousand times safer.

[00:38:13] Matthew: I wonder whose bar that is, because I think individual bars are much lower. An individual will be much happier to start handing things over, but it becomes this collective organizational bar, where even if everybody individually in the organization has their bar met, the organization itself has a much harder time.

[00:38:34] Matthew: Just like societal versus individual with self-driving.

[00:38:37] Colin: And there's different systems. Like, famously, legal: it's being used a ton, but it seems like if you make a mistake citing in a court, the court looks upon that very unfavorably. And so, again, I'm sure humans make tons of mistakes, but the court is very intolerant of non-human mistakes,

[00:38:59] Colin: in a way that has sort of shifted, I would assume, behaviors for how these tools get used. And I think we've gotta go figure it out. I go back to liking product analytics: I care about directionally correct. I'm trying to look at feature adoption, you know, broad stuff. I want, is the line going up or is the line going down, and correctness doesn't matter at all.

[00:39:20] Colin: Those are great AI use cases. We've seen really silly examples where, you know, AI tries to add up a column of eight numbers and just makes up a total, and I've just found that humans really don't like it when that happens, because you can't validate every single word in the response.

[00:39:41] Colin: And so it can actually slow you down pretty significantly if you have to go revalidate all of the results and things like that. So that's actually one of the changes that we have put in: we are very conservative about when it's able to sort of interpret with vibes versus having to go make a very specific tool call that does addition,


[00:40:00] Colin: because we don't just wanna say, hey, it looks like your total is seven, when your total is 12.
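That "very specific tool call that does addition" amounts to routing arithmetic through deterministic code instead of letting the model eyeball a total. A minimal sketch, with illustrative names:

```python
# Sketch: the model picks the tool; deterministic code does the math.

def tool_sum(values: list[float]) -> float:
    """Addition the assistant must call rather than guess."""
    return sum(values)

TOOLS = {"sum_column": tool_sum}

def answer_with_tools(column: list[float]) -> str:
    # In a real system the LLM would emit a structured tool call like
    # {"tool": "sum_column", "args": {...}}; the dispatch is hard-wired here.
    total = TOOLS["sum_column"](column)
    return f"Your total is {total:g}."

print(answer_with_tools([1, 2, 4, 5]))  # "Your total is 12.", never 7
```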

[00:40:05] Dara: I think there's something psychological to it as well, isn't there? It's like, we expect each other to make mistakes, but what comes back from the LLM can seem so polished and so believable.

[00:40:16] Dara: It's like we kind of set these unrealistic expectations, and then get really frustrated when it gets something wrong that we perceive to be quite basic. But actually, it's often, it is still maybe basic, but it's just the fact that it's lacking some context, or it's not something it's particularly good at, but we

[00:40:32] Colin: Yep.

[00:40:33] Dara: And how confident it is. Yeah. You just

[00:40:35] Colin: dunno. You have no, like, to your point, it always acts like it's a hundred percent confident. And then if you're like, are you sure? And it's like, oh no, I just made that up. And you're just like, that is not communicating effectively. Humans don't do that.

[00:40:47] Colin: If you brought a report and you were 50% confident, you wouldn't be like, these are perfect. You'd be like, I'm not sure here. But AI always appears to be a hundred percent confident, and I think that's one of the sort of scary aspects of it.

[00:40:59] Matthew: I suppose it's also the fact that, and this'll be something that's causing you to have to move quicker as well, internally, for development:

[00:41:08] Matthew: it's how quickly we get used to technology, and how quickly we just settle in. Like, I sometimes have to sit back and reflect and go, this is literal wizardry, what's happening and what I'm seeing: it's going off and creating files and building, spitting back a localhost link at me, and there's an application sitting there, and it was just a couple of sentences I threw at it. But we get used to that level of technology so quickly.

[00:41:32] Matthew: And then because it does anything slightly wrong, we're like, you absolute donkey, what are you talking about? How have you got this? And we just get so angry straight away. And I can imagine, being a company that's building software and having that continuous background noise of rising expectation and frustration with something that's not quite right, it can be pretty noisy and

[00:41:58] Colin: hard.

[00:41:58] Colin: I think yes and no. Yes, for sure, but it's so overwhelmingly positive, because of the sort of magic that is available. To your point, it's crazy the things that we can now do. Like, I've used it for a bunch of random health stuff, and I've never cross-tabbed any of the stuff, but it'll just give you a diagnosis, and it's amazing. That's unbelievable.

[00:42:23] Colin: You used to have to click through 12 different things and do your own cross-tab, and it's like, no, I'm just trying to get this one answer. Yeah, you need to be sufficiently skeptical of the answer. Maybe if it's like, you know, yeah, you've got a bruise, you should cut off your arm, maybe go cross-check that one.

[00:42:40] Colin: But if you are, again, understanding how to use these tools properly, like, is this an emergency, yes or no? Are you really sure? That's really amazing tooling that you might have had to call a doctor for before. So I think people can figure out the right ways to apply these tools. A board deck without checking? Probably not. Product launch, you know, metric building?

[00:43:05] Colin: Amazing. We've gotta go figure out how and where these things get applied, and sort of where we put these checks in place, and how they work.

[00:43:15] Matthew: Yeah. I read a book a little while ago called The Alignment Problem, and it really lays out where we should be moving quick and where we probably shouldn't be moving fast.

[00:43:26] Matthew: And the legal sector is one of them, where, you know, years ago they created all these ML models for probation in America, and there were all sorts of hidden biases and racism in them that were causing absolute havoc. So I can see why some industries are a bit more slow moving.

[00:43:44] Colin: But even there, like even there, it's nuanced.

[00:43:46] Colin: 'cause the flip side is we obviously do a lot of contracts, and when you're doing contracts you're just trying to say, what do these words mean? Are people negotiating little words that don't matter, or are they changing the definition of the thing? And it can just strip the legalese out.

[00:44:01] Colin: That makes it really easy to say, hey, does this NDA, that's not ours, conform with our NDA? I don't really care about the words in it. I'm just like, do these things match? Do they really match? Go through, check across these 10 axes. And we can take an NDA review from a lawyer down to 30 seconds with ChatGPT, and go flag some risks maybe, but maybe it solves 99% of them.

[00:44:25] Colin: And that's actually an example that we do use internally. If the NDA essentially conforms to our NDA, we don't go back and forth. We're just like, fine, that's good. But, yeah, jailing people on explicit time with an ML model, that's probably a little bit too rigid.

[00:44:42] Matthew: Yeah, I have a habit of going dystopian, but you're definitely right.

[00:44:46] Matthew: Like, there's, you know, giving people who perhaps couldn't afford it the ability to have some sort of representation, or somebody that can advise them, or somebody who can't afford healthcare being able to talk to something and get some advice. Those are hugely advantageous things that aren't necessarily

[00:45:02] Colin: Yeah, all legal.

[00:45:02] Colin: Like, don't do your surgery based on what ChatGPT is saying, but do go decide whether you should talk to a surgeon based on what ChatGPT is saying.

[00:45:10] Dara: But the unfortunate thing is that adoption is low amongst those kinds of groups of people who would be using it for that kind of use case, at the moment at least. You know, it's kind of,

[00:45:21] Colin: yeah, 

[00:45:21] Dara: well, it's people like us, 

[00:45:22] Colin: I think that will continue to change.

[00:45:24] Colin: Like, these things have to become normal for people. I don't know what Google's adoption curve looked like, but that was probably very foreign, and then became, you know, the internet for people. I think that's happening for chat too.

[00:45:35] Matthew: Yeah. Maybe even more with things like Cowork and stuff like that, trying to get that kind of, what do they call it, Claude Code for the everyday worker. But it certainly seems that it's moving people away from that rigid chat interface into more enacting and pulling in data, in a way that, to a certain extent, democratizes it. The stuff we're talking about, everything we've been talking about, is just having the layer in the middle that sort of has the semantics and makes it more usable.

[00:46:04] Matthew: And just, 

[00:46:05] Colin: and don't hook up your production database without a backup. There are just things that we need to learn to apply over the top of these systems.

[00:46:12] Dara: Yeah. Yeah. What's the experience been like internally? I mean, I'm guessing it's been pretty high adoption internally, but have you had any resistance? Have you just kind of pointed people in the right direction and they've got stuck in, or has there been a bit more of a concerted effort to bring people along?

[00:46:32] Colin: No, I think the interesting thing, from a data analytics standpoint internally for us, the fascinating thing is we've got people that I think classically wouldn't have used BI as much that are coming in and using the tool. Because I think it's scary to go into Slack and be like, you know, how do I go find whether or not we have healthcare customers?

[00:46:50] Colin: It's sort of like, it might be a small referendum on your competence as an individual, and so it's much easier to just not know, or maybe go ask your neighbor. When you can actually just go and be like, do we have any healthcare customers?, you lower the bar on who can actually go ask questions.

[00:47:08] Colin: Like, I actually have a side-by-side that's sort of navigating the experience to go find healthcare customers, and you go into, you know, a pivot table, and you type in "health" and there's no hits, because it's not searching values. And inversely now, I think our highest-tier groups are support and sales,

[00:47:25] Colin: which is not typically who you associate with the heaviest BI use. And so I think that's where the hardest part was, honestly: showing people the things that could be done. Like, the first sales rep had to go figure out, oh, I can go prep for my meetings by searching all of our wins in this category, and sort of why they picked us, on things that might be important. And then the next person goes and learns how to use it.

[00:47:50] Colin: I think that parallels the ChatGPT problem really well, which is, I think, people are extremely, sort of, wide in their deployment of AI as individuals. Some people are just doing crazy stuff at the frontier, because they just keep throwing stuff at AI and finding use cases. And most people are like, oh, I didn't even know you could do that.

[00:48:12] Colin: Like, I'm still doing it the way that I was doing it. And I think that we've had to push a little bit. It's like, just go try. You know, if you ever have a question about us, just go try to ask it over there and see if it works. And that has been, I think, the driver of success. A great little anecdote is we saw a lot of people asking about themselves or their teams inside chat.

[00:48:37] Colin: They'd be like, hey, how's my team doing? We didn't have any HR data in the system, so it couldn't answer any of those questions. But it then gave the data team an action item: go get the HR data in there so that people can go ask these questions. And now it's probably one of the most common patterns that people have.

[00:48:53] Colin: And so it creates these really virtuous feedback loops, because you can see intent from your user, but you've gotta get people over the hurdle of, oh, this is different.

[00:49:04] Dara: I think you're right. It's that everyone has their own light bulb moment, or their own unlock. It's like when you first realize it can really do something for you.

[00:49:13] Dara: 'Cause I think initially you see the potential on a broad level, and then you think, yeah, I get it, it could do loads of different things. But when you first have that moment where you unlock yourself, where you free yourself up from something you've been doing the painful way, it's that moment. You have to go through that to get it.

[00:49:31] Dara: You can't really, you can't really convince somebody of that until they go through it. You can try and point them at it, but until they actually have that experience themselves, they're not gonna, they're just gonna see it as like, oh yeah, I, I get it, it's great, but it's not for me. And then once they get that, they're like, right.

[00:49:47] Dara: And then they wanna, because you know, it's like once you answer a question, you have another question. It's that kind of. You know, it's a snowball. 

[00:49:53] Colin: We even have a hard time doing it. Some of the internal use cases are so good, and I'm just like, I need a whole company's data to show you this thing I did yesterday.

[00:50:02] Colin: And it's kind of frustrating 'cause like, you know, demo data, you can go make charts and you can make a dashboard. You can't have it go like, reach into four systems and like look at a thing over here and then grab it and look over here and then like, go write a query and then go check whether it validated it over here.

[00:50:18] Colin: And that's actually been the hardest thing for us to show people. Like, it sounds so trite for me to be like, oh yeah, it changes everything that we do. And you know, here is it, you know, writing a time series query. So we're having a hard time actually trying to show off real use cases. 

[00:50:37] Matthew: Because it's so vast and so broad, it could literally change an entire organization. How do you demo that? How do you demo that in a call?

[00:50:46] Colin: Exactly. Yeah, it can do anything. Show me something? It can do anything.

[00:50:50] Matthew: Lemme just quickly transform your organization. 

[00:50:52] Colin: We're sort of jokingly like, should we go try to find a company where we can acquire all of their data, like their LinkedIn,

[00:50:59] Colin: Slack history, their email, and then like go build a little system over here on the side because it's just like, you want all of that stuff to be able to show what the sort of multi reedit system can do. 

[00:51:12] Dara: How have you found this? I find this interesting to ask people, because I think it's come up on our show quite a few times that people are struggling, even personally, to switch off, because things are happening so quickly.

[00:51:27] Dara: So, something somewhat a previous guest said is they said, you know, the idea was this would free us up to do other things, but because of how productive we can be, we're now seeing the time we're not doing it. We're seeing all the, the, you know, the amplified loss of what we could be doing. So there's a bit, there's a bit of an adjustment I think we all need to make as individuals to know when we should switch off. How, how are you finding that on a personal level? 

[00:51:51] Colin: Yeah, I think it's just a system that works really well for me personally. I was sort of never an off-switch guy. I'm a 365-out-of-365-day Slack user. It's the first thing I do when I wake up in the morning.

[00:52:07] Colin: Like I just wanna see what's happening. I know things. I think at the same time I've got three kids. Like I coached my daughter's soccer team for six seasons. And I routinely like to leave the office at three to go pick up a kid from school. And so I think it works really well if work and life are integrated.

[00:52:26] Colin: And so like I, I think for like the startup person or someone that just has a strong affinity to work, I think it's great because like for me, work and life are already heavily integrated. And again, like I, I think you don't need face time to be effective anymore. I think you can do so much. It's like it's systems thinking more than it is sort of like task definition.

[00:52:47] Colin: I think it's tricky for like the nine to five, because yeah, like at our corporate offsite, our engineers spent a lot of time talking about how to keep the computers on overnight so that Claude could run. Like that's a weird thing to think like, oh, I gotta keep my machine running overnight because like it's working right now.

[00:53:08] Colin: like ultimately I do think it frees up time and productivity really significantly. I do think pe like I just think people need to have effective boundaries at work and they need to go find Rick styles that work for them. But, yeah, it is like, it has been a huge productivity driver. Like I know this has created a little bit of stress for some folks on the ENG team because you can do so much.

[00:53:29] Colin: I, I guess like my last thought is. I also think that we need to change how work works a little bit. So like a great example is we have customers, they come in, they, they tell us about bugs in the app or feature requests that go through support. Support files, a ticket that goes to the engineering team.

[00:53:44] Colin: I think there's a world in the future, and I sort of talk about this with our, our head of products. Where maybe customers can start filing bugs that get sent to Claude before anyone looks now, like the computer has to be cheap enough for whatever. And like, you know, we can't let people suggest, you know, deleting the app, but it sort of inverts where you spend your time.

[00:54:05] Colin: Like if someone files a bug and Claude writes the code, it fixes it and it's, you know, a two line change that has no significant impact, maybe that's a great thing. And we've now removed it. Hours of human time, sort of like throwing things in between systems and we turn it into a review. I just think the way that people work needs to change to adapt to how these workflows work.

[00:54:29] Dara: Yeah, I totally agree. And there's naturally gonna be a lag, because this stuff's happening so quickly people don't have time to think about it and adjust.

[00:54:37] Matthew: So we ask every guest the same question, and it's getting more and more ridiculous because technology's moving at such a pace.

[00:54:46] Matthew: And we, we tend to, we tend to be talking about this subject, the entire podcast anyway, but I'm gonna continue to log that particular dead horse. What would be your sort of prediction over the next, I'm gonna, I'm gonna bring it down to two years now. Used to be five. in, in a way, you know, it could be as, it could be as narrow as the industry or as wide as the world.

[00:55:07] Matthew: What do you think are some big sort of predictions coming over the next two years?

[00:55:12] Colin: I don't think it's that crazy, but I think that natural language is going to eat the bulk of analytics over the next two years. It doesn't mean dashboards go away, it doesn't mean pivot tables go away.

[00:55:22] Colin: I think it's just gonna become the primary interface way faster than people realize, in existing tools and in future tools. I think everyone will start with a text box. And I think the overall version of this is that in every single business app that we use, everywhere, there will be a natural language interface to do what you're trying to do.

[00:55:44] Colin: So if you go in and try to configure Salesforce, there's gonna be a box at the top and you just type it in and it will do something in the ui, in the admin section of every single business tool that we use everywhere. There will be. A helper chat bot that effectively ties back to the UI and just goes faster than a human would.

[00:56:04] Colin: And I think it just means that effectively, like the app becomes the documentation for itself. You don't have to go find the conditional formatting button, you just say like conditionally format the column and the menu pops open and it highlights the thing that you were trying to do. I think that is gonna be a very significant change in how nearly every piece of software works and I think it probably is like two years. 

[00:56:24] Matthew: Yeah, I think I'd probably agree with that. It certainly feels like things are moving in that direction already. Maybe the need for that many different apps starts to reduce slightly as well, and consolidate in places.

[00:56:37] Dara: Okay, Colin, let's call it there. I think that was really interesting. As is often the case with the interesting episodes, we could probably continue talking for another two hours. So maybe we test your predictions and bring you back on at some point in the future, if you agree, and we can see how they panned out. But for now, thank you again for joining us, and enjoy the rest of your day.

[00:56:58] Colin: Thanks for having me. It was so much fun. 

[00:57:00] Dara: That's it for this week's episode of the Measure Pod. We hope you enjoyed it and picked up something useful along the way. If you haven't already, make sure to subscribe on whatever platform you're listening on so you don't miss future episodes.

[00:57:12] Matthew: And if you're enjoying the show, we'd really appreciate it if you left us a quick review. It really helps more people discover the pod and keeps us motivated to bring you more. So thanks for listening, and we'll catch you next time.