Transcript: Fixing Broken Meetings, Managing Calendars with AI, and Redesigning the Future of Work | Matt Martin CEO & Co-Founder Clockwise
2025-09-29
Matt: I love Tobi's memo, because I think it's always a good idea to clean out the cruft on a regular cadence and reset the baseline, because things build up over time.
But I would also caution that meetings aren't inherently bad.
They're just another way of collaborating with peers and making sure you can get your work done, and the question is what you're trying to accomplish and who the audience is.
Host: The outcome-based pricing is interesting, but one of the problems there is that, because of the non-deterministic nature of AI, the outcome is actually 95%, or in some cases 80%. AI is always draft AI at this point.
Let's say I'm writing a blog post: it gets me 85% there, but the last 15% is where the sauce is, where a good versus a great writer differentiates. So it's not really the outcome.
The outcome is only 85% of the outcome.
Matt: Clockwise goes all the way back to a famous article by Paul Graham called "Maker's Schedule, Manager's Schedule."
The reflection in PG's article was that, often inside software engineering organizations, two modes of operation conflict.
The managers control the schedules because they're setting the cadence of meetings: syncs, standups, one-on-ones, team meetings.
They get a lot of their productivity done in meetings, whereas makers, people like software engineers and designers, need large chunks of time to go heads-down on a project and get in flow to be able to tackle things.
So the first thing I would observe is that different people have different demands on their schedule, and there's not really one size fits all here.
Host: Hello everyone, my guest today is Matt Martin. He's the co-founder and CEO of Clockwise, an AI-powered calendar assistant that helps create smarter schedules for individuals and teams.
Matt was previously in engineering and product roles at SalesforceIQ and LegalReach.
Clockwise today is widely used across organizations and roles like engineering, product, and sales, in remote and hybrid settings, and has been adopted by an impressive list of customers including Coursera, Instagram, Asana, Atlassian, and many more.
I'm excited to have this conversation with Matt about meeting optimization, how we're optimizing productivity in a hybrid era leveraging AI, and a lot more.
If this is the first time you're listening to Startup Project, don't forget to subscribe wherever you're listening to this. You can also follow us on Substack at startupproject.substack.com.
Host: Matt, welcome to the show. Matt: Thanks for having me.
Host: To get started, can you describe to the audience what Clockwise is and how your customers use it? Matt: Yeah.
At its core, Clockwise is a very advanced scheduling brain. We connect to your calendar, whether that's Google Calendar or Outlook, and you can use it as an individual.
We start to analyze your calendar when you connect it: understanding the cadence of your meetings, when you tend to work, what your working hours are, when you like to take breaks.
We ask a few questions to get to know you a little better. And then, based on that information, we start giving you suggestions on how to optimize your schedule in order to have more time for high-impact work.
Where Clockwise really hits its groove is when you start to use it among a larger group of people. Clockwise can look at the interconnections between you, the other attendees, and their preferences, and optimize calendars holistically.
And we do this at scale for some of the best companies in the world, places like Netflix, Uber, and Atlassian, where we help optimize almost the full company's schedule, complete engineering departments, in order to give more time for high-impact work, meet with the right people, and have a sane work life.
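As an illustration only, and not Clockwise's actual (proprietary) algorithm, the most basic primitive of a scheduling engine like the one Matt describes, finding gaps between meetings that are long enough to protect for focus work, can be sketched like this:

```python
from datetime import datetime, timedelta

def find_focus_blocks(meetings, day_start, day_end, min_minutes=90):
    """Return the gaps between meetings that are long enough for
    heads-down work. `meetings` is a list of (start, end) datetimes,
    assumed non-overlapping."""
    blocks = []
    cursor = day_start
    for start, end in sorted(meetings):
        if start - cursor >= timedelta(minutes=min_minutes):
            blocks.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= timedelta(minutes=min_minutes):
        blocks.append((cursor, day_end))
    return blocks

day = datetime(2025, 9, 29)
meetings = [
    (day.replace(hour=10), day.replace(hour=10, minute=30)),  # standup
    (day.replace(hour=14), day.replace(hour=15)),             # one-on-one
]
# Two protectable blocks survive: 10:30-14:00 and 15:00-17:00.
blocks = find_focus_blocks(meetings, day.replace(hour=9), day.replace(hour=17))
```

The multi-person optimization Matt describes is essentially this computed jointly across every attendee's calendar, with meeting moves chosen to maximize everyone's remaining blocks.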
Host: So you're living in this world of calendars and meetings, and that reminds me of an instance a couple of years back when Shopify CEO Tobi Lütke sent out this memo: you can cancel any meeting you want, and we want to reduce the number of meetings happening in our organization.
What is your general take on the frequency or number of meetings happening in companies? Do you see high numbers, low numbers?
What trends are you seeing in how companies are optimizing their meetings?
Matt: Yeah, so in a lot of ways, Clockwise's founding goes all the way back to a famous article by Paul Graham called "Maker's Schedule, Manager's Schedule."
The reflection in PG's article was that, often inside software engineering organizations, two modes of operation conflict.
The managers control the schedules because they're setting the cadence of meetings: syncs, standups, one-on-ones, team meetings.
They get a lot of their productivity done in meetings, whereas makers, people like software engineers, designers, and so on, need large chunks of time to go heads-down on a project and get in flow to be able to tackle things.
So the first thing I would observe is that different people have different demands on their schedule, so there's not really one size fits all here. And I love Tobi's memo, because I think it's always a good idea to clean out the cruft on a regular cadence and reset the baseline, because things build up over time.
But I would also caution that meetings aren't inherently bad.
They're just another way of collaborating with peers and making sure you can get your work done, and the question is what you're trying to accomplish and who the audience is.
I could talk about that for a very long time, but I would also note that there are some almost gravitational forces when it comes to meetings. One, and we've seen this in our data, is that the larger the company gets, the higher the percentage of time people tend to spend in meetings.
As you have more people in your orbit, the cost of collaboration and coordination goes up.
And that tends to be true not just as you go from a team of five to a team of 50, but even going from a team of 5,000 to a team of 50,000.
I've been at a company of 50,000; you do hit a cap eventually, but the amount of time spent in meetings tends to go up.
And then another thing that happened, and it's a little bit in the rearview mirror now, is that when COVID hit, the quantity of meetings spiked way up.
I think as people went remote and hybrid, they were trying to figure out how to replace a lot of the content of an in-office environment with meetings. That's subsided a little bit, but it never came all the way back down.
So there's kind of an overhang from companies going remote. And I would say that even today, you do see some split between in-office companies and remote companies in terms of the volume of meetings they have.
Host: Is there any interesting trend here? Because one of the things that happened after COVID, for me at least, is that there has been an increase in non-scheduled meetings.
You just have a question, and you get a call spontaneously, sort of replicating the hallway chat remotely. Do you have any statistics on the spike in those and how they're doing right now?
Matt: Yeah, that tends to be one of the sources of the split between remote and in-office, because when you're in the office, those conversations still happen. They just don't get recorded formally on a calendar.
If you're remote, you do kind of have to reach out. There are informal ways to do that: you can drop into a quick Slack huddle.
You could move some of it to Slack or asynchronous conversation, which is a good pattern. But one of the phenomena is just that there's a shift in the medium.
As you noted, instead of bumping into somebody in the hallway or going over to their desk and chatting, you do have to find a Zoom meeting or schedule something on Google Meet in order to meet.
So that frequency goes up. I would say, though, that the amount of time spent in synchronous conversation actually doesn't vary as much between remote and in-office, because it's just a different type of synchronous conversation.
It's happening at your desk, it's happening in the hallway, it's happening in the lunchroom. But it depends a lot on the culture.
In some place like Apple, where it's not uncommon for software engineers to have their own dedicated private offices, that sort of synchronous conversation in the office is much lower than in a wide-open office environment.
Host: So, Clockwise started before ChatGPT and all the LLM mania started, for lack of a better word.
And it feels to me that there's now a rethinking, especially in organizations, about what types of tools they're adopting.
A typical thousand-person organization now might have 100 to 200 SaaS products, right?
And Clockwise is one of those. I think we're seeing a shift both in how many products companies adopt and in how many things you can add to your own product.
If you're Zoom, are you adding more features that overlap with other products? There's also a more accelerated pace of launching new features.
Do you see this happening in how sales are going for your product or other products, when you talk to other founders or customers?
Is this a changing narrative, or is it more narrative than real?
Matt: It's interesting that you bring up Zoom in the context of AI tooling and accelerated adoption of feature sets, because I think there's actually a more significant undercurrent that's not related to AI, which is the correction, a couple of years ago now, from a zero-interest-rate environment to an environment where money isn't free.
That had a significant impact on SaaS buying, renewals, and adoption cycles, especially among more mature organizations.
We saw a huge wave of consolidation of tools, removal of tools, and reevaluation of tools that we hadn't seen in the life cycle of our business before. And, not to pick on Zoom, but I would say their proliferation of products isn't downstream of AI; it's downstream of that consolidation effort, because they saw that if you're just video-conferencing software, it's a little bit easier to rip you out.
Everybody pays Microsoft 365 for their basic email and calendar, or they pay Google for their basic email and calendar, and both of those come along with video conferencing.
So what does Zoom try to do? They try to replace that office suite. It remains to be seen if they can be successful in doing that, but I actually think that's the more significant trend.
Now, when it comes to AI tools and adoption, that has been a bit of a resurgence and a correction to that downturn in buying.
There's definitely been top-down appetite to find ways to add to the productivity and capacity of the organization with those tools. I will say, however, that trial and retention are very different things. I'm quite proud of Clockwise's retention.
People use it and they like it. But as I've talked to a lot of IT leaders, CISOs, and other folks heavily involved in SaaS purchases, there's a lot of experimentation, but there's a lot of churn.
A lot of these AI tools look interesting at the outset, but it's hard to measure what exactly they're contributing to the bottom line of the company.
I bring all this up because it's an interesting mindset: you have the background of this massive constriction in what people are willing to spend on software, but then a real increase in experimentation with what they're willing to try.
But I think some of that conservatism in what they're actually buying is still there.
It's not quite as severe as it was when the zero-interest-rate bubble popped, but people aren't just throwing money at the problem, even though it can feel like that sometimes.
Host: I ask because there's also this hype around what an AI agent can do.
Every new AI agent platform is offering things like: hey, you can optimize your calendar, you can optimize your productivity.
You can create an agent which reschedules things, although they're more non-deterministic in nature; they're still not accurate enough.
So what's your view? A lot of what agents are promising overlaps with, well, I'm trying to correlate the example to Clockwise, but it applies to other SaaS products out there too.
Someone says, we have a revenue agent to increase revenue, right?
The problem I see is that the form factor is not fitting the promise, because when you get into things like revenue management, a CIO wants to see the numbers.
They want to see how productivity changed, and that's still not easily correlated, especially in these agentic, chat-based form factors.
So talk to me a little bit about that disconnect we're seeing, between what the AI agents are promising and why it's disconnected.
Matt: Yeah, a lot of this is kind of the basics of software selling that have been around for a while. Ultimately, the buyer needs to see the case for the return on investment of the product.
And I think the reason there's so much hype around AI is that people have seen, in various facets of their job, the impact it can have.
So they're clamoring to find other areas where that application can yield results, and there are some.
But at the end of the day, to your point, if there's supposed to be revenue acceleration and the CRO is not actually seeing it, well, they're not going to buy the software, whether it's an agent or a piece of SaaS.
And in a lot of these areas, the efficiency gains are notoriously difficult to measure. With Clockwise, I think we undeniably help people be more productive in their work environment.
But our ROI measurement problem has always been there. We're a piece of productivity software. We can tell you about all of the dedicated time we put back in schedules.
And to some extent that's measurable as hard ROI, but some buyers look at that and go, well, okay, you made their schedule more flexible, but did they actually get more done?
And I think that's where, at the edge of some of these pieces of agent software, there are interesting new pricing models being experimented with.
You see places like Sierra doing outcome-based pricing: each ticket they take off a customer-service person's desk is what you're paying for.
And that's much closer to hard ROI for the organization, because you're actually offsetting real employee time and salary in a very concrete way. I think it's difficult to find those measurables often, though.
It's difficult to find that hard translation to outcomes, and it's difficult to have accountability all the way back. So, like so many things in the ecosystem right now, people are experimenting.
It'll be interesting to see where it lands, but a lot of these problems have echoed through software sales since the '70s.
Host: The outcome-based pricing is interesting, but one of the problems there is that, because of the non-deterministic nature of AI, the outcome is actually 95%, or in some cases 80%. AI is always draft AI at this point.
Let's say I'm writing a blog post: it gets me 85% there, but the last 15% is where the sauce is, where a good versus a great writer differentiates. So it's not really the outcome.
The outcome is only 85% of the outcome, right? So not all use cases can actually be measured by the outcome; the pay-as-you-go model makes sense for a lot of use cases.
Matt: Yeah, and that's where outcome-based pricing is the clearest from a buyer perspective, and in some ways the ideal pricing mechanism, because the buyer is just paying for what they want.
But to your point, a lot of these software systems just can't be measured there. So you see people fall back to usage-based, because that's kind of a proxy for outcome. Kind of.
If you're looking at coding tools like Claude Code, we rack up some bills, even in our smaller organization, on that usage-based pricing.
And I squint at that and it doesn't bother me, because in my head it correlates directly to developer productivity. But it is a correlation, it's not an outcome, and to your point, I don't think I'd pay Claude Code for, you know, features shipped, because I don't want to incentivize Claude to crank out more features.
And it would be difficult in the software ecosystem to do something like create a piece of software that drives increased retention.
Those are the outcomes I actually care about, but that's a complicated system.
So I think we're going to see a lot of people experiment with different sorts of models here, especially because with agents, the thing that has shifted is that a lot of what both the buyer and the seller want is to have the pricing based on the replacement cost of employee wages, for better or worse.
Everybody likes to dance around the fact that this might take over real human workloads, but that's what people are really selling.
And if you can get to outcomes that actually yield that, then there's quite a bit of wiggle room to price it, because employees are expensive. It's nice if you can do it.
But I don't think it applies to most pieces of software.
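To make the contrast concrete, here is a toy comparison of the three pricing anchors discussed above: usage-based, outcome-based, and the employee replacement cost that both anchors ultimately get compared against. Every number is invented purely for illustration; none comes from the conversation or any real vendor.

```python
# All figures are hypothetical, chosen only to show the arithmetic.
tickets_resolved = 1000            # outcomes delivered per month
price_per_outcome = 2.50           # outcome-based: pay per resolved ticket
tokens_used_millions = 40          # usage-based proxy for the same work
price_per_million_tokens = 0.50

outcome_bill = tickets_resolved * price_per_outcome           # 2500.0
usage_bill = tokens_used_millions * price_per_million_tokens  # 20.0

# The "replacement cost" anchor: what the same work costs in employee time.
minutes_per_ticket = 6
loaded_wage_per_hour = 45.0
human_cost = tickets_resolved * minutes_per_ticket / 60 * loaded_wage_per_hour  # 4500.0
```

The gap between the usage bill and the human cost is the "wiggle room" Matt mentions: outcome pricing lets the seller capture more of the replacement value, but only when the outcome can actually be measured.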
Host: How are you leveraging AI in terms of creating new features and products? Can you give me examples of how you're using AI within Clockwise as a product? Matt: Yeah, totally. I'll answer in two ways.
First is operational: how are we developing product? And second: how does Clockwise as a technology actually use AI?
On the first point, and at this point I almost cringe at this term because everybody uses it, but I do think we have a truly AI-native product development cycle, where people are utilizing tools at every stage of the process to accelerate the results.
And one of the places with the clearest leverage, to me, is the collapsing of product research and prototyping.
I have designers who are literally spinning up their own prototypes, whether that's in Figma Make, or in v0, or in Lovable.
You can spin up a prototype that's interactive and put it in front of somebody, whereas previously you would have done some user research sessions and some design sessions and maybe developed something, but it's quite costly.
And that prototype is kind of throwaway code; it'll help inspire the engineering for what to make, but you can do it quickly and you don't have to worry about the bugs.
That accelerates development cycles, and with all the tooling we have, you can really spin up a lot of creative paths and experiment to find the best one, because you can get there faster.
It still requires a lot of oversight. A lot of people like to say X percent of their code is written by AI at this point.
Eh. It does require human review a lot of the time, or else you're going to create a really hairy code base. But you really can accelerate your experimentation cycles.
And then on the Clockwise front, what are we actually doing with AI? There are multiple levels of this. One is that we have a product in the field right now that allows AI-based scheduling.
You can chat with Clockwise and say, hey, I want to schedule a time with Nikita, Aaron, and Joe next week.
We have our own fine-tuned model, tuned to pay attention to time and time-based requests, so it can parse the intent of the user out of that and hand it to our back-end systems to actually conduct the scheduling itself.
We also have, and we're about to launch, our own MCP server that connects our scheduling engine to frontier models or whatever MCP client you might be using, whether that's Claude itself, or Cursor, or another MCP client.
And it's been fascinating to see, especially with MCP, the combinatorial power of having different tools that can be called from a pretty intelligent base model.
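Clockwise's fine-tuned model is proprietary, but the shape of the problem Matt describes, turning a natural-language request into a structured payload for a scheduling back end, can be sketched with a toy parser. The regex, field names, and payload format here are all assumptions for illustration, not Clockwise's API:

```python
import re

def parse_scheduling_intent(utterance):
    """Toy stand-in for a fine-tuned intent model: pull attendees and a
    coarse time window out of a natural-language request and return the
    kind of structured payload a scheduling engine could act on."""
    match = re.search(r"\bwith ([A-Z][a-z]+(?:,? (?:and )?[A-Z][a-z]+)*)", utterance)
    attendees = re.split(r",\s*|\s+and\s+", match.group(1)) if match else []
    window = "next week" if "next week" in utterance else "unspecified"
    return {"action": "schedule", "attendees": attendees, "window": window}

intent = parse_scheduling_intent(
    "I want to schedule a time with Nikita, Aaron and Joe next week")
```

The real system replaces the regex with a model, but the division of labor is the same: the language layer extracts intent, and a deterministic back end does the actual constraint solving.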
Host: But isn't MCP just APIs? Matt: Hmm. They kind of are and they kind of aren't. And I say this not as a dodge; it's a very good question. An API embodies a transaction.
You know exactly what the transaction you're trying to yield is. MCP is really designed to give the base model a different set of tools. You can think of it as an API, and that's not wrong.
It is an application programming interface, but it's manifestly different how you develop it, and it's kind of interesting.
Even the way you formulate the description of the tool can massively impact how, when, why, and where the base model will utilize it.
And that goes all the way down the stack, to error codes. When you have an error in an API, you just return an error code so the program can do something with it.
In the MCP world, you really want to return an error message that helps guide the underlying agent with what to do next, in order to make the experience better.
You're shoving a lot of context and information back to the base model, instead of just discrete values and data, because you're trying to empower that base agent with whatever it is requesting to use the tool for.
It sounds like a nuanced difference, but in practicality it's quite different from building or utilizing an API. And it's interesting. They are shadows of each other, for sure.
They're definitely related, but they're not the same.
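Matt's error-handling point can be sketched as a side-by-side. The MCP-style result below follows the general shape of MCP tool results (`isError` plus text content), but treat the exact fields, and the `list_open_slots` tool name, as illustrative assumptions rather than any real server's schema:

```python
# Classic API style: a terse, machine-readable error for calling code.
def api_error():
    return {"status": 409, "code": "SLOT_TAKEN"}

# MCP style: the error text is written FOR the calling model and
# suggests a concrete recovery step it can take next.
def mcp_tool_error():
    return {
        "isError": True,
        "content": [{
            "type": "text",
            "text": ("That slot conflicts with an existing meeting. "
                     "Call list_open_slots for the same attendees and "
                     "propose the earliest alternative to the user."),
        }],
    }
```

The first form assumes a programmer wrote branching logic for every code; the second assumes a reasoning model reads the message and decides what to do, which is exactly the design difference Matt is describing.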
Host: Yeah, I wanted to bring up a recent post of yours. You named a couple of tools and said that, as a result, we are becoming babysitters for half-baked tools.
What do you mean? What is the trend that you're seeing? Why are a lot of these tools half-baked? Matt: Yeah. Okay, I love this.
Let's walk through this door and go down this avenue of interrogation, because I am so energized by what's happening in the industry right now. I love experimentation.
Part of why we all get into tech is that we like to make new things, and when you have an explosion of new technology, it's really exciting.
But with that explosion comes chaos: people are trying new things and trying to connect them.
And when you look at LLMs, the ability to call into other tools, and to utilize things outside of the LLM's training data set, is very obviously a need.
Anthropic developed MCP, and I think it's an interesting, elegant first attempt. But it's cumbersome. It is not for my mom, right? It's not for my wife, and she's very technologically literate.
It's not for some people inside of Clockwise who are technologically literate. You have to interconnect these things; you have to know how to deal with them.
The more tools you add, the slower the LLM gets and the more complicated it gets. It's clearly not a pattern that will extend into infinity.
But it is a jump start on experimentation, and so I think we're in this early phase where getting a workflow completed in an AI-based way is often more cumbersome than just sitting down and using a pre-existing piece of software.
And some of the skeptics look at that and go, well, this is kind of all BS; this is just a more complicated way to do things we already know how to do.
But the ability of the base model to intelligently reason and navigate these workflows is transformational. It's just that we haven't gotten there with the interface, and we haven't gotten there with how we put those workflows together.
We haven't gotten there in terms of the accessibility, approachability, and usability of those interfaces for the average user.
Host: My take has always been that it's sort of an evolution, right?
We first saw the base models, and ChatGPT showcased their power, and then a bunch of engineers who got access to base models built sort of v0 versions of everything.
That's been the past two years, and now you really have product thinkers, people who understand the market and the use cases, and they have to build the next generation of products targeting a particular set of use cases.
I think that's where we're going next. People also say we're overinvesting in AI.
That might be true on a longer horizon; you can't really tell yet what is investment and what is overinvestment. But the thing you do know is that we are still early in terms of the apps leveraging AI.
We are still very early in that form, because there is now an opportunity to rethink a lot of fundamental apps. Can you rewrite Outlook with AI being first?
Think about Notion: they rethought what a note-taking tool should be for the internet, because Word was just carried over from the desktop era to the browser era in the same form factor.
Then Notion and some other tools started experimenting with a different form factor for the internet. Now, can you rethink even Notion with AI built in from the start, instead of adding AI on top?
So I think there's a lot more experimentation coming, and it's a really good opportunity for people who have a unique perspective on how a particular tool should be.
Even Obsidian and tools like that emerged from taking a particular vantage point on how my internal knowledge structure should be; that's a specific point of view on how a tool should be.
I think we have not seen that yet with AI. Matt: Yeah, I agree with that. We're definitely in the phase where there's a lot of bolt-on.
There's a lot of looking at current products and asking: now that I have this additional technology, what can I do on top of this current product to augment it? And I think the note-taking example is an interesting one.
I don't know what the future holds, but take Notion, for example: they've added their own AI product, and it's interesting.
It's actually one of the more interesting ones I've seen. But the frequency with which I use Notion's AI features, versus Notion as just a note-taking tool, is one to a hundred. I don't use the AI that much.
So, to extend the note-taking example, the future probably looks more like something that is an omniscient collection of information you can query and talk to, surfacing the right information at the right time. And Notion probably doesn't go away.
One of the things people often lose sight of is that most technology is additive. It's not that common that the old technology disappears.
It does happen, but usually a new technology is added to the ecosystem on top of the existing technologies. When we got smartphones, we didn't get rid of laptops.
So I think there's an evolution still in the future, where a completely new category, type, and feel of software starts to emerge from AI.
Right now we're still very much short of that, outside of the frontier models. The frontier models are genuinely there: ChatGPT, Claude, these sorts of things are genuinely new experiences.
But for the verticalized and specialized experiences, I haven't seen, I don't know about you, I haven't seen that many things that genuinely feel new instead of augmentative.
Host: Yeah, one of my favorite examples is that Zapier- and IFTTT-type products have become 100x more powerful than they were because of AI. And there's still a lot of difference between all these products.
Compare Make.com or Zapier, and then there are new tools: we've had Relay founder Jacob on here, and it's an amazing tool that saves you a lot of time.
Then there's Lindy, which is going after a different set of customers and is similarly a workflow automation tool.
My reasoning has always been: if you're thinking about accomplishing a task from A to B, pre-AI, the human in the loop had to intervene, say, ten times; now you have to intervene one time.
That's dramatically more powerful, and we're underestimating why that form factor is so powerful.
Think about everything we do on a daily basis: you can break it down into workflows, and before, the decision-making intelligence was missing, so we had to create a large number of narrow workflows.
"Take the email and put it in Excel" was one workflow.
But now I can chain that workflow multiple times, because the LLM is so good, 99% of the time, at basic menial tasks, and that makes Zapier, and Zapier-like products such as Lindy or Relay, extremely powerful.
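The "ten interventions down to one" point can be sketched as a workflow where an LLM-style classifier replaces manual triage steps. A keyword rule stands in for the model call here so the sketch is self-contained, and every name is hypothetical:

```python
def classify(email_text):
    """Stand-in for an LLM call; a real workflow tool would route this
    step through a model rather than keyword rules."""
    text = email_text.lower()
    if "invoice" in text:
        return "finance"
    if "bug" in text:
        return "engineering"
    return "needs_human"  # the one remaining human-in-the-loop step

def run_workflow(inbox):
    """Route each email automatically; escalate only the ambiguous ones."""
    routed, escalated = [], []
    for email in inbox:
        label = classify(email)
        (escalated if label == "needs_human" else routed).append((label, email))
    return routed, escalated

routed, escalated = run_workflow([
    "Invoice #123 attached",
    "Bug: scheduler crashes on DST boundary",
    "Can we talk about the offsite?",
])
```

Pre-AI, every branch of the triage would have been a separate hand-built workflow or a manual decision; with a capable classifier in the loop, one workflow covers most cases and the human only handles the residue.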
And I think we haven't even seen them applied to specific categories. Microsoft has something like Copilot Studio, built on the Power Apps ecosystem, which is extremely powerful in the enterprise context.
But if you take all that and put it in a personal context, or a marketing context, it could be a complete game changer.
Just look at how many things you could do for marketing alone in that workflow. And if you target specific use cases: think about what Canva did in the creativity space, right?
They targeted a specific outcome. I'm not opening Adobe to do professional editing.
I just want to create an Instagram post or a LinkedIn post. It's an outcome-based use case, targeting your workflow toward that particular outcome, and that's what these tools can do.
So that's one of my favorite examples of really AI-first tools. There's also a creative tool, I forgot the name, I think it's called Fena or Fiona, something like that, based out of New York.
They're also completely rethinking how you can produce creative assets, and the form factor is very different from what I've seen; it's not just chat-and-generate, right?
I think chat is, in a way, a little bit limiting as a form factor in itself, and workflow-based, outcome-targeting tools are going to find their usage. I've already seen that Lindy is pretty popular.
I think they've scaled significantly, and Relay also scaled to, I think, at least $5 million ARR or something like that in a pretty quick time.
And these businesses are now coming into a category where they don't have to raise money.
Like, after seed. And that's a phenomenon we're seeing with these companies: they're quickly gaining user adoption and enough revenue that they don't actually have to raise huge rounds after, like, a $5 million round, and it becomes a real choice about what type of founder you are and what type of strategy you're trying to implement.
It is a thing: they're really making enough revenue to continue without fundraising. So if you missed the seed round, you probably don't get a chance to invest in that particular type of company.
So that's also another interesting aspect of all this.
Matt: Yeah, I think you hit on a number of things there, and you do start to see the different archetypes that are emerging. Who knows if this will hold true, but if I put on my forward-looking hat, it seems like there are a few different categories of AI companies that are, in my mind, likely to succeed and last.
One of them is the horizontal workflow platform, and you named a few. I know Jacob over at Relay; he's brilliant. Great company, super interesting stuff they're doing, very powerful. Lindy, Zapier.
There are a number of these, and I think those sorts of tools have existed before