Transcript: Offline AI, personal AI assistants, edge computing revolution, compression of neural networks, privacy, compute efficiency, AI infrastructure startups | David Stout, Founder of webAI
In this episode of The Startup Project, Nataraj Sindam talks with David Stout, founder of webAI. They discuss the revolution of offline AI, the technical challenges of compressing neural networks for edge devices, and why the future of AI is decentralized, private, and efficient. David shares his critique of cloud-based AI economics and the race for AGI, offering a compelling alternative vision for how intelligent systems will evolve and create value.
2025-10-13
Guest: I had so many VCs, I'm telling you, so many, tell me: you guys have great technology, you should just focus on one industry. Just go sell, dominate that industry. And we disagreed, for the fundamental reason that we're seeing now: if you're not horizontal as a tech stack, you'll get steamrolled.
Guest: Yeah, so we work in industries where there's highly contextual data that is not on the internet, it's not on Reddit, whether it's working on an airplane engine or working with individual personal health data. It's data that does not exist on the web that needs to be navigated, trained on, and personalized.
Guest: Prompting is horrible for their business models. They need to be proactive; they want to get prompting out of their business. Every question costs them money. It's not the same model as the internet companies, where a user coming to your website is a dollar sign. With OpenAI, when you log in and ask a question, you're cutting into their profit.
Host: Hello everyone. My guest today is David Stout. David is the founder of webAI, an AI infrastructure company enabling advanced AI to run directly on everyday devices instead of the cloud.
He's a lifelong technologist and entrepreneur, one of the most original thinkers I've met in the field of AI, and he has interesting takes on how intelligence will evolve over time.
David started his journey on a farm in Michigan, went on to study AI, and has launched multiple tech ventures.
webAI was founded in 2019 and is pioneering techniques to compress and deploy large AI models on hardware like phones, laptops, even airplanes, to keep your data private and AI accessible offline.
The company's vision is a web of models: millions of specialized AI systems working together across devices, rather than one giant model in a distant data center. webAI's technology is already being used in high-stakes areas.
David's work has earned webAI recognition among the top AI startups of 2025, and webAI was recently valued at some $100 million.
If this is your first time listening to The Startup Project, don't forget to subscribe wherever you're listening to this podcast. It helps us reach a wider audience.
Host: David, welcome to the show.
Guest: Yeah. Thank you so much for having me. Uh, looking forward to, to talking today and, uh, sharing more about Web AI and, and, uh, some of our thoughts on the current market.
Host: So, I think, uh, to set the context a little bit. I think it would be good if you can give a sort of like a brief of your journey into the field of AI, when you've been, you know, started working in the field of AI or, you know, machine learning and, you know, what was the journey like before starting Web AI.
Guest: Yeah, appreciate the question. So, my background: as you mentioned, I grew up on a farm, all of that. AI was very much vapor when I was studying it; machine learning was the actual field of study.
NLP was progressing. There were models like AlexNet and things like that, but it was very early, even in regards to convolutional neural nets.
I think this is important, because my research started in a space that was still yet to be defined and incredibly esoteric. There was no LLM to help you research; there were no AI tools.
This was very much first-principles design. We were looking at ways to bring convolutional networks, like Darknet's YOLO, to devices that were low energy.
At the time, these object detection and computer vision models were some of the most sophisticated models. They were the heaviest in terms of compute, and the most complex as far as size and architecture.
And in my opinion at the time, they showed the most promise of being something truly disruptive. Having visual intelligence in these spaces was going to be incredibly powerful.
So my research started there, and I was able to bring some of the best computer vision, object detection, and masking models to devices like iPhones with Bionic chips.
Host: And this was through your research at Stanford, or through Ghost House Technology?
Guest: Yeah, this was through Ghost House at the time. It was right around the time I dropped out of school and started pursuing this full time, bringing Darknet models to an iPhone.
And this got the attention of a lot of outside investors and technologists because it was the first, uh, of its kind, um, as an example.
There was no TensorFlow Lite; there were no PyTorch-style mobile tools bringing AI frameworks to devices.
We wrote the whole thing from scratch, talking directly to shaders and primitives using the MPS framework available on those devices at the time.
And as in any moonshot, you make unexpected discoveries along the way. We realized that in order to bring these models to devices, we were discovering incredible compression techniques and architectural techniques.
This ultimately led to WebFrame today, which is our own in-house AI library. It's our own framework; we're not using someone else's framework to run or build these models.
And that research mattered.
Those early days mattered because they shaped what we ended up building. We had this desire to run models at the edge because in computer vision specifically, if you didn't have real-time AI processing, the use case was effectively null.
Computer vision in the cloud is not super interesting, and I think that's where we started to really understand the value of AI at the edge.
I'm thankful that vision was our first focus, because it set the table stakes we needed to run these models in the places where the data lived.
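As an aside for readers, the kind of compression work described above can be illustrated with a minimal sketch. This is a generic post-training int8 weight-quantization example, not webAI's actual technique (which isn't described in this conversation):

```python
def quantize_int8(weights):
    """Symmetric quantization: map floats onto integers in [-127, 127],
    storing one float scale per tensor. int8 storage is 4x smaller than float32."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; error per weight is at most scale / 2."""
    return [v * scale for v in q]

weights = [0.8, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Real systems typically quantize per channel or per block and fine-tune afterward to recover accuracy, but the core trade of precision for footprint is the same.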
Host: Were there any applications that came out of that work? I can think of things we can now do on a Google Pixel or an iPhone, like on-device image editing; my intuitive guess is that the new image editing features are the result of those years of research.
I'd like to know: which applications that we see in the wild today are the result of all these efforts?
Guest: Yeah, I think the research of getting models to devices is continuing to play out. It's not done by any means, right? A lot of these examples are still referencing a cloud model; not a lot is happening on the device yet.
But yes, you are seeing basic object detection, and a good example would be Photos on an iPhone. That's running on device.
You're able to search, query basic object states or titles or names, and index things.
There's also modes on the iPhone, uh, in the magnifier that let you detect objects you're looking at and, uh, the audio kit will turn on.
So if you have a vision impairment or anything like that, the object detector in real time will talk to you and tell you what's in front of you.
I think those are examples of some of that early industry work that has now made its way in. But there's still been a tremendous amount of focus on cloud AI.
Um, so a lot of these projects have, you know, sat in these companies where these companies were focused on it pretty aggressively and then they migrated to a new focus.
Um, uh, we're seeing a lot more now in the private sector, uh, with people we're working with where it's multimodal, which we think is the ultimate paradigm.
Um, but, uh, yeah, I think those are a couple examples of of where that research has now, you know, landed and where people are using that kind of technology today.
Host: So you were working on compressing models onto devices, and in 2019 you started webAI. What was the thesis for webAI? This was pre-ChatGPT, before the AI boom; this was more the crypto era, and people working in the field were quietly working in their fields. What was the thesis for starting webAI, and how did it change over time?
Guest: Yeah, so started the company on really three pillars, but I'm a simple thinker when it comes to, um, a business and a strategy. I wanted to know the utility value. So, uh, I thought this cloud arbitrage is not going to work.
This idea of big-data cloud compute was going to flip the whole cost structure upside down, and it was not super promising for AI in regards to individual ownership. It felt like we were going to copy the internet era:
all the mistakes we made there were about to reproduce themselves.
We founded webAI in January of 2020, so at the tail end of 2019, and the thesis was: if we could bring AI to devices and run it privately, in a way that a user or an enterprise owns it, would people pay for that?
Would people want that? If you could serve AI on a device, um, and you could bring world-class intelligence and put it in someone's pocket, is that valuable? Is that worth pursuing? And I I think the simple answer is yes, it's worth pursuing.
There's a lot of value in bringing intelligence to device. And then the second question is, what kind of use cases could you unlock that the alternative would be unable to do?
And, uh, a really simple way to look at this is you have companies with data that is IP centric that they can't share to a foundational model.
You have companies that have data that's regulated in a regulated industry that they can't share to a foundational model. And, uh, you have use cases that require real-time decision making, no latency, um, that that can't go up to the cloud.
And those problems require an AI solution that lives in the environment, that they can directly engage with, and that's state of the art. That's really the problem we were solving.
And how we solve that was by building our own runtime and framework, uh, and libraries. We designed our own peer-to-peer network protocol to communicate these models across heterogeneous devices.
And then lastly, building an application layer on top of that that enables our customers to build AI, um, using hardware that's accessible to them and then federating that AI across their infrastructure.
And, uh, uh, it becomes really promising when you can, you know, create an AI system and then deploy it on machines sitting in your office building.
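Purely as a hypothetical illustration of the "federate AI across your infrastructure" idea, and not webAI's actual protocol (which isn't detailed here), here is how partitioning a model's layers across two simulated devices might look, with one device handing its intermediate activation to the next:

```python
def layer(w, b):
    """A toy 'layer': scalar affine transform followed by ReLU."""
    return lambda x: max(0.0, w * x + b)

class Device:
    """Simulates one edge device that owns a contiguous slice of layers."""
    def __init__(self, name, layers):
        self.name, self.layers = name, layers

    def run(self, x):
        for f in self.layers:
            x = f(x)
        return x

# Partition a 4-layer model across two devices, forming a pipeline.
model = [layer(2.0, 1.0), layer(0.5, 0.0), layer(1.0, -1.0), layer(3.0, 0.0)]
laptop = Device("laptop", model[:2])
phone = Device("phone", model[2:])

# Inference: the laptop computes the first half and ships the
# intermediate activation to the phone, which finishes the pass.
intermediate = laptop.run(4.0)
output = phone.run(intermediate)
```

The output is identical to running the whole model on one machine; only the placement of compute changes, which is the property that makes heterogeneous clusters of office hardware usable.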
Host: I think the general audience now thinks of AI in terms of these large post-ChatGPT models. But before the whole ChatGPT era, it was all specialized models.
You had recommendation systems, you had image recognition models, all these specialized models built for specialized purposes.
Now, when you identified these three factors and decided this approach makes sense, were you optimizing for specialized kinds of models based on the use case you were targeting?
What was your initial approach in terms of prioritizing and selling this to a customer? There's a whole wide range of models you could pick from, but you have to pick and start somewhere.
So which types of models were you picking? Did you focus on particular-size models, or models targeting a specific use case?
Because from my perspective, it's a very wide field when you're starting up.
Guest: Yeah, it is very wide. Actually, our strategy was the inverse, right? Where we said, we need to be as horizontal as possible.
We need to own the tooling, we need to own the methodologies, the frameworks, the communications, um, and we don't need to own the model.
Um, we want to be the pickaxe and shovel of an industry, rather than be, you know, the best, um, you know, medical model company and that's all we do.
The reason for that, and I think it's actually played out like we thought it would: I had so many VCs, I'm telling you, so many, tell me, you guys have great technology, you should just focus on one industry.
Just go sell, dominate that industry. And we disagreed, for the fundamental reason that we're seeing now: if you're not horizontal as a tech stack, you'll get steamrolled.
So if you're a point solution next to these incredibly smart, powerful foundational model companies, you're going to get steamrolled, because you have no protection.
If you're if you're building an app that's, you know, focused on coding, I still think you're at great risk of just getting steamrolled by a foundational model company.
I just don't see how those companies have long-term staying power when the model they rely so heavily on is not theirs.
And, um, what we decided to focus on was the tools that made the models great, the way to retool these models so they can run anywhere, the connective tissue that lets the model talk to another model, to talk to a device, to talk to a person, and we will enable our customers, uh, to interact with their data with these models and make them better.
And that's our staying power. I think that's proven to be true, and I'm really glad we did not focus on a point-type model.
And and as far as models we support today, we support everything from vision models like object detectors to multimodal models, um, that, you know, look at image and text and all across the ecosystem.
Uh, we, we support models from Hugging Face as well as internal models. So, we have a, we have a broad spectrum of, you know, models we support, um, but with the idea that the platform's designed to be horizontal and not a point solution.
Host: So today, what types of customers use the product you sell? Give me a couple of use cases, or customer names you can talk about, just to crystallize where webAI plays a role.
Guest: Yeah, so we work in industries where there's highly contextual data that is not on the internet, it's not on Reddit, whether it's working on an airplane engine or working with individual personal health data.
It's data that does not exist on the web, that needs to be navigated, trained on, and personalized for each of these users to drive real results.
We work with the Oura Ring, if you're familiar with them. There's more to be announced on that soon, when it becomes active in the product; we're working on some pretty great programs there.
And additionally, we're working with major airline companies and aircraft manufacturers to improve maintenance as well as assembly.
And then, uh, outside of that, working with, uh, the public sector, uh, on all sorts of use cases that require AI to work anywhere, not just in a, you know, data center stateside. Um, it needs to work everywhere.
I think the ubiquitous connective tissue across all of our customers is that they have data no one else has, and customers who need highly personalized experiences that are contextual to them.
And they operate in privacy-critical, mission-critical environments where data cannot go somewhere else and cannot leak. It needs to be highly accurate and highly performant, and it needs to operate at the edge.
Host: Let's talk a little bit about hardware. You're running things at the edge, including environments where data is heavily protected and can't be shared.
Sometimes that could be in data centers, sometimes on mobile devices, right? Tell me your perspective: which hardware are you optimizing webAI to work on?
Guest: Yeah, we're working a lot on Apple hardware, so Apple Silicon M-series chips, while also supporting Linux machines, x86, and other processor types.
We want to take our models and run them in the most effective profiles. We see a lot of promise in Arm processors and in FPGAs. We're really focused on delivering the models on the best hardware for the use case.
Uh, and that hardware, you know, can vary, um, in those applications, but generally really focused on, um, uh, Apple Silicon because it's, it's probably the best edge hardware right now for AI. Um, it's incredibly performant.
Specifically, I'm talking about inference. Training is a different conversation; Apple Silicon is not quite there for training yet, but it will be at some point.
But, uh, as far as inference and and things like that, absolutely.
Host: So webAI's use cases are mostly around inference, because you're deploying a trained model at the edge?
Guest: Well, we're helping people train too. So people are building models in our platform. They're creating custom models and shipping them.
Host: And that's happening in webAI's platform. In that case, is the underlying hardware in the cloud, or in a private data center?
Guest: It's happening at the edge. It's happening on everything from laptops, to clusters made of devices, to TPUs if they wanted, right? It can vary. But yes, it's happening on-prem, on device.
Host: One application I haven't seen happen yet is a lightweight personal model that lives on my machine, be it my work machine or my mobile.
Think of GPT-1 or GPT-2: maybe now you could recreate a GPT-2-size model, but with a lot more capability built in from all the learnings from GPT-5. It could be Llama or whatever series.
No local company has given me an application where I can just run a model locally; think of it like a Windows OS that I just install and go.
Is that model not possible, or why haven't we seen that kind of experiment from any company?
Guest: I think we haven't seen it because the models that are easy to ship to device are bad, right? People have become accustomed to a certain performance and intelligence capability from a model.
From an IQ perspective, say, they're expecting a certain level of intelligence. So that would be one reason. But webAI, actually, in October, is releasing what you're describing.
So we're, we're releasing, um, our first ever B2C solution, um, which is that, right? Download it, run it on your machine, run it on your phone.
Why it's taken us time is that we had to make some architecture changes, so that we'd have a great model that's performant, that's not disappointing, and that runs and lives on your phone. That's a hard problem.
It's a hard thing to solve. You have to have a runtime, and you have to have a model architecture that works and is performant. There are a lot of pieces that fall in there.
And I think it's always been easier to just have the cloud do it. A lot of companies are hitting the easy button on this one and saying, I'm just going to use cloud. Cloud is so easy for AI.
The scaling works as far as performance. As far as cost, it definitely doesn't work. But from a use-case perspective, from a functioning perspective, it absolutely works.
It's just astronomically expensive and inefficient. So those are all the reasons I think it's been overlooked: it's a hard problem to solve.
And, uh, rather than solving that problem, I think the the AI companies that are popular today are really focused on trying to solve the super intelligence problem, um, uh, rather than solve like the actual unit economics functioning problem, monetization problem, uh, and privacy problem.
So that's my two cents, but long story short, companies will get there, and webAI is: we're about to release this soon. We're really excited about it, and I'm proud of the product.
These tools will be valuable for users, because now they have something private that they own, that lives on their device, and that's personalized and ultimately safe, which I think is the best piece about it.
It's not going to some data center to be stored, manipulated, and used by someone else.
Host: So the entire interface, the entire interaction, is on device, and I don't need internet for this new app to function? And I'm assuming it's going to be on mobile phones.
Guest: It will be on mobile and, uh, laptop.
Host: So, you mentioned Apple. In the whole post-ChatGPT era, if I look at all the big tech companies, Apple has been relatively picky about what they work on.
Google I think was a little underestimated, but in the last couple of months they've caught up and are doing interesting experiments as well. And then we have the foundation model companies: OpenAI,
Anthropic. Perplexity I would put down as an application company rather than a foundation model company. So what are your thoughts on the model spending that's going on, the investment in so many data centers?
It's all being rationalized by one argument: that this will lead to something that looks like AGI in some form. Not necessarily the AGI popular culture has always heard about, but some version of it, such that any amount of investment makes sense.
What are your thoughts, in general, on the trajectory of these foundation model companies and AGI?
Guest: Yeah. Let's be fairly pragmatic about what we've seen and what's happening today, the current state of the union with AI.
I've had really interesting conversations with some of the leaders in the space, from the top foundational model firms, the co-founders, and there's this common consensus that it will keep scaling.
Um, and this scaling is going to keep happening, models will get better and we're going to steamroll everyone because we're going to have the best model.
My problem with that is that the empirical evidence we have right now doesn't say that. Pre-training has, in all senses, pretty much been proven to be flattening, right?
So GPT-5 is an MoE, it's a model router. It's a lot of post-model work, right? These thinking models are all post-training.
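For listeners unfamiliar with the term, a mixture-of-experts (MoE) model routes each input to a few specialist subnetworks instead of running one monolithic network. The toy sketch below shows top-k gating over scalar "experts"; it illustrates only the routing idea and says nothing about GPT-5's actual internals:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, gate_scores, experts, top_k=2):
    """Route input x to the top_k highest-scoring experts and combine
    their outputs, weighted by renormalized gate probabilities."""
    probs = softmax(gate_scores)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    return sum(probs[i] / norm * experts[i](x) for i in top)

# Toy experts: each is just a scalar function here.
experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x]
y = moe_forward(3.0, gate_scores=[2.0, 1.0, -1.0], experts=experts, top_k=2)
```

The appeal is that only top_k experts run per input, so total parameter count can grow far faster than per-token compute.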
Host: Just to make sure this makes sense for the audience: pre-training is actually the core training, even though we call it pre-training. That's the crux of the GPTs you're using every day.
Guest: Yeah, that's the part where I take a big corpus of data and random seeds and train a model that's actually learning. That's the pre-training process, and then post-training is the refinement.
Host: There are three factors there: one is data, another is the number of GPUs you can use, and the third is architecture. And on data, we've capped out; there's no more data to suck up from the internet. Well, they're trying to make more.
Guest: It's perpetual motion machines, I guess. They're using the AI they made to make more data, which isn't how data works.
So you have this pre-training piece, which pre-training fundamentally has to continue for the narrative that we're talking about to be true.
And this is why I'm trying to get to the first principles here is if we're building these data centers, we're building them for pre-training, correct? Like it's not just inference.
We need new models and new architectures if we're going to make AGI or ASI. But in parallel, all of the leading foundational model companies are not really showing great gains in pre-training.
Most of the gains we're seeing are post-training. In the last several iterations of these models,
and I'm talking about the last two or three versions, pre-training has played a minimal role in the advancement.
The majority of the advancement is happening post-training, which would indicate that we are flatlining on this idea that training continues to scale and these models continue to get better.
I've heard arguments that say you can't base it on one specific model: one model not showing great results in pre-training doesn't mean the scaling laws aren't true.
The problem with that is that you're now picking and choosing data points to say your scaling law is true. So I think we're tremendously overbuilding.
I think, one, we have an energy problem if this is true. We don't have enough power, we don't have enough water. All of these things become a massive issue.
Um, and we need to look at this, uh, really pragmatically.
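To put numbers on the "flattening" argument: the widely cited Chinchilla analysis (Hoffmann et al., 2022) fits pre-training loss as an irreducible floor plus power-law terms in parameter count $N$ and training tokens $D$. These are published figures from outside work, quoted here for context, not webAI's analysis:

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69,\; A \approx 406.4,\; B \approx 410.7,\;
\alpha \approx 0.34,\; \beta \approx 0.28
```

Because the power-law terms decay toward the constant $E$, each additional order of magnitude of parameters or data buys a smaller loss reduction, which is one concrete way to frame the diminishing returns being described.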
Guest: Like I said earlier, we need to ask: what outcome do we want?
That's the one we need to build towards, rather than taking the first example that has proven to work and chasing it down. It's so early.
The Transformer as an architecture, I'm not a big believer in it long term. So many things are variable today.
So to build data centers and build all these things out, when we don't even know the architecture, to be quite honest?
For those in the audience who don't know, the Transformer is the core architectural layer in the models creating generative AI. Do we really know that that architecture scales?
Let's say compute scales; does the architecture scale? There are so many questions there.
So for me, what makes the most sense, and how we think about what webAI is building in our vision, is this idea that civilization is the only example of superintelligence. Civilization is capable of doing incredible things.
You have groups of people that build things. That's what I'm talking about: different contexts, different talents, different abilities, superintelligent outcomes. I, as an individual, have never done anything superintelligent.
We don't have any example of singular superintelligence. What I'd say is much more likely is that we see superintelligence come out of millions and billions of contextual models living across the world, like a compute dust
that's everywhere, ubiquitous.
Um, I actually think that statement is far less risky than the one we're talking about in parallel, which is, I'm going to figure out a way to train this model that's going to solve everything and it's going to be AGI.
I think that's a much riskier statement from a business perspective and a technology perspective. The civilization approach is not only theoretically sound; nature and science have demonstrated it to be true.
Um, and we would be copying an architecture that works. So I I think the outcome that we're chasing is adjacent to what the industry is chasing and I think that's a good thing.
But I don't see the current approach of a centralized model working at scale to get to AGI. There's going to have to be an audible; something is going to change. It has to change.
And I think the money is fairly circular right now, and that's a big part of this.
The moat itself is how expensive all of this is, and I think there's a real fear of that not staying true, because the companies that have raised all that money and built that infrastructure become much less valuable if that paradigm changes.
Host: Thinking about the intelligence point you mentioned, you've surely heard of the phenomenon called emergent behavior. An individual ant
is not that intelligent, but ants collectively build colonies, and those structures are pretty intelligent if you look at them, right?
It's an emergent phenomenon. Even cities form that way: much more organically, from settlements and different neighborhoods, and that creates the structure called a city.
History is full of examples where a city designed from the top down is most likely to fail.
So what you're describing, a collection of models out of which something that looks like intelligence emerges, is actually much more likely, and it's a phenomenon we see in nature a lot.
And it's the same with the cloud: no one person building Azure or Google Cloud is individually that intelligent, but Google Cloud can do a lot more things, or Azure can do a lot more things.
So, to play devil's advocate on your argument, one of the ways I try to justify what OpenAI or Anthropic is doing with their data center investments is this: the only way the foundation model companies make sense as businesses is if they become big tech companies.
That means you have to have an infrastructure play.
You have to own a worldwide collection of data centers, because you'll create new applications using AI, which might be the next versions of GCP, the next Facebooks, the next version of whatever viral product our society cannot live without, right?
That's the way I see what OpenAI is doing: basically becoming one of the top three companies. And that means you not only sell the foundation models, but also all the applications built on top of them.
All the possible applications that can be built, they will try building, and you can already see that: they're trying to launch a LinkedIn alternative, and I think they have something like seven hardware products collectively planned to launch in two years, right?
So if you see this as a whole, it looks like an Amazon or a Microsoft Azure being created.
That's the ambition of what they're creating, and it doesn't necessarily have to do with AGI; it's just the pure ambition of being the top company in the world.
That's one of the ways I look at it.
Guest: I agree. I think they're an internet company; I just don't think they're an AI company. They're getting AI economics on an internet-company strategy. That, I think, is the fundamental delta here. I agree they're building the mainframes.
I think they're doing all that, but they're saying the strategy is that AGI is going to solve their monetization problem and their unit economics problem, while they're trying to launch a social media company.
Host: I was watching your talk, I think it was in Singapore, and you had this very interesting line where you said prompts don't pay the bills.
Can you elaborate on that? You have this idea that more prompting actually isn't good.
Guest: No, you don't want it. I honestly think the release OpenAI just had today or yesterday, this summary thing they're doing, is an example of my talk addressing this before it happened, which is that these companies have created bad habits. Prompt