Podcast Transcript

Transcript: Coursera's Engineer No. 1 on Building AI Agents for Knowledge Workers

In this episode of The Startup Project, host Nataraj Sindam interviews Jiquan Ngiam, Founder of Lutra AI and Coursera's first engineer. They discuss the history and future of AI, from the importance of data and scale in deep learning's rise to the current challenges of building AI agents for knowledge workers. Jiquan shares insights from his time at Coursera and Google Brain and explains his vision for augmenting human productivity with AI.

2024-09-24

Guest: What mattered at the end of the day was, thing number one, getting the architecture right, which is, you know, Transformers. So it's a good architecture, and I'm sure there are a few more that we'll come up with.

But more importantly, it turns out that data, compute, and scale were way more important than that.

And so what's interesting, for example, is that we don't talk about this much these days, but AlexNet in 2012 was the major breakthrough for deep learning and neural networks. And without that breakthrough, I don't think we'd be where we are.

Host: What was your experience working with Andrew?

Guest: I started my PhD program at Stanford in 2009, in the Stanford ML group, and Andrew was my advisor.

The idea that these models could be trained bigger. And I think, as a student and then subsequently at Coursera working with him and the founding team, Andrew's superpowers are really just seeing that future that's coming and then taking big bets, right?

Narrator: Nataraj is the host of the Startup Project podcast. He is an investor at Incisive VC, an angel investor, and a product manager.

All opinions expressed by Nataraj and podcast guests are solely their own opinions and do not reflect the opinions of the firms they work with. This podcast is for informational purposes only and should not be relied upon for investment decisions.

Nataraj and guests may maintain positions in the companies or securities discussed on this podcast. To learn more, visit the startupproject.io.

Host: Hey, welcome to Startup Project, uh, and thanks for coming on the show.

Guest: Thank you. Thanks, Nataraj. Glad to be here with you.

Host: Uh, so I had to have you on the podcast. I mean, your background is very impressive. You were one of the first employees and director of engineering at Coursera. You worked at Google on the famous Google Brain team.

And you started, and I don't know if you completed, your PhD under Andrew Ng. For listeners who don't know him, Andrew Ng is a widely respected professor who pioneered Coursera and DeepLearning.AI and created one of the most famous machine learning courses, which I took, probably in 2017 or '18. That was around when I decided, as an engineer, at that point I was mostly a software engineer, that I wanted to know, hey, what's happening in machine learning?

What is this cutting edge? I had never formally studied machine learning, but the go-to course was always Andrew Ng's first course, and then he created a whole set of courses.

That was like the source of truth for all of us who didn't formally study machine learning but wanted to study it as engineers, or even wanted to pivot to data science and do more things around that.

Um, so yeah, I'm really glad to have you on the show. Um, but I wanted to ask about, you know, uh, what was your experience working with Andrew?

Guest: Oh boy, that's a good question, one that goes way back. So, a bit of history here. I started my PhD program at Stanford in 2009, in the Stanford ML group, and Andrew was my advisor.

I was working alongside other people who are amazing in the field. I think Andrej was there around the same time as me, and Richard Socher, and people like Andrew Maas. There was a really good set of people there, and I think we were all very early into the deep learning space back then.

And 2009 was a super interesting time. If you turn back time and look at what we were doing, we were trying to train models on images to learn Gabor filters, to see little edges, right?

We were just at the beginning of understanding how this technology would work and could develop. But very clearly there was a sense that scale is important, data is important, and bigger models are important.

So working with Andrew was really amazing because what's really cool about him is that he saw that potential. He saw that future way before it came to bear today, right?

Back in 2009, he was one of the first people in the entire ML community to really push hard on the idea of neural networks, the idea that these models could be trained bigger.

And I think, as a student and then subsequently at Coursera working with him on the founding team, Andrew's superpower is really just seeing that future that's coming and then taking big bets, right?

So it's that big bet on neural networks in 2009, and it was not an obvious bet back then. My friends were asking me, why was I doing this versus graphical models?

Neural networks were the sideshow in some ways. But we were all excited about the potential, and he really pushed us there.

And working with him was really interesting in that he would push you to do more.

I fondly recall projects where you start off and you go, okay, now you've got to go collect a big data set on your own.

So as a single grad student you figure it out: set up my cameras, set up my data collection facilities, record, collect my own data, write my own experiments. Some of these things are kind of hard from the outside, you know, doing any of these machine learning projects from scratch.

You just realize it takes so much effort, because you're going from ground zero, getting data, all the way to training models and evaluating them.

And Andrew was one person who would push people to strive to achieve more and get more done. Partly from working with him, that subsequently became Coursera, where in 2011 we put out the first online courses in machine learning.

I actually helped design a lot of the programming assignments back then, in Octave, and got the team together to build the platform.

And again, building a new way to teach online was another big, massive change.

In 2011 that was super exciting for me, because the ability to give people new superpowers through education was just really heartwarming and really mission-oriented, which resonated with me.

It's the same with Andrew as well. In everything that he does, there is this aspect of him where he wants to do things for the good of the world too. And so you see that in him as well.

Host: Was there any theory or thesis when you guys started Coursera that it would be as big as it is now? Or even with DeepLearning.AI, was there a sense that it could become the go-to platform for learning all this?

Guest: Yeah, so at the time we had our sights on democratizing access to education, right? And if you turn back time to the 2011, 2012 days, there were not many options to learn online. There were very few.

In fact, the quality of materials online was very limited. I think MIT OCW, if you remember those times, was probably the best alternative then. There were no assignments that you could do and get graded and get feedback on.

None of that existed. The only way to learn cutting-edge things was to go to a college campus.

And so when we started it back then, number one, there was a clear sense in our minds that this could be a lot bigger than we thought it would be. Number two, the demand was very high, right?

Because what happened was, when we decided to do this, in a week or so we put together a landing page. It said, here's the course syllabus, here's a two-minute video of Andrew speaking.

If you're interested, just sign up. And we just put it out there. Stanford, I think, did an announcement about it, which certainly helped. And then we got tens of thousands of people signing up so quickly, in just a matter of weeks.

And so it just spoke to the latent demand for this across the world. And then subsequently we got rolling and did more.

It became really apparent to us that the type of material we could make accessible, like machine learning, you couldn't actually learn at many places in 2011, 2012.

If you were to go to a, you know, another college in a different country or different city, they may not have the professors that could even teach the material. And if they did, it was someone who was still trying to grasp and learn it too, right?

Alongside you. And so it was really interesting to see that the demand for this was really big because of the lack of access to it, right? And we know that textbooks only go so far.

You need not just the materials, the educator teaching, and the assessment, but also the community of students.

And I think it all really came together in the idea of a MOOC, where it's not just materials online to learn from, but people coming together to learn. The forums had lots of activity, a lot of communication.

It was really nice to see all that come together.

Host: Yeah. I think it was one of the first serious online courses I took, where you stick to a schedule.

I remember it as the sort of course you can pivot your career with, or seriously impact your career with.

And it's also an example of how the internet generally democratizes a lot of things: for any given subject, you only need one good teacher or one good course, and everyone can use it, right?

I think Coursera and DeepLearning.AI encapsulate that phenomenon for me, at least personally. Because if someone is great at teaching physics, right?

Pretty much every child in the world should be able to learn from the best physics teacher. The internet enables that sort of thing.

So someone sitting in India could learn from DeepLearning.AI just like someone in the US, without the hassle of getting a visa, or getting admitted to a college, and all these barriers that exist. You can just get past all of that and just sign up and learn.

Which is a powerful phenomenon, I think. So you're doing all these amazing things, and then at some point you no longer work with Andrew and you move to Google Brain.

Uh, so what was your uh experience uh there? Like, what were you building at Google?

Guest: Oh, that's a good question. So I actually spent seven, eight years of my life at Coursera helping build that out.

Towards the end of that, I was actually very interested in getting back into the field. At Coursera we were doing a lot of technology, serving videos, quizzes, and everything, but I was always interested in AI, right? Machine learning.

And around that time was when Transformers first came out, in the 2017, '18 timeframe. Seeing those papers, seeing the excitement around an architecture that could generalize and work across a lot of modalities, was super exciting to me, right?

So after Coursera I was trying to see where else I could get back into the field. I went to Google Brain, worked on the Brain team, which had a bunch of amazing people.

And then I started to get into the question of, how do we apply AI to more places? So I worked on more of the applied side, where I actually worked with the Waymo team quite a bit.

So a lot of papers with the Waymo group on point cloud recognition, 3D object recognition, figuring out how we can do self-driving cars better, all the way from perception to reasoning and planning.

So a bit of projects with them across the entire stack. That was a lot of the time spent at Google Brain.

And alongside that, I was just super excited to see the scaling laws start to come into place, right?

The teams around me started to build out really large models to serve all these use cases. It was, I think, an amazing moment to see.

Because I had worked in AI, machine learning, for such a long time. We went from training very simple linear models to, you know, small neural networks, two-layer networks.

And suddenly we were starting to see these models be able to generate and produce text and images and everything.

And then on my projects, we were starting to produce models that could do behavior prediction in different ways for self-driving cars and all of that.

Host: So, I remember this distinct feeling, and I forget if it was in the course itself or somewhere else, when you first do a simple project of recognizing letters, right? I think that was one of the first exercises I remember doing.

You feed in a bunch of images of English letters and try to create a model that recognizes just those letters.

And I remember thinking that this is similar to how a human brain sort of works, but it's not exactly how it works.

I'm curious, did you have that sort of comparison going on in your mind? How do you think about these neural networks, or now large language models, when you compare them with the human brain?

Guest: Oh yeah, certainly. I think a lot of the design and concepts of neural networks were brain-inspired, right?

But I think what's interesting, for myself, is that when we were thinking about the ways to approach this, it turns out that maybe the lesson we learned is this: what mattered at the end of the day was, thing number one, getting the architecture right, which is, you know, Transformers. It's a good architecture, and I'm sure there are a few more that we'll come up with. But more importantly, it turns out that data, compute, and scale were way more important than that.

And so what's interesting, for example, is that we don't talk about this much these days, but AlexNet in 2012 was the major breakthrough for deep learning and neural networks. And without that breakthrough, I don't think we'd be where we are today.

Host: So can you explain to the audience what that breakthrough was?

Guest: Yep. So back in the day, in the 2011, 2012 days, there was this ImageNet challenge, where the task was to classify an image into 1,000 potential categories, right? A thousand categories.

Was it a dog of this breed? Is it a cat? Is it an airplane? You have all of that. A very hard task. Up to then, the state-of-the-art methods could barely move the needle; improvements were single-digit percentages. A really hard task to do.

And then there was one year in which Alex Krizhevsky from the University of Toronto, with Geoffrey Hinton and Ilya Sutskever, came together, and what they did was fascinating.

Host: That's the Ilya who started SSI, which just announced a billion dollars in funding.

Guest: Yeah, exactly.

So this is a way-back story, but they took convolutional neural networks, which is a concept that had been around; Yann LeCun came up with it way back in the 90s, I think. And they just scaled it up, right?

They took that model and said, make the filters bigger, add more layers, put in a lot more data, and train it for longer. Just make it bigger, right?

And they did a lot of work around compute, because making it bigger was not easy from a training perspective. You've got to wait a long time for it to train.

So they also took two GPUs, and Alex was figuring out how to write custom kernels for them to train such a model.

And then the breakthrough happened: that approach of taking an existing, known method, convolutional neural networks, and scaling it up just blew everything out of the water.

It blew past all the metrics, all the previous performance, and became the best approach.

And subsequently we had all these other approaches, like residual networks, ResNets, and Xception nets and all those things. They were variations and modifications on this base idea.

And that really made the whole deep learning field take off, in the image processing, image understanding regime.

The next takeoff, I think, happened in language, where we had RNNs, then long short-term memory RNNs (LSTMs), and after that Transformers came from that. That's the next takeoff there.

But in both cases, what happened was we found an architecture that worked, convolutional neural networks or Transformers. And then the secret was, how do we scale it up?

How do we pour as much compute and data into it as possible? You just keep pressing that button and scaling it up, and you get better and better results, right?
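To make that recipe concrete, here is a minimal sketch in modern PyTorch, assuming a hypothetical toy network with a single width/depth knob; it illustrates the "same architecture, just bigger" idea rather than AlexNet's actual configuration.

```python
# Minimal sketch of the "find an architecture, then just scale it up" recipe.
# Hypothetical toy convnet; the width/depth knobs are the only thing that changes between runs.
import torch.nn as nn

def make_convnet(width: int = 64, depth: int = 4, num_classes: int = 1000) -> nn.Sequential:
    """Stack `depth` conv blocks, each `width` channels wide, then classify."""
    layers, in_ch = [], 3
    for _ in range(depth):
        layers += [nn.Conv2d(in_ch, width, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        in_ch = width
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, num_classes)]
    return nn.Sequential(*layers)

small = make_convnet(width=64, depth=4)    # the existing, known approach
big = make_convnet(width=256, depth=12)    # same idea, just bigger: more filters per layer, more layers
```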

So to me, while many things were brain-inspired to begin with, what was more interesting was that the ability to scale up compute mattered way more.

So it turns out the methods that fit the compute paradigm better win. If you implement something that fits the CUDA or TPU architecture well, then you can scale it up better.

So methods that were amenable to scaling, shaped partly by how we design our hardware, those work better, right?

That's actually fascinating, to me at least, to see that that was the real reason why these things are working.

Host: Yeah. Even on the point of scale, right?

From GPT-2, when no one was looking and only the nerds were focused on GPT-1 and GPT-2, to suddenly GPT-3, and then when ChatGPT came out, everyone started seeing the scale, right?

And with every new version of a model, from Llama or any other company, the improvements were pretty significant. How are you thinking about when the next scale shift comes?

Is GPT-5 going to be a scale shift? I know the parameter counts are now at, what, a trillion or more, right? So do you see a significant change coming, or is it going to be more like going from 4 to 4.5? What would that look like?

Do you have any idea?

Guest: It's hard to tell. But I can give my unpopular opinion, maybe.

Host: Yeah, go ahead.

Guest: I think my opinion is that GPT-4 is definitely in the trillions of parameters, right?

But to scale it up more, you need not just more compute, you also need more data, right?

And data is the one that I think is going to limit us, because we've kind of exhausted lots of the data sets for training these models, especially on the language side, right?

Now, we could generate more synthetic data, but synthetic data is only going to be as good as whatever generator produced it, right? So you're going to fit the generator, not the real world.

And if you think about that, and also that you need ever more compute, you start to get to a point of diminishing returns, right?

So while most people think that we are maybe on an exponential curve of progress, my hunch, more and more, is that we're on more of a sigmoid curve, if that makes sense.

Where we are progressing, but from here on it's like, oh man, the next incremental bit of progress is harder and harder. Imagine if that's it for now, because we've run out of data, right?

Now, you can do self-play and that kind of stuff, but again, that's just even more generated data. So I think we might be at the part of the curve where incremental quality improvements are going to be harder moving forward.

I mean, it's been more than a year since GPT-4 came out. Honestly, we haven't seen a model that is significantly better than what came out in March last year, right? Publicly, at least.

What I can be confident in, on the other hand, is that the cost will go down and the speed will get better. I'm very confident in that.

In fact, I think we are finding out more and more ways to take these big models and distill them down into smaller models that are very effective to serve, right?

So the GPT-4 series of models, I think, are all derivations of the original GPT-4 model, right? We're just finding ways to make it smaller and smaller.

GPT-4o mini recently, for example, serving really high-quality models at a very low cost and very high speed. So I think that's what's going to happen there, and that's going to be really interesting to see, right?

Because there will be this effect where the goal is to train a model as big as possible, but you don't want to serve it. You only use it to generate data to train a very small model, and then you serve that, right? So that's going to be, I think, the game moving forward.
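A minimal sketch of that train-big, serve-small idea, with `teacher_generate` and `student_fine_tune` as placeholder interfaces rather than any real library API:

```python
# Sketch: use the big model only offline, to label data; the small model is what you serve.
from typing import Callable, List, Tuple

def distill(teacher_generate: Callable[[str], str],
            student_fine_tune: Callable[[List[Tuple[str, str]]], None],
            prompts: List[str]) -> None:
    # Expensive step, run once offline: the big "teacher" model labels the prompts.
    synthetic_data = [(p, teacher_generate(p)) for p in prompts]
    # Cheap step: the small "student" model trains on the teacher's outputs
    # and is the only model deployed to users.
    student_fine_tune(synthetic_data)
```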

Host: Yeah. I mean, I sort of tend to agree with you. And to your point about not having enough data, right?

I also think that's a limiting factor in some of the use cases we want to unlock. For example, everyone wants to build a personal robot these days. Tesla has an initiative, and you've seen some other new companies.

I forget the names, like 1X or Figure. There are companies building general robots which do housework or other work. And I think the limiting factor there is data.

There is no data of me folding clothes, continuously, of different types and kinds, in different households, right?

So you need almost a Scale AI-like company that is generating this type of data and labeling it. And you can stretch this example a little further: there's no data of me doing work in my backyard.

So a robot cannot do that work or learn how a human does it, if you want to replicate human-like behavior. You can extend this beyond that, like picking apples on a farm.

Of course, that one we have solved in a different way, without doing it the way a human does on the same farm. But if you want to really replicate a human-like activity, we don't have that data.

And I have this ridiculous idea: just like how we have cameras for monitoring police behavior, police these days wear a camera on them everywhere, right, so that everything can be investigated.

If you really want to get these use cases going, you need to offer people money to put a camera on themselves and record these mundane tasks while they're doing them.

So for example, you put a camera on a McDonald's worker and record how they're flipping burgers. You put that across hundreds of McDonald's workers, and you've got perfect data that you can learn a model from.

So this is a little bit of an out-there idea I have for unlocking some of these use cases where we don't really have data. And that's what will limit these things. And then there's synthetic generation of this data.

I don't know how effective that is yet, but that's one out-there idea I was thinking about.

Guest: It's interesting you mention that, because when I think about data and what you're learning, I actually think of it as not just the data but also, what is the signal you're getting from the data, right?

So for example, we know that data from code, software data, is really powerful because code contains lots of logic. It contains reasoning.

Code is almost like formal reasoning, right, an expression of what you want to do. And so there's a lot of power in software code that we can learn from.

But if you take data like these workers flipping burgers, what are you learning there? It's not reasoning. You don't learn reasoning, but you're going to learn about physics.

You're going to learn about the burger, about when it's cooked and when it's not done, and how that's reflected in the image and how you see it.

You're going to learn about the actions they take: now they flip the burger, and after that, how are they going to put it on the bun? What are they going to do? Sequential activities.

Much less reasoning, more physical-world activities, right? So right now we see a lot of new image models and video models coming out.

And I think what we're starting to see is that we've done a lot of work to learn reasoning models. Now, can we learn about the physical world? Can we model the physical world implicitly in these models?

Through pixel data, right? And maybe what's interesting here is the open question of, can we connect all the pieces together?

The reasoning world that we learned, and the physical world that we learned through the visual data.

Does it allow us to take the reasoning that we learned and apply it to the physical world, right?

There were some projects at Google around the time I left, like SayCan, where they were trying to connect the dots to make that happen.

And so it will be interesting to see: can we take all the data from language models and make robotics go much faster?

Now, I think what happens here is that the limiting factor would be that making reasoning even better than it is now is going to be hard.

But reasoning would get better at reasoning about the physical world if it had physical-world information from videos and so on.

And the physical-world information, like flipping burgers, will be informed by reasoning data: if there's a lot of text about physics online, the model knows what a cat is and what a table is, it knows that objects don't pass through objects, and that might influence how your model decides to interact with the world.

So it's interesting to see the compositional knowledge come together, and maybe there's something there that unlocks things.

Host: Yeah, I definitely agree on that.

I think there needs to be more innovation on user interactions, on how you guide the model through and make it more productive. I think that's what a lot of the innovation in Cursor can be attributed to.

Guest: Exactly.

Host: And Cursor is even more powerful if you know exactly what stack you use already.

Like if you have a repeatable stack that you're already good at, or at least know 20, 30% of the ins and outs of, then it's a game changer for you.

You can replicate that stack, because everyone has a full stack they usually work on: they have a backend, they have a frontend. You can just churn out new MVPs now with very limited effort.

And you can do that if you're an iOS developer with iOS apps, or if you're an Android developer with Android apps.

Even with apps on Shopify or WordPress, whose cores are already very common knowledge, and that makes it even more powerful. And Python scripts, obviously, because the models have lots of Python data.

So that makes it even more powerful if you're just doing Python scripts. So yeah, it's a great time to write some code, I guess.

Guest: No, it's great.

I mean, it's an interesting time to be an engineer, because you can be very productive with very little. Yeah, I think so.

And there was one time, with ChatGPT or GPT-4, maybe a long time ago, where it did introduce a bug in my code. The challenge is that when that happens, it can be very hard to hunt down.

Because when you write everything yourself, you kind of grok everything, you know where everything is.

But this one time the model made a very subtle error in some comparison, and it took me a long time to hunt it down. So it will be interesting to see what happens in the future.

Maybe everything will be more accurate and it will be fine, but you might also end up in situations where very subtle bugs become very hard to find and fix.

Host: Yeah, yeah. You need to find the abstraction level at which you want the model to write code.

Maybe a small function: write a unit test for the function, generate it, and then verify it, instead of going wholesale and generating a whole MVP.

Guest: Yeah. That's right. That's right.
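A small example of that workflow in Python; the `clamp` function below stands in for something a model generated, and the hand-written test is what decides whether to keep it:

```python
# Example workflow: accept a model-generated function only if your own unit test passes.
import unittest

# Suppose the model returned this implementation for "clamp x into [lo, hi]".
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

class TestClamp(unittest.TestCase):
    def test_inside_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_clamps_both_ends(self):
        self.assertEqual(clamp(-3, 0, 10), 0)
        self.assertEqual(clamp(42, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()  # only keep the generated function if these tests pass
```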

Because ultimately, these models serve a user, right? These things aren't just going to do everything on their own.

There's a user directing it at some level: at a task level, a specific function level, or even a higher level, like the goal.

And that's the thing to keep in mind: the goal is to serve the user, to serve the need of that user, and how we can achieve that goal.

Because you might think about the AI as doing everything on its own, but it's really more about helping someone in the end, right?

Host: Yeah, yeah. I mean, you're working on agents, and you brought up a really good point about there always being a human. Because there's always this talk of, you know, an acceleration movement, I forget what it's actually called, acceleration of AI essentially.

And then there's a deceleration camp, people who think that AGI will not come any time soon. What are your thoughts: are we close to getting AGI?

OpenAI famously says their goal is to bring about responsible AGI, or, I'm paraphrasing, something similar to that. What are your thoughts on this whole AGI question? Are we coming close to it?

Can you even imagine it? For me, I can't imagine that with the existing models we are at AGI. I can see a lot of agents doing work and agents getting better, which might look like it, but not really be AGI, right?

Um, so I'd love to get your take on it.

Guest: Yeah, it's a good question. I think it depends on how you define AGI. Everyone has their own definition of AGI, right? If you define it one way, it's like, oh yeah, sure.

If you define it another way, it's like we're very far from it. Is it doing all the skilled labor, the knowledge labor, that you can do? No, it's not there yet, right?

And I wonder if it's even the right question. Because of the lack of very clear definitions of that term, the conversations tend to become very confusing, to me at least, kind of like talking past each other sometimes when people talk about it.

But I think what's really more interesting to discuss is: what is the set of tasks, maybe knowledge tasks in this case, that we can start to delegate to the computer? Now, clearly you can delegate a lot of rote work, like writing test code.

You see that models are doing that very well, right? We can delegate generating the first draft of an application.

You know, at Lutra we're trying to delegate more and more tasks: even if you want to do some web research and get it into a spreadsheet, you can just tell the computer and it can do it now, right?

Then I think the scope and expanse of the kind of tasks you can delegate slowly starts to grow, right? Initially it's working with a few applications.

It's getting some data from one system to another system, or making one script for you.

And then eventually it's making two scripts for you, working across three applications, and it slowly grows into bigger and bigger scopes, right?

And maybe what's more interesting is just looking at that, because the impact on society and the impact on us doesn't need to wait until AGI arrives, until you can delegate the biggest task, which is, you know, go make money for you or something like that.

Whatever you want it to do. It might come way before that, where we can already delegate a lot to the computer, right? And I think that's really interesting. Like today if you don'