Transcript: AI Hype, Future Trends & Research Trends with Best-Selling Author of The Master Algorithm, Pedro Domingos
Listen to the full, unedited conversation between Nataraj Sindam and Pedro Domingos on The Startup Project podcast. They dive deep into the current AI hype cycle, the limitations of LLMs, the reality of AI progress versus investment, and the future of machine learning research. Pedro offers a critical and insightful perspective on the state of artificial intelligence today, challenging many popular narratives in the field.
2025-11-03
Host: Hello everyone, welcome to another episode of Startup Project Podcast.
Host: You're, uh, listening to a very special episode.
Host: Uh, today we have on the show Pedro Domingos.
Host: Uh, he has a Ph.D. in machine learning, uh, from the '80s, before it was cool and called AI, from UC Irvine.
Host: Uh, he is Professor Emeritus of Computer Science and Engineering at the University of Washington.
Host: Uh, he's the author of two books.
Host: Um, he's also the author of The Master Algorithm, um, which is widely read, uh, and was also recommended by Jensen Huang to his employees when Nvidia was starting to pivot to AI.
Host: Uh, he's also the winner of the John McCarthy Award, which is the highest honor in data science and AI.
Host: Uh, his work has been published in the Wall Street Journal, The Spectator, Wired, and other outlets.
Host: Uh, he's on the editorial board of the Machine Learning journal, uh, and helped start a bunch of fields in AI and machine learning, including statistical relational AI, data stream mining, adversarial learning, uh, and influence maximization in social networks.
Host: Uh, I'm sure I missed some things, um, because Pedro has had a long career.
Host: Um, but we'll have a great episode.
Host: We'll talk about AI, um, and how it is changing technology and our society, and a bunch of interesting topics.
Host: Uh, without further ado, Pedro, welcome to the podcast.
Guest: Thanks for having me.
Host: Uh, so to set the stage a little bit, can you talk about your early career, and most importantly, why did you choose machine learning at a point when it wasn't cool? This was the '80s, not like the 2010s, when I had friends doing machine learning and it was a well-established field.
Guest: So I actually got my Ph.D.
Guest: in the 90s, not the 80s, so I'm not that senior.
Host: Sorry for aging you up.
Guest: I apologize for my inexperience.
Guest: Um, but it was in the 80s that I got interested in machine learning.
Guest: Uh, when I was an undergrad, I saw a book on, on AI and I wondered, what could that be?
Guest: It was actually the first AI textbook and back then the state of the field was very primitive.
Guest: And, and nobody in AI cared about machine learning.
Guest: Only a few people cared about AI, and within AI, hardly anybody was doing machine learning.
Guest: But what I immediately thought was, first of all, to solve AI machine learning is the crucial thing.
Guest: Without machine learning, we won't get there, I think.
Guest: These days everybody agrees with that.
Guest: And with machine learning, I think we can.
Guest: That was one part.
Guest: Another part was, I saw in AI the potential to have an impact in both breadth and depth like no other field.
Guest: And, you know, if you want to have impact, then you should pick a field like that.
Guest: It was also, and this one may be a little counterintuitive, I also liked the fact that AI was in a very primitive state.
Guest: Mm.
Guest: Because physics and biology are high impact fields, but boy, you have to spend a long time to get up to the research front.
Guest: Right?
Guest: A lot of geniuses have already done their work before you, and you can stand on their shoulders, but it does take a while to get there.
Guest: Whereas in AI, I just felt like I could start playing with this tomorrow and make some advances.
Host: Yeah.
Host: To get to the edge of physics, you have to do a lot more work than to get to the edge of a new field.
Guest: And it's also just harder to push the frontier, right?
Guest: Because, you know, they've been pushing it for hundreds of years, whereas in AI, even today, it's actually not that hard to push the frontier forward.
Host: Yeah.
Host: I mean, uh, you know, I think yesterday the Nobel Prizes were announced, and three or four people... there's a Twitter joke, I'm sure you've seen it, that all three Nobel Prizes went to current and ex-Google employees.
Host: Um, and I'm sure one of them, I think it's your colleague, received the Nobel Prize in Chemistry, but it pretty much feels like applied AI is what's gaining attention, both in terms of Nobels and in general.
Guest: Oh, I would disagree with that.
Guest: So, um, as a historical note, the first Nobel Prize for an AI researcher went to Herbert Simon, in the '70s, for work on bounded rationality, which is very much core AI work, but definitely also very relevant to economics.
Guest: The concept of bounded rationality is absolutely central to economics, and in fact it's emblematic that another non-economist who won the Nobel Prize, Danny Kahneman, a psychologist, was very influenced by Simon; the whole idea of bounded rationality was also central to what he did.
Guest: In fact, AI is all built around heuristics because we're dealing with problems that can't be solved efficiently, so we need to resort to heuristics.
Guest: That's what bounded rationality is: instead of trying to optimize, you just do what you can, essentially.
Guest: And, and of course, what Kahneman studied was the failure modes of that and called them biases.
Guest: Whenever you hear about biases, you should really remember that a bias is just what happens when a heuristic doesn't work, and there's usually a reason why we had that heuristic.
Guest: Now, to your point: the physics and the chemistry Nobels, I think, are very different stories.
Guest: The chemistry Nobel is definitely very much for applied machine learning, applied AI, applied to a very important problem and very successfully so. So I think that Nobel Prize was a slam dunk, and I was only wondering, as I think most people were, when it would happen.
Guest: Because, you know, it's, it's, it's going to have to happen sooner or later.
Guest: The physics prize, uh, is a more interesting case, but among other things, I wouldn't say it's for applied AI at all.
Guest: In fact, what John Hopfield did has no good applications.
Guest: He developed Hopfield networks, which were very influential in neural networks, but ironically, the influence was a bunch of theory that never panned out.
Guest: So there's definitely an important strand of neural networks that comes out of the connection that Hopfield made between condensed matter physics and neural networks.
Guest: In terms of applications, that has gone nowhere.
Guest: You could even say it's had a negative impact on the productivity of AI.
Guest: And then Jeff Hinton built on that with Boltzmann machines, but again, Boltzmann machines are widely known in machine learning for not working.
Guest: So there are very few real, successful applications of Boltzmann machines, and those tend to be a very special kind called restricted Boltzmann machines, which you really could have gotten to in a different direction.
Guest: So that whole body of work that they're getting the Nobel Prize for is really more than anything else, machine learning theory rather than applied machine learning.
Guest: I have to put "theory" in quotes, because Jeff Hinton never tires of saying he's not a theorist.
Guest: John Hopfield is very much a theoretical physicist.
Guest: Uh, what Jeff Hinton did is very much machine learning.
Guest: He played with a lot of stuff, uh, ran programs, did things and so on, but I think no one, including him, would pretend that that ever had any applied impact.
Host: Uh, I mean, for the regular audience, how do you differentiate between AI and machine learning?
Host: Because up until before LLMs, everything we've seen was called machine learning.
Host: Recommendation systems were called machine learning algorithms.
Host: Uh, pretty much every implementation aspect, we always refer to the word machine learning.
Host: But we always knew that it would eventually lead to AI.
Host: Like, do we call LLMs AI or machine learning?
Guest: Well, machine learning is a subfield of AI. AI deals with the automation of various human abilities like the ability to reason, to solve problems, to plan, having common sense knowledge, uh, understanding language, understanding the world, navigating, manipulating.
Guest: Machine learning is the automation of learning.
Guest: So there's computer vision, which is the automation of vision.
Guest: There's automated reasoning, which is the automation of reasoning.
Guest: There's natural language processing, which is, or natural language understanding, which is the automation of language.
Guest: So machine learning is just one subfield of AI. Now, it is, um, the field that underpins all the others.
Guest: And again, this was my realization in the '80s: you're not going to make much progress in any of these fields without machine learning.
Guest: So it's ironic that a field that used to be ignored and second-rate within AI is now so big that it's confused with the whole field, but it's actually understandable, right?
Guest: People just hear machine learning and AI almost interchangeably.
Guest: But I think it's important to keep that difference in mind.
Guest: So, a large language model is a machine learning system that does an AI task, in particular an NLP task, and that's the pattern you'll see in many different fields.
Host: So, uh, you know, I think it's a good segue to talk about LLMs and the current AI hype cycle, um, and I think you're probably more qualified than most guests I've had to talk about this: how much of this hype do you buy into?
Host: Because it's like politics, right?
Host: You know, there's a whole spectrum of beliefs as well.
Host: Like, there are super-believers who would say that, you know, GPT-10 will be AGI, which I don't subscribe to, because I can't squint my eyes and see an LLM becoming AGI.
Host: Um, but then there is the opposite end, which would say, hey, we'll soon forget LLMs; a new technique will come around, the entire perspective of how we think about intelligence, or of what works as a practical application, will change, and we'll move on from LLMs.
Host: Um, so where do you stand in that sense?
Guest: Well, I would say before we think about the hype, we should think about what is the reality on the ground.
Guest: The reality on the ground is that AI has made tremendous progress in the last decade.
Guest: There are people who deny that and say like, oh, it's all just a bunch of hype trying to sell us things.
Guest: AI is not just a bunch of hype.
Guest: I guarantee you that.
Host: There's also people who...
Host: It's not crypto?
Guest: Uh, I mean, again, I would say crypto is more a bunch of hype.
Guest: Exactly.
Guest: That's an interesting counterexample.
Guest: I, I think there's some nuances there, but to a first approximation, yes.
Guest: Crypto is a bunch of hype and AI is not.
Guest: There are also people who say that LLMs are just stochastic parrots, they just regurgitate something from the web and that's also false.
Guest: LLMs are learning systems and the essence of learning is generalizing.
Guest: They do generalize.
Guest: They do produce things that were not input into them.
Guest: And, and Transformers in particular are a major advance in neural networks.
Guest: People used to say, oh, there's nothing new under the sun in neural networks, which was never quite true, but certainly, uh, Transformers give the lie to that.
Guest: So Transformers are a major advance and what we're seeing is the consequences of that.
Guest: And, and other things as well.
Guest: But so I think there is very rapid progress happening in AI right now.
Guest: However, it is also true that the hype has run away from the reality.
Guest: People are now claiming that we're a lot closer to AGI than we are.
Guest: And that's dangerous too, right?
Guest: We now have potentially a stock market bubble that will burst, uh, when AI doesn't fulfill its potential, which to me is a little alarming.
Guest: And I think the problem is that it's hard to keep these two ideas in mind at the same time.
Guest: One is that we have made enormous progress in AI, and at the same time, there is way more progress left to make.
Guest: You know, intelligence, the human brain, for example, is the most complex object in the universe.
Guest: It's complex, and maybe a lot of that complexity is superfluous, but still, we have come a thousand miles in AI since the field was founded in the '50s, and there's a million miles more to go.
Guest: And we're going faster now, so that doesn't mean it'll take a thousand times longer.
Guest: I don't think it will, but we have to keep those two things in mind at the same time.
Guest: Now, on this question of whether GPT-10 will be AGI, or whether it'll be something other than Transformers: I really don't think GPT-10 will be AGI.
Guest: Uh, if there even is a GPT-10 by then, although I could very well see OpenAI calling whatever they're doing then GPT-10, but you shouldn't be fooled by that, right?
Guest: There's the marketing side of things and the, and the reality.
Guest: Uh, I think there's an interesting question here, which is, so first of all, I strongly believe and I think the great majority of the people in the field believe that tweaks on Transformers will not get us there.
Guest: There are some people, in various places, who do, and they may be right, and in a way I hope they are, because that would shorten the path, but most likely that won't be the case.
Guest: Then the question becomes: what are the major things that need to be added to Transformers to get to human-level intelligence, that's one path, or, if we need to throw Transformers away and do something else, what should that be?
Guest: And I honestly think these are both plausible roads to get there.
Guest: And you know, we're in a very high dimensional space.
Guest: So you can probably get to the end point from either of these directions.
Guest: Personally, as a researcher, my bet is that it'll be something different from Transformers.
Guest: And there's too little research on that right now, and too much research on tweaking Transformers.
Guest: Having said that, I know researchers who believe this, and they tend to just pooh-pooh and ignore Transformers and the whole stack of things that underlies them.
Guest: And I think that's a mistake.
Guest: I want to understand what Transformers do in order to beat them, right?
Host: Yeah.
Guest: I, I don't want to sacrifice what they do.
Guest: I think it's worth trying to understand what the heck is going on in there.
Guest: And then hopefully what will happen is that once we understand them, we can take those few things and then, you know, build our systems from scratch just to do those things and not the other stuff that just wastes billions and billions of cycles.
Host: What do you think of this comparison of models to human intelligence?
Host: Um, I for one don't think they're similar. First, we cannot replicate our own intelligence because we don't understand how the brain functions in the first place; that's my belief, having read a little bit about researchers trying to model the brain and how our intelligence works.
Host: And I'm curious what you think of this comparison of LLMs as an intelligence similar to human intelligence.
Guest: Uh, it's a confusion.
Guest: It's a category error.
Guest: A model is not a brain and a brain is not a model.
Guest: Your brain contains a model of the world and that's a very big part of it, but it's a lot more than that model of the world.
Guest: And what we have today is, in a way, a little surreal, almost a little farcical: imagine pretending that a model is a brain, right?
Guest: It's just a very strange thing.
Guest: So we have LLMs, which are large language models, uh, being used as the foundation of everything, people even call them foundation models, which is just very weird, right?
Guest: And, and I think people will look back on this and go like, what, what was wrong with them?
Guest: What were they thinking?
Guest: Why, why were they so confused?
Guest: Again, that's not to say that doing a lot of work on better modeling isn't worth it.
Guest: We definitely need better modeling, in particular, we need better learning algorithms, but this idea that, you know, large language models are the foundation of AI to me is, is slightly absurd.
Host: Yeah.
Host: I mean...
Guest: And by the way, sorry, we are already moving away from it.
Guest: If you look at o1, you know, the LLMs are already starting to fade into the background, and now the talk is about reasoning and doing these sequential things, things like reinforcement learning and whatnot, which, of course, we knew was going to happen.
Host: Yeah.
Host: And to your previous point, um, AI development itself has accelerated, and I think this bubble, in one sense, is accelerating it, right?
Host: Like, someone like Ilya Sutskever getting, you know, some incredible amount at a billion-dollar valuation, and his whole thesis was that we'll do research and progress towards AGI, right?
Host: Something like that.
Host: And there are now 10 companies doing model development, um, whatever we call it, foundational or not.
Host: I think that itself is probably pushing more people faster towards the next version of LLMs, essentially.
Guest: Well, um, we need to distinguish two different things here.
Guest: One is the amount of investment in AI, which is ballooning like never before, and the other one is the amount of progress in AI.
Guest: The amount of progress per dollar spent in AI is actually going down.
Guest: Right?
Guest: There's more and more research on less and less.
Host: Where do you think that dollar should be directed?
Guest: ...machine learning; within that, neural networks; within that, deep learning; within that, LLMs. Pretty soon we're going to be doing infinite research about nothing.
Guest: And you have to remember, and people are always forgetting this, starting with Ray Kurzweil but certainly not limited to him: technology does not progress in exponentials.
Guest: Technology progresses in S-curves.
Guest: It's just that the first part of an S-curve looks mathematically very, very close to an exponential.
Guest: But when you're seeing that exponential, the question you should be asking, instead of assuming the exponential is going to go on forever (because it physically can't), is: where is this S-curve going to start to taper off?
Guest: Mm.
Guest: And when will the next one come?
Guest: Most technologies, in their adoption, their progress, their specs, their performance, always follow S-curves.
Guest: The thing about AI that's unusual and interesting is that the way we still have to go is so far that it could be, for example, that the exponential is already starting to die out, right?
Guest: By some, by some measures, you know, like we haven't had any major breakthrough since Transformers and that was almost 10 years ago, right?
Guest: So maybe by some measures, we're already starting to see the, the S-curve settle down.
Guest: But on the other hand, it could be that this exponential is just getting started, right?
Guest: Both of those things are possible.
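(The S-curve point above can be checked numerically: early on, a logistic curve is almost indistinguishable from a pure exponential, and only around its midpoint do the two diverge. A minimal sketch; the growth rate and midpoint below are arbitrary illustration values, not anything from the conversation.)

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=10.0):
    """S-curve (logistic) growth: ~exponential at first, saturating at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t, ceiling=1.0, rate=1.0, midpoint=10.0):
    """Pure exponential with the same early-time behavior as the logistic."""
    return ceiling * math.exp(rate * (t - midpoint))

# Well before the midpoint, the two curves are nearly identical:
early_gap = abs(logistic(2) - exponential(2)) / exponential(2)

# Past the midpoint, the exponential keeps climbing while the S-curve flattens:
late_ratio = exponential(12) / logistic(12)

print(f"relative gap at t=2:  {early_gap:.6f}")   # tiny
print(f"exp/logistic at t=12: {late_ratio:.2f}")  # large
```

The takeaway matches the guest's point: observing exponential-looking data tells you nothing about where the curve saturates.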
Host: Yeah.
Guest: And when things do taper off, then the question is, how long will it be till the next wave comes along?
Guest: And that is, that's actually where the biggest uncertainty is.
Guest: It could be very short or it could be very long, right?
Guest: We've had that in AI: periods of rapid progress followed by lulls. For example, in speech recognition and machine translation, which are predecessors of all of this (that's where language models come from), there was very little progress for a while.
Guest: Things were flat, and then the deep learning approaches came along.
Guest: And chances are that's what's going to happen again.
Guest: When that next wave is going to come is hard to predict, because at the end of the day it really depends on us, right?
Guest: It depends on which researchers come up with which ideas.
Guest: It's like Alan Kay said, the best way to predict the future is to invent it.
Host: Yeah.
Guest: And in fact, I think the best hope in the hype cycle we're in is that we come up with big innovations fast enough that those innovations actually fulfill the potential people think current models have, but don't, so that there's no crash.
Guest: But it, I think it's pretty dicey.
Host: But if, with this kind of hype and this amount of dollars, the per-dollar return is not as high as people think, then where should those dollars be directed?
Guest: Oh, they should be directed, so, there's two things here.
Guest: One, uh, maybe they just shouldn't be directed into AI, because there are a lot of other things they could be directed into, right?
Guest: So when you see VCs spending a lot of money on something, the first question is, why do they have so much money?
Guest: Because there's, there's a lot of capital chasing too few opportunities, right?
Guest: So there's that aspect.
Guest: But coming back to AI, I do think it's worth investing a lot in it, because the return at the end of the day is going to be, you know, enormous, right?
Guest: Really, really large.
Guest: But the problem, I think, with AI is that, and this really, uh, is a shame, is that there's too much funding of the same narrow kinds of things.
Guest: We need to let a thousand flowers bloom, right?
Guest: We, we don't know which direction AI lies in.
Guest: And again, it's ironic that AI research today is actually much less diverse than it was 10 or 20 or 30 years ago.
Guest: This makes no sense.
Guest: We shouldn't be exploring fewer things than before, we should be exploring more.
Host: It's interesting, because a lot of the VC money that's flowing in comes from the places where the research actually happens, since a lot of VC funding comes from these large educational institutions' funds, right?
Host: Like Yale and Harvard having these mega hedge funds.
Host: I really call them hedge funds, even though they're termed differently, right?
Host: So, in a way, they are the ones funding this asset class, counter to what should be happening, in one way, because they should be investing at the opposite end, in their own research, right?
Host: Um, by the way, you're a very interesting Twitter follow, and that's how I discovered you.
Host: And in one of your tweets, you were comparing Sam Altman with Francis Ford Coppola.
Host: "I just blew $100 million on a movie," says Coppola. "I already blew $10 billion, and the movie is just getting started," says Altman.
Host: This was your tweet.
Host: So, I'm, I'm guessing you're not super positive on OpenAI.
Guest: Uh, um, no, I'm not super positive on OpenAI.
Guest: I think, it's, um, so on the one hand, we have a lot of progress happening in AI today and, you know, some important parts have come from OpenAI.
Guest: But on the other hand, uh, what we're seeing in AI today is a comedy of errors.
Guest: It's almost farcical.
Guest: I mean, that's part of why I just wrote a satire of AI, right?
Guest: My book 2040 is about a startup called KumbAI that is a spoof of OpenAI, right?
Guest: And what happened with OpenAI is that to a first approximation, they were just very lucky.
Guest: People think OpenAI is a bunch of AI geniuses solving amazing problems, but it ain't so, right?
Guest: So first of all, again, going back to the S-curve: what happens with an S-curve is that there's a whole series of things you have to do, none of which seems to make a big difference at the time.
Guest: And then there's one that reaches the tipping point, and then boom, right?
Guest: And then, and then we tend to give all the credit to the people who did that last piece.
Guest: And the last piece that OpenAI did was to scale up LLMs, and, you know, honor to them for having done that.
Guest: But you can't ignore all the stuff that came before.
Guest: The invention of Transformers, which was done at Google.
Guest: The invention of attention, which is actually what Transformers are based on, which was done at the University of Montreal; contextual embeddings; the whole language modeling tradition; GPUs; backprop; embeddings; etc., etc., right?
Guest: So there were all these things that were indispensable to getting where we are.
Guest: And then there's OpenAI, which is, you know, largely a bunch of hackers, and they themselves admit that they were flailing at the time.
Guest: And then they found this thing and ran with it.
Guest: And, and in a way, this whole opportunity for OpenAI was created because Google screwed up.
Guest: Sundar Pichai screwed up.
Guest: The researchers at Google, and Google was ahead in this for the whole duration, right?
Guest: They, they were using LLMs in, in search, right?
Guest: They were using BERT in search in 2018.
Guest: But then Google decided not to do a chatbot, and the researchers all left, you know, frustrated at the bureaucracy and risk aversion, and that's what created the opening for OpenAI.
Guest: So, you know, there are many good people at OpenAI.
Guest: They're doing good work.
Guest: I have nothing against them, but I think OpenAI is also massively overrated.
Guest: And if you're thinking about, you know, where to lay your bets, what to invest in and what not to, I would say think twice about, for example, things like Ilya Sutskever's company, which is not worth a billion dollars.
Guest: I think Ilya is, again, a good example. He's all right, but he's the quintessential example, I would say, of someone who was in the right place at the right time.
Guest: Right?
Guest: Which was, you know, AlexNet, which is actually named after Alex, right?
Guest: He was the guy who actually put neural networks on GPUs, which made everything possible.
Guest: So yeah, I would not be putting any money at all into Ilya's company right now.
Guest: Sorry, Ilya.
Guest: I hope I'm wrong.
Guest: I hope they solve AI. But even so, I mean, their whole shtick of, like, Safe Superintelligence Inc.:
Guest: we're just going to dedicate ourselves to creating superintelligence.
Guest: Well, that's easy.
Guest: We can do that, you know, and making it safe, and we're not going to be distracted by products, right?
Guest: Any investor should look at this and have their alarm bells ring like, you know, full tilt, right?
Guest: Like, this is the perfect example of the froth you get when the market is too high and there's too much hype.
Guest: So, there you go.
Host: But then, to steelman or give a counter to that argument: why are investors investing at that high a valuation in someone like Ilya?
Host: Is it purely about talent, like we have so few people who are experts at the cutting edge right now and who can implement these things, or is it some other factor, or just pure hype?
Guest: It's several things.
Guest: The first one is precisely that the potential of AI is so high, right?
Guest: In AI, you can actually say with a straight face, my company is worth a trillion dollars or maybe even 10 trillion.
Guest: My company is going to solve AI.
Guest: And think about it.
Guest: If Ilya Sutskever is going to solve AI, his company is potentially worth trillions of dollars.
Host: Yeah.
Guest: And that's the kind of bet that the VCs are always gunning for.
Guest: So with AI, that multiple, you know, is enough to take you from 100 million to a billion, right?
Guest: That's one aspect.
Guest: The other aspect is that there's this culture in Silicon Valley, which again is justified, but can also lead to a lot of mistakes, of saying, yeah, you know, companies like Facebook and Google didn't have customers for a long time, right?
Guest: As long as the potential is there, we'll figure out the business model later.
Guest: And this is not wrong, right?
Guest: But it's a very high risk proposition, right?
Host: Yeah.
Guest: And then finally, there's Ilya himself.
Guest: I mean, I, I hear a lot of people say like, oh, Ilya is an AI genius and, and anyone would work for him and what not.
Guest: And you know, with all due respect to Ilya, I don't think he's an AI genius, but I do think, on a very pragmatic level, there are a bunch of people who will go work for him just because he's Ilya.
Guest: So that counts for something, right?
Guest: If you're a talent magnet and I'm a VC, at some level I don't care why you're a talent magnet, as long as you are one.
Guest: In fact, I had a tweet about this years ago.
Guest: It was inspired by DeepMind, though I think OpenAI copied the recipe. It went: here's how you solve AI.
Guest: You declare that you're going to solve AI, uh, you produce a demo that seems to make progress towards it.
Guest: A bunch of smart people flock to your company, and then they solve AI and you take the credit.
Host: So this could happen.
Host: I mean, in one way, you can attribute the same to Elon, right?
Host: His ability, I think, with his last couple of companies, was to attract world-class talent and put together great teams, right?
Host: And that is something not everyone can do, and Ilya, for right or wrong, was in the right place, and now he's become an AI brand, right?
Host: He might not be the best at AI research, but now he's an AI brand that attracts talent, like you said.
Guest: I would say that, as far as the brand component goes, yes, but I actually think Ilya and Elon are very, very different cases, right?
Guest: I mean, Elon has his strengths and weaknesses, but on a good day his strengths make things happen that otherwise wouldn't.
Host: Mm.
Guest: Uh, I think Ilya was always a little in over his head, frankly, and still is.
Host: What do you think about... because, you know, I've done a little bit of machine learning myself.
Host: As much as a software engineer does; like, I studied some deep learning, some AI.
Host: Um, and at that time the cutting edge was NLP.
Host: Um, this was like 2016, and that was the cutting edge of deep learning, not of AI research, right?
Host: Um, and when LLMs came out, they looked a little bit magical at first, but then you start to understand how they work, that they're primarily predicting tokens, and the mysticism goes away a little bit. They're still very, very useful, but I never thought they would eat the world, not in the sense that software is eating the world, but in the sense of being a threat to humanity.
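(The "predicting tokens" description above can be sketched concretely. A real LLM uses a Transformer to compute scores, called logits, over a huge vocabulary; here the tiny vocabulary and the scoring function are toy stand-ins made up purely for illustration. The loop is the real part: score the context, normalize to probabilities, pick a token, append it, repeat.)

```python
import math

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context):
    """Toy stand-in for a real model: deterministic scores over VOCAB."""
    return [float((len(context) + i) % 5) for i in range(len(VOCAB))]

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(context, steps=4):
    """The next-token loop: score, normalize, pick, append, repeat."""
    out = list(context)
    for _ in range(steps):
        probs = softmax(fake_logits(out))
        # Greedy decoding: take the most probable token.
        # Real systems often sample from `probs` instead.
        out.append(VOCAB[max(range(len(VOCAB)), key=probs.__getitem__)])
    return out

print(generate(["the"]))
```

Everything "intelligent" about a real model lives in how the logits are computed; the generation loop itself is this simple.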
Host: Uh, and suddenly I see, you know, there's a petition that we should pause AI for six months.
Host: Uh, and some famous people signed it, you know, from Elon on, I don't know if Geoffrey Hinton signed it, and even with Ilya's company, Safe Superintelligence, uh, which sometimes feels like,