[Chit Chat Stocks] swyx on the AI majors

https://www.listennotes.com/podcasts/chit-chat-stocks/the-future-of-artificial-3ylaqngtMR7/


intro: Welcome to Chit Chat Stocks. On this show, hosts Ryan Henderson and Brett Schaefer analyze businesses and riff on the world of investing. As a quick reminder, Chit Chat Stocks is a CCM Media Group podcast. Anything discussed on Chit Chat Stocks by Ryan, Brett, or any other podcast guest is not formal advice or a recommendation.
intro: Now, please enjoy this episode.
Brett: Welcome in. We have another episode of the Chit Chat Stocks podcast for you this week. We have a fantastic interview coming up with Shawn Wang from Latent Space, covering, I'll say, anything AI. A lot of this stuff might be going over our heads, for me and Ryan, and it might be a little too hard technically for us, but that's why we brought Shawn on the show today.
Brett: For a lot of public market investors, this new [00:01:00] AI stuff, it's hard to, you know, see what is working, what's just a narrative, what all this stuff is that's getting thrown at us during these boom times. So Shawn, we wanted to bring you back on the show. You came on, I think, almost exactly two years ago now to talk more cloud stuff.
Brett: Now we're really going to talk about cloud AI and how it is impacting the startup ecosystem. So Shawn, as we kick off the show. What is your relevant expertise in this booming AI field? 
swyx: Oh God, that's the million dollar question here. So, for listeners who haven't heard the previous episode that I was on: I was a finance and public markets guy.
swyx: I was in a hedge fund for my first career. And then I changed careers to tech, where I worked at AWS and three unicorn, sort of, developer tooling cloud startups. My relevant expertise is, you know, on some [00:02:00] level, I'm just a software engineer that is building with AI now. And then on another level, when I was an options trader back on the sell side, I actually did a lot of natural language processing of the Bloomberg chats.
swyx: So I fed all of the Bloomberg chats into a pricing mechanism, then built our global pricer. So our entire options desk was running off of that thing. This was about 13 years ago. So, you know, I've always had some involvement with AI, but it was never a big part of my identity.
swyx: And I think as foundation models came into focus (and foundation models is a very specific term, as opposed to the traditional machine learning in finance that a lot of your listeners might be familiar with), you start to build differently, and there the traditional software engineering skills become a lot more relevant.
swyx: So my relevant expertise now is that, I guess, I've sort of popularized and created the term AI engineer, which we can talk about, and created the industry such that Gartner now considers [00:03:00] it at the peak of its hype right now. And I consider that both a point of success and also a challenge, because I have to prove Gartner wrong that it has not peaked. But, you know, they put us at the top of the hype cycle, which is kind of funny.
swyx: Because I started it, so. 
Ryan: Yeah, it's a unique challenge, but yeah, funny anecdote. Okay, so a lot has changed since we last spoke. Pretty much this whole world of AI that everyone's talking about now, or that at least has become mainstream, kicked off right after our last discussion.
Ryan: So I guess the last discussion was really focused on the cloud computing industry broadly. And that was actually right around the time when AWS, Azure, and GCP revenue growth rates were all coming down and, with hindsight, actually bottoming. So my question for you is, [00:04:00] what has changed over the last two years, and why has revenue growth at the big cloud providers re-accelerated?
swyx: Yeah, again, like, revenue growth at big cloud providers is due to factors that, you know, I probably don't have a full appreciation of. I'd also challenge the idea that everything has changed. You know, I think in some ways this is just the next wave of a broader, maybe 20- or 30-year-long trend anyway.
swyx: You know, we, we needed more cloud compute. Now we need even more cloud compute. Now we need more GPUs in the cloud instead of CPUs, right? Like, what's really changed? I don't know. Like, you know, people still want serverless everything. People still want orchestration. People still want you know, unlimited storage and bandwidth and all the sort of core components of cloud.
swyx: In that sense, it hasn't really changed. I do think that if you look at plots over time of the amount of money and flops invested in machine learning models, that actually used to follow a pretty log-linear, Moore's law type growth chart for the last 40 years. [00:05:00] And then you had 2022 happen, and now everyone's like, oh, you can train foundation models now.
swyx: And actually you've seen a big inflection upwards in the amounts that people are throwing in there, just because they see the money now. Like, it's obvious to everyone, including me, in a way that it wasn't obvious to basically anyone but Sam Altman and Satya Nadella circa 2019.
swyx: Like, they knew this four or five years ahead of everyone else, and that's why they went big on OpenAI. But now that we see this, obviously everyone's throwing money into NVIDIA, basically.
Brett: Why, and this is maybe a question I think I know the answer to, and it feels like maybe a basic question, but I think a lot of listeners are going to want to understand this connection. Why do these new AI companies require so much upfront spending [00:06:00] on NVIDIA chips, cloud computing costs, all that stuff?
swyx: Yeah. I mean, so you have to split it by whether you're a foundation model lab or you're basically everyone else that consumes foundation models. So the rough estimate for, let's say, GPT-3 was like 50 million to a hundred million dollars in compute for one run.
swyx: And for every one successful final training run, maybe you have between a hundred and a thousand prior runs before that, right? So just pure R&D. The estimate for GPT-4 was 500 million. We've actually had two generations of frontier models since then, just from OpenAI. That would be GPT-4o and o1, and those are only the models they've released.
swyx: And those are only the text models; we haven't counted the video models and all the other stuff. So it's just a lot of upfront investment, right? I think it's the classic capital-intensive, fixed-costs-upfront thing, where, you know, you have a pre-training phase where you're just consuming all of the internet's [00:07:00] data. There's nuance to that, but we won't go into it. And out the other end, you know, three to six months later, out comes a model that you then spend another six months fine-tuning and red-teaming and post-training, and then it's ready for release. So there's OpenAI doing that.
swyx: And then everyone else trying to copy them. Anthropic's the most successful so far, but there are others kind of on their tail. I would say Cohere, xAI, Meta, and all these other companies that we're naming. There's sort of a second tier of frontier model labs out there. And all of them need enormous amounts of compute just for training.
swyx: Once they're successfully trained, then they can start serving the models to people to build on top of, and that compute workload starts switching to inference. So there's a classic question of how much compute goes into training and how much goes into inference. It's typically between a two-to-three ratio of training to inference, or a three-to-two ratio, depending on how [00:08:00] you're set up.
swyx: That's what Google DeepMind runs at, like a three-to-two or two-to-three ratio, and I think it makes sense. The question now, though, is, like, OpenAI's finances are relatively public, and they basically make zero margin on their business right now, as it stands. Which is really interesting, because then you should use the crap out of it, because they're giving it to you for nothing once you amortize all the costs.
swyx: So they're really banking on reaching some bigger goal, some next generation. They're not trying to make profit on the current generation of models. And that's something that you as a builder or an investor should exploit, because you're getting this effectively with no margin.
Brett: Yeah, that's not bad.
Brett: Oh, Ryan, you go ahead. 
Ryan: Yeah. So I guess, first of all, for anyone that's listening to this and wants to learn more, Latent Space is a wonderful blog, and it has a lot of the numbers that Shawn is referencing here, [00:09:00] especially the OpenAI economics, which don't look the prettiest when you look at them on a chart, one of those charts you shared. They're certainly running at cost at the moment.
Ryan: My question, I guess, is, you mentioned the foundation models are very costly to build from the ground up. So there's the foundation models, and then you said there's everyone else. If you're in the everyone else camp, how costly is it to start? Is that purely just the, as they say, GPT wrappers, or is there more included in that everyone else?
swyx: Yeah, GPT wrappers are a common way to phrase it. So I have been on the side of arguing that GPT wrappers are good, actually. This is something that was not consensus in 2022, 2023. But it turned out that basically the middle tier of companies that tried to not be GPT wrappers, that tried to train their own custom models to compete with OpenAI (the Inflections of the world, the Character AIs of the world), they all [00:10:00] folded. All of them. They all got acqui-hired by the Amazons, the Microsofts, and all those guys. Stability. Remember Stability AI? I started Latent Space because of Stable Diffusion.
swyx: And that's folded. Like, it's really hard to be an independent lab with the mid tier of money; you're just going to be crowded out by the biggest labs. There are some exceptions for different modalities, like video and voice. But yeah, for the sort of capital-light startups, it's never been easier, never been lighter or more capital-efficient, to start with a foundation model that's either open source or provided via an API. And you start from there, you build your customer base. And eventually, if you need to, you can train your own custom models to serve your customers for high-volume use cases.
swyx: But most people don't even need to. I've interviewed someone on my podcast from Codeium, who used to develop their own custom models. They've given up and decided to just wrap OpenAI and Claude, and they're a unicorn now as [00:11:00] well, and they're doing very well. And they consider it one of the biggest mistakes of their startup journey so far, trying to train their own model.
swyx: Because they were like, there's just no point. Which is fantastic news for the foundation model labs. That's exactly what they want to hear, right? That you're completely reliant on them. So I think there's a sort of uneasy ecosystem of reliance between everyone who consumes models and the people who make the models.
swyx: But yeah, I would say it's very easy now to create an AI startup; it might be the easiest it's ever been. And I think the last thing is, there's another component of this which you might not be considering, which is the amount of proprietary data you have access to, right?
swyx: Like, abstractly, when you start a company, it's a sum of your people, your sort of unique resources, and maybe your technology insight or whatever. So maybe you don't have special unique insight. Maybe your people are relatively commodity too. But if you have access to some data that you can't get anywhere else, which is something that I've worked [00:12:00] on for my company, Smol AI, then people will beat a path to your door just to get at that data. In the same way that the Bloomberg terminal, sure, it's a bundle of stuff, but really it's the feeds, the pricing, the journalism that you get out of the Bloomberg terminal that makes it worth so much.
swyx: And then obviously the network effects of Bloomberg chat. So yeah, proprietary data. People are actually spending more and more money on that, because I think relatively everything else is becoming more commoditized. So then the relative profits accrue to data.
Ryan: Okay, follow up question here. So when you mentioned that a lot of people are just kind of calling it quits on building their own foundation model and just leaning on OpenAI, and I believe Anthropic was the other one, with Claude, it makes it sound like that's where the real moat lies. We like to talk about moats all the time here, so my question to you is about the quote unquote GPT wrappers. [00:13:00] Can you build a moat as a GPT wrapper? I know there are a lot of different use cases here, but is it in that data? Is that what you're talking about for the GPT wrappers?
swyx: Right. Yeah, it's in that data, whether you acquire it through whatever means, like licensing. Perplexity is doing a lot of licensing of e-commerce data and, you know, specific articles. OpenAI is also doing that, of course. Or you have it just because you've built relationships with people. You're the sort of, I forget the term, the database of record for something important within a company. The classic one would be Rippling, which holds sort of all your employee data. So you're very well suited, if you have the custom data, to produce custom AI. Obviously that makes a ton of sense. [00:14:00]
Brett: All right, let's go through. You mentioned, you know, OpenAI, the foundation models, and then these other ones that we've called the wrappers. But there's also what I'd maybe call the four big hyperscalers: Meta, Amazon, Microsoft, and then Alphabet slash Google.
Brett: I guess Meta really only has, you know, they're not selling third-party cloud services. But we're going to go through some of these, and we're just going to play a game here with you, given what you're seeing: what startups are going after, what cloud provider they're using, who their relationship is with, your opinion on who's, say, ahead, succeeding, or falling behind in all these new AI tools.
Brett: First one, we'll just give the one that kick-started everything here: OpenAI. What's your opinion on them? As we say for context, here in December 2024.
swyx: Yeah, cool. This is going to be controversial because I have friends working there. So [00:15:00] there's a broad question now of whether or not OpenAI has peaked. They led the wave. But, you know, they haven't shipped GPT-5. o1 has been inspiring, but not game-changing, according to most industry surveys. So, like, where are they going next, right? They're already the most valuable private startup ever, I think, unless SpaceX, you know, is secretly worth 200 billion.
swyx: So, like, where to, right? How far can they go as a private company, especially when they're projected to lose, like, 5 billion this year? They're projected to lose 40-something billion over three years. This is a giant pile of cash that's burning. Half the senior management team has left this year.
swyx: All of whom, individually, are like, I want to spend more time with family, I want to work on other things. But together, it smells. It just smells, right? It's never a good sign when most of your management team leaves. Let's just put [00:16:00] it that way. No matter what they say.
swyx: So it's ugly. But you've got to have sympathy. This is already one of the most storied companies of all time. It's super hard to manage. I'm sure the politics are absolutely insane. And we saw this, this time last year, with the management shakeup at OpenAI. So that is the question insiders are wondering: has OpenAI peaked?
swyx: Do they have tricks up their sleeves? Obviously, if you talk to people who work there, they would say, nobody knows anything; we have so many tricks up our sleeves. And then people outside are just waiting for something to happen. And nothing's really moved the needle since effectively GPT-4, I would say.
swyx: As far as public perception is concerned. Would I say anything else there? I mean, they're still the leaders. They set the industry standard for everything. When they release something, everyone follows it. That's still the case. So I would highlight that.
swyx: Do you have any other questions or comments on OpenAI before we move on?
Ryan: No, I think that covers it fairly well. Yeah, I think you're right. Individually, you can make the case that they each left for their own reasons, but as a collection, it smells a little fishy.
swyx: I guess the entire safety team has left, which was a big part of their sort of proprietary leadership. And people who work on safety don't only work on safety. You know, I think even if you personally don't super care about safety, people who work on safety do it because they take AI seriously.
swyx: And because they take AI seriously, they are actually, like, frontier. It's very sort of back-to-front, that thinking. So yeah, it means a lot that Ilya Sutskever has left and is now working on a different model lab, called Safe Superintelligence, because he was the original champion of scaling at OpenAI.
Ryan: Okay, makes sense. Question for you: do you think there's a lasting first mover advantage for ChatGPT? I don't know if they were the [00:18:00] full-on first mover, but it seems like they were the first ones to really, truly gain notoriety. Does that serve as an advantage for them?
swyx: It does in a few elements and it doesn't in some others, right?
swyx: So I'll start with the ones where it does. OpenAI now owns chat.com. There's the informal measure of sort of consumer market share, and OpenAI is far and away number one. Like, the number two is Anthropic, but the average person around your Thanksgiving or Christmas table doesn't know what an Anthropic is, doesn't care. They're just like, give me the chat thing, right? And the chat thing is OpenAI. They own chat.com. So yes, there is a first mover advantage in building the brand. Same for Perplexity, which you also have on the list. And then you also have the regulatory capture advantage, and the sort of data licensing advantage, right?
swyx: Once OpenAI has done the deal with the Wall Street Journal or The Atlantic or whoever else, the people after them can't really do those [00:19:00] deals, because these are all exclusive. And that makes sense. Then there's the regulatory thing that they tried to pull with the California legislation, or whatever other legislation in every other city and country in the world.
swyx: Like, they're always incentivized, because they're the leaders, because they're the biggest first, to define the laws such that it conveniently states that everyone their size is good, and everyone coming up has all these regulations that will be too prohibitively expensive for them to comply with.
swyx: That happens even without malicious intent. It just happens, with the best intentions of everyone; this is how the world works. Enormous first mover advantage from there, and obviously from launching ChatGPT, where you have the largest collection of RLHF data sets in the world, right?
swyx: Where it doesn't have advantages is in places where Anthropic has sought out advantage. So, for example, for coding models, Claude's 3.5 Sonnet model is currently the favorite among developers, [00:20:00] and this model did not exist six months ago. And now if you talk to any developer on the street, they will generally prefer Claude over OpenAI.
swyx: So the moat is pretty light in terms of specific use cases outside of consumer. But I think inside of consumer, there's a very, very standard playbook where you pursue where you can take an initial advantage and just compound, compound, compound until you screw up, which Google did.
Ryan: Okay. So I guess it makes sense on the sort of being-the-verb thing, especially around the Thanksgiving dinner table and for anyone who's not, I guess, a developer. On the developer side, you mentioned that the most recent model from Claude is kind of the preferred one for developers. Is that something that flip-flops over time? Or when a developer gives preference to a model, do they tend to stick with it?
swyx: Well, they will stick with whoever [00:21:00] is the best performing for their use cases. So if someone new were to come out, if, like, Amazon were to come out with a new model that blew everyone away, they would switch immediately.
swyx: These are just the chronic early adopters, right? They care about not just brand, but actually, are you the best in the world at doing this thing for me? And they will ignore brands. In fact, sometimes brand can work against you, because if you're an early adopter, you want the hipster upcoming brand, the stuff that nobody knows about yet.
swyx: You know, you want to sort of be in on the new thing. I would say Claude has reached past that now. They are the default for developers. But yeah, there's no loyalty there, no ongoing advantage. And I would even say that the developer tools, the Claude wrappers that have sprung up to put Claude into developer workflows, like Cursor, like Codeium, like maybe Cognition's Devin, the people [00:22:00] who store your company-specific, codebase-specific context, actually win more than the model labs here, because the model labs may be prohibited, or are not allowed, or don't prioritize storing your context. So that accumulation of data is actually being hoovered up by the startups in between, rather than the model labs themselves.
Brett: And it seems like, at least for the time being, and maybe besides the optimizations that Alphabet and Meta have gotten from their advertising stuff, which is kind of a whole different story, the nearest term opportunity for providing the most value with these models is developer efficiency. That's the one we're seeing constantly, right away, where all these companies are saying, hey, look, we started using these things and we're just seeing so much improvement in velocity, or whatever metric you want to use.
Brett: So it's quite interesting for Anthropic there. I think you kind of hit on how some of these other startups, besides OpenAI, the one that turned into a bit of [00:23:00] a verb given they were the first one, are trying to succeed. I want to talk about some of the larger ones, and maybe the biggest elephant in the room is Alphabet and Google, the one that seems to be going vertically integrated here, trying to essentially copy everything that OpenAI is doing. What's your opinion on what they've been able to accomplish over the last couple of years, and how they've tried to catch up with these new startups trying to encroach on their turf?
swyx: The smartest thing Google ever did was acquire DeepMind for Demis Hassabis. And the dumbest thing Google ever did was, every year, continuing to employ Sundar Pichai. Unfortunately, he's run out everyone's patience, I think, for how he's managed this whole transition. And yeah, what specifically...
Ryan: Like, why? What is the frustration with Sundar Pichai?
swyx: I think I'm making very, very [00:24:00] broad statements, and, you know, he's still a thousand times better businessperson and technologist than I am. But I think he squandered Google's initial advantages that were set up for him, like, 10 years ago. Like, they invented the transformer.
swyx: They invented the TPU. They have the largest data sets of basically everything in the world: YouTube, email, search, you name it, they have it. The vision for, like, freaking self-driving cars, they have it. They have everything. Maps. I mean, don't forget Maps.
Brett: Yeah, 
swyx: Exactly, right. So to lose that to a startup is, like, you know, bad. And then to have two of everything. So, you know, the classic example: Google has so many resources, it has the cash cow of ads, and then wastes a lot of it on experiments that don't pan out. That was Google X for a long time. Then it was Google Brain versus DeepMind. Then they consolidated Brain and DeepMind under Demis Hassabis. And [00:25:00]
swyx: That was Google X for a long time. Then it was Google brain versus deep mind. Then it consolidated the brain and deep mind under Devis and Sabis. And. [00:25:00] Like, the most recent thing, this is why it's nice to talk to me instead of people who actually work at Google, because they can't say this, but I can, through me, they can say it.
swyx: The people who I talk to within Google are not very happy with how DeepMind is still being run, because there's still two of everything. Specifically, there's the GCP version of DeepMind's offerings, and then there's DeepMind's own offerings. So this would be Vertex AI from GCP and AI Studio from DeepMind.
swyx: So there's still a lot of, I would say, inefficiency and mismanagement. People still leave. You have NotebookLM here, which is one of the breakthrough products from Google. I called it Google's ChatGPT moment this year, in the sense that it was something that was kind of not expected to be a big hit, was a huge, huge, huge hit, and, you know, could have been doubled down on. The creators of NotebookLM have just left Google. Like, they can't retain this talent because they can't do anything inside of Google without a lot of politics. So it's really rough, I would say. I'm very sympathetic. Obviously it's hard to run a huge organization [00:26:00] like that, but it's a reality that they have to deal with and see their way through.
swyx: And I really hope they do, because, you know, obviously they serve like 2 billion people. It's crazy. So whatever they do has impact. So, you know, they have to take that seriously.
Brett: We're all on the outside of Google, so we can't hear someone say, all right, we're going to completely change our strategy, we totally got this wrong. What would you be looking at, as someone from the outside, to say, okay, they're fixing these issues, I'm liking what I'm seeing here, in regards to these AI labs and the research stuff?
swyx: Yeah, so they are paying attention. They've improved a lot since the early days of Bard. Remember Bard? RIP Bard. They've improved a lot in the sense of paying attention to what people want out of them. NotebookLM was a really, really good start. I would say the frontier models as well: [00:27:00] Gemini Experimental has consistently beaten OpenAI now two months in a row, which I would say is an achievement for them.
swyx: This is coming from the start of the year, when we actually did not know what the Gemini strategy was at all. Now it's pretty clear they can keep up with OpenAI, and that's a huge improvement from where they were. Creating AI Studio, I think, was a plus, because Vertex AI was definitely weighing Google down.
swyx: Like, they're a distant fifth or sixth in the sort of market rankings that I see. So, I mean, they're fixing a lot. And then the last thing I'll shout out is they basically rehired Noam Shazeer, who was one of the lead authors on the Transformer paper, and he's building his new team around him.
swyx: And I would say that's an unmitigated positive. Whatever comes out of him, we won't see the effects of for another four or five years, but it will be good, just because he's continued to make hits. Every single time he comes up with something, it's been a huge banger. For those who don't know, it's not just the transformer. He had the same year in 2017 that Albert Einstein did in 1905: basically every paper he published is now an industry standard. And then he did that again last year. So we expect him to do more, you know, when he lands and ramps up at Google.
Ryan: Okay, shifting gears a little bit, I want to talk about Meta. I guess I'll let you summarize it, but what has their overall strategy been? And from the outside looking in, I would think they're one of the closest to going from spending a lot of money on AI development and AI applications to having that show up in revenue, given that they can see [00:29:00] the targeting efficiencies in their advertising. Do you think that's true? Do you think they're going to see some of the highest returns on their AI spend? And then, I guess, what are your thoughts overall on Meta's AI initiatives?
swyx: It's clearly done very well for the stock price, I would say. Reinventing themselves from a VR company to an AI company in the last one and a half, maybe two years, has been fantastic for the stock price. I think it's added, like, a trillion and a half of value to them. It's incredible. 500 percent
Brett: in two years. Yeah, quite a turnaround on that narrative.
swyx: We're an AI company now. Forget the Meta rebrand. We're an AI company now. Actually, it's even more gaslighting. It's like, we've always been an AI company. Look at all these things we did with Facebook AI Research and hiring Yann LeCun. So we've always been there, just like we've been in everything. Like, they worked on a phone. Someday they're going to come out and say we've always been a phone company. So yeah, Meta's done super well. [00:30:00] They've adopted the Android strategy to OpenAI's iPhone strategy, right?
swyx: Someone had to fill that role, and Meta was probably the right company to do it, because this is basically the exact role in the tech ecosystem they've always played. Out of the list that you have here, they're the one company that doesn't provide cloud services.
swyx: They only do consumer, and they're really, really damn good at it. And in the tech world, they specialize in open sourcing tech that is proprietary to everyone else. So where Google has the Angular platform, which is very much owned by Google, Meta open sources React. Where Amazon has DynamoDB, a very proprietary platform, Meta open sources what eventually became Cassandra. It's very, very interesting that Meta basically always open sources the stuff that other people closed source, and that is their tech strategy.
swyx: Part of it is recruiting, [00:31:00] part of it is a somewhat unvalidated strategy that people will build on top of Meta's Llama models and that will create an ecosystem that benefits Meta. This is what they say in earnings calls when you ask Zuck, when you ask Yann LeCun: hey, why do you spend a hundred million dollars training Llama 3?
swyx: Like, what does that do for Meta? One, it's a rounding error. That's like a day of earnings for them. Two, they'll feed you some line about how people are building thousands and millions of derivative tools and models on Llama and that benefits Meta. I don't know. I haven't seen a huge amount of evidence for that, but that doesn't mean it doesn't exist.
swyx: I just haven't seen it. But it is true that it builds a huge amount of branding for them. It helps them hire, for sure. It makes them super relevant, because everyone loves them for giving nine-figure models away for free. And what's not to love? That's great.
Brett: Okay, we've got other questions about startup spending, the invention of the AI [00:32:00] engineer, a term which I think you coined, if I'm remembering that correctly, and the H100 oversupply, kind of the NVIDIA semiconductor stuff. But I want to hit quickly on Amazon and Microsoft. Where do they stand as some of the big cloud providers and backers here?
swyx: Yeah, it's interesting. It's very like the Chinese market. I see you have some listeners who are still interested in the Chinese market. There, it's always the BAT dynamic: if you're backed by one of Baidu, Alibaba, or Tencent, then you're not backed by the others, and everyone's trying to draw sort of feudal territories around them.
swyx: And for a while, it's kind of looked that way. So Amazon is very close to Anthropic: anything Anthropic releases, Amazon will also have on Bedrock, right? Anything OpenAI releases, obviously Microsoft will have, right? Every big cloud has its little sock-puppet AI lab that it can secretly fund.
swyx: I would say that's not entirely true for Anthropic, because Anthropic is also funded by Google. But anyway, that is mostly true. [00:33:00] That is the strategy there. Amazon has also bought Adept, which had two of the former co-authors of the Transformer paper, and Adept is now sort of the Amazon AGI division, which is kind of interesting.
swyx: And I would say the last thing about Amazon is they've always invested in their own hardware and their own silicon, and Anthropic is now a major part of their strategy for proving that out. So in the race to replace NVIDIA, every cloud is basically investing in building its own silicon.
swyx: Google is obviously ahead with the TPUs, but Amazon is probably next. And I'd probably put Microsoft and OpenAI after that. So that's Amazon. In terms of general cloud services, I think what I said last time might still stand: AWS was the early leader in cloud, but now, because of the AI wave, they're behind Google and Microsoft, just because Amazon in general is behind on AI.
Ryan: [00:34:00] Okay. I think that touches on the list of companies we had here. We could seemingly go through tons and tons of AI companies and get your thoughts, but I want to shift gears a little bit. When I listen to you talk and I read your write-ups, it sounds like so much money, and you see this in NVIDIA's earnings reports, is being poured into training these models.
Ryan: And I'm going to read you a quote that you wrote in one of your articles. I think it might've been a year ago, maybe not quite as long, but it says: "It is now consensus that the capex on foundation model training is the fastest depreciating asset in history, but the jury on GPU infrastructure spending is still out and the GPU-rich wars are raging."
Ryan: "Meanwhile, we now know that frontier labs are spending more on training plus inference than they make in revenue, raising $6.6 billion in the largest venture round of all time, while also projecting losses of [00:35:00] $14 billion in 2026. The financial logic requires AGI to parse." I love that quote.
swyx: That was me being sarcastic a little bit.
Ryan: I guess my question is, I can't help but wonder about the ROI here. So much money is being spent. Why can this continue? Especially
Brett: in the startups, like excluding the ones that have existing business models, like Alphabet or Meta. 
Ryan: If you were rationalizing this level of spending from the startup's perspective, what is the return?
Ryan: What's the outcome here? 
swyx: Right. Okay. So we should split the startups. Do you consider OpenAI a startup? Sure. Yeah, I guess. That's the largest, that's
Brett: the largest one. 
swyx: Okay, all right. So you're definitely very public markets, and I've been in private markets for like seven years now.
swyx: So my frame of reference is very different. For me, a $200 million company is a big [00:36:00] company. Anyway, so yeah, one, you can justify it on growth, right? The revenue ramp is unheard of. Name me a company that's gone zero to $2 billion in revenue in two years.
swyx: And realistically, they can reach $100 billion within the decade. It's not impossible. And what's your revenue multiple on that?
Ryan: Yeah, I guess that. 
Brett: What's the pre-mortem? What's the potential downside? Where?
swyx: Yeah, where does it stop? 
Brett: Where does it stop? Fantastic question. Yeah. Well, so, okay. 
swyx: First of all, I do think when I talk to finance guys, you guys are very different from tech guys, because you look for the pre-mortem. You ask: where does this go wrong?
swyx: And the tech guys and the VCs ask: where does this go right? And I think it's very important in tech investing that you get paid orders of magnitude more when things go [00:37:00] right than you lose when things go wrong, right? Your downside is 1x, your upside is 100x. So I think finance guys tend to overemphasize downsides sometimes, but obviously it's different when it's public markets.
swyx: Okay, rant over. Pre-mortem-wise, yeah, it's going to be that there's a lot of burning of cash, and there have been AI winters before, and we seem to have maxed out our current direction, in terms of, like, whatever GPT-5 was going to be, and now we're sort of exploring other things, and we've released two other models that were not GPT-5.
swyx: And maybe, you know, that was it for this wave, and we're going to go into winter for another 5 to 10 years, and then the next generation, whatever is the next OpenAI, comes along, and whatever money you threw into this one wasn't it, and them's the breaks. It's happened before. If you weren't around in the 2014 AI wave, or the 2007 AI wave, it's happened before, and people flamed out, and they were all really smart and really well-intentioned and had all the right [00:38:00] PowerPoint charts, and it just didn't work, because the tech didn't scale. That's not anyone's fault; it's just the risk that you take.
swyx: I would say that's the flippant answer. Pre-mortem-wise, I would also say there's a huge amount of challenge for AI specifically in terms of incumbents versus disruptors. AI is a very powerful force that helps incumbents. Google has lost a huge amount of advantage to OpenAI and Perplexity, but it is still the default search for, you know, 5 billion people on planet Earth.
swyx: Like, they can get it back. The sheer amount of incumbent dominance that Google has can still win. The sheer force of nature that Meta exerts just through its social networks can make it still win on its AI stuff.
swyx: So it's absolutely not clear that one of [00:39:00] these AI foundation lab startups will win here. I tend to view these questions as not that useful, because to me it's clear that some mix of incumbents and startups will win. So it's not an either-or; both will win, it's just a question of which of them.
swyx: Some of them will lose, some of them will win. It's not useful to discuss these categories this way. I'd rather turn it around and ask: what are the determinants of success? Some of the incumbents will have them. Some of the startups will have them.
Brett: Okay. Let's talk inference, training, and costs.
Brett: So it seems like, from our point of view, and again, we focus on costs, where things can go wrong, and what all the spending is going to turn into ROI-wise, the biggest bottleneck is that it's very, very expensive to train these things and then run them. Let's imagine a world where all these smart [00:40:00] people come up with innovations that decrease the cost of these things by a thousand x a decade from now.
Brett: How would that change everything in the industry? Do you think that's likely? Do we see that helping or hurting the cloud providers? That's kind of my frame of reference, because it seems like all this stuff is moving so quickly. I get confused on whether the state of NVIDIA, the state of all this spending, can stay, and whether these innovations can break through and make things less costly.
swyx: Hmm. So there are a number of things I want to comment on here. You said decrease 1000x within a decade. Just an FYI: the current scaling or efficiency curve of AI is faster than that. It is 10x per year. Per year.
swyx: I'm not kidding. So 1000x is 3 years, not 10. Okay, wow, [00:41:00] sounds pretty nice. That is the current trendline, and obviously trendlines break. But that is the current trendline, and that is a fact. The other thing I think you need to be updated on is that you're still in a very GPT-3-and-4 mode of having a mental model of high upfront training and then low subsequent inference.
swyx: That is in the process of changing right now because of o1. It is very clear, and I mean, yes, OpenAI completely nailed this when they hired Noam Brown, you know, a year ago. And this will be the story of the next decade: figuring this out. Inference-time scaling is now a thing, as opposed to pre-training scaling.
swyx: Which means that the cost of training and inference is going to rebalance, such that inference will be higher. That's generally a good thing, because you can charge very good markups on inference, as opposed to charging amortized costs on training. The inference cost is [00:42:00] directly attributable to usage in a way that training is not.
swyx: And the performance is fantastic as well. So the rebalance happened basically exactly when the big governments of the world, including the US, decided that 10^25 FLOPs for a model was too big to let you responsibly train yourself, that you have to notify us when you're doing a large 10^26 run. That's when we stopped scaling pre-training, and we were going to stop anyway.
swyx: So we let the bureaucrats in Washington think they're doing something, but actually all the focus is moving to inference, which is really, really funny: you're now regulating the thing that no longer matters. Which is kind of interesting.
swyx: That all aside, I would say that combining this training-to-inference shift and the overall cost reduction, I don't think it benefits or hurts NVIDIA either way. As long as they continue to deliver hardware improvements at the pace and [00:43:00] scale that they have, it's good. The problem with NVIDIA now is that expectations are so high that they have to ship a new generation every single year.
swyx: And all it takes is one little misstep and they're going to be knocked way down, just because hardware is hard. Mistakes happen. Look, when we had some delays at Apple, people were proclaiming the death of Apple. So it's going to happen to NVIDIA, and, you know, that would be the perfect time to buy NVIDIA, to be honest.
swyx: And who does that benefit? It benefits consumers. I think you also need to think about a spectrum of intelligence, right? The large labs are going to be focused on getting as close to AGI as we can. For people who don't know, OpenAI has defined five levels to AGI.
swyx: We're currently at level two. They consider us to be reaching level three soon. You can read up more on that. The rest of us have progressively dumber levels of intelligence that we can use on progressively dumber and cheaper machines, right? So, this year on my phone, I have Apple [00:44:00] Intelligence doing simple tasks for me, like summarizing my messages.
swyx: You know, I can ask it for visual search of my videos and photos. By the end of this year, Chrome is shipping Gemini Nano inside of Chrome, so you can query models without calling a server. It all runs on your machine. So those AI models are free.
swyx: They run on your device; there is no cost to them. Therefore, basically nobody makes money on them. This is just Google and Apple trying to serve their customers well, and trying to say: you don't need to call OpenAI to summarize your emails. You can just call your local model; it's fine.
swyx: Right, so it benefits the consumer, and it benefits incumbents who can bundle stuff for free where other people would have to charge separately for it. Anthropic is never going to get on your operating system unless they build an operating system, you know. So they're always going to be third party.
swyx: They're always going to suffer relative to the incumbents. [00:45:00]
Ryan: Okay. The last time we spoke, this was mostly pertaining to the hyperscalers, but you said the big get bigger in tech. And it's kind of a broad classification. I'm trying to remember what my question was, but I think it was basically: what are the embedded advantages of hyperscalers?
Ryan: And you mentioned the big get bigger in tech. Now we're seeing, like you said, companies go from zero to two billion in revenue in two years. Do you think this rise, this recent proliferation of AI development, changes that in any way? Or does it amplify it?
swyx: Ah. Does it amplify that?
swyx: Does, does it amplify the hyperscaler effect? 
Brett: Sure. Economies of scale. The big get bigger. 
swyx: Yeah. Yeah, yeah. It should, [00:46:00] but currently it hasn't. Which is interesting. The hyperscalers should benefit from having that scale, those resources, the data centers, the customer relationships, what have you, not even talking about data, but yes, also data.
swyx: They should have all the advantages in the world. And the only reason they don't is basically diseconomies of scale, right? The problems of managing so many people and having so many egos and so many divisions and so many conflicting priorities that you can't really figure out which one to focus on.
swyx: Whereas OpenAI had one goal, and they did really well at it. So yeah, diseconomies of scale do exist. And I will say that so far the evidence has been that hyperscalers have not been able to benefit from their scale here in AI. I don't know how true that is going forward; it's always subject to change, subject to a change in management, to be honest. Like, literally promote Demis to run Google, or, you [00:47:00] know, get someone else in there, and the Google story might change very significantly if you can use the existing assets really well.
swyx: But it does take, I mean, superhuman levels of organizational management to get a hundred thousand people to pivot.
Ryan: This episode is brought to you by our friends at Yellow Brick Investing. Yellow Brick is an aggregator of the best stock pitches across the internet. By tracking thousands of blogs, newsletters, fund letters, podcasts, and more, they collect and summarize the best stock pitches and bring them to you in a single place.
Ryan: If you're a regular listener, you know that we use Yellow Brick every single week here on the podcast to discover new investments, or just find reports on companies we've already heard of. Try it for yourself. Simply go to joinyellowbrick.com/chitchat and search a company or ticker you are interested in.
Ryan: You are bound to find a great report on just about any company. That's joinyellowbrick.com/chitchat.
Brett: Heads up folks, [00:48:00] interest rates are falling, but you can still lock in a 6 percent or higher yield with a diversified portfolio of high-yield and investment-grade corporate bonds on public.com. You might want to act fast, because your yield isn't locked in until the time of purchase.
Brett: Lock in a 6 percent or higher yield with a bond account, only at public.com/chitchatstocks. So we've been jumping around a bit on this topic, but one thing you've focused on a lot in your writing and discussions over the last few years is what you've called the rise of the AI engineer. I mean, go through that.
Brett: What makes it different from other, maybe adjacent, engineers: general software, machine learning, or data science? And why are they going to be important over the next decade and beyond?
swyx: Interesting. Okay. I typically preach this to a technical audience, so this will be interesting, talking to [00:49:00] public investors.
swyx: Okay. So you know how Snowflake IPO'd and everyone was like, what the hell is Snowflake? You know how that was a thing?
Brett: That was me, that was me. 
swyx: That is happening to AI right now. Snowflake was the darling of data engineers. And data engineers were a thing basically because Facebook had a crap ton of data and said: we need people who specialize in moving data around and making sense of it.
swyx: They invented the data engineer, it proliferated out of Facebook to everyone else, it became a whole industry, Snowflake became the data warehouse, and it was one of the biggest IPOs of all time. Okay, so that thing is happening now to AI. We've seen this story play out many times in my career.
swyx: I've seen it with DevOps, data, front end, and let's call it serverless or cloud engineering. The same thing's happening to AI, in the sense that there's a special subfield of software engineering specializing in building with LLMs. And they will define the stack.
swyx: Whatever new tools they like [00:50:00] will become the preferred stack and the preferred way to build AIs for everyone else. And, you know, I saw this early, and called it and promoted it as a trend, and now Gartner thinks it's, you know, peak hype. But I do think it's basically the defining engineering trend for the next 10 years at least.
swyx: There will be some day when it's over. When we hit AGI, whenever that is, AGI will do everything for us. We will never have to work a day in our lives. But until that day, I often say that AI engineer will be the last job. If you're scared about AI taking away jobs, then you should think about the mechanics of how jobs are taken away.
swyx: It's by someone sitting down and encoding what it is you do for work into an AI. And that is the job of the AI engineer. So, ultimately, it is the job that takes away the other jobs. If you want to build an AI lawyer, an AI customer support rep, an AI financial analyst,
swyx: you're going to hire an AI engineer to study what it is you need done, to put together the systems for you to do it [00:51:00] and to do it reliably, and to scale it up to hundreds of thousands of AIs working for you. So that would be the job.
Brett: You talked about something that I thought was extremely interesting.
Brett: Or maybe it's not a new idea, but it was at least new to me: the distinction between, or just the idea of, autonomous applications and autonomous agents, and why they are the most valuable. And it made sense, at least for me as a generalist, when you said: hey, look, it can save anyone a ton of time.
Brett: So what do you mean by autonomous agents? And why do you think they can be so helpful for companies, people, just, you know...
swyx: Well, I mean, wow, it's kind of hard to define, because I live in a world where this is assumed knowledge. So it's really interesting to step out of it for a sec.
swyx: Autonomous agents, I would explain to the common person as having an [00:52:00] assistant, like a secretary or a virtual assistant or executive assistant. If you've ever worked with one of those, particularly if they're remote, if you've never seen their face, they work in the Philippines or something, you just email them and stuff happens, right?
swyx: That's an agent. Right now that role is performed by a human, just like humans used to perform the computer role for NASA in the 1950s. Right now, humans perform the virtual assistant role because they live in the Philippines and they're 10x cheaper than people in the U.S. But that will go to AIs as well.
swyx: That is guaranteed. It is already partially happening, right? The problem right now is we live in this liminal stage where not everything works well yet. So we have to scope it down to the subset of things that work well, and not use AI for the stuff that doesn't. But, you know, I always say you don't even need AI to have a good agent.
swyx: Like, my perfect agent is my scheduling link that you used to book me for this call. It's a Calendly link, right? I sent you a link, you booked yourself, you could have [00:53:00] rescheduled, you could have canceled, I could have canceled, I could have rescheduled, and we didn't have to talk to each other.
swyx: We just dealt with this link. And that's great. It manages my calendar for me. I have a bunch of these out to different people that need to work with me asynchronously. And that's great. So I think that is the promise of the agent. And that is one agent working for you.
swyx: And I think the scale here, the promise here, is defining one sort of AI employee. This is what Jensen Huang loves to call it; he has this vision of AI employees for NVIDIA as well. Defining that and then scaling that: it's just copy-paste to have the next hundred and the next thousand working for you.
swyx: And this is way better than hiring humans, because they don't need holidays, they don't need silly things like human rights, and they work exactly the same, so there's no training time either. Once you train one, you've trained all of them. So there's just an enormous amount of advantage to having these things.
swyx: And then finally, I think on the humanity level, we're basically topping out in human population. Based on projected growth rates, we're going to top out at 10 billion [00:54:00] people. I think it's really nice that we're figuring out how to scale intelligences just as human intelligence tops out, right?
swyx: So, ultimately, we're going to need tens and hundreds of AIs working for each individual human for us to keep scaling the way that we are scaling. And I think that's beautiful.
Brett: Yeah, that does sound quite nice. And even moving beyond the virtual agents, I've seen some ideas, I think it was an investor talking about this, about a future where there are
Brett: millions upon millions, maybe billions upon billions of virtual agents talking to each other online and, you know, figuring things out. But what about, and I know you're more in startup land, so you see some of this stuff, what about that in robotics? We've seen a lot of ideas, and you can kind of connect the two together.
Brett: We put an autonomous AI agent, something that's been trained as an AI employee, within a robot, and it can replace some jobs that [00:55:00] aren't that great for some people and provide a lot of value for society. Are you seeing that at all? Or is that something that's way far out in the future?
swyx: No, no, it's here. They're walking around. I mean, half my rides are Waymos, right? They're all robot drivers.
Brett: Yeah. 
swyx: Really, if you have the means and you haven't ridden in a driverless car yet, get on that. They're in Teslas, they're in Waymos. This is coming, it's going to reshape our cities, there'll be no more parking. It's great.
swyx: Okay, yeah, so they are coming. Physical intelligences are a thing. And there's always the question of: why humanoid forms? Because, you know, it's just a quirk of nature that we have two arms and two legs. But the simple answer is that we built the world to fit us. Therefore, now we must build machines to fit the world that we built for ourselves.
swyx: So they probably should take humanoid forms. And yeah, they can take jobs that are downright dangerous for people. It's fantastic. And also, you know, [00:56:00] potentially, if we don't watch where we're going, they can also fight for us, which is terrifying. But yeah, it is absolutely the case that they should take warehouse jobs.
swyx: They should take the hard menial labor so that we can do the higher-value-add stuff. So I think it's coming. Did you guys see the videos of the Optimus robots shopping for Kim Kardashian?
Brett: I did not know. 
swyx: They're walking around out there. The problem with Optimus, and the problem with Elon in general, is that you don't know how much of it is real.
swyx: A lot of these are teleoperated by humans sitting behind a camera somewhere. But honestly, even that is fine. Even that is training data; even that will be learned by a model someday, and then you can take the human out of it. And by the way, teleoperation is still cheaper than having a human there physically in person.
swyx: So it's fine. These are just cars that walk. We're comfortable with self-driving cars; these are just slightly different cars. [00:57:00] So it's going to happen. I would say, obviously, put this one or two generations behind self-driving in how production-ready it is for the everyday consumer. It's so much easier to move things in the world of bits than in the world of atoms.
swyx: So let's solve the bits stuff first, the software first, then we can move on to the hardware, because the software clearly isn't done yet.
Brett: Yeah, you had an analogy in one of your pieces where you said this current iteration, the new products out there, the stuff a lot of startups are working on, all these new innovations, seem very similar to where self-driving was maybe 2015 through 2017.
Brett: And I remember back then everyone was saying: oh, you know, it's right around the corner. And technically it was; it just took kind of eight to ten years to get to where it started proliferating throughout society. It might take maybe that long for some of these new things as well.
Brett: Which can seem like a long time, but in the grand scheme of things,
swyx: [00:58:00] it really isn't. It's nothing. Like, we took 30 years to roll out the car. We had the Model T, and it took a long time to roll that out.
Ryan: All right. I think we've covered a lot of the questions we had expected to ask beforehand.
Ryan: I guess one question I'm thinking of now: we do have primarily public market listeners who listen to the show on a regular basis, and it sounds like a lot of the development is happening in the private markets, at the startup level, because we've seen the exodus of executives.
Ryan: Once they go into big tech companies, they don't like the politics; they want to be able to operate in a smaller company. For the public market investor, instead of asking where they can get the most exposure to AI: what industries, [00:59:00] in, I guess, the medium term, call it the next five years, will be the most adversely impacted by the advancements in AI?
swyx: Hmm. High margin, low NPS software businesses. 
Ryan: Okay. So the legacy, I guess, software businesses. 
Brett: Like, is that an Oracle? Is that what you'd say? Yeah.
Ryan: Yeah. 
Brett: Interesting. Because like 
swyx: The cost of making new software, the cost of porting software over, has just vanished. Overnight.
swyx: It's a lot cheaper to write software now. If you haven't tried it, I feel like you need to try it. There's a lot of theoretical talk in this podcast, so go try things. Go ride in the Waymo, right? That's job one. Job two: go to Bolt.new and type in "make a Spotify clone," "make an Airbnb clone," whatever.
swyx: You'll be shocked at how far along it is at doing these things. Bolt.new is just the one I [01:00:00] happen to have interviewed on my podcast, so it's top of mind. And yes, these are sort of toy examples, but in the real engineering world, with Cursor, Windsurf, Cognition, all these things, well, not so much Cognition, but that's a different story.
swyx: But like, they are all real examples of the cost of software going a lot lower. Right. So if you listen to the All-In podcast, Chamath is working on 8090, which is this grand vision of rebuilding all legacy things with AI. And I think that is directionally correct. I'm not sure that Chamath is the guy to do it, but directionally, someone is going to crack it.
swyx: Someone is going to crack it. Like, he's writing the thesis. I don't know if he's the guy, but whatever, it doesn't matter. Someone's going to do it. So, like, the hunt for ServiceNow is on. The hunt for Oracle is on. I would say Oracle is a little bit different, because anything databases, databases are very hard tech and take a lot of time to mature.
swyx: So maybe not specifically the databases, but the things built [01:01:00] on top of databases. The reason that Salesforce is so threatened by AI, you know, Marc Benioff's doing his Agentforce thing, is because Salesforce knows that it is like a prime target for new Salesforce, whatever new Salesforce is.
Brett: Interesting. All right. Well, I have Bolt.new loaded in my browser right now. I'm going to try it out after this episode. We appreciate you taking the time here, Shawn. You have quite a few resources, especially for anyone that wants to go even more advanced on all this stuff. I mean, you have so much over at Latent Space.
Brett: So why don't you tell the listeners where they can find your work, what you're doing, and the conference you did, and maybe are doing again. Yeah, 
swyx: so, so the conference is not really for this audience. The conference is more for engineers learning the latest and greatest in engineering stuff. It will have zero business angle to it.
swyx: But maybe you like that, I don't know. Latent Space is more where I let the sort of Venn diagram of business and tech happen. And I basically do what a research firm would do if I were reporting on these as public [01:02:00] companies. So, you know, I kind of perform the analyst role, which is kind of fun for me.
swyx: Like, cause I used to do that for my hedge fund. And yeah, that's where you can check out all the resources, the interviews and the essays that we have on tracking the market. And then I think the last piece that we have right now is smol AI News. So smol.ai/news is the newsletter that gives you a daily pulse.
swyx: So this is a completely AI-generated newsletter, or like a 99% AI-generated newsletter, there's a bit of human curation, that surveys all of Twitter, all of Reddit, all of Discord to keep you super up to date. So this is the one that the Andrej Karpathys of the world, the Soumith Chintalas of the world, read to keep up with AI, if you want to get that level of granularity.
swyx: So basically I have the six-month granularity, that's the conference, then I have the weekly granularity, which is Latent Space, and then I have the daily granularity, which is AI News. So, plug in wherever you want. 
Brett: Beautiful. All right. Well, that's going to do it for this episode. Thank you again for joining.
Brett: Let's hit the disclosure. We are not financial advisors. [01:03:00] Anything we say on the show is not formal advice or recommendation. Ryan, I, or any podcast guests may hold securities discussed in this podcast, may have held them in the past and may buy, sell, or hold them in the future. Thank you everyone for tuning into this wonderful interview.
Brett: Thank you, Shawn, for taking the time to join us and we'll see you next time.