Warp’s Zach Lloyd on Building the AI Terminal for Developers

In this episode, Nataraj is joined by Zach Lloyd, the founder and CEO of Warp, a company developing an intelligent terminal to modernize the command-line experience for developers. A former principal engineer at Google who worked on Sheets and Docs, Zach brings a wealth of experience to his mission of reinventing a tool that has remained largely unchanged for decades. The conversation delves into the evolution of the terminal, the profound impact of AI on software development, and Warp’s vision for a future where developers interact with their computers through natural language. Zach shares insights on moving from ‘coding by hand’ to ‘coding by prompt,’ the challenges of building a sustainable business model around LLMs, and his bottoms-up, product-led growth strategy. This discussion is a must-listen for anyone interested in the future of developer tools and the practical applications of AI in coding.

→ Enjoy this conversation with Zach Lloyd on Spotify or Apple.

→ Subscribe to our newsletter and never miss an update.

Nataraj: Zach, welcome to the show.

Zach Lloyd: Thanks for having me. I’m excited to be here.

Nataraj: I was really excited to have you on the show because after the ChatGPT moment, the LLM companies were everywhere. I think that’s the first layer of value that has been captured. I was excited about the new types of applications we would see, and the most bullish use case for me was developer productivity. The reason is that anyone who has studied compilers will recognize that LLMs look a lot like compilers in terms of text completion and autocomplete. Then there’s the fact that code is very deterministic. I can say something in English and it could mean different things for the same person in different contexts, but code is already logical. So if you feed a logical structure to an LLM, it’s more likely to perform better on code than on English. That was my thesis, and in some form we’re seeing the biggest use cases emerge around developing new products, especially for software developers. So a good place to start: can you talk about what Warp is and how you came up with this idea of an intelligent terminal?

Zach Lloyd: Cool. So yeah, Warp is an intelligent terminal. The terminal, in case folks aren’t familiar, is one of the two most important tools that developers use every day. They use a terminal and they use a code editor. The terminal is basically the place where you tell the computer what to do. That could mean building your code, running your tests, writing internal tools, or interacting with your production system. So it’s a very ubiquitous and important tool for developers. It’s also a tool that’s kind of stuck 40 years in the past from a usability perspective. It’s something that really has not evolved much from an experience point of view. When Warp started, our goal was to modernize this interface, make it more usable, and make it work more like a modern app. Even really simple things, like making the mouse work in the terminal. But as LLMs have matured, the product has become vastly different. At this point, Warp is a place for developers to talk to their computer and tell the computer what to do. Because they’re doing this through the terminal, there’s this huge array of tools that already exist in the form of command-line apps, and the AI can take what a developer says in English and turn it into a series of app calls that do what the developer wants. That could mean setting up a new project, debugging something in production, or increasingly just writing code, which is obviously the biggest developer activity. So that’s where we’re at today. We think Warp and the terminal are an amazing interface for a developer to tell AI what they want to do and essentially have it done.

Nataraj: Developers are really unique; everyone is picky about their stack of tools. They have their own slightly different version of terminals or command lines they use. How does a developer use Warp now? How does the existing behavior of the terminal change by installing Warp?

Zach Lloyd: Great question. If you’re a developer, you can just go to warp.dev, download Warp—it’s a native app. If you’re running on Mac, Linux, or Windows, you just open it up and use it instead of whatever terminal you were using. Whether it was iTerm, the stock terminal app, or the VS Code terminal, you just use Warp. Despite being an AI-native experience, Warp is backwards-compatible with your existing stack. The way this works, really big picture, is a terminal is the app you run, and then within the terminal, you run a shell. Think of the shell as a text interpreter, so when you type a command, it’s the shell that figures out what program to run. Warp works with all the existing shells. A big product emphasis for us is to meet developers where they are and not make them take a step backwards in order to get all the extra benefit of doing this incredible stuff with AI.
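As a quick illustration of the terminal/shell split Zach describes (a sketch, assuming a POSIX system): the terminal is the app you open, and the shell is the interpreter process running inside it. These two commands show which shell your current terminal is hosting:

```shell
# The login shell configured for your user account (e.g. /bin/zsh).
echo "$SHELL"

# The shell process actually running in this session, which may
# differ from $SHELL if you launched a different shell by hand.
ps -p $$ -o comm=
```

Because Warp works with all existing shells, switching terminals doesn’t change what either of these prints.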

Nataraj: So basically, you allow developers to bring in their existing nuances into Warp.

Zach Lloyd: That all basically works. For 98% of the stuff that developers have set up in iTerm or wherever, you can just open up Warp and it should just work the same, but also be better. At least that’s the goal.

Nataraj: A terminal is generally a little restrictive. Usually, they’re not intelligent in the sense that while some terminals let you easily reuse a previous command, you have to know the exact command. This becomes really hard when a developer is in the early stages of their career because you have to remember all those commands, or you’re constantly going to ‘help.’ If you’re using Git and trying to commit or do different things, you’re struggling to find the right command to do the right thing. What intelligence is Warp adding?

Zach Lloyd: Yeah, you’re absolutely right. One thing that’s really frustrating for beginners and experts alike is you open up the terminal, and it’s just a blank screen. If you want to get something done, you better remember what the command is. And these commands can become quite complicated. Let’s say you want to set up a brand new Python project. You have to install the Python toolchain, and then you might have to clone some Git repo. When you go to clone the Git repo, you might find that you don’t have SSH keys, and then you’re going to start Googling or go to Stack Overflow to figure out how to recreate your SSH keys to authenticate to GitHub. That’s annoying. That’s not what developers want to do. Developers want to build things; they don’t want to deal with all this incidental complexity. So with Warp, you don’t have to remember the commands at all. You just need to know what you want to accomplish, and you tell the computer to do it literally in English. So instead of typing a command in Warp, you would type ‘help me set up a new Python toolchain, clone this repo, make a new branch for me, make sure it all compiles and runs,’ and that’s it. You would hit enter. And then the LLM tries to figure out its context. The LLM might run ‘ls,’ it might run ‘git status,’ it will try to run ‘git clone.’ When it hits that SSH error, it’ll say, ‘We had an SSH error, do you want me to generate these SSH keys for you?’ As a user, you’ll say yes, and then it will run the command to generate the SSH keys. It will basically work through this with you until you get to the spot you want to be at. That’s a way better workflow than switching context out of the terminal and looking this up on Google every time you hit some error.
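For reference, here is a minimal sketch of the manual version of that workflow, i.e., the steps the AI chains together for you. The repo URL, email address, and key path are hypothetical placeholders (the key is written to `./demo_key` rather than `~/.ssh` to avoid touching real keys):

```shell
# 1. Set up a Python toolchain in an isolated virtual environment.
python3 -m venv .venv && . .venv/bin/activate

# 2. Clone the repo over SSH; with no key configured, this fails.
if ! git clone git@github.com:example/project.git 2>/dev/null; then
    # 3. Generate a fresh key (no passphrase), then register the
    #    .pub file with GitHub and retry the clone.
    ssh-keygen -t ed25519 -C "you@example.com" -f ./demo_key -N "" -q
    echo "Add demo_key.pub to your GitHub account, then retry the clone."
fi

# 4. After a successful clone, you'd branch off and verify the build:
#    git -C project checkout -b my-feature && python -m pytest
```

Each step is a separate tool with its own failure modes, which is exactly the incidental complexity the English-first workflow is meant to absorb.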

Nataraj: There’s also been this huge integration between IDEs and terminals. Does that change how you think about Warp? Does Warp have to also now work on the IDE?

Zach Lloyd: Great question. A lot of people use the terminal in the IDE, and there are definite benefits to that. What’s interesting that’s happening in the world of AI-based development is that I think neither the IDE nor the terminal actually makes sense as the primary tool for the future of code. What makes the most sense is some sort of workbench where you as a developer just tell the computer what you want to do. The standard workflow for someone using an IDE today is you’ll open up all the files that might be relevant to building a feature, and then you’ll start writing a function or a class definition. You’re basically doing what I call ‘coding by hand.’ The world that we’re moving towards is one where, rather than doing anything by hand from the outset, you’re going to work by prompt. You’re going to describe the feature that you want to build in English, and the AI, with increasing autonomy, is going to solicit whatever information it needs from you and your environment, and then it’s going to go do that task. My hypothesis is that the IDE is not actually the right place to do that. It’s much more of a place for having a bunch of files open and doing hand editing. What you see in all of the AI-based IDEs, like Cursor, is that they are guiding users over to a chat panel where the user can, through conversation or prompting, build their feature. That chat panel is starting to look more and more like a terminal in its interactions. Warp’s approach is not to build an IDE, but to build something where a developer can ask for anything they want done and build the interface around showing the work that’s being done directly in that linear fashion. My vision is that these traditional IDE and terminal boundaries are going to blend into something oriented around what the best workflow for development should be in the future.

Nataraj: Historically, we’ve moved up the level of abstraction in development. We used to write HTML, then we came up with WordPress. For e-commerce, we went to Shopify. We’ve moved to a layer where we no longer use HTML directly. The end output is the same, but what you’re doing to get it has changed.

Zach Lloyd: Totally. Another good analogy is that back in the day, developers used to work in assembly language, which was very low-level. Then you moved up to a language like C, where you still have to know how memory works, but which enabled big productivity gains. Then you moved up to a language like Python or JavaScript, where you don’t have to worry about so much of the underlying system architecture. This is a bigger step because you can basically do it through English, which lowers the barrier to working with code. I do think, for now and for the next couple of years, you’re going to need programming expertise to build things of high complexity. It becomes more important that you know what’s going on because a lot of times, with this method of developing by prompt, the AI will do 80% of something and then get stuck or hit bugs it can’t resolve. If you don’t know what’s going on, you’ll be stuck with it. But the level of abstraction for developing software is definitely changing.

Nataraj: How has the feedback been from developers? And how is adoption coming? Are developers discovering it and then forcing engineering managers to buy your product, or is it coming from the top down?

Zach Lloyd: We’re mostly building for developers, so our go-to-market motion is bottoms-up, product-led growth. It’s going really well from a user adoption standpoint. We’re well into the multiple hundreds of thousands of developers actively using Warp, and that’s growing really fast. We have some people who are using it because they want a better terminal UX, and some are using it because they’re AI early adopters. Our strategy is to get a lot of developers using it, spread it wide, and get them paying for it. When we have enough concentration at a company, we end up having conversations with engineering leaders. We do have enterprise contracts with pretty good companies, but the primary motion is bottoms-up product growth. What gets people to pay us is getting them to an ‘aha’ moment in the app where the AI did something that blew their mind. It could be something as simple as fixing all of their dependency issues. A big part of what’s helped us grow is inserting ourselves into developers’ existing workflows in ways that are low friction but surface the value of AI with them doing almost no work.

Nataraj: What are some examples of workflows you’ve inserted yourselves into?

Zach Lloyd: A prime example is you try to build your code, you get a compiler error, and Warp just pops up a fix for it. As a developer, all I need to do is accept this fix. That’s very different from expecting the developer to know to type in, ‘Hey, please fix my compiler error.’ To the extent that we can hook into someone’s workflow, guess what they’re trying to do, and surface the AI as a fix, that’s the best way to get an ‘aha’ moment. I think that’s one of the reasons the first modality that really caught on is autocomplete—it’s just there, it’s no work, and it’s really low cost if it’s wrong.

Nataraj: Are you creating your own model or leveraging other LLM models? Which models are doing the best job for your use cases?

Zach Lloyd: The best model for developers right now is Claude 3.5 Sonnet. We offer it in Warp. For more complex tasks, we also offer a two-step execution: users first use one of the reasoning models to come up with a plan, and then we switch them to a standard LLM to actually execute the plan.

Nataraj: How do you think this will evolve in the next two to three years in terms of development? There’s this new phenomenon we’re calling ‘agents,’ where we combine high-reasoning models with traditional LLMs.

Zach Lloyd: The way I look at it, there are three main modalities that are important for developers right now. One is completions. The second is chat, where you’re pairing with an agent in an interactive mode. The third is a true agent with real autonomy. I think this is coming. In this world, you have to change the user experience to be based around higher latency interactions. What does that mean? It means if I’m asking an agent to build a feature for me, I don’t want to sit there and watch it do it. It might take five minutes to get a plan and another 10 to execute and test. That points towards a different interaction modality, essentially some sort of workflow management software, kind of like GitHub Actions. You start a job, it tells you when it finishes or if it hits an error, and you can have multiple running at once. I think another really important property is that when the agent fails, it’s not a pain for the developer to hop in and work with it to fix the issue.

Nataraj: Can you talk a little bit about cost? LLMs are costly, and the per-query price is not yet cheap enough to make a sustainable business. How are you seeing that play out?

Zach Lloyd: It’s a great question. Our pricing is based on a couple of plans for individuals and small teams at the $15 and $40 price points, which differ mainly in the number of AI requests. It’s a hard thing to price because the underlying price of these models is based on tokens, but pricing by tokens is too close to the metal and too far from the value to a developer. For all of our paid users, we have a pretty healthy positive margin, around 30 to 60%. However, it’s hard because the underlying models change both their costs and how much context they want to gather. We give all of our free users some amount of AI because we want them to understand the value and get to a moment where they want to pay us. I definitely think there’s a path to a sustainable business here, but exactly what will happen with model costs is still an open question.

Nataraj: I always thought this dependency on LLMs could change completely if you adopt an open-source model and host it in your own cloud, then start to fine-tune your own models.

Zach Lloyd: Totally. If we were to take DeepSeek or Llama and host it, it’s a totally different level of control over the costs. You’re not paying a model provider. If you look at who’s making money on AI, it’s the chip makers, then the hyperscalers, then the model providers. There are a lot of people taking margin before you get to the app layer. We don’t do that right now because the quality difference of the models is such that our number one concern is getting users to realize the power of this stuff and convert them to paying customers. From a unit economic standpoint, we’re trying to stay breakeven and see what the right way to optimize costs is.

Nataraj: Is there any metric that you really focus on for your product? For example, how much time does it take for a new customer to decide to pay?

Zach Lloyd: One really interesting metric we’ve just started looking at is what percentage of things done in Warp are either done by AI or asked of AI. In a normal terminal, it’s 0%. In Warp, around 10% of what happens in the terminal right now is either the user asking in English or the AI doing something. AI engagement is the leading indicator of monetization for us. It would be a cool spot for us to get to where more than half of the interactions in Warp happen in English or autonomously through the AI. We’re trying to flip our users’ perception from this being a terminal that has AI to an AI interface where you can fall back to using the terminal if you want.

Nataraj: Can you talk about your go-to-market motion? Marketing for developers is a particularly interesting problem.

Zach Lloyd: About 80% of it is organic. We spend some money on sponsorships of GitHub repos and a little on Google ads, but the primary channel is organic. The biggest driver by far is developers telling each other about it. We’ve experimented with viral loops, like a referral program and the ability to share cool things you do in Warp via a link. A really big thing is social media. The best thing for us is when someone does something super cool with our product and shows it to the world on Twitter or YouTube. For a product like Warp, you have to see it to get it.

Nataraj: How are you using AI, and did it change the way you are building your own startup?

Zach Lloyd: It’s an interesting question. We’re building an AI product, we’re all developers, and we all use our own product every day. There is a virtuous cycle: as the AI gets better in Warp, we do more of our coding, debugging, and DevOps tasks just by talking to our own product. Outside of our own product, there are a few AI tools I use. For example, I use a tool I really like called Granola, which is an AI meeting note-taker, and I just don’t take notes in meetings anymore. That’s cool, but it’s not like some of the stories you hear. I saw a tweet from the president of YC that in the latest cohort, for 25% of the companies, 95% of the code was written by LLMs. That’s not how it’s been with Warp. But we are adopting AI tools, and the primary one we adopt is the one we’re building.

Nataraj: Do you think we have hit a productivity level where we need fewer developers versus more?

Zach Lloyd: Not at all. We’re trying as hard as we can to hire developers. I think there’s probably a class of relatively simple front-end apps where you can maybe start to not hire developers. But for professional software development at a tech company, the impact of these AI tools is that it makes your existing developers more productive. The other thing is there’s basically infinite demand for software. Developing software is becoming more efficient, and there are benefits to that. Every company I know is trying as hard as they can to hire awesome software developers right now. I haven’t seen a negative impact at all in the type of development we do.

Nataraj: I actually think AI is at a stage where it’s sort of ‘draft AI.’ It gets you to 80-90%, but not 100%. That’s where the gap between the narrative and the reality is. You still need a developer to do that last 10-20%.

Zach Lloyd: I agree. I think it’s a mistake to think of it only as a function of the progress in the models. The models are only as good as the context that’s provided. Getting all the right context and knowledge into these things is a challenge. And then, the likelihood of succeeding at a task depends on the ability to specify it correctly. English is ambiguous, and people assume a lot of context that the LLM does not know. The fallibility of humans and how they communicate is still going to create work around this.

Nataraj: We’re almost at the end. What are you consuming right now? Books, podcasts, Netflix?

Zach Lloyd: I’m reading a totally different non-tech book called ‘Traveler’s Guide to the Middle Ages.’ It’s like, imagine you were traveling in the Middle Ages, what would that experience be like? It’s about people going on religious pilgrimages or traveling to the Far East. I like it because it’s an interesting reminder of how different an individual’s experience of the world was not that long ago. It’s history based on how people lived, not major historical events.

Nataraj: Are there any mentors in your career that helped you?

Zach Lloyd: I’ll call out a guy who’s kind of legendary in the tech industry, he’s now the CTO of Notion. His name is Fuzzy. He was my manager at Google on Google Docs, and he was one of the creators of Google Sheets. Most of what I learned about how to create incentives for engineers, give feedback, and get a team functioning at a high level, I learned from him.

Nataraj: What do you know about starting a company now that you wish you knew before?

Zach Lloyd: I’m on my second company, and I can tell you things I learned from the first to the second. The first one I failed at, but I learned a lot. Really focus on the team. Really try to hire great people, even if it makes you go a little slower, and hold that high bar from the beginning. Also, try to work on as big a problem as you can, which is counterintuitive to a lot of startup founders. The bigger the swing you’re taking, the easier it is to get people to fund you and to attract awesome people to work with you. People want to work on something really meaningful.

Nataraj: That’s a good note to end the conversation. Zach, thanks for coming on the show and sharing all about Warp.

Zach Lloyd: Cool, thank you so much for having me.

Zach Lloyd’s insights provide a compelling look into how AI is not just enhancing but fundamentally reshaping developer tools. The conversation highlights the shift towards more intuitive, AI-driven workflows and the exciting future for software creation.

→ If you enjoyed this conversation with Zach Lloyd, listen to the full episode on Spotify or Apple.

→ Subscribe to our newsletter and never miss an update.