Decagon’s Ashwin Sreenivas: Building a $1.5B AI Support Giant

At just under 30, co-founders Jesse Zhang and Ashwin Sreenivas have built Decagon into one of the fastest-growing AI companies, achieving a $1.5B valuation in just over a year out of stealth. Backed by industry giants like Accel and Andreessen Horowitz, Decagon is redefining enterprise-grade customer support with its advanced AI agents, earning a spot on Forbes’ prestigious AI 50 list for 2025. In this episode of the Startup Project, host Nataraj sits down with Ashwin to explore the secrets behind their explosive growth. They discuss how Decagon moved beyond the rigid, decision-tree-based chatbots of the past by creating AI agents that follow complex instructions, how they found product-market fit by tackling intricate enterprise workflows, and the company’s long-term vision to build AI concierges that transform customer interaction.

👉 Subscribe to the podcast: startupproject.substack.com

Nataraj: So let’s get right into it. What is Decagon AI? What does the product do, and talk a little bit about the technology behind Decagon.

Ashwin Sreenivas: You can think of Decagon as an AI customer support agent. For our customers, Decagon talks directly to their customers and has great conversations with them over chat, phone calls, SMS, and email. Our goal is to build AI concierges for those customers. This idea of AI for customer support isn’t necessarily new; chatbots have been around for probably 10 years now. But the thing that’s really different this time is that if you look at the chatbots from even three or four years ago, they weren’t a great experience. The reason is that you had these decision trees that everybody had to build, and they were a pain to build and a pain to maintain. From a customer’s perspective, if you had a question or a problem that was one degree off from the decision tree that was built out, the bot was completely useless. That’s when you have people saying, “agent, agent, agent.” The thing that’s changed, and a lot of the core of what we’ve built, is a way to train these AI agents the way humans are trained. Humans have standard operating procedures that they follow, and our AI agents have agent operating procedures that they follow. We’re able to essentially build AI agents that can have much more fluid, natural conversations, like a human agent would.

Nataraj: Talking a little bit about the products, you mentioned chat, phone calls, emails. Do you have products for everything? If a company is coming to adopt Decagon, are they first starting with chat and then expanding to everything else? How does the customer journey look?

Ashwin Sreenivas: This is actually very driven by our customers. For a lot of the more tech-native brands, think like a Notion or a Figma, you would never think about picking up the phone and calling them. You’d want to chat or email. Whereas some of our other customers like Hertz, you don’t really email Hertz. If you need a car, you’re going to call them up on the phone. So a lot of our deployment model is guided by our customers and how their customers want to reach out to them. Typically, most customers start with the method by which most of their customers reach out, and then they expand to all the other ones. It’s very common to start with chat and then expand to email and voice, or start with voice and expand to chat and email.

Nataraj: I want to double-click on the point you mentioned about the decision tree model. I think around 2015, during the Alexa peak, everyone was building chatbots. I remember the app ecosystem where you had to build apps on Alexa or Microsoft’s Cortana. Conversational bots were the hype for two or three years, but they quickly stagnated when we realized all we were doing was replicating the “press one for this, press two for this” system on a chat interface. You define a decision tree, and anything outside of it falls through an if-else chain to a catch-all that routes you to a human. There are obviously a lot of players in customer support with existing tools. Do they have a specific edge on creating something like what Decagon is doing because of their existing data?

Ashwin Sreenivas: No, I actually think, interestingly enough, because these customer service bots went through a few generations of tech, the tech is different enough that you don’t get too much of an advantage starting with the old tech. In fact, you start with a lot of tech debt that you then have to undo. Let’s say 10 years ago, you had to start with explicit decision trees where you program every single line. Then about five years ago, you had the Alexas of the world. It was a little bit of an improvement, but essentially all it did was allow a user to express what they want. They could say, “I want to return my order,” and the models were good at detecting intent—classifying a natural language inquiry into one of 50 things it knew how to do. But beyond that, everything became decision trees. The thing is now with these new models, because you have so much flexibility and the ability for them to follow more complex instructions and multi-step workflows, you can actually rebuild this from the ground up. It’s not just classifying an intent and then following a decision tree; we want the whole thing to be much more interactive for a better user experience. We had to rebuild it to ask, how does a human being learn? You have standard operating procedures. You say, “Hey, if a customer asks to return their order, first check this database to see what tier of customer they are. If they’re a platinum customer, you have a more generous return policy. If they’re not, you have a stricter one. You need to check the fraud database.” You go through many of these steps and then work with the customer. The core of what we’ve done is build out AI agents that can follow instructions very well, like a human does.

Nataraj: This whole concept of AOPs (Agent Operating Procedures) that you guys introduced is very fascinating. You mentioned SOPs, which humans read, and then you have AOPs, which is sort of a protocol for the agent. Who is converting the SOP into an AOP? How easy is it to create this agent? Are you giving a generic agent that adapts to a customer’s SOP, or do I as a customer have to build the agent?

Ashwin Sreenivas: The core Decagon product is one agent that is very good at following instructions and AOPs. We built it this way for time to value. If you have to train an agent from scratch for every single customer, one, it’s going to take a lot of time for that customer to get onboarded, and two, it’s very difficult for that customer to iterate on their experiences. If you build one agent that, like a human, is very good at following instructions, a customer can just say, “Here are the instructions you need to follow,” and be up and running immediately. In terms of how these AOPs are created, most customers tend to have some set of SOPs already, and AOPs are actually extremely close to these. The only thing you need to change is to instruct the agent on how to use the company’s internal systems. It’s 99% English, and then there are a few keywords to tell it, “At this point, you need to call this API endpoint to load the user’s details,” or “At this point, you need to issue a refund using the Stripe endpoint.” That’s the primary difference from SOPs.
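Going only by this description, a hypothetical AOP might be mostly plain English with a few embedded markers for API calls. The `{{call: ...}}` syntax and every endpoint name below are invented for illustration; Decagon’s actual format isn’t public:

```python
# A hypothetical AOP: ~99% English, plus a few keyword markers telling the
# agent where to call internal systems. Syntax and endpoints are invented.
REFUND_AOP = """\
When a customer asks to return an order:
1. Look up the customer's account tier.
   {{call: GET /internal/users/{user_id} -> user}}
2. If user.tier is "platinum", offer the 90-day return policy;
   otherwise apply the standard 30-day policy.
3. Check the fraud database before approving.
   {{call: GET /internal/fraud/{user_id} -> fraud_check}}
4. If fraud_check.flagged, escalate to a human agent.
5. Otherwise issue the refund.
   {{call: POST /billing/refunds/{order_id} -> refund}}
"""

import re

def extract_tool_calls(aop: str) -> list[str]:
    """Pull out the embedded API-call markers so they could be registered
    as tools the model is allowed to invoke at those steps."""
    return re.findall(r"\{\{call:\s*(.+?)\s*->", aop)

print(extract_tool_calls(REFUND_AOP))
```

The point of such a format is that a support lead who can write an SOP can also write an AOP; only the tool-call markers require any technical input.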

Nataraj: If you talk about the technology stack, are you using external models, or are you training your own models? What is the difference between a base model and what you’re delivering to a customer?

Ashwin Sreenivas: We spend a lot of time thinking about models. We do use some external models, but we also train a lot of models in-house. The reason is that if you’re using external models, most of what you can do is prompt tuning, and we found that models are only so steerable with prompt tuning alone. We’ve spent a lot of time in-house taking open-source models and fine-tuning them, using RL on top of them, and using all of these techniques to steer them. To get these models to follow instructions well, you have to decompose the task. A customer comes in with a question, and I have all of these AOPs I could select from. The first decision is: are any of these AOPs relevant? If a user is continuing the conversation, are they on the same topic, or should I switch to another AOP? At every step, there are a hundred micro-decisions to make. A lot of what we do is break down these micro-decisions and have models that are very, very good at each one.

Nataraj: The industry narrative has been that only companies with very large capital can train models. Are you seeing that cost drop? When you mentioned you’re training open-source models, is that becoming more accessible?

Ashwin Sreenivas: We’re not pre-training our models from scratch. We take open-source models and then do things on top of those. The thing that has changed dramatically is that the quality of the open-source models has gotten so good that this is now viable to do pretty quickly.

Nataraj: Which models are better for your use case?

Ashwin Sreenivas: We use a whole mix of models for different things because we found that different base models perform differently for different tasks. The Google Gemma models are great at very specific things. The Llama models are great at very specific things. The Qwen models are great at very specific things. Even for one customer service message that comes in, it’s not one message going to one model. It’s one message going to a whole sequence of models, each of which is good at doing different things to finally generate the final response.
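A minimal sketch of “one message goes to a whole sequence of models,” with every stage stubbed out so it runs without any API. The stages, their order, and their jobs are assumptions; in Ashwin’s description each would be a different fine-tuned base model:

```python
# Toy pipeline: one inbound message flows through a sequence of stages,
# each of which would be a separate specialized model in production.
# All stages here are stubs; the stage names are invented.
from typing import Callable

def safety_filter(msg: str) -> dict:
    # Stub for a small classifier that flags sensitive content.
    return {"msg": msg, "safe": "password" not in msg.lower()}

def detect_language(state: dict) -> dict:
    state["lang"] = "en"  # stub; imagine a small classification model
    return state

def draft_reply(state: dict) -> dict:
    state["draft"] = f"Thanks for reaching out about: {state['msg']}"
    return state

def tone_check(state: dict) -> dict:
    # Final gate: ship the draft only if earlier stages approved it.
    state["final"] = state["draft"] if state["safe"] else "Escalating to a human."
    return state

PIPELINE: list[Callable[[dict], dict]] = [detect_language, draft_reply, tone_check]

def handle(msg: str) -> str:
    state = safety_filter(msg)
    for stage in PIPELINE:
        state = stage(state)
    return state["final"]

print(handle("Where is my order?"))
```

Splitting the work this way is also what makes the mix-of-base-models claim practical: each stage can be served by whichever small model (Gemma, Llama, Qwen) happens to perform best on that narrow task.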

Nataraj: It’s often debated that as bigger models like GPT-5 or Gemini improve, they will gain the specialized capabilities that smaller, fine-tuned models have. What is the reality you’re seeing?

Ashwin Sreenivas: I would push back against that argument for two reasons. Number one, while the bigger models will all have the capabilities, the level of performance will change. If you have a well-defined task, you can have a model that’s 100 times smaller achieve a higher degree of performance if you just fine-tune it on that one task. I don’t want it to code in Python and write poems; I just want it to get really good at this one thing. When measured on that one task, it will probably outperform models a hundred times its size. Number two, which is equally important, is latency. A giant model might take five seconds to generate a response. A really small model cuts that time by a factor of 10. Over text, five seconds might not matter, but on a phone call, if it’s silent for five seconds, that’s a really bad experience. For that, you have to go toward the smaller models.

Nataraj: Can you talk about why you and your co-founder picked customer service as a segment when you decided to start a company?

Ashwin Sreenivas: When we started this company, it was around the time when GPT-3.5 Turbo and GPT-4 were out. We were looking at the capabilities and thought, wow, this is getting just about good enough that it can start doing things that people do. As we looked at the enterprise, we asked, where is there a lot of demand for repetitive, text-based tasks? Number one was customer support teams, and number two was operations teams. As we talked to operations leaders, the number one demand was in customer support. They told us, “Look, we’re growing so quickly, our customer support volume is scaling really quickly, which means we need to hire a lot more people, and we can’t afford to do that. We are desperate.” Initially, it looked like a very crowded space, but as we talked to customers, we found it was crowded for smaller companies with simple tasks, where 90% of their volume was, “I want to return my order.” But for more complex enterprises, there wasn’t anything built that could really follow their intricate support flows. That was the wedge we took—to build exclusively for companies with very complex workflows. The other thing that was interesting was our long-term thinking. If you build an agent that can instruction-follow very well, you enable businesses to eventually grow this from customer support into a customer concierge.

What I mean by that is, let’s say you want to fly from San Francisco to New York. You go to your favorite airline’s website, type in your search, and it gives you 30 different flights to pick from. That’s a lot of annoying steps. A much better experience would be to text your airline and say, “I want to go to New York next weekend.” An AI agent on the other side knows who you are, your preferences, and your budget. It looks through everything and says, “Hey, here are two options, which one do you like?” This AI agent also knows where you like to sit and says, “By the way, I have a free upgrade available for you. Is that okay?” You say yes, and it says, “Booked.” The big difference is this is a much more seamless experience. Most websites today shift the burden of work onto the user. Now, it shifts to a world where you express your intent to an AI agent that then does the work for you. That was a really interesting shift for us. Building these customer support agents is the first step to building these broader customer concierges.

Nataraj: How did you acquire your first five customers? What did that journey look like?

Ashwin Sreenivas: Early customer acquisition is always very manual. There’s no silver bullet. It’s just a lot of finding everyone in your networks, getting introductions, and doing cold emailing and cold LinkedIn messaging. It’s brute force work. But the other thing for us is we never did free design pilots; we charged for our software from day one. This doesn’t mean we charged them on day one of the contract. We’d typically say, “There’ll be a four-week pilot, and we’ll agree upfront: if you like it at the end of four weeks, this is what it’s going to cost.” We never had an open-ended, long-term period where we did things for free because, in the early days, the number one thing you’re trying to validate is, am I building something that people will pay money for? If it’s truly valuable, you should be able to tell your potential customer, “Hey, if I accomplish A, B, and C, will you pay me this much in four weeks?” If it’s a painful enough problem, they should say yes. This helped us weed through bad business models and bad initial ideas quickly.

Nataraj: What business impact and success metrics do your customers look at when using Decagon?

Ashwin Sreenivas: Customers think about value in two ways primarily. One is what percentage of conversations we are able to handle ourselves successfully—meaning the user is satisfied and we have actually solved their problem. If we can solve a greater percentage of those, fewer support tickets ultimately make their way to human agents, who can then focus their time on more complicated problems. The second benefit, which was a little counterintuitive, was that a lot of these companies expanded the amount of support they offered. It’s not that companies want to minimize support; they want to give as much as they can economically. If it cost me $10 for one customer interaction and all of a sudden that becomes 80 cents, I’m not just going to save all that money. I’m going to reinvest some of that in providing more support. We’ve noticed that their end customers actually want that increased level of support. So now, instead of phone lines being open only from 9 a.m. to 5 p.m., it becomes 24 hours a day. Instead of offering support only to paid members, we offer support to everybody. There’s this latent demand for increased support, and by making it much cheaper, businesses can now offer more. At the end of the day, this leads to higher retention and better customer happiness.

Nataraj: You also have support for voice agents, which is particularly interesting. What has the traction been like? Do customers realize they’re talking to an AI?

Ashwin Sreenivas: In general, all our voice agents say, “Hi, I’m a virtual agent here to help you” or something like that. But the other interesting thing is most customers calling about a problem don’t want to talk to a human; they want their problem solved. They don’t care how, they just want it solved. For us, making it sound more human is not about giving the impression they’re talking to a human; it’s to make the interaction feel more seamless. You want responses to be fast. At the end of the day, the primary goal is, how can we solve the customer’s problem? Even if the customer is very aware they’re talking to an AI agent, but that agent solves their problem in 10 seconds, that’s a good experience. Versus talking to a human who takes 45 minutes, which is a bad experience. We have several customers now where the NPS for the voice agents is as good or higher than human agents because if the AI agent can solve their problem, it solves it immediately. And if it can’t, it hands it over to a human immediately. Either way, you end up having a reasonably good experience.

Nataraj: Has there been a drop in hiring in support departments? Are agents replacing humans or augmenting them?

Ashwin Sreenivas: It really depends on the business. If AI agents can handle a bigger chunk of customer inquiries, you can do a couple of things. One, you can handle more incoming support volume. You put it on every page, you give support to every member, you do it 24 hours a day. Your top-line support volume will go up, but your customers have a better experience, and you can keep the number of human agents the same. Other people might say, “I’m going to keep the amount of customer support I do the same. There are fewer tickets going to human agents, so now I can have those agents do other higher-value things,” like go through the high-priority queue more quickly or move to a different part of the operations team.

Nataraj: Can you talk about the UX of the product? People have different definitions of agents. What kind of agent are we talking about here?

Ashwin Sreenivas: Interacting with Decagon is exactly like interacting with a human being. From the end user’s perspective, it’s as though they were talking to a human over a chat screen or on the phone. Behind the scenes, the way Decagon works is that each business has a set of AOPs that these AI agents have access to. The AOPs allow the agents to do different things—refund an order, upgrade a subscription, change billing dates. The Decagon agent is just saying, “Okay, this question has come in. Do I need to work through an AOP with the customer to solve this problem?” And it executes the AOPs behind the scenes.

Nataraj: Before your product, a support manager would look at their team’s activities. How does that management look now on your customer’s side?

Ashwin Sreenivas: There’s been an interesting shift. Rather than training new human agents, I’ve trained this AI agent once, and now my job becomes, how can I improve this agent very quickly? We ended up building a number of things in the product to support this. If the AI agent had one million conversations this month, no human can read through all of that. We had to build a lot of product to answer, what went well? What went poorly? What feedback should I take to the rest of the business? What should I now teach the agent so that instead of handling 80% of conversations, it can handle 85%? The primary workflow of the support manager has changed from supervising to being more of an investigator and agent improver, asking, “What didn’t go well and how can I improve that?”

Nataraj: Are the learnings from one mature customer flowing back into the overall agent that you’re building for all companies?

Ashwin Sreenivas: We don’t take learnings from one customer and apply them to another because most of our customers are enterprises, and we have very strong data and model training guarantees. But the learning we can take is what kinds of things people need these agents to do. For instance, we learned early on that sometimes an asynchronous task needs to happen. Decagon didn’t have support for that, so we realized that use case was important and extended the agent to be able to do tasks like that. It’s those kinds of learnings on how agents are constructed that we can take cross-customer. But for a lot of these customers, the way they do customer service is a big part of their secret sauce, so we have very strong guarantees on data isolation.

Nataraj: How are you acquiring customers right now?

Ashwin Sreenivas: We have customers through three big channels. Number one is referrals from existing customers. Support teams will often say, “Hey, we bought this thing, it’s helping our support team,” and they’ll tell their friends at other companies. Number two is general inbound that we get because people have heard of Decagon. And three, we also have a sales team now that reaches out to people and goes to conferences.

Nataraj: Both you and your co-founder had companies before. How did the operating dynamics of the company change from your last company to now? Did access to AI tools increase the pace?

Ashwin Sreenivas: A lot of things changed. For both of our first companies, we were both first-time founders figuring things out. I think the biggest thing that changed was how driven by customer needs we were. We didn’t overthink the exact right two-year strategy or how we were going to build moats over three years. We said, the only thing we’re going to worry about now is, how do we build something that someone will pay us real money for in four weeks? That was the only problem. That simplifies things, and we learned that all the other things you can figure out over time. For instance, with competitive moats, when we sold a deal in the early days, we would ask, “Why did you buy us?” They would tell us, “This competitor didn’t have this feature we needed.” And we were like, great, so we should do more of that because clearly this is valuable.

Nataraj: It’s almost like you just listen to the market rather than putting your own thesis on it.

Ashwin Sreenivas: Yeah. I think there was a very old Marc Andreessen essay about this: good markets will pull products out of teams. The market has a need, and the market will pull the product out of you.

Nataraj: What’s your favorite AI product that you use personally?

Ashwin Sreenivas: I use a number of things. For coding co-pilots, Cursor and Supermaven are great. For background coding agents, Devin is great. I like Granola for meeting notes. I used to hate taking meeting notes, and now I just have to jot down things every now and then. I think that captures most of what I do because either I’m writing code or talking to people, and that has become 99% of my life outside of spending time with my wife.

Nataraj: Awesome. I think that’s a good note to end the conversation. Thanks, Ashwin, for coming on the show.

Ashwin Sreenivas: Yeah, great being here. Thanks for having me.

This conversation with Ashwin Sreenivas provides a masterclass in building a category-defining AI company, highlighting the power of focusing on genuine customer pain points and the massive potential for AI to create more seamless, personalized business interactions. His insights reveal a clear roadmap for how AI is moving from simple automation to becoming a core driver of customer experience.

If you enjoyed this conversation with Ashwin Sreenivas, listen to the full episode here on Spotify, Apple or YouTube.
Subscribe to our newsletter: startupproject.substack.com