Category: Podcast Episode Transcript

Full transcripts of the Startup Project podcasts.

  • Glean AI Founder Arvind Jain on the Future of Enterprise AI Agents

    Arvind Jain, CEO of Glean AI and co-founder of the multi-billion dollar company Rubrik, is a veteran of Silicon Valley’s most demanding engineering environments. After a decade as a distinguished engineer at Google, he experienced firsthand the productivity ceiling that fast-growing companies hit when internal knowledge becomes fragmented and inaccessible. This pain point led him to create Glean AI, initially conceived as a “Google for your workplace.” In this conversation with Nataraj, Arvind discusses Glean’s evolution from an advanced enterprise search tool into a sophisticated conversational AI assistant and agent platform. He dives into the technical challenges of building reliable AI for business, how companies are deploying AI agents across sales, legal, and engineering, and his vision for a future where proactive AI companions are embedded into our daily workflows. He also shares valuable lessons on company building and fostering an AI-first culture.

    👉 Subscribe to the podcast: startupproject.substack.com


    Nataraj: My wife’s company actually uses Glean, so I was playing around to prepare for this conversation. But for most people, if their company is not using it, they might not be aware of what Glean is and how it works. Can you give a pitch of what Glean does today and how it is helping enterprises?

    Arvind Jain: Most simply, think of Glean as ChatGPT, but inside your company. It’s a conversational AI assistant. Employees can go to Glean and ask any questions that they have, and Glean will answer those questions for them using all of their internal company context, data, and information, as well as all of the world’s knowledge.

    The only difference between ChatGPT and Glean is that while ChatGPT is great and has all of the world’s knowledge, it doesn’t know anything internally about your company—who the different people are, what the different projects are, who’s working on what. That context is not available in ChatGPT, and that’s the additional power that Glean has. That’s the core of what we do. We started out as a search company. Before these AI models got so good, we didn’t have the ability to take people’s questions and just produce the right answers back for them using all of that internal and external knowledge. In the past, I would describe us more as a Google for your workplace, where you would ask questions and we would surface the right information. But as the AI got better, we gained the ability to actually go and read that knowledge, and instead of pointing you to 10 different links for relevant content, we could just give you the answer right away. That’s the evolution of how we went from being a Google for your workplace to being a ChatGPT for your workplace. We’re also an AI agent platform. The same underlying platform that powers our ChatGPT-like experience is also available to our customers to build all kinds of AI agents across their different functions and departments, while ensuring AI is delivered to their employees in a safe and secure way.

    Nataraj: You started in 2019 as an AI search company. Now, it feels very natural to build a ChatGPT-like product for enterprise because the value is instantaneous. But why did you pick the problem of solving enterprise AI search back then? It was not the hot thing or an obvious problem. What was your initial thesis?

    Arvind Jain: For me, it was obvious because I was suffering from that pain. Before Glean, I was one of the founders of Rubrik. We had great success and grew very fast; in four years, we had more than 1,500 people. As we grew, we ran into a productivity problem. There was one year where we had doubled our engineering team and tripled our sales force, but our metrics—how much code we were writing, how fast we were releasing software—were flatlining. We just couldn’t produce more, no matter how many people we had.

    One key reason was that the company grew so fast, and there was so much knowledge and information fragmented across many different systems. Our employees were complaining that they couldn’t find the information they needed to do their jobs. They also didn’t know who to ask for help because there was no concept of who was working on what. When it became clear this was the number one problem, I decided to solve it. My first instinct as a search engineer was to just go and buy a search product that could connect to all of our hundred different systems. But when we went looking, we found there was nothing to buy. There was no product on the market that would connect to all our SaaS applications and give people one place where they could simply ask their questions and get the right information. That was the origin. I felt nobody had tried to solve the search problem inside businesses, even though Google solved it in the consumer world. That got me excited. At that time, we were not thinking about building a ChatGPT-like experience; nobody knew how fast AI would evolve.

    Nataraj: I think pre-ChatGPT, almost no one called it AI; it was called ML or some other technical term. I remember watching Google’s Pixel phone launches in 2020-2021, and they were doing a lot of work creating AI-first products very early on. But for some reason, the tragedy is that Google is seen as not doing enough with AI. There’s a gap between the narrative and the actual experience.

    Arvind Jain: In 2021, we launched our company to the public and we called ourselves the Work AI Assistant. We didn’t call ourselves a search product because we could do more than search. We could answer questions and be proactive. But it was a big problem from a marketing perspective because nobody understood what an assistant was. Nobody had seen anything like ChatGPT yet. It was a big failure, and we rebranded ourselves as a search company. Then, of course, with ChatGPT launching, people realized how capable AI is and that it can really be a companion, which is when we came back to our original vision.

    Nataraj: One CEO I spoke with mentioned that when you pick a really hard problem to work on, a couple of things become easier. It’s easier to convince investors because the returns will be very high if you’re successful, and you can attract people who want to solve hard problems. What’s your take on picking a problem when starting a company?

    Arvind Jain: I agree with that assessment. It’s not that you’re just trying to pick something super hard to solve as the main criterion. The main criterion still has to be that you add value and build a useful product. I’m always attracted to working on problems that are very universal, where we can bring a product to everybody. I like it both because of the impact you’re going to make and because building a startup is a difficult journey. You have to have something that makes you go through that, and for me, that something is impact—solving a problem that builds a product useful to a very large number of people.

    Second, when you think about solving problems, you have to think about your strengths. If you are a technologist, it’s a gift if the problem you’re trying to solve is a difficult one because you’ll be able to build that technology with the best team, and you won’t get commoditized quickly. With search, I knew how hard the problem was. That was definitely an exciting part of why I started Glean—I knew that if we solved the problem, it would be a super useful product and a technology that others wouldn’t be able to replicate quickly.

    Nataraj: One thing I often see with tools like ChatGPT or Glean AI in the enterprise context is that when you’re working on certain types of data, it’s not enough to be 90% accurate. If I’m reporting revenue numbers to my leadership, I want it to be 99.9% accurate. Can you talk a little bit about the techniques you are using to reduce hallucination?

    Arvind Jain: AI is progressing quite quickly. There’s a lot of work that the platforms we use, like OpenAI, Anthropic, and Google, are doing. The models today are significantly different from the models we had last year in terms of their ability to reason, think, and review their own work, giving you more confident, higher accuracy answers. There’s a general improvement at the model layer, which is reducing hallucinations significantly.

    Then, coming into the enterprise, none of these models know anything about your company. When you solve for specific business tasks, the typical workflow is that you have a model that is thinking and retrieving information from your different enterprise systems. It uses that as the source of truth to perform its work. It becomes very important for your AI product to ensure that for any given task, you are picking the right, up-to-date, high-quality information written by subject matter experts. Otherwise, you end up with garbage in, garbage out. That is what most people are struggling with right now. They build AI applications, do a shallow job of retrieving information, and then tell their customers that their data is bad. That’s not the right answer, because AI should be smart enough to understand what information is old versus new. As a human, you have judgment. You look for recent information. If you can’t find it, you talk to an expert. AI has to work the same way, and that is what Glean does. We connect with all the different systems, understand knowledge at a deep level, identify what is high quality and fresh, and ensure that models are being provided the right input so they can produce the right output. Our entire company is focused on that.
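
    Arvind doesn’t describe an implementation here, but the idea is straightforward to sketch. Below is a minimal, hypothetical illustration (not Glean’s actual pipeline) of scoring candidate documents on relevance, freshness, and author authority before they are handed to a model as context; the class names, half-life, and weights are assumptions made for the example.

    ```python
    # A minimal sketch of "pick the right, up-to-date, high-quality information":
    # rank retrieved documents by relevance, freshness, and author authority, and
    # only pass the top few to the model. Hypothetical; not Glean's implementation.

    from dataclasses import dataclass
    from datetime import datetime, timezone


    @dataclass
    class Document:
        text: str
        relevance: float          # from a keyword/vector retriever, 0..1
        last_updated: datetime    # timezone-aware timestamp from the source system
        author_is_expert: bool    # e.g., the author owns the relevant area


    def freshness(doc: Document, now: datetime, half_life_days: float = 180.0) -> float:
        """Decay a document's value with age; a doc half_life_days old scores 0.5."""
        age_days = (now - doc.last_updated).total_seconds() / 86400
        return 0.5 ** (age_days / half_life_days)


    def score(doc: Document, now: datetime) -> float:
        """Combine relevance, freshness, and authority into one ranking signal."""
        authority = 1.0 if doc.author_is_expert else 0.6
        return doc.relevance * freshness(doc, now) * authority


    def select_context(candidates: list[Document], k: int = 5) -> list[Document]:
        """Keep only the k highest-scoring documents as context for the model."""
        now = datetime.now(timezone.utc)
        return sorted(candidates, key=lambda d: score(d, now), reverse=True)[:k]


    docs = [
        Document("Old pricing FAQ", 0.9, datetime(2022, 1, 10, tzinfo=timezone.utc), False),
        Document("Current pricing policy", 0.8, datetime(2024, 11, 5, tzinfo=timezone.utc), True),
    ]
    print([d.text for d in select_context(docs, k=1)])  # the newer, expert-written doc wins
    ```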

    Nataraj: You mentioned an AI agent platform. What are the typical use cases for which enterprises are creating agents?

    Arvind Jain: I’ll pick some key ones across a few departments. For sales teams, much of their time is spent on prospecting and lead generation. You can build a really good AI agent that does that faster and with higher quality than a human in many cases. People have built an agent on Glean where a salesperson says, “I would like to prospect these five accounts today,” and Glean will do a good amount of research, identify the right contacts, and generate personalized outreach messages. Our salespeople then review the work of AI with a thumbs up or thumbs down, and the messages get sent out. They can now prospect at a rate five times greater than before. Similarly, after a customer call, an agent can generate the meeting follow-up with action items and supporting materials, a task that used to take hours.

    For customer service, the job is to answer customer questions and help with support tickets. AI is pretty good at that. People have built agents to auto-resolve tickets. For engineering teams, AI can be a really good code reviewer. The Glean AI code review agent is quite popular; it’s the first one to review any code an engineer uploads and can handle basics like following style guides. The use cases are exploding. Last year it was all about engineering and customer support, but now it’s all departments. Legal teams are using a redlining agent that automatically creates the first version of redlines on third-party papers like MSAs or NDAs. It’s a huge time and cost saver. The democratization is happening now.

    Nataraj: It feels like a better way to describe agents is as ‘workflow agents,’ similar to Zapier but with an intelligence layer. This can only work if you’re integrated well with different apps, and today every company uses hundreds of SaaS tools. Can you talk about that challenge?

    Arvind Jain: You’re spot on. Agents have to work on your enterprise data, use model intelligence to mimic human work, and take actions in your enterprise systems. There’s a strong dependence on your ability to both read information and take actions. The good news for Glean is that we’ve been working on that for the last six and a half years. We have hundreds of these integrations and thousands of actions we can support, which becomes the raw material for building these agents.

    It’s interesting how hard it is to get that to work because enterprise systems are very bespoke. One major challenge is security and governance. You can’t have an agent platform where agents just read any data from any system. You have to follow the governance architecture and rules inside the company, like permissions and access control. You have to not only build these integrations but also work upwards from that to handle agent security and ensure you deliver the right data to these agents, not stale or out-of-date information.
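
    To make the governance point concrete, here is a hedged sketch of permission-aware retrieval: each candidate document carries the access list mirrored from its source system, and anything the requesting user cannot already read is dropped before the agent sees it. The data structures, the group-based check, and the example directory are all hypothetical, not Glean’s API.

    ```python
    # Hypothetical sketch of permission-aware retrieval: the agent only receives
    # documents the requesting user is already allowed to read.

    from dataclasses import dataclass


    @dataclass
    class Doc:
        doc_id: str
        text: str
        allowed_groups: frozenset[str]   # ACL mirrored from the source system


    def retrieve_for_user(query_hits: list[Doc], user_id: str,
                          directory: dict[str, set[str]]) -> list[Doc]:
        """Drop every hit the user cannot read; the agent never sees the rest."""
        groups = directory.get(user_id, set())   # group memberships, e.g. synced from an IdP
        return [d for d in query_hits if d.allowed_groups & groups]


    # Example: an engineer should not see finance-only documents.
    directory = {"alice": {"eng", "all-hands"}}
    hits = [
        Doc("d1", "Q3 revenue close notes", frozenset({"finance"})),
        Doc("d2", "Service deployment runbook", frozenset({"eng"})),
    ]
    print([d.doc_id for d in retrieve_for_user(hits, "alice", directory)])  # ['d2']
    ```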

    Nataraj: We’ve seen a few form factors: the chat bar, then RAG on the engineering side, and now everyone is talking about agents. What is the next form factor or use case you see coming up?

    Arvind Jain: One big shift from the initial ChatGPT-like experience, which is very conversational and reactive, is that agents are becoming more proactive. You can build an agent that runs every day or when a certain trigger condition is met. The next big thing I see is AI becoming even more proactive and embedded in your day-to-day life. You won’t think of AI as a tool you go to; it will just come to you when it detects you need help.

    Our vision for the future of work is that every person will have an incredible personal companion. A companion that knows everything about you and your work life: your role, your company, your OKRs, your career ambitions, your weekly tasks, your daily schedule. It’s walking with you, listening to every word you say and hear. With all that deep knowledge, it’s ready to help proactively. For example, imagine I’m commuting to work. My companion detects I’m unprepared for my meetings. It knows the commute is 38 minutes, so it can offer to brief me as I drive, summarizing the documents I need to read so I feel prepared for my day. That’s where we are headed. AI is going to become a lot more proactive.

    Nataraj: Does that mean Glean is going into cross-platform and cross-application to make us more productive? I can imagine a floating bubble on my mobile where I can just hit a button and narrate a task.

    Arvind Jain: Absolutely. We already have these different interfaces. Glean works on your devices—we have an iOS app and an Android app—and it gets embedded in other applications. If you’re building the world’s best assistant or companion for everybody at work, you have to travel with them. From a form factor perspective, you’re going to see more interesting devices, whether it’s a smartwatch or a smart pen. Our goal would be to make sure we’re running on them.

    Nataraj: I want to shift gears and talk about the business. You mentioned a marketing failure pre-ChatGPT, then a rebrand. Now that you’re a fast-growing company, with AI increasing productivity, does that mean you’re hiring less? If you had X salespeople at Rubrik, are you hiring fewer now for the same level of growth?

    Arvind Jain: First, a company is a group of people building something together. I firmly believe the scale of your business is proportional to the number of people you have. I don’t personally believe I can have a five-person company and generate a billion dollars. The productivity per employee is going to grow at a relatively linear pace. It’s just that to survive as a company, you have to do 10 times more work than you did before with the same number of people, because everyone is benefiting from AI.

    You have to be able to build products and experiences we couldn’t dream of before. You shouldn’t be thinking, “Can I have fewer people?” You have to think, “How do I achieve more with the number of people I can absorb?” You don’t have a choice. If you deliver the same kind of products as pre-AI, you won’t survive. We are growing very fast and investing in our people. We fundamentally believe the larger we are, the more we’ll be able to do. But at the same time, I’m a minimalist. I always try to ensure we are enabling every employee with the right tools and that they are fully capitalizing on AI to deliver way more than expected in the pre-AI world.

    Nataraj: What does it mean to be more AI-first? Do you do more AI education or align incentives?

    Arvind Jain: We started by just talking about the importance of AI in town halls. I don’t think we saw the results because people were too busy. Then we tried setting goals like “get 20% more productive,” which was a complete failure. Our third iteration was to just do one thing with AI. We don’t care about the ROI; just show that you’re trying to learn and get one meaningful thing done. That’s the top-down approach. From a bottom-up perspective, we allow people to bring in the right AI tools and we celebrate wins. We created a program called “Glean on Glean.” Every new hire spends their first month setting aside the role they were hired for and instead playing with AI tools to build one workflow or agent. It’s been very effective, especially for new grads who don’t know the traditional way of working and are more well-versed with AI.

    Nataraj: What are one or two metrics you consistently watch that tell you whether you’re going in the right direction?

    Arvind Jain: For us, number one is customer satisfaction. We look at user engagement—how often our users use the product on a daily basis. That’s the most important metric. Number two, on the product side, we look at the type of things people are trying to do with it and if that set is expanding. For example, are more people becoming creators on Glean and building different sets of agents? From the business side, we look at standard metrics like retention rate and tracking our pipeline for demand. But as a CEO, probably the most important thing to watch is how our organization is feeling internally. What are the signs from the team? Are we ensuring we have mission alignment? Are people committed and motivated? Are we creating the right environment for them to grow and succeed? Those are the top-of-mind things for me.


    This conversation with Arvind Jain offers a clear look into how enterprise AI is moving beyond simple chat interfaces to create tangible value through sophisticated workflow agents. His insights provide a roadmap for how businesses can leverage AI to solve core productivity challenges.

    → If you enjoyed this conversation with Arvind Jain, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our Newsletter and never miss an update.

  • Decagon’s Ashwin Sreenivas: Building a $1.5B AI Support Giant

    At just under 30, co-founders Jesse Zhang and Ashwin Sreenivas have built Decagon into one of the fastest-growing AI companies, achieving a $1.5B valuation in just over a year out of stealth. Backed by industry giants like Accel and Andreessen Horowitz, Decagon is redefining enterprise-grade customer support with its advanced AI agents, earning a spot on Forbes’ prestigious AI 50 list for 2025. In this episode of the Startup Project, host Nataraj sits down with Ashwin to explore the secrets behind their explosive growth. They discuss how Decagon moved beyond the rigid, decision-tree-based chatbots of the past by creating AI agents that follow complex instructions, how they found product-market fit by tackling intricate enterprise workflows, and the company’s long-term vision to build AI concierges that transform customer interaction.

    👉 Subscribe to the podcast: startupproject.substack.com

    Nataraj: So let’s get right into it. What is Decagon AI? What does the product do, and talk a little bit about the technology behind Decagon.

    Ashwin Sreenivas: You can think of Decagon as an AI customer support agent. For our customers, Decagon talks directly to their customers and has great conversations with them over chat, phone calls, SMS, and email. Our goal is to build these AI concierges for these customers. This idea of AI for customer support isn’t necessarily new; you’ve had chatbots for 10 years now, probably. But the thing that’s really different this time is if you look at the chatbots from as late as three or four years ago, it wasn’t a great experience. The reason it wasn’t a great experience is because you had these decision trees that everybody had to build, and it was a pain to build them and a pain to maintain. From a customer perspective, if you have a question or a problem that is one degree off from the decision tree that was built out, it was completely useless. That’s when you have people saying, “agent, agent, agent.” The thing that’s changed, and a lot of the core of what we’ve built, is a way to train these AI agents like humans are trained. Humans have standard operating procedures that they follow, and our AI agents have agent operating procedures that they follow. We’re able to essentially build these AI agents that can have much more fluid, natural conversations like a human agent would.

    Nataraj: Talking a little bit about the products, you mentioned chat, phone calls, emails. Do you have products for everything? If a company is coming to adopt Decagon, are they first starting with chat and then expanding to everything else? How does the customer journey look?

    Ashwin Sreenivas: This is actually very driven by our customers. For a lot of the more tech-native brands, think like a Notion or a Figma, you would never think about picking up the phone and calling them. You’d want to chat or email. Whereas some of our other customers like Hertz, you don’t really email Hertz. If you need a car, you’re going to call them up on the phone. So a lot of our deployment model is guided by our customers and how their customers want to reach out to them. Typically, most customers start with the method by which most of their customers reach out, and then they expand to all the other ones. It’s very common to start with chat and then expand to email and voice, or start with voice and expand to chat and email.

    Nataraj: I want to double-click on the point you mentioned about the decision tree model. I think around 2015, during the Alexa peak, everyone was building chatbots. I remember the app ecosystem where you had to build apps on Alexa or Microsoft’s Cortana. Conversational bots were the hype for two or three years, but they quickly stagnated when we realized all we were doing was replicating the “press one for this, press two for this” system on a chat interface. You define a decision tree, and anything outside of that is basically an if-else chain that ends with a catch-all routing you to a human. There are obviously a lot of players in customer support with existing tools. Do they have a specific edge on creating something like what Decagon is doing because of their existing data?

    Ashwin Sreenivas: No, I actually think, interestingly enough, because these customer service bots went through a few generations of tech, the tech is different enough that you don’t get too much of an advantage starting with the old tech. In fact, you start with a lot of tech debt that you then have to undo. Let’s say 10 years ago, you had to start with explicit decision trees where you program every single line. Then about five years ago, you had the Alexas of the world. It was a little bit of an improvement, but essentially all it did was allow a user to express what they want. They could say, “I want to return my order,” and the models were good at detecting intent—classifying a natural language inquiry into one of 50 things it knew how to do. But beyond that, everything became decision trees. The thing is now with these new models, because you have so much flexibility and the ability for them to follow more complex instructions and multi-step workflows, you can actually rebuild this from the ground up. It’s not just classifying an intent and then following a decision tree; we want the whole thing to be much more interactive for a better user experience. We had to rebuild it to ask, how does a human being learn? You have standard operating procedures. You say, “Hey, if a customer asks to return their order, first check this database to see what tier of customer they are. If they’re a platinum customer, you have a more generous return policy. If they’re not, you have a stricter one. You need to check the fraud database.” You go through many of these steps and then work with the customer. The core of what we’ve done is build out AI agents that can follow instructions very well, like a human does.

    Nataraj: This whole concept of AOPs (Agent Operating Procedures) that you guys introduced is very fascinating. You mentioned SOPs, which humans read, and then you have AOPs, which is sort of a protocol for the agent. Who is converting the SOP into an AOP? How easy is it to create this agent? Are you giving a generic agent that adapts to a customer’s SOP, or do I as a customer have to build the agent?

    Ashwin Sreenivas: The core Decagon product is one agent that is very good at following instructions and AOPs. We built it that way for time to value. For one, if you have to train an agent from scratch for every single customer, it’s going to take a lot of time for that customer to get onboarded. And two, it’s very difficult for that customer to iterate on their experiences. If you build one agent, like a human, that’s very good at following instructions, a customer can simply hand it the instructions it needs to follow and be up and running immediately. In terms of how these AOPs are created, most customers tend to have some set of SOPs already, and AOPs are actually extremely close to these. The only thing you need to change is to instruct the agent on how to use the company’s internal systems. It’s 99% English, and then there are a few keywords to tell it, “At this point, you need to call this API endpoint to load the user’s details,” or “At this point, you need to issue a refund using the Stripe endpoint.” That’s the primary difference from SOPs.
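
    Ashwin doesn’t spell out Decagon’s syntax, but an AOP along the lines he describes might look something like the sketch below: almost entirely plain English, with a few markers that bind individual steps to internal systems. The format, the endpoint names, and the policy numbers are invented for illustration and are not Decagon’s actual AOP language.

    ```python
    # A purely illustrative agent operating procedure for a refund workflow.
    # The bracketed "call:" markers stand in for the "few keywords" that tell the
    # agent when to hit an internal API; everything else is ordinary English.

    REFUND_AOP = """
    When a customer asks to return an order:
    1. Look up their account.                      [call: GET /customers/{email}]
    2. Check their loyalty tier.
       - Platinum customers: returns accepted up to 60 days after delivery.
       - All other customers: returns accepted up to 30 days after delivery.
    3. Check the order against the fraud list.     [call: GET /fraud-check/{order_id}]
       - If the order is flagged, do not refund; hand off to a human agent.
    4. If the return is allowed, issue the refund. [call: POST /refunds]
    5. Confirm with the customer and summarize what was done.
    """

    print(REFUND_AOP)
    ```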

    Nataraj: If you talk about the technology stack, are you using external models, or are you training your own models? What is the difference between a base model and what you’re delivering to a customer?

    Ashwin Sreenivas: We spend a lot of time thinking about models. We do use some external models, but we also train a lot of models in-house. The reason is, if you’re using external models, most of what you can do is through prompt tuning, and we found that models are only so steerable with just prompt tuning. We’ve spent a lot of time in-house taking open-source models and fine-tuning them, using RL on top of them, and using all of these techniques to steer them. To get these models to follow instructions well, you have to decompose the task. A customer comes in with a question, and I have all of these AOPs I could select from. The first decision is, is any of these AOPs relevant? If a user is continuing the conversation, are they on the same topic or should I switch to another AOP? At every step, there are a hundred micro-decisions to make. A lot of what we do is break down these micro-decisions and have models that are very, very good at each one.
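
    As a rough picture of that decomposition, the sketch below routes each incoming message through a chain of small decisions: is any AOP relevant, which one, and is the user continuing the current topic or switching. The per-decision “models” are stubbed with keyword heuristics so the example runs as written; in practice each decision would be handled by its own small fine-tuned model, and none of these names come from Decagon.

    ```python
    # Hypothetical sketch of breaking one incoming message into micro-decisions.
    # Each helper stands in for a small, specialized model; the keyword matching
    # is only there to make the routing structure runnable.

    AOPS = {
        "return_order": "return refund send back",
        "change_billing": "billing invoice charge card",
    }


    def pick_aop(message: str) -> str | None:
        """Decision: which procedure, if any, best matches this message?"""
        words = set(message.lower().split())
        best, best_overlap = None, 0
        for name, keywords in AOPS.items():
            overlap = len(words & set(keywords.split()))
            if overlap > best_overlap:
                best, best_overlap = name, overlap
        return best


    def continues_topic(message: str, active_aop: str) -> bool:
        """Decision: is the user still on the current procedure, or switching?"""
        return pick_aop(message) in (active_aop, None)


    def route(message: str, active_aop: str | None) -> str | None:
        """Chain the micro-decisions into a single routing step per message."""
        if active_aop and continues_topic(message, active_aop):
            return active_aop
        return pick_aop(message)


    print(route("I want to return my order", None))                    # return_order
    print(route("Actually, fix my billing instead", "return_order"))   # change_billing
    ```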

    Nataraj: The industry narrative has been that only companies with very large capital can train models. Are you seeing that cost drop? When you mentioned you’re training open-source models, is that becoming more accessible?

    Ashwin Sreenivas: We’re not pre-training our models from scratch. We take open-source models and then do things on top of those. The thing that has changed dramatically is that the quality of the open-source models has gotten so good that this is now viable to do pretty quickly.

    Nataraj: Which models are better for your use case?

    Ashwin Sreenivas: We use a whole mix of models for different things because we found that different base models perform differently for different tasks. The Google Gemma models are great at very specific things. The Llama models are great at very specific things. The Qwen models are great at very specific things. Even for one customer service message that comes in, it’s not one message going to one model. It’s one message going to a whole sequence of models, each of which is good at doing different things to finally generate the final response.

    Nataraj: It’s often debated that as bigger models like GPT-5 or Gemini improve, they will gain the specialized capabilities that smaller, fine-tuned models have. What is the reality you’re seeing?

    Ashwin Sreenivas: I would push back against that argument for two reasons. Number one, while the bigger models will all have the capabilities, the level of performance will change. If you have a well-defined task, you can have a model that’s 100 times smaller achieve a higher degree of performance if you just fine-tune it on that one task. I don’t want it to code in Python and write poems; I just want it to get really good at this one thing. When measured on that one task, it will probably outperform models a hundred times its size. Number two, which is equally important, is latency. A giant model might take five seconds to generate a response. A really small model cuts that time by a factor of 10. Over text, five seconds might not matter, but on a phone call, if it’s silent for five seconds, that’s a really bad experience. For that, you have to go toward the smaller models.

    Nataraj: Can you talk about why you and your co-founder picked customer service as a segment when you decided to start a company?

    Ashwin Sreenivas: When we started this company, it was around the time when GPT-3.5 Turbo and GPT-4 were out. We were looking at the capabilities and thought, wow, this is getting just about good enough that it can start doing things that people do. As we looked at the enterprise, we asked, where is there a lot of demand for repetitive, text-based tasks? Number one was customer support teams, and number two was operations teams. As we talked to operations leaders, the number one demand was in customer support. They told us, “Look, we’re growing so quickly, our customer support volume is scaling really quickly, which means we need to hire a lot more people, and we can’t afford to do that. We are desperate.” Initially, it looked like a very crowded space, but as we talked to customers, we found it was crowded for smaller companies with simple tasks, where 90% of their volume was, “I want to return my order.” But for more complex enterprises, there wasn’t anything built that could really follow their intricate support flows. That was the wedge we took—to build exclusively for companies with very complex workflows. The other thing that was interesting was our long-term thinking. If you build an agent that can instruction-follow very well, you enable businesses to eventually grow this from customer support into a customer concierge.

    What I mean by that is, let’s say you want to fly from San Francisco to New York. You go to your favorite airline’s website, type in your search, and it gives you 30 different flights to pick from. That’s a lot of annoying steps. A much better experience would be to text your airline and say, “I want to go to New York next weekend.” An AI agent on the other side knows who you are, your preferences, and your budget. It looks through everything and says, “Hey, here are two options, which one do you like?” This AI agent also knows where you like to sit and says, “By the way, I have a free upgrade available for you. Is that okay?” You say yes, and it says, “Booked.” The big difference is this is a much more seamless experience. Most websites today shift the burden of work onto the user. Now, it shifts to a world where you express your intent to an AI agent that then does the work for you. That was a really interesting shift for us. Building these customer support agents is the first step to building these broader customer concierges.

    Nataraj: How did you acquire your first five customers? What did that journey look like?

    Ashwin Sreenivas: Early customer acquisition is always very manual. There’s no silver bullet. It’s just a lot of finding everyone in your networks, getting introductions, and doing cold emailing and cold LinkedIn messaging. It’s brute force work. But the other thing for us is we never did free design pilots; we charged for our software from day one. This doesn’t mean we charged them on day one of the contract. We’d typically say, “There’ll be a four-week pilot, and we’ll agree upfront that at the end of those four weeks, if you like it, this is what it’s going to cost.” We never had an open-ended, long-term period where we did things for free because, in the early days, the number one thing you’re trying to validate is, am I building something that people will pay money for? If it’s truly valuable, you should be able to tell your potential customer, “Hey, if I accomplish A, B, and C, will you pay me this much in four weeks?” If it’s a painful enough problem, they should say yes. This helped us weed through bad business models and bad initial ideas quickly.

    Nataraj: What business impact and success metrics do your customers look at when using Decagon?

    Ashwin Sreenivas: Customers think about value in two ways primarily. One is what percentage of conversations we are able to handle ourselves successfully—meaning the user is satisfied and we have actually solved their problem. If we can solve a greater percentage of those, fewer support tickets ultimately make their way to human agents, who can then focus their time on more complicated problems. The second benefit, which was a little counterintuitive, was that a lot of these companies expanded the amount of support they offered. It’s not that companies want to minimize support; they want to give as much as they can economically. If it cost me $10 for one customer interaction and all of a sudden that becomes 80 cents, I’m not just going to save all that money. I’m going to reinvest some of that in providing more support. We’ve noticed that their end customers actually want that increased level of support. So now, instead of phone lines being open only from 9 a.m. to 5 p.m., it becomes 24 hours a day. Instead of offering support only to paid members, we offer support to everybody. There’s this latent demand for increased support, and by making it much cheaper, businesses can now offer more. At the end of the day, this leads to higher retention and better customer happiness.

    Nataraj: You also have support for voice agents, which is particularly interesting. What has the traction been like? Do customers realize they’re talking to an AI?

    Ashwin Sreenivas: In general, all our voice agents say, “Hi, I’m a virtual agent here to help you” or something like that. But the other interesting thing is most customers calling about a problem don’t want to talk to a human; they want their problem solved. They don’t care how, they just want it solved. For us, making it sound more human is not about giving the impression they’re talking to a human; it’s to make the interaction feel more seamless. You want responses to be fast. At the end of the day, the primary goal is, how can we solve the customer’s problem? Even if the customer is very aware they’re talking to an AI agent, but that agent solves their problem in 10 seconds, that’s a good experience. Versus talking to a human who takes 45 minutes, which is a bad experience. We have several customers now where the NPS for the voice agents is as good or higher than human agents because if the AI agent can solve their problem, it solves it immediately. And if it can’t, it hands it over to a human immediately. Either way, you end up having a reasonably good experience.

    Nataraj: Has there been a drop in hiring in support departments? Are agents replacing humans or augmenting them?

    Ashwin Sreenivas: It really depends on the business. If AI agents can handle a bigger chunk of customer inquiries, you can take it in a couple of directions. One, you can handle more incoming support volume. You put it on every page, you give support to every member, you do it 24 hours a day. Your top-line support volume will go up, but your customers have a better experience, and you can keep the number of human agents the same. Other people might say, “I’m going to keep the amount of customer support I do the same. There are fewer tickets going to human agents, so now I can have those agents do other higher-value things,” like go through the high-priority queue more quickly or move to a different part of the operations team.

    Nataraj: Can you talk about the UX of the product? People have different definitions of agents. What kind of agent are we talking about here?

    Ashwin Sreenivas: Interacting with Decagon is exactly like interacting with a human being. From the end user’s perspective, it’s as though they were talking to a human over a chat screen or on the phone. Behind the scenes, the way Decagon works is that each business has a set of AOPs that these AI agents have access to. The AOPs allow the agents to do different things—refund an order, upgrade a subscription, change billing dates. The Decagon agent is just saying, “Okay, this question has come in. Do I need to work through an AOP with the customer to solve this problem?” And it executes the AOPs behind the scenes.

    Nataraj: Before your product, a support manager would look at their team’s activities. How does that management look now on your customer’s side?

    Ashwin Sreenivas: There’s been an interesting shift. Rather than training new human agents, I’ve trained this AI agent once, and now my job becomes, how can I improve this agent very quickly? We ended up building a number of things in the product to support this. If the AI agent had one million conversations this month, no human can read through all of that. We had to build a lot of product to answer, what went well? What went poorly? What feedback should I take to the rest of the business? What should I now teach the agent so that instead of handling 80% of conversations, it can handle 85%? The primary workflow of the support manager has changed from supervising to being more of an investigator and agent improver, asking, “What didn’t go well and how can I improve that?”

    Nataraj: Are the learnings from one mature customer flowing back into the overall agent that you’re building for all companies?

    Ashwin Sreenivas: We don’t take learnings from one customer and apply them to another because most of our customers are enterprises, and we have very strong data and model training guarantees. But the learning we can take is what kinds of things people need these agents to do. For instance, we learned early on that sometimes an asynchronous task needs to happen. Decagon didn’t have support for that, so we realized that use case was important and extended the agent to be able to do tasks like that. It’s those kinds of learnings on how agents are constructed that we can take cross-customer. But for a lot of these customers, the way they do customer service is a big part of their secret sauce, so we have very strong guarantees on data isolation.

    Nataraj: How are you acquiring customers right now?

    Ashwin Sreenivas: We have customers through three big channels. Number one is referrals from existing customers. Support teams will often say, “Hey, we bought this thing, it’s helping our support team,” and they’ll tell their friends at other companies. Number two is general inbound that we get because people have heard of Decagon. And three, we also have a sales team now that reaches out to people and goes to conferences.

    Nataraj: Both you and your co-founder had companies before. How did the operating dynamics of the company change from your last company to now? Did access to AI tools increase the pace?

    Ashwin Sreenivas: A lot of things changed. For both of our first companies, we were both first-time founders figuring things out. I think the biggest thing that changed was how driven by customer needs we were. We didn’t overthink the exact right two-year strategy or how we were going to build moats over three years. We said, the only thing we’re going to worry about now is, how do we build something that someone will pay us real money for in four weeks? That was the only problem. That simplifies things, and we learned that all the other things you can figure out over time. For instance, with competitive moats, when we sold a deal in the early days, we would ask, “Why did you buy us?” They would tell us, “This competitor didn’t have this feature we needed.” And we were like, great, so we should do more of that because clearly this is valuable.

    Nataraj: It’s almost like you just listen to the market rather than putting your own thesis on it.

    Ashwin Sreenivas: Yeah. I think there was a very old Marc Andreessen essay about this: good markets will pull products out of teams. The market has a need, and the market will pull the product out of you.

    Nataraj: What’s your favorite AI product that you use personally?

    Ashwin Sreenivas: I use a number of things. For coding co-pilots, Cursor and Supermaven are great. For background coding agents, Devin is great. I like Granola for meeting notes. I used to hate taking meeting notes, and now I just have to jot down things every now and then. I think that captures most of what I do because either I’m writing code or talking to people, and that has become 99% of my life outside of spending time with my wife.

    Nataraj: Awesome. I think that’s a good note to end the conversation. Thanks, Ashwin, for coming on the show.

    Ashwin Sreenivas: Yeah, great being here. Thanks for having me.

    This conversation with Ashwin Sreenivas provides a masterclass in building a category-defining AI company, highlighting the power of focusing on genuine customer pain points and the massive potential for AI to create more seamless, personalized business interactions. His insights reveal a clear roadmap for how AI is moving from simple automation to becoming a core driver of customer experience.

    → If you enjoyed this conversation with Ashwin Sreenivas, listen to the full episode here on Spotify, Apple, or YouTube.
    → Subscribe to our newsletter: startupproject.substack.com